Jan 30, 2025
Everything, apparently, has a second life on TikTok. At least this keeps us skeptics busy – we have to re-debunk everything we have debunked over the last century because it is popping up again on social media, confusing and misinforming another generation. This video is a great example – a short video discussing the “incorruptibility” of St. Teresa of Avila. This is mainly a Catholic thing (though also found in the Eastern Orthodox Church) – the notion that the bodies of saints do not decompose, but remain in a pristine state after death, by divine intervention. This is considered a miracle, and for a time was a criterion for sainthood.
The video features Carlos Eire, a Yale professor of history focusing on medieval religious history. You may notice that the video does not include any shots of the actual body of St. Teresa. I could not find any online. Her body is not on display like some incorruptibles, but it was exhumed in 1914 and again recently. So we only have the reports of the examiners. This is where much of the confusion is generated – the church defines incorruptible very differently from the believers, who then misrepresent the actual evidence. Essentially, if the soft tissues are preserved in any way (so the corpse has not completely skeletonized) and the body remains somewhat flexible, that’s good enough.
The case of Teresa is typical – one of the recent examiners said, “There is no color, there is no skin color, because the skin is mummified, but you can see it, especially the middle of the face.” So the body is mummified and you can only partly make out the face. That is probably not what most believers imagine when they think of miraculous incorruptibility.
Continue Reading »
Jan 28, 2025
On January 20th a Chinese tech company released the free version of their chatbot called DeepSeek. The AI chatbot, by all accounts, is about on par with existing widely available chatbots, like ChatGPT. It does not represent any new abilities or breakthrough in quality. And yet the release shocked the industry, causing the tech-heavy Nasdaq index to fall 3%. Let’s review why that is, and then I will give some thoughts on what this means for AI in general.
What was apparently innovative about DeepSeek is that, the company claims, it was trained for only $8 million. Meanwhile ChatGPT 4 training cost over $100 million. The AI tech industry is of the belief that further advances in LLMs (large language models – a type of AI) require ever greater investment, with ChatGPT-5 estimated to cost over a billion dollars. Being able to accomplish similar results at a fraction of the cost is a big deal. It may also mean that existing AI companies are overvalued (which is why their stocks tumbled).
Further, the company that made DeepSeek used mainly lower-power graphics chips. Apparently they did have a hoard of high-end chips (the export of which to China is banned) but were able to combine them with more basic graphics chips to create DeepSeek. Again, this is what is disruptive – they were able to get similar results with lower-cost components and cheaper training. Finally, this innovation represents a change in the balance of AI tech between the US and China. Up until now China has mainly been following the US, copying its technology and trailing by a couple of years. But now a Chinese company has innovated something new, not just copied US technology. This is what has China hawks freaking out. (Mr. President, we cannot allow an AI gap!)
Continue Reading »
Jan 20, 2025
There really is a significant mystery in the world of cosmology. This, in my opinion, is a good thing. Such mysteries point in the direction of new physics, or at least a new understanding of the universe. Resolving this mystery – called the Hubble Tension – is a major goal of cosmology. This is a scientific cliffhanger, one which will unfortunately take years or even decades to sort out. Recent studies have now made the Hubble Tension even more dramatic.
The Hubble Tension refers to discrepancies in measuring the rate of expansion of the universe using different models or techniques. We have known since 1929 that the universe is not static but expanding. This was the famous discovery of Edwin Hubble, who noticed that galaxies further from Earth have a greater red-shift, meaning they are moving away from us faster. This can only be explained as an expanding universe – everything (not gravitationally bound) is moving away from everything else. This became known as Hubble’s Law, and the rate of expansion as the Hubble Constant.
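To make the relationship concrete, here is a minimal sketch of Hubble’s Law (v = H0 × d). The value of 70 km/s per megaparsec is just a round illustrative figure – the precise value of H0 is exactly what the Hubble Tension is about.

```python
# Hubble's Law: recession velocity is proportional to distance, v = H0 * d.
# H0 ~ 70 km/s per megaparsec is used here purely as a round illustrative value;
# pinning down the exact number is what the Hubble Tension is about.

H0 = 70.0  # km/s/Mpc (illustrative)

def recession_velocity(distance_mpc: float, h0: float = H0) -> float:
    """Recession velocity in km/s for a galaxy at the given distance in megaparsecs."""
    return h0 * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")
```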
Then in 1998 two teams, the Supernova Cosmology Project and the High-Z Supernova Search Team, analyzing data from Type 1a supernovae, found that the expansion rate of the universe is actually accelerating – it is faster now than in the distant past. This discovery won the Nobel Prize in Physics in 2011 for Adam Riess, Saul Perlmutter, and Brian Schmidt. The problem remains, however, that we have no idea what is causing this acceleration, nor even a theory about what might have the necessary properties to cause it. This mysterious force was called “dark energy”, and it instantly became the dominant form of mass-energy in the universe, making up 68-70% of the total.
I have seen the Hubble Tension framed in two ways – it is a disconnect between what our models of cosmology predict and measurements of the rate of expansion, or it is a disagreement between different methods of measuring that expansion rate. The two main methods of measuring the expansion rate are using Type 1a supernovae and measuring the cosmic background radiation. Type 1a supernovae are considered standard candles because they have roughly the same absolute magnitude (brightness). They are white dwarf stars in a binary system that are siphoning off mass from their partner. When they reach a critical mass, they go supernova. So every Type 1a goes supernova with the same mass, and therefore the same brightness. If we know an object’s absolute magnitude, then we can calculate its distance from how bright it appears. It was this data that led to the discovery that the universe is accelerating.
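As a rough illustration of how a standard candle yields a distance, here is a sketch of the standard distance-modulus relation, d = 10^((m − M + 5)/5) parsecs. The absolute magnitude of about −19.3 is the commonly quoted peak brightness of a Type 1a supernova; the apparent magnitude below is just an illustrative observation, not a real data point.

```python
# Standard-candle distance from the distance modulus:
#   m - M = 5 * log10(d_parsecs) - 5   =>   d = 10 ** ((m - M + 5) / 5)
# M ~ -19.3 is the commonly quoted peak absolute magnitude of a Type 1a supernova;
# the apparent magnitude m below is an illustrative observation, not real data.

def distance_parsecs(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """Distance in parsecs implied by an observed (apparent) and known (absolute) magnitude."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

m_observed = 24.0  # illustrative apparent magnitude of a distant Type 1a supernova
d_pc = distance_parsecs(m_observed)
print(f"Distance: {d_pc:.3e} pc  (~{d_pc / 1e6:.0f} Mpc)")
```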
Continue Reading »
Jan 13, 2025
My recent article on social media has fostered good social media engagement, so I thought I would follow up with a discussion of the most urgent question regarding social media – should the US ban TikTok? The Biden administration signed into law legislation that would ban the social media app TikTok on January 19th (deliberately the day before Trump takes office) unless it is sold off to a company that is not, as its current owner is believed to be, beholden to the Chinese government. The law states it must be divested from ByteDance, the Chinese parent company that owns TikTok. This raises a few questions – is this constitutional, are the reasons for it legitimate, how will it work, and will it work?
A federal appeals court ruled that the ban is constitutional and can take place, and that decision is now before the Supreme Court. We will know soon how they rule, but indicators are that they are leaning towards allowing the law to take effect. Trump, who previously tried to ban TikTok himself, now supports allowing the app, and his lawyers have argued that he should be allowed to solve the issue. He apparently does not have any compelling legal argument for this. In any case, we will hear the Supreme Court’s decision soon.
If the ban is allowed to take place, how will it work? First, if you are not aware, TikTok is a short-form video sharing app. I have been using it extensively over the past couple of years, along with most of the other popular platforms, to share skeptical videos and have had good engagement. Apparently TikTok is popular because it has a good algorithm that people like. TikTok is already banned on devices owned by Federal employees. The new ban will force app stores in the US to remove the TikTok app and not allow any further updates or support. Existing TikTok users will continue to be able to use the app they already have installed, but they will not be able to get updates, so it will eventually become unusable.
Continue Reading »
Jan 10, 2025
One of the things I have come to understand from following technology news for decades is that perhaps the most important breakthroughs, and often the least appreciated, are those in material science. We can get better at engineering and making stuff out of the materials we have, but new materials with superior properties change the game. They make new stuff possible and feasible. There are many futuristic technologies that are simply not possible yet, waiting on the back burner for enough breakthroughs in material science to make them feasible. Recently, for example, I wrote about fusion reactors. Is the addition of high temperature superconducting material sufficient to get us over the finish line of commercial fusion, or are more material breakthroughs required?
One area where material properties are becoming a limiting factor is electronics, and specifically computer technology. As we make smaller and smaller computer chips, we are running into the limits of materials like copper to efficiently conduct electrons. Further advance is therefore not just about better technology, but better materials. Also, the potential gain is not just about making computers smaller. It is also about making them more energy efficient by reducing losses to heat when processors work. Efficiency is arguably now a more important factor, as we are straining our energy grids with new data centers to run all those AI and cryptocurrency programs.
This is why a new study detailing a new nanoconducting material is actually more exciting than it might at first sound. Here is the editor’s summary:
Noncrystalline semimetal niobium phosphide has greater surface conductance as nanometer-scale films than the bulk material and could enable applications in nanoscale electronics. Khan et al. grew noncrystalline thin films of niobium phosphide—a material that is a topological semimetal as a crystalline material—as nanocrystals in an amorphous matrix. For films with 1.5-nanometer thickness, this material was more than twice as conductive as copper. —Phil Szuromi
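To put “more than twice as conductive as copper” in perspective for a nanoscale interconnect, here is a back-of-envelope sketch comparing the resistance of a tiny 1.5 nm-thick trace made of bulk-resistivity copper versus a hypothetical material with half that resistivity. The copper figure is the standard bulk value; the “NbP-like” resistivity is just a stand-in for the reported result, and real ultrathin copper performs considerably worse than bulk because of surface and grain-boundary scattering, which is exactly why the result matters.

```python
# Back-of-envelope interconnect resistance: R = rho * L / (W * t).
# Bulk copper resistivity is ~1.68e-8 ohm*m (standard value); the "NbP-like"
# resistivity below is a hypothetical stand-in for "more than twice as conductive
# as copper", i.e. less than half copper's resistivity. Real copper films this
# thin are much worse than bulk, which is the point of the study.

RHO_CU_BULK = 1.68e-8                     # ohm*m, bulk copper
RHO_FILM_HYPOTHETICAL = RHO_CU_BULK / 2   # stand-in for the niobium phosphide film

def trace_resistance(rho: float, length_m: float, width_m: float, thickness_m: float) -> float:
    """Resistance in ohms of a rectangular conducting trace."""
    return rho * length_m / (width_m * thickness_m)

L, W, t = 1e-6, 10e-9, 1.5e-9  # a 1-micron trace, 10 nm wide, 1.5 nm thick
print(f"Copper (bulk rho):     {trace_resistance(RHO_CU_BULK, L, W, t):8.1f} ohms")
print(f"Hypothetical NbP film: {trace_resistance(RHO_FILM_HYPOTHETICAL, L, W, t):8.1f} ohms")
```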
Continue Reading »
Jan 09, 2025
Recently Meta decided to end its use of fact-checkers on Facebook and Instagram. The move has been both hailed and criticized. They are replacing the fact-checkers with an X-style “community notes” system. Mark Zuckerberg summed up the move this way: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
That is the essential tradeoff – whether you think false positives or false negatives are more of a problem. Are you concerned more with enabling free speech or with minimizing hate speech and misinformation? Obviously both are important, and an ideal platform would maximize both freedom and content quality. It is becoming increasingly apparent that this choice matters. The major social media platforms are not mere vanity projects; they are increasingly the main source of news and information, and they foster ideological communities. They affect the functioning of our democracy.
Let’s at least be clear about the choice that “we” are making (meaning that Zuckerberg is making for us). Maximal freedom without even basic fact-checking will significantly increase the amount of misinformation and disinformation on these platforms, as well as hate-speech. Community notes is a mostly impotent method of dealing with this. Essentially this leads to crowd-sourcing our collective perception of reality.
Continue Reading »
Jan 06, 2025
How close are we to having fusion reactors actually sending electric power to the grid? This is a huge and complicated question, and one with massive implications for our civilization. I think we are still at the point where we cannot count on fusion reactors coming online anytime soon, but progress has been steady and in some ways we are getting tantalizingly close.
One company, Commonwealth Fusion Systems, claims it will have completed a fusion reactor capable of producing net energy by “the early 2030’s”. A working grid-scale fusion reactor within 10 years seems really optimistic, but there are reasons not to dismiss this claim entirely out of hand. After doing a deep dive, my take is that the 2040’s or even 2050’s is a safer bet, but this may be the fusion design that crosses the finish line.
Let’s first give the background and reasons for optimism. I have written about fusion many times over the years. The basic idea is to fuse lighter elements into heavier elements, which is what fuels stars, in order to release excess energy. This process releases a lot of energy, much more than fission or any chemical process. In terms of just the physics, the best elements to fuse are one deuterium atom to one tritium atom, but deuterium to deuterium is also feasible. Other fusion fuels are simply way outside our technological capability and so are not reasonable candidates.
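As a rough illustration of why “much more than any chemical process” is an understatement, here is a sketch comparing the energy released by a single D-T fusion reaction (the well-known ~17.6 MeV) with burning a single methane molecule (roughly 9 eV, from methane’s ~890 kJ/mol heat of combustion). The comparison is per reaction, not per kilogram of fuel, and the chemical figure is a rounded textbook value.

```python
# Energy scale of D-T fusion vs. a chemical reaction, compared per reaction/molecule.
# D + T -> He-4 + n releases ~17.6 MeV (textbook value).
# Burning one CH4 molecule releases ~9 eV (from ~890 kJ/mol heat of combustion).

EV_TO_JOULES = 1.602e-19

dt_fusion_ev = 17.6e6        # ~17.6 MeV per D-T fusion reaction
methane_combustion_ev = 9.2  # ~9 eV per CH4 molecule burned (rounded textbook figure)

print(f"D-T fusion:         {dt_fusion_ev * EV_TO_JOULES:.2e} J per reaction")
print(f"Methane combustion: {methane_combustion_ev * EV_TO_JOULES:.2e} J per molecule")
print(f"Ratio: ~{dt_fusion_ev / methane_combustion_ev:,.0f}x more energy per reaction")
```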
Continue Reading »