Oct 21 2022

More Precise Measure of Hubble Constant Solidifies Mystery

Published under Astronomy

Cosmologists have recently published the updated results of an extensive analysis of the overall structure of the cosmos. The analysis both solidifies our current understanding of the universe and reinforces a conflict that scientists have not been able to resolve.

The story begins with Type Ia supernovae. A supernova occurs when a star explodes, and different stars of different masses and compositions will explode with different energies and therefore different intrinsic brightnesses. But a Type Ia is caused by a white dwarf star in a binary system that is siphoning matter off its companion. The white dwarf slowly gains mass until it reaches the Chandrasekhar limit, the point at which its gravity overcomes the electron degeneracy pressure holding it up; the star can no longer support itself, runaway fusion ignites, and it goes supernova. This means that all Type Ia supernovae are essentially the same mass when they explode, which further means that they should have the same intrinsic brightness. In reality there is some variability in peak brightness based on other variables like composition, but astronomers have learned to make adjustments so as to arrive at a precise measure of intrinsic brightness.

Knowing the intrinsic brightness of an astronomical object is hugely useful. It means we can calculate, based on its apparent brightness, exactly how far away it is. Such objects are known as standard candles, and Type Ia supernovae are perhaps the most useful we have. Type Ias are also extremely bright, briefly outshining entire galaxies, which further means we can see them really far away, out to about 10 billion light years. There are also a lot of them happening around the universe. Looking far away is also looking back in time, so Type Ias not only allow us to measure the cosmos, but also to measure it throughout its history (back to 10 billion years).
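
To make the standard-candle logic concrete, here is a minimal sketch (my own illustration in Python, with made-up numbers rather than anything from the Pantheon+ catalog) of how a distance falls out of comparing apparent and intrinsic brightness using the standard distance-modulus relation:

```python
import math

# Distance modulus relation: m - M = 5 * log10(d / 10 pc)
# Solving for distance: d = 10 pc * 10 ** ((m - M) / 5)

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance in parsecs from apparent (m) and intrinsic/absolute (M) magnitude."""
    return 10.0 * 10.0 ** ((apparent_mag - absolute_mag) / 5.0)

# Illustrative numbers only (not from Pantheon+):
# standardized Type Ia peaks sit around absolute magnitude -19.3;
# suppose we observe one peaking at apparent magnitude 23.
M_TYPE_IA = -19.3
m_observed = 23.0

d_pc = distance_parsecs(m_observed, M_TYPE_IA)
d_ly = d_pc * 3.26  # 1 parsec is about 3.26 light years

# At truly cosmological distances this is a "luminosity distance" and
# expansion effects matter, but the basic logic is the same.
print(f"Distance: {d_pc:.2e} parsecs (~{d_ly / 1e9:.1f} billion light years)")
```

Once you know the intrinsic brightness, the distance is just arithmetic on the measured apparent brightness; that is what makes standard candles so powerful.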

It was observations of Type Ia supernovae that allowed astronomers to first determine that the universe is not only expanding but accelerating. Since then astronomers have been gathering data on Type Ias in a project called Pantheon. They catalogued more than 1,000 Type Ias, providing the most precise measure of the rate of the universe’s expansion, called the Hubble Constant. Now they have published what they are calling Pantheon+, with an expanded database of over 1,500 Type Ia supernovae. They have also been able to tweak their methods to make more precise measurements, account for more factors, and essentially give a much more detailed account of the Hubble Constant at different times throughout the history of the universe.
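
In its simplest form, the Hubble Constant is just the slope of the line relating how fast galaxies recede (from their redshift) to how far away they are (from standard candles). Here is a toy version of that fit, again in Python, with invented low-redshift data points standing in for a real supernova sample:

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

# Invented low-redshift sample (not real Pantheon+ data).
# At small redshift z, recession velocity is roughly v = c * z,
# and Hubble's law says v = H0 * d.
redshifts = np.array([0.01, 0.02, 0.03, 0.05, 0.08])         # from spectra
distances_mpc = np.array([42.0, 83.0, 121.0, 207.0, 330.0])  # from standard candles (Mpc)

velocities = C_KM_S * redshifts  # km/s

# Least-squares fit of a line through the origin, v = H0 * d:
H0 = np.sum(velocities * distances_mpc) / np.sum(distances_mpc ** 2)
print(f"Fitted H0 ~ {H0:.1f} km/s/Mpc")
```

The real analysis is vastly more sophisticated (light-curve standardization, dust corrections, calibration of the distance ladder, covariances), but this is the core idea.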

The big picture is that the matter/energy of the universe is composed of 66.2% dark energy (the energy that is pushing the universe apart) and 33.8% matter, most of which is dark matter (about 85%). Regular matter accounts for only about 5% of the total mass-energy of the universe. The good news is that the Pantheon+ data reinforce our standard cosmological model, increase the precision of the measurement of the Hubble Constant, and raise the confidence in that number (beyond the 5 sigma level – the threshold physicists use to eliminate statistical flukes). However, this result also locks into place a conflict between two ways of measuring the Hubble Constant, a conflict called the Hubble Tension.
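
As a quick sanity check on those percentages (my arithmetic, not a figure from the paper): if 33.8% of the total is matter and roughly 85% of that matter is dark, ordinary matter works out to about 5% of everything.

```python
dark_energy_fraction = 0.662   # quoted above
total_matter_fraction = 0.338  # quoted above
dark_matter_share = 0.85       # rough share of the matter that is dark

dark_matter = total_matter_fraction * dark_matter_share             # ~0.287
ordinary_matter = total_matter_fraction * (1 - dark_matter_share)   # ~0.051

print(f"Dark energy:     {dark_energy_fraction:.1%}")
print(f"Dark matter:     {dark_matter:.1%}")
print(f"Ordinary matter: {ordinary_matter:.1%}")
```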

The Pantheon+ measurement based on Type Ia supernovae is considered a direct measure of the Hubble Constant, and is “model independent”. This means that the number does not depend on any particular model about the composition, origin, and shape of the universe. We don’t have to make any other assumptions – we just directly measure the Hubble Constant. We can also, however, infer what the Hubble Constant should be based on a specific model of the universe. The current standard model of cosmology is called the ΛCDM model – Lambda-Cold Dark Matter. This model is based on several assumptions: that the universe is made of dark energy, dark matter, and regular matter; that the universe started with a Big Bang of pure energy; and that General Relativity is correct. The ΛCDM model has done a good job of accounting for things like the cosmic background radiation, the large scale structure of the universe, and the composition of matter in the universe.

However, the ΛCDM model results in a calculation of the Hubble Constant of around 67 km s⁻¹ Mpc⁻¹, while the Pantheon+ measure is about 73 km s⁻¹ Mpc⁻¹. Any hope that more precise and accurate measures would close the discrepancy was dashed by Pantheon+. We appear to be stuck with it. What does this mean? Apparently cosmologists are split 50/50 on the answer. Some say we need to tweak the ΛCDM model. Perhaps at large scales General Relativity is slightly different, for example. Others argue that we need to abandon the ΛCDM model altogether. The problem with this approach is that there are no other competing models.
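
To see why that gap is treated as a genuine tension rather than noise, here is a back-of-the-envelope comparison. The error bars below are my own rough stand-ins (roughly ±1 km s⁻¹ Mpc⁻¹ for the direct measurement and ±0.5 for the model-based one), so treat the exact sigma value as illustrative:

```python
import math

# Approximate central values from the two approaches (km/s/Mpc),
# with rough, illustrative uncertainties (not the published error bars).
h0_direct, sigma_direct = 73.0, 1.0  # supernova-based, model-independent
h0_model, sigma_model = 67.0, 0.5    # inferred from the LCDM model

difference = h0_direct - h0_model
combined_sigma = math.sqrt(sigma_direct**2 + sigma_model**2)

print(f"Difference: {difference:.1f} km/s/Mpc")
print(f"Tension:    {difference / combined_sigma:.1f} sigma")  # roughly 5 sigma
```

A discrepancy of around five sigma is well past the threshold for dismissing something as a statistical fluke, which is why nobody expects this conflict to simply go away.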

This is an interesting problem in cosmology. It’s really an opportunity. Scientists love anomalies and mysteries because they point in the direction of new science. Obviously we are missing something about the universe, and we need to discover what that is to bring these two measures of the Hubble Constant into alignment. This happens frequently in science, and is partly how science grinds forward. For example, when astronomers first measured neutrinos from the sun, they detected only about a third of what was predicted by models of nuclear fusion (dubbed the solar neutrino problem). Eventually it was discovered that neutrinos can flip their type as they stream through space; once we detected the other types, everything came into alignment. Problem solved.

In the meantime, of course, cranks and pseudoscientists use the anomaly to declare that everything we know is wrong, and then to insert an alternate crank model of reality. In the case of the solar neutrino problem, some creationists argued that the sun is not billions of years old and powered by fusion, but rather is only thousands of years old and powered by gravity. Of course, that would not explain where the neutrinos we were detecting were coming from, creating an even bigger solar neutrino problem, but that didn’t stop them. It’s also worth pointing out that scientists predicted there would be a solution to the solar neutrino problem, while creationists predicted there would not be and that the solar fusion model would have to be abandoned. Successfully making predictions is the best indicator of utility in science.

The same is likely to happen here. Meanwhile, scientists are carrying on with actual science. They are actively trying to break their own models and theories, testing them in meaningful ways, acknowledging problems with their own theories, and then exploring ways to fix them and looking for evidence to see which solutions are correct. That’s because scientists are interested in figuring out reality, not just supporting their pre-existing ideas. Pantheon+ is a good example: the team gathered lots of data and let the chips fall where they may, even if that creates a conflict they have to resolve.

This is definitely a scientific problem worth following. I can’t wait to see what the answer is.
