Feb 12 2024

The Exoplanet Radius Gap

As of this writing, there are 5,573 confirmed exoplanets in 4,146 planetary systems. That is enough exoplanets, planets around stars other than our own sun, that we can do some statistics to describe what’s out there. One curious pattern that has emerged is a relative gap in the radii of exoplanets between 1.5 and 2.0 Earth radii. What is the significance, if any, of this gap?
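
As a minimal sketch of what that kind of statistics looks like, the snippet below plots a histogram of planet radii so the gap shows up as a dip in the counts. The file name exoplanets.csv and the radius_earth column are assumptions for illustration, not a real catalog format.

```python
# Minimal sketch: visualize the distribution of exoplanet radii to look for
# the reported 1.5-2.0 Earth-radius gap. Assumes a local file "exoplanets.csv"
# with a "radius_earth" column (hypothetical names; real catalogs differ).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("exoplanets.csv")
radii = df["radius_earth"].dropna()

# Focus on small planets, where the gap is reported (roughly 0.5-4 Earth radii).
small = radii[(radii > 0.5) & (radii < 4.0)]

plt.hist(small, bins=40)
plt.axvspan(1.5, 2.0, alpha=0.2, label="reported radius gap")
plt.xlabel("Planet radius (Earth radii)")
plt.ylabel("Number of confirmed planets")
plt.legend()
plt.show()
```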

First we have to consider whether this is an artifact of our detection methods. The most common method astronomers use to detect exoplanets is the transit method – carefully observe a star over time, precisely measuring its brightness. If a planet moves in front of the star, the brightness will dip, remain low while the planet transits, and then return to its baseline. This produces a classic light curve that astronomers recognize as a planet orbiting that star in the plane of observation from the Earth. The first time such a dip is observed it marks a suspected exoplanet, and if the same dip is seen again that confirms it. This also gives us the orbital period. This method is biased toward exoplanets with short periods, because they are easier to confirm. If an exoplanet has a period of 60 years, it would take 60 years to confirm, so we haven’t confirmed a lot of those.
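
To make the transit idea concrete, here is a toy simulation of a light curve – a box-shaped dip that repeats once per orbit, which is what reveals the orbital period. The radius ratio, period, and noise level are illustrative numbers, not a fit to any real system.

```python
# Toy transit light curve: a box-shaped dip whose fractional depth is (Rp/Rs)^2.
# All numbers are illustrative, not taken from any real system.
import numpy as np
import matplotlib.pyplot as plt

rp_over_rs = 0.02            # planet-to-star radius ratio (~2 Earth radii, Sun-like star)
depth = rp_over_rs ** 2      # fractional dip in brightness (~0.04%)
period_days = 10.0
transit_hours = 3.0

t = np.linspace(0.0, 30.0, 5000)              # 30 days of observations
hours_into_orbit = (t % period_days) * 24.0
in_transit = hours_into_orbit < transit_hours
flux = np.ones_like(t) - depth * in_transit
flux += np.random.normal(0.0, 1e-4, t.size)   # photometric noise

plt.plot(t, flux, ".", markersize=2)
plt.xlabel("Time (days)")
plt.ylabel("Relative brightness")
plt.title("Simulated transits: repeated dips give the orbital period")
plt.show()
```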

There is also the wobble method. We can observe the path that a star takes through the sky. If that path wobbles in a regular pattern, that is likely due to the gravitational tug of a large planet or other dark companion orbiting it. This method favors more massive planets closer to their parent star. Sometimes we can also directly observe exoplanets by blocking out their parent star and seeing the tiny bit of reflected light from the planet. This method favors large planets distant from their parent star. There are also a small number of exoplanets discovered through gravitational microlensing, an effect of general relativity.

None of these methods, however, explains the gap between 1.5 and 2.0 Earth radii. It’s also not likely to be a statistical fluke, given the number of exoplanets we have discovered. Therefore it may be telling us something about planetary evolution. But there are lots of variables that determine the size of an exoplanet, so it can be difficult to pin down a single explanation.

Continue Reading »

Feb 09 2024

JET Fusion Experiment Sets New Record

Don’t get excited. It’s always nice to see incremental progress being made with the various fusion experiments happening around the world, but we are still a long way off from commercial fusion power, and this experiment doesn’t really bring us any closer, despite the headlines. Before I get into the “maths”, here is some quick background.

Fusion is the process of combining light elements into heavier elements. This is the process that fuels stars. We have been dreaming about a future powered by clean, abundant fusion energy for at least 80 years. The problem is – it’s really hard. In order to get atoms to smash into each other with sufficient energy to fuse, you need high temperatures and pressures, like those at the core of our sun. We can’t replicate the density and pressure at a star’s core, so we have to compensate here on Earth with even higher temperatures.

There are a few basic fusion reactor designs. The tokamak design (like the JET reactor) is a torus, with a plasma of hydrogen isotopes (usually deuterium and tritium) inside the torus contained by powerful magnetic fields. The plasma is heated and squeezed by brute magnetic force until fusion happens. Another approach, the pinch method, also uses magnetic fields, but with a stream of plasma that gets pinched at one point to high density and temperature. Then there is inertial confinement, which essentially uses an implosion created by powerful lasers to create a brief moment of high density and temperature. More recently a group has used sonic cavitation to create an instance of fusion (rather than sustained fusion). These methods are essentially in a race to create commercial fusion. It’s an exciting (if very slow motion) race.
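
Since some “maths” were promised, here is a back-of-the-envelope sketch of the scientific gain Q implied by the widely reported round numbers for the record JET shot – roughly 69 megajoules of fusion energy over about 5 seconds. The ~40 MW of external heating power is an illustrative assumption, and the engineering (wall-plug) gain would be lower still.

```python
# Back-of-the-envelope fusion gain for the JET record, using widely reported
# round numbers (treat them as approximate): ~69 MJ of fusion energy released
# over ~5 seconds. The heating power is an illustrative assumption.
fusion_energy_mj = 69.0      # megajoules of fusion output
pulse_seconds = 5.0
heating_power_mw = 40.0      # assumed external heating power into the plasma

fusion_power_mw = fusion_energy_mj / pulse_seconds   # average fusion power, ~14 MW
q_plasma = fusion_power_mw / heating_power_mw        # scientific gain Q

print(f"Average fusion power: {fusion_power_mw:.1f} MW")
print(f"Plasma gain Q: {q_plasma:.2f} (breakeven is Q = 1; a power plant needs far more)")
```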

Continue Reading »

Feb 06 2024

Weaponized Pedantry and Reverse Gish Gallop

Have you ever been in a discussion where the person with whom you disagree dismisses your position because you got some tiny detail wrong or didn’t know some tiny detail? This is a common debating technique. For example, opponents of gun safety regulations will often use proponents’ relative ignorance of gun culture and the technical details of guns to argue that they don’t know what they are talking about and that their position is therefore invalid. But, at the same time, GMO opponents will often base their arguments on a misunderstanding of the science of genetics and genetic engineering.

Dismissing an argument because of an irrelevant detail is a form of informal logical fallacy. Someone can be mistaken about a detail while still being correct about a more general conclusion. You don’t have to understand the physics of the photoelectric effect to conclude that solar power is a useful form of green energy.

There are also some details that are not irrelevant, but may not change an ultimate conclusion. If someone thinks that the industrial release of CO2 is driving climate change but does not understand the scientific literature on climate sensitivity, that doesn’t make them wrong. Understanding climate sensitivity is important to the climate change debate; it just happens to align with what proponents of anthropogenic global warming are concluding. In this case you need to understand what climate sensitivity is, and what the science says about it, in order to understand and counter some common arguments deniers use against the science of climate change.

What these few examples show is a general feature of the informal logical fallacies – they are context dependent. Just because you can frame someone’s position as a logical fallacy does not make their argument wrong (thinking that it does is the fallacy fallacy). What logical fallacy is using details to dismiss the bigger picture? I have heard this referred to as a “Reverse Gish Gallop”. I don’t use this term because I don’t think it captures the essence of the fallacy. I have used the term “weaponized pedantry” before, and I think that is better.

Continue Reading »

Feb 05 2024

Did They Find Amelia Earhart’s Plane?

Is this sonar image, taken at 16,000 feet below the surface about 100 miles from Howland Island, that of a downed Lockheed Model 10-E Electra? Tony Romeo hopes it is. He spent $9 million to purchase an underwater drone, the HUGIN 6000, then hired a crew and scoured 5,200 square miles in a 100-day search hoping to find exactly that. He was looking, of course, for the lost plane of Amelia Earhart. Has he found it? Let’s explore how we answer that question.

First some quick background – most people know Amelia Earhart was a famous (and much beloved) early female pilot, the first woman to fly solo across the Atlantic. She was engaged in a mission to be the first female pilot (accompanied by her navigator, Fred Noonan) to circumnavigate the globe. She started off in Oakland, California, flying east. She made it all the way to Papua New Guinea. From there her plan was to fly to Howland Island, then Honolulu, and back to Oakland – so she had three legs of her journey left. However, she never made it to Howland Island. This is a small island in the middle of the Pacific Ocean, and navigating to it is an extreme challenge. The last communication from Earhart was that she was running low on fuel.

That was the last anyone heard from her. The primary assumption has always been that she never found Howland Island, ran out of fuel, and crashed into the ocean. This happened in 1937. But people love mysteries, and there has been endless speculation about what may have happened to her. Did she go off course and arrive at the Marshall Islands, 1,000 miles away? Was she captured by the Japanese (remember, this was right before WWII)? Every now and then a tidbit of suggestive evidence crops up, but it always evaporates on close inspection. It’s all just wishful thinking and anomaly hunting.

Continue Reading »

Feb 02 2024

How To Prove Prevention Works

Homer: Not a bear in sight. The Bear Patrol must be working like a charm.
Lisa: That’s specious reasoning, Dad.
Homer: Thank you, dear.
Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn’t work.
Homer: Uh-huh.
Lisa: It’s just a stupid rock.
Homer: Uh-huh.
Lisa: But I don’t see any tigers around, do you?
[Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.
[Lisa refuses at first, then makes the exchange]

This memorable exchange from The Simpsons is one of the reasons the fictional character Lisa Simpson is a bit of a skeptical icon. From time to time on the show she does a decent job of defending science and reason, even toting a copy of “Jr. Skeptic” magazine (which was fictional at the time but was later created as a companion to Skeptic magazine).

What the exchange highlights is that it can be difficult to demonstrate (let alone “prove”) that a preventive measure has worked. This is because we cannot know for sure what the alternate history, or counterfactual, would have been. If I take a measure to prevent contracting COVID and then I don’t get COVID, did the measure work, or was I not going to get COVID anyway? Historically, the time this happened on a big scale was Y2K – a computer glitch set to go off when the year changed to 2000. Most computer code encoded the year as only two digits, assuming the first two digits were 19, so 1995 was encoded as 95. So when the year changed to 2000, computers around the world would think it was 1900 and chaos would ensue. Between $300 billion and $500 billion was spent worldwide to fix this bug by upgrading millions of lines of code to a four-digit year.
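
To make the bug concrete, here is a minimal sketch of the kind of two-digit-year arithmetic the Y2K remediation was meant to fix (the function is hypothetical, not code from any real system).

```python
# Sketch of the Y2K problem: code that stores only the last two digits of the
# year, assuming the century is "19", breaks when the clock rolls past 1999.
def years_since(start_two_digit, current_two_digit):
    """Pre-Y2K style elapsed-years calculation using two-digit years."""
    return current_two_digit - start_two_digit

# A loan opened in 1995, checked in 1999: works as expected.
print(years_since(95, 99))   # 4

# The same check in 2000 ("00"): the result goes negative.
print(years_since(95, 0))    # -95, as if it were the year 1900

# The fix that cost hundreds of billions: store four-digit years.
print(2000 - 1995)           # 5
```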

Did it work? Well, the predicted disasters did not happen, so from that perspective it did. But we can’t know for sure what would have happened if we had not fixed the code. This has led to speculation, and even criticism, about wasting all that time and money fixing a non-problem. There is good reason to think that the preventive measures worked, however.

At the other end of the spectrum, doomsday cults – predicting that the world will end in some way on a specific date – often have to deal with the day after. One strategy is to say that the faith of the group prevented doomsday (the tiger-rock strategy). They can then celebrate and start recruiting to prevent the next doomsday.

Continue Reading »

Feb 01 2024

Some Future Tech Possibilities

It’s difficult to pick winners and losers in the future-tech game. In reality you just have to see what happens when you try out a new technology in the real world with actual people. Many technologies that look good on paper run into logistical problems, have difficulty scaling, fall victim to economics, or find that people just don’t like using the tech. Meanwhile, surprise hits become indispensable or transform the way we live our lives.

Here are a few technologies from recent news that may or may not be part of our future.

Recharging Roads

Imagine recharging your electric vehicle wirelessly just by driving over a road. Sounds great, but is it practical and scalable? Detroit is running an experiment to help find out. On a 400 meter stretch of downtown road they installed induction cables under the ground and connected them to the city grid. EVs that have the $1,000 device attached to their battery can charge up while driving over this stretch of road.

The technology itself is proven and is already common for recharging smartphones. It’s inductive charging – using a magnetic field to induce a current, which recharges a battery. Is this a practical approach to range anxiety? Right now this technology costs $2 million per mile. Building any significant infrastructure of these roads would be incredibly costly, and it’s not clear the benefit is worth it. How much charge will an EV actually pick up driving over it? What is the efficiency? Will drivers fork out $1,000 for a minimal benefit?
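
A rough back-of-the-envelope estimate shows why the per-pass benefit is likely to be small. The power level, efficiency, and consumption figures below are assumptions for illustration, not numbers from the Detroit pilot.

```python
# Rough estimate of how much charge an EV could pick up from the 400 m test
# strip. Power, efficiency, and consumption are illustrative assumptions.
road_length_m = 400.0
speed_kph = 40.0
pickup_power_kw = 25.0         # assumed power delivered to the vehicle
efficiency = 0.85              # assumed coil-to-battery efficiency
consumption_kwh_per_km = 0.18  # very rough EV energy use

time_s = road_length_m / (speed_kph / 3.6)                  # ~36 seconds on the strip
energy_kwh = pickup_power_kw * efficiency * time_s / 3600.0
added_range_km = energy_kwh / consumption_kwh_per_km

print(f"Energy gained per pass: {energy_kwh:.2f} kWh (~{added_range_km:.1f} km of range)")
```

Under those assumptions a single pass adds only a fraction of a kilowatt-hour – on the order of a kilometer of range – which is the scale of benefit that has to justify the $2 million per mile.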

Continue Reading »

Jan 30 2024

Neuralink Implants Chip in Human

Elon Musk has announced that his company, Neuralink, has implanted its first wireless computer chip into a human. The chip, which they plan on calling Telepathy (not sure how I feel about that), connects through 64 thin, hair-like electrode threads, is battery powered, and can be recharged wirelessly. This is exciting news, but of course it needs to be put into context. First, let’s get the Musk thing out of the way.

Because this is Elon Musk, the achievement gets more attention than it probably deserves, but also more criticism. It gets wrapped up in the Musk debate – is he a genuine innovator, or just an exploiter and showman? I think the truth is a little bit of both. Yes, the technologies he is famous for advancing (EVs, reusable rockets, digging tunnels, and now brain-machine interfaces) all existed before him (at least potentially) and were advancing without him. But he did more than just gobble up existing companies or people and slap his brand on them (as his harshest critics claim). Especially with Tesla and SpaceX, he invested his own fortune and provided a specific vision, which pushed these companies through to successful products and very likely advanced their respective industries considerably.

What about Neuralink and BMI (brain-machine interface) technology? I think Musk’s impact in this industry is much less than with EVs and reusable rockets. But he is increasing the profile of the industry, providing funding for research and development, and perhaps increasing the competition. In the end I think Neuralink will have a more modest, but perhaps not negligible, impact on bringing BMI applications to the world. I think it will end up being a net positive, and anything that accelerates this technology is a good thing.

Continue Reading »

Jan 29 2024

Controlling the Narrative with AI

There is an ongoing battle in our society to control the narrative – to influence the flow of information, and thereby move the needle on what people think and how they behave. This is nothing new, but the mechanisms for controlling the narrative are evolving as our communication technology evolves. The latest addition to this technology is large language model AIs.

“The media”, of course, has been a large focus of this competition. On the right there are constant complaints about the “liberal bias” of the media, and on the left there are complaints about the rise of right-wing media, which they feel is biased and radicalizing. The culture wars focus mainly on schools, because schools teach not only facts and knowledge but also convey the values of our society. The left views DEI (diversity, equity, and inclusion) initiatives as promoting social justice, while the right views them as brainwashing the next generation with liberal propaganda. This is an oversimplification, but it is the basic dynamic. Even industry has been targeted by the culture wars – which narratives are specific companies supporting? Is Disney pro-gay? Which companies fly BLM or LGBTQ flags?

But increasingly “the narrative” (the overall cultural conversation) is not being controlled by the media, the educational system, or marketing campaigns. It’s being controlled by social media. This is why, when the power of social media started to become apparent, many people panicked. Suddenly it seemed we had ceded control of the narrative to a few tech companies, who had apparently decided that destroying democracy was a price they were prepared to pay for maximizing their clicks. We now live in a world where YouTube algorithms can destroy lives and relationships.

Continue Reading »

Jan 26 2024

How Humans Can Adapt to Space

My recent article on settling Mars has generated a lot of discussion, some of it around the basic concept of how difficult it is for humans to live anywhere but in a thin envelope of air hugging the surface of the Earth. This is undoubtedly true, as I have discussed before – we evolved to be finely adapted to Earth. We are only comfortable in a fairly narrow range of temperatures. We need a fairly high percentage of oxygen (Earth’s is 21%) at sufficient pressure, and our atmosphere can’t have too much of other gases that might cause us problems. We are protected from most of the radiation that bathes the universe. Our skin and eyes have adapted to the light of our sun, both in frequency and intensity. And we are adapted to Earth’s surface gravity, with anything significantly more or less causing problems for our biology.

Space itself is an extremely unforgiving environment requiring a total human habitat, with the main current technological challenges being artificial gravity and radiation protection. But even on other worlds it is extremely unlikely that all of the variables will be within the range of human survival, let alone comfort and thriving. Mars, for example, has too thin an atmosphere with no oxygen and no magnetic field to protect from radiation; it’s too cold, and its surface gravity is too low. It’s better than the cold vacuum of space, but not by much. You still need essentially a total habitat, and we will probably have to go underground for radiation protection. Mars’s gravity is 38% that of Earth’s, which is probably not ideal for human biology. In space, with microgravity, at least you can theoretically use rotation to simulate gravity.
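
As a rough guide to the rotation option, the required spin rate follows from the centripetal acceleration a = ω²r. The sketch below works this out for a few illustrative habitat radii, for both full Earth gravity and Mars-level gravity.

```python
# Spin gravity sketch: centripetal acceleration a = omega^2 * r.
# For a chosen radius, how fast must a habitat rotate to simulate gravity?
# The radii below are purely illustrative.
import math

G_EARTH = 9.81  # m/s^2

def rpm_for_gravity(radius_m, g_fraction):
    """Rotation rate (revolutions per minute) for a given fraction of 1 g."""
    omega = math.sqrt(g_fraction * G_EARTH / radius_m)   # rad/s
    return omega * 60.0 / (2.0 * math.pi)

for radius in (50.0, 200.0, 500.0):
    print(f"r = {radius:5.0f} m: "
          f"{rpm_for_gravity(radius, 1.0):.2f} rpm for 1 g, "
          f"{rpm_for_gravity(radius, 0.38):.2f} rpm for Mars-level gravity")
```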

In addition to adapting off-Earth environments to humans, is it feasible to adapt humans to other environments? Let me start with some far-future options, then finish with what are likely to be the nearest-future options.

Continue Reading »

Jan 25 2024

DNA Directed Assembly of Nanomaterials

Arguably the type of advance that has the greatest impact on technology is materials science. Technology can advance by doing more with the materials we have, but new materials can change the game entirely. It is no coincidence that we mark different technological ages by the dominant material used, such as the Bronze Age and the Iron Age. But how do we invent new materials?

Historically, new materials were mostly discovered, not invented. Or we discovered techniques that allowed us to use new materials. Metallurgy, for example, was largely about creating a fire hot enough to smelt different metals. Sometimes we literally discovered new elements, like aluminum or tungsten, with desirable properties. We also figured out how to make alloys, combining different elements to create a new material with unique or improved properties. Adding tin to copper made a much stronger and more durable metal – bronze. While the hunt for new usable elements is basically over, there are so many possible combinations that researching new alloys is still a viable way to find new materials. In fact, a recent class of materials known as “superalloys” has incredible properties, such as extreme heat resistance.

If there are no new elements (other than really big, and therefore unstable, artificial elements), and we already have a mature science of making alloys, what’s next? There are also chemically based materials, such as polymers, resins, and composites, that can have excellent properties, including the ability to be manufactured easily. Plastics clearly had a dramatic effect on our technology, and some of the strongest and lightest materials we have are carbon composites. But again, it feels like we have already picked the low-hanging fruit here. We still need new, better materials.

Continue Reading »
