Oct 26 2021

Possible Extragalactic Planet

Published under Astronomy

For most of recorded human history we knew of only those planets that were naked-eye visible (Mercury, Venus, Mars, Jupiter, and Saturn). We knew these dots of light in the sky were different from the other stars because they were not fixed; they wandered about. The invention of the telescope and its use in astronomy allowed us to study the planets and see that they were worlds of their own, while adding Uranus, Neptune, and Pluto to the list. Pluto has since been recategorized as a dwarf planet, with four others added to that list, and many more likely to follow.

Of course astronomers suspected that our own solar system was not unique and that other stars likely had planets of their own. The first observation of a disc of material around another star came in 1984, supporting the idea of planet formation around other stars. The first exoplanets (as planets outside our solar system are called) were discovered in 1992, orbiting a pulsar. In 1995 the first planet orbiting a main sequence star (like our own sun) was discovered. This confirmed the possibility of planetary systems theoretically capable of supporting organic life. So far more than 4,000 exoplanets have been confirmed.

The most common method of detecting an exoplanet is the transit method. For systems where the plane of the planetary orbits aligns with our line of sight from Earth, orbiting planets will pass in front of their parent star. Astronomers measure what they call the “light curve” of the star, and when a planet transits, the light curve dips and then returns to baseline. From this they can infer the size of the planet and its distance from its star. If the orbit is short enough, they can confirm the exoplanet and measure its year by detecting multiple transits.
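To make that inference concrete: the fractional dip in the light curve is roughly the square of the planet-to-star radius ratio. Here is a minimal Python sketch of that relationship, using illustrative numbers of my own choosing rather than anything from a real survey:

```python
import math

def planet_radius_from_transit(depth, stellar_radius_km):
    """The transit depth is the fractional dip in the light curve:
    depth ~ (R_planet / R_star)**2, so R_planet = R_star * sqrt(depth)."""
    return stellar_radius_km * math.sqrt(depth)

# Hypothetical example: a 1% dip observed around a Sun-sized star (~696,000 km)
radius_km = planet_radius_from_transit(0.01, 696_000)
print(f"Inferred planet radius: {radius_km:,.0f} km")  # ~69,600 km, roughly Jupiter-sized
```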

Continue Reading »

Comments: 0

Oct 25 2021

A Synergistic Hydrogen Economy

I have been writing a lot about energy recently, partly because there is a lot of energy news, and partly because I find it helpful to focus on a topic for a while to see how all the various pieces fit together. It is often easy to see how individual components of energy and other technology might operate in isolation, and more challenging to see how they will all work together as part of a complete system. But the systems approach is critical. We can calculate the carbon and energy efficiency of installing solar panels on a home and easily understand how they would function. A completely different analysis is required to imagine solar power supplying 50% or more of our energy – now we have to consider things like grid function, grid storage, energy balancing, regulations for consumers and industry, sourcing raw materials, and disposal and recycling.

This is why just letting the market sort everything out will likely not result in optimal outcomes. Market decisions tend to be individual and short term. When EVs and solar panels are cost effective, everyone will want one. Demand is likely to outstrip supply. Supply chains could bottleneck. The grid won’t support rooftop solar beyond the early adopters. And where is everyone going to charge their EVs? At scale, widespread change in technology often requires new infrastructure and sometimes systems planning.

This can create an infrastructure dilemma – what future technology do you build for? You can take the “build it and they will come” approach, which assumes that infrastructure investment will affect and even determine the future direction of the market. California discovered the limits of this approach when it tried to bootstrap a hydrogen vehicle revolution by building a hydrogen infrastructure. Or you can backfill infrastructure as technology requires it, but this doesn’t quite work either. People won’t buy cars until there are roads, and no one wants to invest in roads until lots of people have cars. At some point you have to bet on future technology – just be flexible and willing to change course as technology evolves.

Continue Reading »

Comments: 0

Oct 21 2021

Drivers of Military Technology

There are different ways of looking at history. The traditional way, the one most of us were likely taught in school, is mostly as a sequence of events focusing on the state level – world leaders, their political battles, and their wars with each other. This focus, however, can be shifted in many ways. It can be shifted horizontally to focus on different aspects of history, such as cultural or scientific. It can also zoom in or out to different levels of detail. I find especially fascinating those takes on history that zoom all the way out, take the biggest perspective possible and look for general trends.

A recent study does just that, looking at societal factors that drive the development of military technology in pre-industrial societies, covering a span of 10,000 years. They chose military technology for two main reasons. The first is simply convenience – military technology is particularly well preserved in historical records. The second is that military technology is a good marker for overall technology in most societies, and tends to drive other technologies. This remains true today, as cutting edge military technology (think GPS) often trickles down to the civilian world.

As an interesting aside, the researchers relied on Seshat: Global History Databank, a massive databank of 200,000 entries on 500 societies over 10,000 years. This kind of data is necessary to do this level of research efficiently, and it is a good demonstration of a more general trend in scientific research – in a growing number of fields, it’s all about big data.

Continue Reading »

Comments: 0

Oct 19 2021

Net Zero by 2050

Next month the UN will host its 26th conference on climate change, COP26. At this point the discussion is not so much about what the goal should be as about how to achieve it. The Paris Accords set that goal at limiting global warming to 1.5 C above pre-industrial levels. In order to achieve this outcome goal, the consensus is that we need to achieve the primary goal of net zero greenhouse gas (GHG) emissions by 2050. There is considerable disagreement about whether or not net zero by 2050 will achieve the outcome goal of limiting warming to 1.5 C. Some experts think it’s already too late for 1.5 C, and that we should be planning for at least 2.0 C and considering what the world will be like with that much warming.

Either way, there is agreement that we should focus on what we can actually do (achieve net zero by 2050) rather than on the outcome, which we cannot predict with that level of precision. If we agree on this goal, the conversation then shifts to the question of how to achieve it. From one perspective the answer is easy – we need to stop burning fossil fuels, convert the industries with the greatest carbon footprint to produce less CO2, and add some carbon capture to compensate for whatever CO2 emissions we cannot fully eliminate. But that’s like saying that in order to win football games you need to score more points than the other team, mostly through touchdowns and field goals. That’s correct, but it doesn’t really give you the information you need. How, exactly, will we achieve these ends?

This is the conversation we should have been focusing on for the last 20 years, rather than dealing with denialists who refuse to accept the scientific reality, or the delay tactics by industry and their paid representatives (i.e. politicians). It’s pretty clear at this point we are never going to convince the deniers (not in this political climate – and I’m sure the comments to this post will adequately demonstrate my point), and industry is going to run out the clock any way they can. The bottom line is that achieving net zero by 2050 will require leaving fossil fuels in the ground, unburned. For the fossil fuel industry this means leaving a lot of money on the table, which is not going to happen simply out of a desire to be good corporate citizens.

Continue Reading »

Comments: 0

Oct 18 2021

The Source of Elements

Published under Astronomy

Perhaps the most famous line from Carl Sagan’s Cosmos series is, “We’re made of star stuff.” In this statement Sagan was referring to the fact that most of the elements that make up people (and everything else) were created (through nuclear fusion) inside long-dead stars. While this core claim is true, physicists are finding potential supplemental sources of heavy elements, including in some surprising locations.

Elements are defined entirely by their number of protons, with hydrogen being the lightest element at one proton. The isotope of an element is determined by the number of neutrons, which can vary. In a neutral atom the number of electrons matches the number of protons; an atom with more or fewer electrons than protons is an ion, because it carries a net electric charge. The next lightest element is helium, at two protons, and then lithium, at three protons. Current theories, observations, and models suggest that these three elements were the only ones created in the Big Bang, in a process known as Big Bang nucleosynthesis.
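As a toy illustration of these definitions (my own sketch, not anything from the post), the three quantities fit together like this:

```python
from dataclasses import dataclass

@dataclass
class Atom:
    protons: int    # defines the element (1 = hydrogen, 2 = helium, 3 = lithium)
    neutrons: int   # defines the isotope
    electrons: int

    @property
    def mass_number(self) -> int:
        # Isotope label: helium-4 has mass number 4
        return self.protons + self.neutrons

    @property
    def charge(self) -> int:
        # A nonzero charge means the atom is an ion
        return self.protons - self.electrons

helium4 = Atom(protons=2, neutrons=2, electrons=2)
print(helium4.mass_number, helium4.charge)  # 4 0 -- a neutral helium-4 atom
```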

It actually took about 1 second after the Big Bang for the universe to cool enough for protons and neutrons to exist, and then for the next three minutes or so all the elemental nuclei formed – about 75% hydrogen, 25% helium, and trace lithium (by mass). It then took 380,000 years for the universe to cool enough for these nuclei to capture electrons. Where, then, did the elements heavier than lithium come from? This is where Sagan’s “star stuff” comes in.
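Note that those percentages are by mass. Because a helium-4 nucleus weighs about four times as much as a hydrogen nucleus, the split by number of atoms is even more lopsided. A quick back-of-the-envelope conversion (illustrative arithmetic only):

```python
# Convert Big Bang mass fractions into number (atom) fractions.
# Approximate nuclear masses in atomic mass units: H-1 ~ 1, He-4 ~ 4.
mass_fraction = {"H": 0.75, "He": 0.25}
mass_amu = {"H": 1.0, "He": 4.0}

relative_counts = {el: mass_fraction[el] / mass_amu[el] for el in mass_fraction}
total = sum(relative_counts.values())
for element, count in relative_counts.items():
    print(f"{element}: {count / total:.0%} of atoms")
# H: 92% of atoms, He: 8% -- hydrogen dominates by count even more than by mass
```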

Stars are fueled by nuclear fusion in their cores, the process of combining lighter elements into heavier elements. The more massive the star, the more heat and pressure it can generate in its core, and the heavier the elements it can fuse. Fusion of these lighter elements is exothermic: it produces the energy necessary to sustain further fusion. This lasts until enough of the heavier fusion product builds up in the core to effectively stop fusion. Then the star will collapse further, becoming hotter and denser until it can start fusing the heavier element. This process continues until the star is simply not massive enough to fuse the element that has accumulated in its core. There is then nothing left to halt its collapse, and it becomes a white dwarf.

Continue Reading »

Comments: 0

Oct 15 2021

Superionic Ice and Magnetic Fields

Published under Astronomy

Some planets have planetary magnetic fields, while others don’t. Mercury has a weak magnetic field, while Venus and Mars have no significant magnetic field. This was bad news for Mars (or any critters living on Mars in the past), because the lack of a significant magnetic field allowed the solar wind to slowly strip away most of its atmosphere. Life on Earth enjoys the protection of a strong planetary magnetic field, which shields us from solar radiation.

The Earth’s magnetic field is created by molten iron in the outer core. Moving electrical charges generate magnetic fields, and molten iron is an electrically conductive fluid. This is called the dynamo theory, with the vast momentum of the spinning iron core translating some of its energy into generating a magnetic field. In fact, the molten outer core rotates a little faster than the rest of the Earth. This phenomenon is also likely the source of Mercury’s weak magnetic field.

The two gas giants also have magnetic fields, with Jupiter having the strongest field of any planet (the sun has the strongest magnetic field in the solar system). Its largest moon, Ganymede, also has a weak magnetic field, making it the only moon in our solar system known to have one. Jupiter’s magnetic field is 20,000 times stronger than Earth’s; it is massive and powerful. The question is: what is generating the magnetic field inside Jupiter? It’s probably not a molten iron core, as on Earth. Based on Jupiter’s mass and other features, astronomers suspect that the field is generated by liquid hydrogen in its core. Under extreme pressure, even at high temperatures, hydrogen can become a metallic liquid, capable of carrying a charge and therefore of generating a magnetic field. This is likely also the source of Saturn’s magnetic field, although Saturn’s field is slightly weaker than Earth’s.

Continue Reading »

Comments: 0

Oct 14 2021

Lack of Infrastructure Killed Early Electric Car

At the turn of the 20th century there were three relatively equal contenders for automobile technology: electric cars, steam power, and the internal combustion engine (ICE). It was not obvious at the time which technology would emerge dominant, or even whether they would all continue to share the market. By 1905, however, the ICE began to dominate, and by 1920 electric cars had fallen out of production. The last steam car company ended production in 1930, perhaps later than you might have guessed.

This provides an excellent historical case for debate over which factors ultimately determined the winner of this marketplace competition (right up there with VHS vs Betamax). We will never definitively know the answer – we can’t rerun history with different variables to see what happens. Also, the ICE won out the world over because the international industry consolidated around that choice, meaning that other countries were not truly independent experiments.

The debate comes down to internal vs external factors – the inherent attributes of each technology vs infrastructure. Each technology had its advantages and disadvantages. Steam engines worked just fine, and had the advantage of being flexible in terms of fuel. These were external combustion engines, as the combustion took place separately, outside the engine itself. But they also needed a boiler, which produced the steam to power the engine. Steam cars were more powerful than ICE cars, and also quieter and (depending on their configuration) produced less pollution. They had better torque characteristics, obviating the need for a transmission. The big disadvantage was that they needed water for the boiler, which required either a condenser or frequent topping off. They could also take a few minutes to get up to operating temperature, but this problem was solved in later models with a flash boiler.

Continue Reading »

Comments: 0

Oct 12 2021

Making Proteins with Plant Molecular Farming

As the world is contemplating ways to make its food production systems more efficient, productive, sustainable, and environmentally friendly, biotechnology is probably our best tool. I won’t argue it’s our only tool – there are many aspects of agriculture and they should all be leveraged to achieve our goals. I simply don’t think that we should take any tools off the table because of misguided philosophy, or worse, marketing narratives. The most pernicious such philosophy is the appeal to nature fallacy, where some arbitrary and vague sense of what is “natural” is used to argue (without or even against the evidence) that some options are better than others. We don’t really have this luxury anymore. We need to follow the science.

Essentially we should not fear genetic technology. Genetically modified and gene edited crops have proven to be entirely safe and can offer significant advantages in our quest for better agriculture. The technology has also proven useful in medicine and industry through the use of genetically modified microorganisms, like bacteria and yeast, for industrial scale production of certain proteins. Insulin is a great example, and is essential to modern treatment of diabetes. The cheese industry is mostly dependent on enzymes created with GMO organisms.

This, by the way, is often the “dirty little secret” of many legislative GMO initiatives. They usually include carve-out exceptions for critical GMO applications. In Hawaii, perhaps the most anti-GMO state, the regulations exclude GMO papayas, because they saved the papaya industry from blight, and Hawaii apparently is not so dedicated to its anti-GMO bias that it would be willing to kill off a vital industry. Vermont passed the most aggressive GMO labeling law in the States, but made an exception for the cheese industry. These exceptions are good, but they expose the hypocrisy of the anti-GMO crowd – “GMOs are bad (except when we can’t live without them).”

Continue Reading »

Comments: 0

Oct 11 2021

Neurofeedback Headbands for Stress Reduction

A recent BBC article discusses the emergence of products designed to provide neurofeedback for stress reduction. The headline asks, “Smart headbands claim to make people calmer. Do they work?” However, the article does not really answer the question, or even get to the heart of the issue. It mostly provides anecdotes and opinions without putting the technology into a clear context. The article focuses mainly on the use of such devices to allegedly improve sports performance.

There are a few premises on which the claims made for such devices are based, varying from well established to questionable. One premise is that we can measure “stress” in the brain using an electroencephalograph (EEG) to measure the electrical activity in the brain. This claim is mostly true, but there is some important background necessary to understand what this means. First, we need to define “stress”. Functionally when researchers are talking about mental stress they mean one of two things, either the stress that results from an immediate physical threat, or the mental stress that results from engaging in a challenging mental task (like doing math in your head while being distracted). For practical purposes the research on EEGs and mental stress use the challenging mental task model.

Is this, however, a good representation of stress generally? It is a convenient research paradigm, but how well it generalizes to mental stress is questionable. It can produce objective measures of physiological stress, such as secretion of stress hormones, which is partly why it’s convenient for research and not unreasonable, but it is only one representation of mental stress and might not translate to all “stressful” situations (like sports).

Can EEGs measure this type of mental stress? Yes – a relaxed mind with eyes closed produces a lot of regular alpha waves. A more active mind (and one with eyes open) produces more theta waves and more chaotic brainwave activity. EEGs can therefore tell the difference between relaxed and active. How about not just active but stressed? That is trickier, but there are studies that appear to show some statistical differences in regional wave patterns with mental stress. So the premise that EEGs can measure certain kinds of mental stress is reasonable, but not as simple as often implied. This also does not necessarily mean that commercial devices claiming to measure EEG markers of stress actually work.
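For a sense of what such a measurement involves, here is a minimal sketch of estimating relative alpha and theta band power with NumPy/SciPy. The signal is synthetic (a 10 Hz alpha-like rhythm plus noise), so the numbers are purely illustrative and say nothing about any commercial headband:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    # Average power in a frequency band, via Welch's PSD estimate
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256  # samples per second
t = np.arange(0, 10, 1 / fs)  # a hypothetical 10-second recording
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # 10 Hz "alpha" + noise

alpha = band_power(eeg, fs, 8, 12)  # alpha band: 8-12 Hz
theta = band_power(eeg, fs, 4, 8)   # theta band: 4-8 Hz
print(f"alpha/theta power ratio: {alpha / theta:.1f}")  # high ratio -> relaxed, eyes-closed pattern
```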

Continue Reading »

Comments: 0

Oct 08 2021

Hacking the Brain to Treat Depression

A new study published in Nature looks at a closed-loop implanted deep brain stimulator to treat severe and treatment-resistant depression, with very encouraging results. This is a report of a single patient, which makes it a useful proof of concept.

Severe depression can profoundly limit one’s life and increase the risk of suicide (depression affects 300 million people worldwide and causes most of the 800,000 annual suicides). Depression, like many mental disorders, is very heterogeneous, and is therefore likely not one specific disorder but a class of disorders with a variety of neurological causes. It also exists on a spectrum of severity, and it’s very likely that mild to moderate depression is phenomenologically different from severe depression. Severe depression can also, in some cases, be very treatment resistant, which simply means our current treatment options are probably not addressing the actual brain function that is causing the severe depression. We clearly need more options.

The pharmacological approach to severe depression has been very successful, but it is still not effective in all patients. For “major” depression, which is severe enough to impact a person’s daily life, pharmacological therapy and talk therapy (such as CBT – cognitive behavioral therapy) seem to be equally effective. But again, these are statistical comparisons. Treatment needs to be individualized.

Continue Reading »

Comments: 0

Next »