Dec 12 2022

Fusion Breakthrough – Ignition

Much of the discussion about how we are going to rapidly change over our energy infrastructure to low carbon energy involves existing technology, or at most incremental advancements. The problem is, of course, that we are up against the clock, and the best solutions are ones that we can implement immediately. Even next generation fission reactors are controversial because they are not a tried-and-true technology, even though fission technology itself is. It certainly would not be prudent to count on an entirely new technology as our solution. If some game-changing technology emerges, great, but until then we will make do with what we know works.

The ultimate game-changing energy technology is, I think, fusion. Fusion technology replicates the processes that power stars, mostly fusing hydrogen into heavier isotopes of hydrogen and ultimately into helium. Massive enough stars can then fuse helium into heavier elements, with more massive stars fusing still heavier elements until we get to iron, which cannot be fused to produce net energy. But even fusing the lightest elements takes a tremendous amount of heat and pressure, which has proved technologically difficult to achieve on Earth. We have been inching closer to this goal, however, and recently the National Ignition Facility at the Lawrence Livermore National Laboratory in California has inched over a significant milestone – ignition.

I wrote just last year about the NIF achieving another milestone, burning plasma. The pace of advancement seemed pretty brisk, and I speculated about how long it would take to achieve the next milestone, ignition. Well, here we are. You can read that article for background, but quickly: the NIF uses a fusion method called inertial confinement – an array of 192 powerful lasers produces inward pressure sufficient to cause a fuel capsule to implode, with the implosion creating the heat and pressure necessary for fusion. The NIF was completed in 2009, but it took significant upgrades before it was powerful enough to achieve fusion in 2021. Some of the energy from fusion contributed to further fusion, a process called burning plasma. But in that experiment fusion contributed only 70% of the energy necessary to sustain fusion, which means the process was still a net energy loss. (Those powerful lasers require a lot of energy.)
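To put the milestone in concrete terms, a fusion shot is often summarized by its target gain Q, the ratio of fusion energy released to laser energy delivered to the target; the 70% figure above corresponds to roughly Q ≈ 0.7, while ignition in the sense reported here means crossing Q ≥ 1. A minimal sketch (the function name and the 3 MJ / 2 MJ example values are my own illustration, not figures from the post):

```python
# Fusion energy gain: Q = fusion energy out / laser energy delivered to target.
# Q < 1 means a net energy loss at the target; crossing Q >= 1 is
# "scientific breakeven" -- ignition in the sense discussed above.

def gain(fusion_out_mj: float, laser_in_mj: float) -> float:
    """Target gain Q for a single laser shot."""
    return fusion_out_mj / laser_in_mj

# The 2021 burning-plasma shot: fusion supplied ~70% of the energy needed
# to sustain itself (normalized units).
q_2021 = gain(fusion_out_mj=0.7, laser_in_mj=1.0)
assert q_2021 < 1  # still a net energy loss at the target

# A hypothetical shot yielding 3 MJ from 2 MJ of laser energy crosses breakeven:
print(gain(3.0, 2.0) >= 1.0)  # True
```

Note that Q compares energy at the target only; the electricity used to power the lasers themselves is much larger, which is why Q ≥ 1 is a scientific rather than an engineering milestone.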

Continue Reading »

Comments: 0

Dec 08 2022

Ancient Environmental DNA

Published under Evolution
Comments: 0

Our ability to detect, amplify, and sequence tiny amounts of DNA has led to a scientific revolution. We can now take a small sample of water from a lake and, by analyzing the environmental DNA in that water, determine all of the things that live in the lake. This is an amazingly powerful tool. My favorite application of this technique was to demonstrate the absence in Loch Ness of DNA from any giant reptile or aquatic dinosaur. So-called eDNA is perhaps the most powerful evidence of a negative – the absence of a creature in an environment – because you can’t hide your eDNA.

The ultimate limiting factor on eDNA is how long such DNA will survive. DNA has a half-life: it spontaneously degrades and sheds information until it is no longer useful for sequencing. Previously, scientists extracted DNA from ice cores in Greenland and were able to sequence DNA up to 800,000 years old. The oldest DNA ever recovered was probably 1.1-1.2 million years old. Based on this, scientists estimated that the ultimate lifespan of usable DNA was about 1 million years. This put the final nail in the coffin of any dreams of a real-life Jurassic Park. Non-avian dinosaurs died out 65 million years ago, so none of their DNA should still be left on Earth (the closest we can get is related DNA in birds). So, no T. rex DNA in amber.
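The half-life framing can be made concrete with a simple exponential-decay sketch. The half-life value below is purely illustrative (my assumption – the effective half-life of DNA depends strongly on temperature and chemistry), but it shows why survival beyond a million years is so surprising:

```python
def fraction_intact(t_years: float, half_life_years: float) -> float:
    """Fraction of DNA remaining readable after t years, modeled as simple exponential decay."""
    return 0.5 ** (t_years / half_life_years)

# After one half-life, half the DNA remains:
print(fraction_intact(10_000, 10_000))  # 0.5

# With a hypothetical effective half-life of 10,000 years, after 1 million
# years (100 half-lives) only a vanishingly small fraction survives:
print(fraction_intact(1_000_000, 10_000))  # 0.5**100, about 8e-31
```

The point is the exponential: each additional half-life halves what is left, so pushing survival from hundreds of thousands to millions of years requires drastically slowing the decay itself.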

According to a new analysis from the northernmost region of Greenland, however, we have to push back the estimate of how long DNA can survive to at least 2 million years. That is a significant increase (but still a long way from T. rex). The site is the Kap København Formation, located in Peary Land in north Greenland. This is now a barren frozen desert. There are also very few macrofossils here, mostly from boreal forest plants and insects, with the only vertebrate fossil being a hare’s tooth. Conditions there are apparently not conducive to fossilization. We do know that 2 million years ago Greenland was much warmer, about 10 degrees C warmer than present, so there is no reason it should not have been teeming with life.

The new analysis of eDNA finds that, in fact, it was. They found DNA from hares, but also from other rodents, reindeer, geese, and mastodons. They also found DNA from poplar, birch, and thuja trees (a type of conifer), as well as a rich assortment of bushes, herbs, and other flora. Basically this was a mixed forest with a rich ecosystem. In addition, they found marine species, including horseshoe crabs and green algae, confirming the warmer climate.

This ancient eDNA gives us a much more complete picture of the ecosystem than was provided by macrofossils alone. But perhaps more importantly – it demonstrates that eDNA can survive for up to two million years, doubling the previous estimate. The researchers speculate that minerals in the soil bound to the DNA and stabilized it, slowing its degradation. DNA is negatively charged. This property is used to separate chunks of DNA in a sample by size: you apply an electric field, which pulls the DNA fragments through a gel, with smaller fragments migrating faster and therefore farther. In this case the negatively charged DNA bound to positively charged minerals in the soil. I guess this is the DNA version of fossilization.

The question is – in such environments where DNA is stabilized by binding to minerals, how much is the degradation process slowed down, and therefore how long can DNA survive? DNA breaks down due to “microbial enzymatic activity, mechanical shearing and spontaneous chemical reactions such as hydrolysis and oxidation.” DNA breaks down faster at warmer temperatures, so the fact that this DNA remained frozen for so long is crucial. But freezing alone was not enough, which is why scientists think that binding to minerals also played a role.

They measured the “thermal age” of the DNA – how long it would have taken to degrade to its current state if held at a constant 10 degrees C – at 2.7 thousand years, 741 times less than its actual age of 2 million years. In other words, it degraded 741 times more slowly than exposed DNA at 10 degrees C. The average temperature at the site is -17 degrees C. They further found that the DNA was bound mostly to clay minerals, specifically smectite (and to a lesser degree, quartz).
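The thermal-age arithmetic is easy to reproduce using only the numbers quoted above:

```python
actual_age_years = 2_000_000  # age of the Kap København eDNA
slowdown_factor = 741         # how much slower it degraded than exposed DNA at 10 C

# "Thermal age": how long exposed DNA held at a constant 10 degrees C would
# take to reach the same degradation state.
thermal_age_years = actual_age_years / slowdown_factor
print(round(thermal_age_years))  # 2699 -- about 2.7 thousand years
```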

Perhaps this is the limit of DNA survival – although we also thought the previous record of 1.1-1.2 million years was the limit. It is possible there are environmental conditions elsewhere in the world that could slow DNA degradation even further. Slow DNA degradation by a factor of 30 or so beyond the Kap København Formation and we are getting into the time of the dinosaurs. This is unlikely – constant freezing temperatures are required, in addition to geological stability and optimal soil conditions. But I don’t think we can say now that it is impossible, just highly unlikely. I did not see any estimate in the study of the ultimate upper limit of DNA lifespan, but I suspect we will see such analyses based on this latest information.

The best evidence, however, will come from simply looking in new locations for eDNA, especially those that likely have the optimal conditions for maximal DNA longevity. But for now, being able to reconstruct ecosystems from 2 million years ago is still pretty cool.

Dec 06 2022

Mars More Volcanically Active Than We Thought

Published under Astronomy
Comments: 0

Mars is perhaps the best candidate world in our solar system for a settlement off Earth. Venus is too inhospitable. The Moon is a lot closer, but its extremely low gravity (0.166 g) is a problem for long-term habitation. Mars’ gravity is 0.38 g, still low by Earth standards but better than the Moon’s. But there are some other differences between Earth and Mars. Mars has only a very thin atmosphere, less than 1% that of Earth’s. That’s just enough to cause annoying dust storms, but not enough to avoid the need for pressure suits. Mars lost its atmosphere because it was stripped away by the solar wind – Mars does not have a global magnetic field to protect itself. The thin atmosphere and lack of a magnetic field also expose the surface to lots of radiation.

Mars’ smaller size also means that it cooled faster than the Earth. While there are ancient volcanoes on Mars, the surface crust looks solid, without plate tectonics. This has led astronomers to believe that Mars is a quiet planet, with heat at the core, but a solid crust and mantle and no geological activity. That also means there are no recent volcanic eruptions that might replenish its depleted atmosphere. However – that view is changing.

There is one region of Mars, Elysium Planitia, which may be geologically active. In fact, there is now good evidence of a giant mantle plume under the surface. A mantle plume occurs when hot magma from the core rises up through the mantle and pushes up against the overlying crust. There are more than 18 such mantle plumes on Earth. One is right below the Hawaiian islands – as the Pacific plate moves over this plume it creates a chain of volcanoes and resulting volcanic islands. What is the evidence for a mantle plume beneath Elysium Planitia?

Continue Reading »

Dec 05 2022

Square Kilometer Array

Published under Astronomy
Comments: 0

Construction begins this week on what will be the largest radio telescope in the world – the Square Kilometer Array (SKA). This project began more than 30 years ago, in 1991, as an idea, with an international working group forming in 1993. It took three decades to flesh out the concept, create a detailed design, and secure the land rights and government funding. The first antennas will go online by 2024, with more added through 2028 (which will complete the first phase – about 10% of the total planned project). This will result in a radio telescope array with a total collecting area of one square kilometer.

There are actually two components to the total array. One is being built in Australia: SKA-Low, for low frequency. It will use antennas that look like two-meter-tall metal Christmas trees – 512 stations of 256 antennas each, for a total of just over 131,000 antennas – able to detect radio waves between 50 megahertz and 350 megahertz. There will also be SKA-Mid in South Africa, an array of 197 dishes sensitive between 350 megahertz and 15.4 gigahertz. The whole thing will be connected together, with the bulk of the computing power located in the UK.

Why do astronomers connect radio receivers together? This has to do with interferometry – the ability to combine signals so that multiple receivers simulate a single receiver with a diameter equal to the distance between them. It’s not the same as having one giant dish, however. An array increases the resolution of the received image, but the sensitivity is still a function of the total receiving area (not the distance). The Very Large Array (VLA) in New Mexico has radio dishes on rails so that they can be moved into different configurations. By moving the dishes apart you can achieve greater resolution, but by moving them closer together you get greater sensitivity to faint, extended structures – so there is a trade-off to moving receivers farther apart. There is no substitute for total collecting area, which is why the SKA will have so many individual receivers.
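The resolution side of that trade-off follows from the diffraction relation θ ≈ λ/B, where λ is the observed wavelength and B is the longest baseline between receivers. A rough sketch (the baseline lengths are my own illustrative assumptions; only the frequency ranges come from the post):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Approximate diffraction-limited angular resolution, theta ~ lambda / B."""
    wavelength_m = C / freq_hz
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

# SKA-Low at 50 MHz (a ~6 m wavelength) over a hypothetical 65 km baseline:
print(resolution_arcsec(50e6, 65_000))     # roughly 19 arcseconds
# SKA-Mid at 15.4 GHz (~2 cm wavelength) over a hypothetical 150 km baseline:
print(resolution_arcsec(15.4e9, 150_000))  # well under 0.1 arcseconds
```

Longer baselines and shorter wavelengths sharpen the image; total collecting area, as noted above, is what sets sensitivity.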

Continue Reading »

Dec 02 2022

Evolution Is Not a Straight Line

Published under Evolution
Comments: 0

Yesterday I wrote about the fact that technological development is not a straight line, with superior technology simply replacing older technology. That sometimes happens, but so do many other patterns of change. Often competing technologies have a suite of relative strengths and weaknesses, and it’s hard to predict which one will prevail. Also, competing technologies may exist side by side for long periods of time. Sometimes, after experimenting with new technologies, people may revert to older and simpler methods because they prefer a different set of tradeoffs.

Similarly, biological evolution is not a simple straight line with “more advanced” species replacing more primitive ones. Adaptation to the local environment is a relative thing, and many biological features involve a complex set of tradeoffs. With technological evolution (and any cultural evolution) ideas can come from anywhere and spread in any pattern (although some are more likely than others). Biological evolution is more constrained. It can only work with the material it has at hand, and information is passed down mostly vertically, from parent to child. But there is also horizontal gene transfer, hybridization, and even back mutation. The overall pattern is a complex branching bush, spreading out in many directions. Any long-term directionality in evolution is likely just an epiphenomenon.

Paleontologists try to reverse engineer the multitudes of complex branching bushes of evolutionary relationships using an incomplete fossil record and, more recently, genetic analysis. But this can be extremely difficult because it may not always be obvious how to draw the lines to connect the dots. The simplest or most obvious pattern may not be true. A recent discovery involving bird evolution highlights this fact. It is now pretty well established that birds evolved from theropod dinosaurs. The evidence is overwhelming and convincing. Creationists, who predicted that birds would forever remain an isolated group, have egg on their face.

Continue Reading »

Dec 01 2022

Ancient Shipwreck Reveals Complex Trade Network

People tend to understand the world through the development of narratives – we tell stories about the past, the present, ourselves, others, and the world. That is how we make sense of things. I always find it interesting, the many and often subtle ways in which our narratives distort reality. One common narrative is that the past was simpler and more primitive than it actually was, and that progress is linear, objective, and inevitable. I remember watching The Day the Universe Changed with James Burke, in one episode of which he declared that the Dark Ages were a time of great technological advancement. This seemed at odds with what I had been told, but I later confirmed that the so-called “Dark Ages” were maligned by later Renaissance writers congratulating themselves on their own progress.

The same is true of our image of technological advancement, that it’s objective and inevitable. This became more clear to me when researching my latest book, The Skeptics’ Guide to the Future. One story in particular is the sequence of the material ages – the stone age giving way to the copper age, then bronze age, and finally iron age. Metallurgy was clearly a huge technological advance, and did progress significantly over time. But this sequence was not strictly linear, older technologies persisted alongside newer technologies for different applications, and sometimes technological shifts are more of a lateral move than a clear advance.

The biggest example from the sequence above is the transition from relying mainly on bronze for tools and weapons to iron. Iron, it turns out, is not objectively better than bronze for many applications. Bronze is actually a very useful metal – it can be cast, it is easy to work with, it is strong, and it doesn’t rust. That last feature, not rusting, makes it superior to iron for many applications, even into the Renaissance (until the development of stainless steel). Bronze is actually stronger than iron and can be worked more easily, at a lower temperature. Until the development of carbon steel, there was no reason to favor iron over bronze. Why, then, did the change happen?

Continue Reading »

Comments: 0

Nov 28 2022

The Challenge of Green Aviation

There is some good news when it comes to decarbonizing our civilization (reducing the amount of CO2 from previously sequestered carbon that our industries release into the atmosphere) – we already have the technology to accomplish most of what we need to do. Right now 63.3% of the world’s electricity generation comes from fossil fuels. We have the technology, through wind, solar, geothermal, hydroelectric, and nuclear power, to completely replace this if we wanted to. We can debate the quickest and most cost-effective path, but there are many options that will work.

About 84.3% of total energy used by the world, however, is from fossil fuel. This includes not only electricity but also transportation, heating, and industrial uses (other than through electricity). Of the transportation sector, 92% is ground vehicles (cars, trucks, and shipping). Battery electric vehicle technology is now more than capable of being the primary option for most users, with ranges of more than 300 miles for passenger cars and 500 miles for trucks. Prices still need to come down, but they will as production ramps up.

Another way to look at this is that 73.2% of our carbon footprint comes from all energy, 18.4% from agriculture, 3.2% from waste, and 5.2% from direct industrial processes (like making cement and steel). Agricultural, waste, and industrial sources of carbon are complex, and these mostly require technological advances that we will hopefully chip away at over the next few decades. But we can rapidly eliminate that 73.2% from energy if we want to, with the exception of the 8% of transportation carbon from aviation. That remains a tough nut to crack.

The challenge of aviation is that jets and planes need to be light and have limits on size, so they require an energy source with high energy density (energy per volume) and specific energy (energy per mass), more so than ground transportation. Right now the optimal fuel for those two features is hydrocarbons. This means that the best option for greener aviation is using biofuels (sustainable aviation fuel). Biofuels can be used with existing aircraft and have similar energy density and specific energy to existing fuels. The carbon footprint is usually not zero, but is much lower than that of fossil fuels; it depends on the feedstock and the growing methods used. There are also land and water-use issues with mass-producing biofuels for aviation or other purposes. The best options are those that use waste feedstock.
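The specific-energy gap can be quantified with rough ballpark figures. The numbers below are approximate and my own assumption (not from the post): jet fuel carries roughly 43 MJ per kilogram, while current lithium-ion battery packs manage on the order of 0.8 MJ per kilogram:

```python
# Approximate specific energies (MJ per kg) -- ballpark values, my assumption:
JET_FUEL_MJ_PER_KG = 43.0
LI_ION_PACK_MJ_PER_KG = 0.8

fuel_mass_kg = 50_000  # hypothetical fuel load for a long-haul airliner
energy_mj = fuel_mass_kg * JET_FUEL_MJ_PER_KG

# Battery mass needed to carry the same onboard energy:
battery_mass_kg = energy_mj / LI_ION_PACK_MJ_PER_KG
print(round(battery_mass_kg / fuel_mass_kg))  # ~54x the mass for the same energy
```

Electric drivetrains are more efficient than jet engines, which claws back part of that factor, but the remaining mass penalty is why batteries are not currently viable for long-haul flight and why drop-in hydrocarbon biofuels are attractive.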

Continue Reading »

Comments: 0

Nov 23 2022

Closed Loop Pumped Hydro

I have been writing a lot recently about global warming and energy infrastructure. This is partly because there is a lot of news coming out of COP27, but also because both here and on the SGU there has been some lively and informative discussion on the issue. Also, this is a very complex issue and as people raise new points it sends me down different rabbit holes of information. I am trying to develop the most complete and objective picture I can of the situation.

The goal, of course, is to rapidly decarbonize the energy infrastructure of the world. We not only need to do this, we need to do it quickly and cost-effectively. Further, we need a plan for the next 30 years, and essentially we don’t have any second chances left. If we want to stay as far below 2.0 C temperature rise as possible, and even shoot for that rapidly fading hope of keeping below 1.5 C, then we have one shot. This means that if we have to course correct after 20 years, this may still improve the situation but will likely be too late to meet our climate goals.

I find the most compelling arguments from experts to be those that advocate essentially doing everything. We should pick the low-hanging fruit, do all the win-wins, but also hedge our bets. If anything, we want to overshoot.

One contentious issue has been whether or not it is feasible and advisable to plan on a 100% renewable energy infrastructure. The conversation gets complicated by some technical terms, so let me define them here.

Continue Reading »

Comments: 0

Nov 22 2022

Genes and Language

There are now approximately 8 billion people on the planet. In addition, there are over 7,100 languages spoken on Earth. One question for anthropologists and linguists is – how closely do genetic relationships match language relationships? Both language and genes are generally inherited from our parents – well, genes absolutely, but language generally. It makes sense, then, that a map of genetic relatedness would closely follow a map of linguistic relatedness. If we zoom out from a single family to a population, the question becomes a bit more complex. Populations can mix genes with other populations. Two populations that diverged relatively recently from a common population will likely be genetically similar, and even if their current languages differ, those languages likely share a common root and therefore many similarities.

What happens, then, when scientists overlay the genetic and linguistic maps of humanity? A recent study does just that. To do this they compiled a massive database called GeLaTo, or Genes and Languages Together. GeLaTo includes data from “4,000 individuals speaking 295 languages and representing 397 genetic populations.” That is fairly robust, but there is also lots of room to continue adding information to the database, bringing more precision and detail to any analysis.

What they found is that the match between genes and language is very good, about 80%. However, that still leaves 20% of identified genetic populations with a language mismatch. How does this happen? It doesn’t take much imagination to think of a scenario where a population takes on the language of another population in their region that is genetically distinct. For example:

Some peoples on the tropical eastern slopes of the Andes speak a Quechua idiom that is typically spoken by groups with a different genetic profile who live at higher altitudes. The Damara people in Namibia, who are genetically related to the Bantu, communicate using a Khoe language that is spoken by genetically distant groups in the same area. And some hunter-gatherers who live in Central Africa speak predominantly Bantu languages without a strong genetic relatedness to the neighboring Bantu populations.

Continue Reading »

Comments: 0

Nov 21 2022

Artificial Muscles

There are some situations in which biology is still vastly superior to any artificial technology. Think about muscles. They are actually quite amazing. They can rapidly contract with significant force and then immediately relax. They can also vary their contraction strength smoothly along a wide continuum. Further, they are soft and silent. No machine can come close to their functionality.

In engineering parlance, a muscle is an actuator – a component that causes part of the machine to move. Boston Dynamics has produced some impressive results using standard actuators, but even their robots’ movements tend to be, well, robotic – a bit jerky and stilted. Compare that to the movements of a jaguar, for example. Engineers have been working on developing muscle-like actuators for years, with some progress but far from ultimate success.

One of the properties of a biological muscle is the force-velocity relationship – the force a muscle can generate decreases as its contraction velocity increases. A second is the force-length relationship – the force a muscle can generate depends on its length, peaking near its optimal resting length. As a recent study points out:

However, it still remains a challenge to realize both intrinsic muscle-like force-velocity and force-length properties in one single actuator simultaneously.

In addition to these properties, to be more muscle-like we would need an actuator that can smoothly vary its power and also have soft components. There are other important properties, such as intrinsic response to load (does the system react to a load by contracting), static force (maintaining a load without moving), and the strength of the material used (how much of a strain can it take). Researchers, therefore, have been essentially trying to duplicate the structure and function of actual muscle to achieve all these properties. In the above study, for example:

This study presents a bioinspired soft actuator, named HimiSK (highly imitating skeletal muscle), designed by spatially arranging a set of synergistically contractile units in a flexible matrix similar to skeletal musculature. We have demonstrated that the actuator presents both intrinsic force-velocity and force-length characteristics that are very close to biological muscle with inherent self-stability and robustness in response to external perturbations.

Continue Reading »

Comments: 0
