Oct 15 2021

Superionic Ice and Magnetic Fields

Published under Astronomy

Some planets have planetary magnetic fields, while others don’t. Mercury has a weak magnetic field, while Venus and Mars have no significant magnetic field. This was bad news for Mars (or any critters living on Mars in the past), because the lack of a significant magnetic field allowed the solar wind to slowly strip away most of its atmosphere. Life on Earth enjoys the protection of a strong planetary magnetic field, which shields us from the solar wind and charged-particle radiation.

The Earth’s magnetic field is created by molten iron circulating in the outer core. Moving electrical charges generate magnetic fields, and molten iron is electrically conductive. This is called the dynamo theory, with the vast momentum of the spinning, convecting iron translating some of its energy into a magnetic field. (In fact, seismic evidence suggests the solid inner core rotates a little faster than the rest of the Earth.) A similar dynamo is likely the source of Mercury’s weak magnetic field.

The two gas giants have magnetic fields, with Jupiter having the strongest field of any planet (the sun has the strongest magnetic field in the solar system). Its largest moon, Ganymede, also has a weak magnetic field, making it the only moon in our solar system known to have one. Jupiter’s magnetic moment is about 20,000 times Earth’s. It’s massive and powerful. The question is – what is generating the magnetic field inside Jupiter? It’s probably not a molten iron core, like on Earth. Based on Jupiter’s mass and other features, astronomers suspect that the field is generated by metallic hydrogen deep in its interior. Under extreme pressure, even at high temperatures, hydrogen can become a metallic liquid, capable of conducting electricity and therefore of generating a magnetic field. This is likely also the source of Saturn’s magnetic field, although Saturn’s field is slightly weaker than Earth’s.

Continue Reading »


Oct 14 2021

Lack of Infrastructure Killed Early Electric Car

At the turn of the 20th century there were three relatively equal contenders for automobile technology: electric cars, steam power, and the internal combustion engine (ICE). It was not obvious at the time which technology would emerge dominant, or even whether they would all continue to share the market. By 1905, however, the ICE began to dominate, and by 1920 electric cars had fallen out of production. The last steam car company ended production in 1930, perhaps later than you might have guessed.

This provides an excellent historical case for debate over which factors ultimately determined the winner of this marketplace competition (right up there with VHS vs Betamax). We will never definitively know the answer – we can’t rerun history with different variables to see what happens. Also, the ICE won out the world over because the international industry consolidated around that choice, meaning that other countries were not truly independent experiments.

The debate comes down to internal vs external factors – the inherent attributes of each technology vs infrastructure. Each technology had its advantages and disadvantages. Steam engines worked just fine, and had the advantage of being flexible in terms of fuel. These were external combustion engines: the combustion took place outside the engine itself, in a boiler that produced the steam to power the engine. Steam cars were more powerful than ICE cars, and also quieter and (depending on their configuration) less polluting. They had better torque characteristics, obviating the need for a transmission. The big disadvantage was that they needed water for the boiler, which required either a condenser or frequent topping off. They could also take a few minutes to get up to operating temperature, though this problem was solved in later models with a flash boiler.

Continue Reading »


Oct 12 2021

Making Proteins with Plant Molecular Farming

As the world is contemplating ways to make its food production systems more efficient, productive, sustainable, and environmentally friendly, biotechnology is probably our best tool. I won’t argue it’s our only tool – there are many aspects of agriculture and they should all be leveraged to achieve our goals. I simply don’t think that we should take any tools off the table because of misguided philosophy, or worse, marketing narratives. The most pernicious such philosophy is the appeal to nature fallacy, where some arbitrary and vague sense of what is “natural” is used to argue (without or even against the evidence) that some options are better than others. We don’t really have this luxury anymore. We need to follow the science.

Essentially, we should not fear genetic technology. Genetically modified and gene-edited crops have proven to be entirely safe and can offer significant advantages in our quest for better agriculture. The technology has also proven useful in medicine and industry through the use of genetically modified microorganisms, like bacteria and yeast, for industrial-scale production of certain proteins. Insulin is a great example, and is essential to the modern treatment of diabetes. The cheese industry is now mostly dependent on enzymes, such as chymosin, produced by GMO microorganisms.

This, by the way, is often the “dirty little secret” of many legislative GMO initiatives. They usually include carve-out exceptions for critical GMO applications. In Hawaii, perhaps the most anti-GMO state, the regulations exclude GMO papayas, because they saved the papaya industry from the ringspot virus – Hawaii apparently is not so dedicated to its anti-GMO bias that it would be willing to kill off a vital industry. Vermont passed the most aggressive GMO labeling law in the States, but made an exception for the cheese industry. These exceptions are good, but they show the hypocrisy in the anti-GMO crowd – “GMOs are bad (except when we can’t live without them).”

Continue Reading »


Oct 11 2021

Neurofeedback Headbands for Stress Reduction

A recent BBC article discusses the emergence of products designed to use neurofeedback for stress reduction. The headline asks, “Smart headbands claim to make people calmer. Do they work?” However, the article does not really answer the question, or even get to the heart of the issue. It mostly provides anecdotes and opinions without putting the technology into a clear context. The article focuses mainly on the use of such devices to allegedly improve sports performance.

There are a few premises on which the claims made for such devices are based, varying from well established to questionable. One premise is that we can measure “stress” in the brain using an electroencephalograph (EEG) to measure the electrical activity in the brain. This claim is mostly true, but some important background is necessary to understand what it means. First, we need to define “stress”. Functionally, when researchers talk about mental stress they mean one of two things: either the stress that results from an immediate physical threat, or the mental stress that results from engaging in a challenging mental task (like doing math in your head while being distracted). For practical purposes, the research on EEGs and mental stress uses the challenging-mental-task model.

Is this, however, a good representation of stress generally? It is a convenient research paradigm, but how well it generalizes to mental stress is questionable. It can produce objective measures of physiological stress, such as secretion of stress hormones, which is partly why it’s convenient for research and not unreasonable, but it is only one representation of mental stress and might not translate to all “stressful” situations (like sports).

Can EEGs measure this type of mental stress? Yes – a relaxed mind with eyes closed produces a lot of regular alpha waves. A more active mind (and one with eyes open) produces more theta waves and chaotic brainwave activity. EEGs can therefore tell the difference between relaxed and active. How about not just active but stressed? That is trickier, but there are studies which appear to show some statistical differences in the wave patterns regionally with mental stress. So the premise that EEGs can measure certain kinds of mental stress is reasonable, but not as simple as often implied. This also does not necessarily mean that commercial devices claiming to measure EEG markers of stress work.
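To make this concrete, here is roughly the kind of computation involved – a minimal sketch in Python with a synthetic signal, where the 8–12 Hz alpha band and 250 Hz sampling rate are typical textbook values, not any vendor’s actual algorithm. It estimates what fraction of EEG power falls in the alpha band:

```python
# Minimal sketch: estimate relative alpha-band power from an EEG trace.
# The 8-12 Hz alpha band and 250 Hz sampling rate are typical textbook
# values, not taken from any commercial headband's algorithm.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def relative_alpha_power(eeg: np.ndarray) -> float:
    """Fraction of 1-40 Hz power that falls in the 8-12 Hz alpha band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)  # 2-second windows
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / total

# Synthetic demo: a "relaxed" trace dominated by a 10 Hz alpha rhythm,
# versus an "active" trace of broadband noise with no dominant rhythm.
t = np.arange(0, 30, 1 / FS)
rng = np.random.default_rng(0)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
active = rng.standard_normal(t.size)

print(f"relaxed: {relative_alpha_power(relaxed):.2f}")  # high alpha fraction
print(f"active:  {relative_alpha_power(active):.2f}")   # low alpha fraction
```

A real device faces the much harder problem described above: mapping statistical, regional differences in these band powers onto something as fuzzy as “stress”.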

Continue Reading »


Oct 08 2021

Hacking the Brain to Treat Depression

A new study published in Nature looks at a closed-loop implanted deep brain stimulator to treat severe, treatment-resistant depression, with very encouraging results. This is a report of a single patient, which makes it a useful proof of concept.
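The core logic of a closed-loop stimulator is worth sketching, because “closed loop” is the key idea: the device records brain activity, detects a symptom-linked biomarker, and stimulates only when that marker is present. Below is a minimal toy illustration in Python – the function names, threshold, and random “biomarker” are all hypothetical stand-ins, not the study’s actual protocol.

```python
# Toy sketch of closed-loop stimulation logic. All names, values, and
# signals are hypothetical stand-ins, not the study's actual protocol.
import random

BIOMARKER_THRESHOLD = 0.7  # hypothetical normalized symptom-marker level

def read_biomarker() -> float:
    # Stand-in for detecting a symptom-linked neural signal.
    return random.random()

def deliver_stimulation(duration_s: float) -> None:
    # Stand-in for a brief stimulation burst from the implanted electrode.
    print(f"stimulating for {duration_s} seconds")

for _ in range(10):  # a real device runs this loop continuously
    if read_biomarker() > BIOMARKER_THRESHOLD:
        # Stimulate only when the symptom marker is detected; responding
        # on demand, rather than constantly, is what closes the loop.
        deliver_stimulation(duration_s=6.0)
```

An open-loop stimulator, by contrast, delivers stimulation on a fixed schedule regardless of brain state.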

Severe depression can profoundly limit one’s life and increase the risk of suicide (depression affects 300 million people worldwide and contributes to most of the 800,000 annual suicides). Depression, like many mental disorders, is very heterogeneous, and is therefore not likely to be one specific disorder but a class of disorders with a variety of neurological causes. It also exists on a spectrum of severity, and it’s very likely that mild to moderate depression is phenomenologically different from severe depression. Severe depression can also be, in some cases, very treatment resistant, which simply means our current treatment options are probably not addressing the actual brain dysfunction that is causing it. We clearly need more options.

The pharmacological approach to severe depression has been very successful, but it is still not effective in all patients. For “major” depression, which is severe enough to impact a person’s daily life, pharmacological therapy and talk therapy (such as CBT – cognitive behavioral therapy) seem to be equally effective. But again, these are statistical comparisons. Treatment needs to be individualized.

Continue Reading »


Oct 07 2021

Map of the Primary Motor Cortex Published

By now, especially if you are a regular reader here, you have probably heard of the connectome project, an attempt to entirely map the cells and connections of the human brain. This goal actually comprises multiple initiatives, one of which is the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, funded by the NIH. They have now published in Nature their first major result – a map of the mammalian primary motor cortex (technically a “multimodal cell census and atlas of the mammalian primary motor cortex”).

The goal of this initiative is to break the brain down into its constituent parts and then see how they all fit together. This begins with knowing all the different brain cell types, which is part of the string of publications they have produced. The brain contains about 160 billion cells, with 87 billion neurons and the rest glial cells such as astrocytes (which provide supporting and modulating functions). There are many different kinds of neurons, with significant functional differences. Neurons differ in both their structure and their chemistry.

The basic structure of a neuron is a cell body with dendrites (hair-like projections) for incoming signals and axons (longer projections) for outgoing signals. But the shape, number, and arrangement of dendrites and axons can vary considerably, and reflect a neuron’s function, which relates to the pattern of connections it makes. Neurons also differ in terms of their biochemistry – which neurotransmitters they make and which neurotransmitter receptors they have. Some neurotransmitters, like glutamate, are activating (they make neurons fire faster), and others, like GABA, are inhibitory (they make neurons fire slower or not at all).
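The excitatory/inhibitory distinction is easy to see in a toy neuron model. The sketch below is a textbook leaky integrate-and-fire model with illustrative constants (not measurements of any real neuron): excitatory drive pushes the cell past its firing threshold, while added inhibition silences it.

```python
# Leaky integrate-and-fire toy: excitatory input pushes the membrane
# voltage toward threshold; inhibitory input pulls it away. Constants
# are illustrative textbook values, not any real neuron's parameters.
DT, TAU = 0.1, 10.0              # time step and membrane time constant (ms)
V_REST, V_THRESH = -70.0, -55.0  # resting and threshold potentials (mV)

def count_spikes(drive: float, steps: int = 5000) -> int:
    v, spikes = V_REST, 0
    for _ in range(steps):
        # Leak pulls v back toward rest; net input drives it up or down.
        v += DT * (-(v - V_REST) + drive) / TAU
        if v >= V_THRESH:  # threshold crossed: fire and reset
            spikes += 1
            v = V_REST
    return spikes

print("excitatory drive only:", count_spikes(drive=20.0))         # fires often
print("plus inhibition      :", count_spikes(drive=20.0 - 15.0))  # silenced
```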

Continue Reading »


Oct 05 2021

2021 Nobel Prize in Physics

Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi share this year’s Nobel Prize in Physics for their work increasing our understanding of how complex systems work. This is a powerful tool for understanding the world, which reminds me of previous advances in our understanding of how gases behave.

Gases are a phase of matter in which high-energy particles bounce around at random. It would be impossible to predict the path of any individual gas molecule. However, collectively, all of this random complexity follows very predictable laws. Similarly, weather is a very complex system. We can predict weather that is about to happen, but beyond a few days it becomes increasingly difficult. The system is simply too chaotic. However, climate (long-term weather trends) follows theoretically predictable patterns. The trick is to see the hidden patterns in the chaos, and that is the work these three physicists did.
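This idea – unpredictable individuals, predictable ensembles – is easy to demonstrate numerically. In the toy simulation below (the particle count, temperature, and molecular mass are arbitrary illustrative choices), each molecule’s kinetic energy is random, yet the ensemble average lands right on kinetic theory’s prediction of (3/2)kT:

```python
# Unpredictable individuals, predictable ensemble: sample random molecular
# velocities and compare one molecule's kinetic energy to the average.
# Temperature and molecular mass are arbitrary illustrative choices.
import numpy as np

K_B = 1.380649e-23       # Boltzmann constant (J/K)
T, M = 300.0, 4.65e-26   # 300 K; approximate mass of an N2 molecule (kg)

rng = np.random.default_rng()
# In an ideal gas each velocity component is Gaussian with variance kT/m
# (the Maxwell-Boltzmann distribution).
v = rng.normal(0.0, np.sqrt(K_B * T / M), size=(1_000_000, 3))
kinetic = 0.5 * M * (v**2).sum(axis=1)

print("one molecule :", kinetic[0])       # different every run
print("ensemble mean:", kinetic.mean())   # stable, ~(3/2) k T
print("kinetic theory prediction:", 1.5 * K_B * T)
```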

Manabe and Hasselmann share half the prize for their work on climate models:

Syukuro Manabe demonstrated how increased levels of carbon dioxide in the atmosphere lead to increased temperatures at the surface of the Earth. In the 1960s, he led the development of physical models of the Earth’s climate and was the first person to explore the interaction between radiation balance and the vertical transport of air masses. His work laid the foundation for the development of current climate models.

About ten years later, Klaus Hasselmann created a model that links together weather and climate, thus answering the question of why climate models can be reliable despite weather being changeable and chaotic. He also developed methods for identifying specific signals, fingerprints, that both natural phenomena and human activities imprint in the climate. His methods have been used to prove that the increased temperature in the atmosphere is due to human emissions of carbon dioxide.
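To give a flavor of what “radiation balance” means, here is the simplest possible energy-balance model in Python – a zero-dimensional toy, far cruder than Manabe’s radiative-convective models, with textbook illustration values for albedo and effective emissivity. The planet’s temperature settles where absorbed sunlight equals outgoing thermal radiation:

```python
# Zero-dimensional energy balance: absorbed sunlight = emitted infrared.
#   S0 * (1 - albedo) / 4 = emissivity * sigma * T**4
# A toy far cruder than Manabe's models; albedo 0.3 and an effective
# emissivity near 0.61 are textbook illustration values.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0             # solar constant (W m^-2)
ALBEDO = 0.3

def equilibrium_temp(emissivity: float) -> float:
    absorbed = S0 * (1 - ALBEDO) / 4  # sunlight averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print("bare rock, no greenhouse:", equilibrium_temp(1.0))   # ~255 K
print("with greenhouse effect  :", equilibrium_temp(0.61))  # ~288 K
```

Even this crude balance shows the role of the greenhouse effect; Manabe’s contribution was adding the vertical structure of the atmosphere – radiation and convection at every level – to models like this.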

Continue Reading »


Oct 04 2021

Incremental Advance for Quantum Computing

Quantum computing is an exciting technology with tremendous potential. However, at present that is exactly what it remains – potential, without any current application. It’s actually a great example of the challenges of trying to predict the future. If quantum computing succeeds, the implications could be enormous. But at present there is no guarantee that quantum computing will become a reality, nor any way to know how long it will take. So if we try to imagine the world 50 or 100 years in the future, quantum computing is a huge variable we can’t really predict at this point.

The technology is moving forward, but significant hurdles remain. I suspect that for the next 2-3 decades the “coming quantum computer revolution” will be similar to the “coming hydrogen economy,” in that it never came. But the technology continues to progress, and it might come yet.

What is quantum computing? Here is the quick version – a quantum computer exploits the weird properties of quantum mechanics to perform computing operations. Instead of classical “bits,” where a unit of information is either a “1” or a “0,” a quantum bit (or qubit) exists in a state of quantum superposition – a weighted combination of 0 and 1 rather than definitely one or the other. This means that qubits collectively encode vastly more information than classical bits: the joint state of n qubits is described by 2^n numbers, so the capacity grows exponentially as qubits are added. A theoretical quantum computer with one million qubits could perform operations in minutes that would take a universe full of classical supercomputers billions of years to perform (in other words, operations that are essentially impossible for classical computers). It’s no wonder that IBM, Google, China, and others are investing heavily in this technology.
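The scaling is easy to see with a classical simulation of qubit states. This is standard state-vector bookkeeping, and the memory figures are just arithmetic: every added qubit doubles the number of amplitudes needed to describe the machine’s state.

```python
# An n-qubit state is a vector of 2^n complex amplitudes, so merely
# *writing down* the state doubles in size with every added qubit.
import numpy as np

def zero_state(n_qubits: int) -> np.ndarray:
    """State vector |00...0>: 2**n_qubits complex amplitudes."""
    state = np.zeros(2**n_qubits, dtype=np.complex128)
    state[0] = 1.0
    return state

print(zero_state(2))  # 2 qubits -> 4 amplitudes

for n in (10, 30, 50):
    n_bytes = 2**n * 16  # complex128 = 16 bytes per amplitude
    print(f"{n} qubits -> 2^{n} amplitudes, {n_bytes / 2**30:,.3g} GiB")
```

At 30 qubits the state vector already needs 16 GiB of memory; at 50 qubits it needs about 16 million GiB, which is why even modest quantum computers cannot be brute-force simulated classically.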

But significant technological hurdles remain. Quantum computing operations leverage quantum entanglement (where the physical properties of particles are linked) among the qubits in order to get to the desired answer, but that answer is only probabilistic. In order to know that a quantum computer is working at all, researchers check its answers with a classical computer. Current quantum computers run at about a 1% error rate. That sounds low, but for a computer it’s huge, essentially rendering the machine useless for any large calculation (the kind quantum computers would actually be useful for).
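The arithmetic behind that is stark. If each operation independently succeeds with probability 0.99, the chance that an entire computation runs without error is 0.99^n, which collapses as the operation count grows (the counts below are arbitrary illustration values):

```python
# Why a ~1% per-operation error rate is crippling: the chance of an
# error-free run is 0.99**n. (Operation counts are arbitrary examples.)
for n_ops in (10, 100, 1_000, 10_000):
    print(f"{n_ops:>6,} operations -> {0.99**n_ops:.2e} chance of a clean run")
```

By 10,000 operations the odds of a clean run are about 2e-44 – effectively zero – which is why error correction, encoding one logical qubit in many physical qubits, is considered essential.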

Continue Reading »


Oct 01 2021

Active Learning Is Best

Published under Education

There is pretty broad agreement that the pandemic was a net negative for learning among children. Schools are an obvious breeding ground for viruses, with hundreds or thousands of students crammed into the same building, mixing into different groups in different classes, and with teachers systematically exposed to many different students and their possibly virus-laden droplets. Wearing masks, social distancing, and using plexiglass barriers reduce the spread, but not enough in the middle of a pandemic surge. Only vaccines will make schools truly safe.

So it was reasonable, especially in the early days of the pandemic, to convert schooling to online classes until the pandemic was under control. The problem was that most schools were simply not ready for this transition. The worst problem was for those students who did not have access to a computer and the internet at home. The pandemic helped expose and exacerbate the digital divide. But even for students with good access, the experience was generally not good. Many teachers were not prepared to adapt their classes for online learning. Many parents did not have the ability to stay at home with their kids to monitor them. And many children were simply bored and not learning.

This is a classic infrastructure problem. Many technologies do not function well in a vacuum. You can’t have cars without roads, traffic control, licensing, safety regulations, and fueling stations. Mass online learning also requires significant infrastructure that we simply didn’t have.

Continue Reading »


Sep 30 2021

YouTube Bans Anti-Vax Videos

Last year YouTube (owned by Google) banned videos spreading misinformation about the COVID vaccines. The policy has resulted in the removal of over 130,000 videos. Now they have announced that they are extending the ban to misinformation about any approved vaccine. The move is sure to provoke strong reactions and opinions on both sides, which I think is reasonable. The decision reflects a genuine dilemma of modern life with no perfect solution, and amounts to a “pick your poison” situation.

The case against big tech companies that control massive social media outlets essentially censoring certain kinds of content is probably obvious. It puts a lot of power into the hands of a few companies. It also goes against the principle of a free marketplace of ideas. In a free and open society, people enjoy freedom of speech and the right to express their ideas, even (and especially) if those ideas are unpopular. There is also a somewhat valid slippery-slope argument to make – once the will and mechanisms are in place to censor clearly bad content, mission creep can slowly encroach on more and more opinions.

There is also, however, a strong case to be made for this kind of censorship. There has never been a time in our past when anyone could essentially claim the right to a massive megaphone capable of reaching most of the planet. Our approach to free speech may need to be tweaked to account for this new reality. Further, no one actually has a right to speech on any social media platform – these are not government sites, nor are they owned by the public. They are private companies that have the right to do as they wish. The public, in turn, has the power to “vote with their dollars” and choose not to use or support any platform whose policies they don’t like. So the free market is still in operation.

Continue Reading »

