Oct 10, 2024
How certain are you of anything that you believe? Do you even think about your confidence level, and do you have a process for determining what your confidence level should be, or do you just follow your gut feelings?
Thinking about confidence is a form of metacognition – thinking about thinking. It is something, in my opinion, that we should all do more of, and it is a cornerstone of scientific skepticism (and all good science and philosophy). As I like to say, our brains are powerful tools, and they are our most important and all-purpose tool for understanding the universe. So it’s extremely useful to understand how that tool works, including all its strengths, weaknesses, and flaws.
A recent study focuses on one tiny slice of metacognition, but an important one – how we form confidence in our assessment of a situation or question. More specifically, it highlights the “illusion of information adequacy.” This is yet another form of cognitive bias. The experiment divided subjects into three groups – one group was given one half of the information about a specific situation (the information that favored one side), while a second group was given the other half. The control group was given all the information. Subjects were then asked to evaluate the situation and to rate how confident they were in their conclusions. They were also asked whether they thought other people would come to the same conclusion.
You can probably see this coming – the subjects in the test groups receiving only half the information felt that they had all the necessary information to make a judgement and were highly confident in their assessment. They also felt that other people would come to the same conclusion as they did. And of course, the two test groups came to the conclusion favored by the information they were given.
Continue Reading »
Oct 08, 2024
I’m going to do something I rarely do and make a straight-up prediction – I think we are close to having AI apps that will function as our all-purpose digital assistants. That’s not really a tough call – we already have digital assistants, and they are progressing rapidly. So I am just extending an existing trend a little bit into the future. My real prediction is that they will become popular and people will use them. Predicting technology is often easier than predicting public acceptance and use (see the Segway and many other examples). So this is the riskier part of the prediction.
I know – for those who lived through the early days of personal computers, the mere mention of a “personal digital assistant” immediately brings the specter of “Clippy” to mind. Such assistants have a bad reputation, which has something to do with the fact that they can be intrusive and annoying, coupled with the fact that they are not that useful. Siri and similar apps are great for a few things – acting as a verbal interface for Google searches, serving up music, or handling basic functions like setting an alarm on your phone. But I am talking next level. Siri is to the AI-fueled assistants I am talking about as the PDAs of the 80s and 90s are to the smartphones of today.
With that analogy I am getting into really tricky prediction territory – predicting a transformative or disruptive technology. This is the kind of technology that, shortly after you start using it regularly, you can no longer conceive of life without. Nor would you want to go back to the dark days before your life was transformed. Think of the microwave, the ability to record and play pre-recorded content for TV, the web, GPS, and the smartphone. This is what the Segway wanted to be.
Continue Reading »
Oct 07, 2024
Scientists have just published in Nature the complete connectome of a fruit fly: Network statistics of the whole-brain connectome of Drosophila. The map includes 140,000 neurons and more than 50 million connections. This is an incredible achievement that marks a milestone in neuroscience and is likely to advance research across the field.
A “connectome” is a complete map of all the neurons and all the connections in a brain. The ultimate goal is to map the entire human brain, which has about 86 billion neurons and roughly 100 trillion connections – around six orders of magnitude more than the drosophila. The Human Connectome Project was started in 2009 through the NIH, and today there are several efforts contributing to this goal.
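For a sense of the gap between the fly map and that ultimate goal, here is a quick back-of-the-envelope comparison in Python using the figures quoted above (the human numbers are the usual rough estimates, not measured values):

```python
import math

# Figures quoted above: the completed fruit fly connectome vs. the estimated human brain
fly_neurons = 140_000
fly_connections = 50_000_000
human_neurons = 86_000_000_000            # ~86 billion
human_connections = 100_000_000_000_000   # ~100 trillion

# How many orders of magnitude separate the two brains?
neuron_ratio = human_neurons / fly_neurons
connection_ratio = human_connections / fly_connections
print(f"Neurons:     {neuron_ratio:,.0f}x ({math.log10(neuron_ratio):.1f} orders of magnitude)")
print(f"Connections: {connection_ratio:,.0f}x ({math.log10(connection_ratio):.1f} orders of magnitude)")
```

The neuron count differs by a factor of about 600,000 (nearly six orders of magnitude) and the connection count by a factor of about two million (just over six), which is where the “around six orders of magnitude” figure comes from.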
Right now we have what is called a mesoscale connectome of the human brain. This is more detailed than a macroscopic map of human brain anatomy, but not as detailed as a microscopic map at the neuronal and synapse level. It’s in between, so mesoscale. Essentially we have built a mesoscale map of the human brain from functional MRI and similar data, showing brain regions and types of neurons at the millimeter scale and their connections. We also have mesoscale connectomes of other mammalian brains. These are highly useful, but the more detail we have obviously the better for research.
We can mark progress on developing connectomes in a number of ways – how the technology is improving, how much detail we have on the human brain, and how complex the most complex brain we have fully mapped is. That last metric just got a new entry – the fruit fly, or drosophila, brain.
Continue Reading »
Oct 03, 2024
It is now generally accepted that 66 million years ago a large asteroid smacked into the Earth, creating the large Chicxulub crater off the coast of Mexico. This was a catastrophic event, affecting the entire globe. Fire rained down, causing forest fires across much of the planet, while ash and debris blocked out the sun. A tsunami washed over North America – one site in North Dakota contains fossils from the day the asteroid hit, including fish with embedded impact debris. About 75% of species went extinct as a result, including all non-avian dinosaurs.
For some time there has been an alternative theory that intense volcanism at the Deccan Traps, in what is now India, is what did in the dinosaurs, or at least set them up for the final coup de grace of the asteroid. I think the evidence strongly favors the impact hypothesis, and that is the way scientific opinion has been moving – although the debate is by no means over, a majority of scientists now accept the asteroid hypothesis.
But there is also a wrinkle to the impact theory – perhaps there was more than one asteroid impact. I wrote in 2010 about this question, mentioning several other candidate craters that seem to date to around the same time. Now we have a new candidate for a second KT impact – the Nadir crater off the coast of West Africa.
Geologists first published about the Nadir crater in 2022, discussing it as a candidate crater. They wrote at the time:
“Our stratigraphic framework suggests that the crater formed at or near the Cretaceous-Paleogene boundary (~66 million years ago), approximately the same age as the Chicxulub impact crater. We hypothesize that this formed as part of a closely timed impact cluster or by breakup of a common parent asteroid.”
Continue Reading »
Oct 01, 2024
You have definitely heard of electronics. You may (if you are a tech nerd like me) have heard of spintronics and photonics. Now there is also the possibility of orbitronics. What do these cool-sounding words mean?
Electronic technology is one of those core technologies that has transformed our civilization. Prior to harnessing electricity and developing electrical engineering, we essentially had steampunk – mechanical, steam-powered technology. Electronics, and the electricity to power them, opened the door to countless gadgets, from electric lights and appliances to handheld devices and eventually computer technology and the internet. I am occasionally reminded of how absolutely essential electricity is to my daily life during power outages. I get a brief glimpse of a pre-electronic world and – well, it’s rough. And that’s just a taste, without the real drudgery that a prolonged life without power would require.
Increasingly, electronic devices are computerized, with embedded chips, possibly leading to the “internet of things.” Data centers eat an increasing percentage of our power production, and the latest AI applications will likely dramatically increase that percentage. Power use is now a limiting factor for such technology – it is one of the main arguments against widespread use of cryptocurrencies, for example. To illustrate the situation, Microsoft has just cut a deal to reopen Unit 1 at the Three Mile Island nuclear power plant (not the reactor that melted down – that was Unit 2), with an agreement to purchase all of its power output for 20 years to power its AI data centers.
There is therefore a lot of research into developing computer hardware that is not necessarily faster, smaller, or more powerful, but simply more energy efficient. We are approaching the limits of physics with the energy efficiency of electronic computers, however. Software engineers are also focusing on this issue, trying to create more energy-efficient algorithms. But it would be nice if the hardware itself used less energy. This is one of the big hopes for developing high-temperature superconductors, but we have no idea when, or if, we will develop anything usable in computing.
Continue Reading »
Sep 30, 2024
I can’t resist a good science story involving technology that we can possibly use to stabilize our climate in the face of anthropogenic global warming. This one is a fun story and an interesting, and potentially useful, idea. As we map out potential carbon pathways into the future, focusing on the rest of this century, it is pretty clear that it is going to be extremely difficult to completely decarbonize our civilization. This means we can only slow down, but not stop or reverse global warming. Once carbon is released into the ecosystem, it will remain there for hundreds or even thousands of years. So waiting for natural processes isn’t a great solution.
What we could really use is a way to cost-effectively remove, at scale, CO2 already in the atmosphere (or from seawater – another huge reservoir), to compensate for whatever carbon release we cannot eliminate from industry, and even to reverse some of the CO2 buildup. This is often referred to as carbon capture and sequestration. There is a lot of research in this area, but we do not currently have a technology that fits the bill. Carbon capture remains small scale and expensive. The most useful methods are chemical carbon capture at power plants, which reduces some of the carbon released.
There is, however, a “technology” that cheaply and automatically captures carbon from the air and binds it up in solid form – trees. This is why there is much discussion of planting trees as a climate change mitigation strategy. Trees, however, eventually give up their captured carbon back into the atmosphere, so at best they are a finite carbon reservoir. A 2019 study found that if we restored global forests by planting half a trillion trees, that would capture about 20 years’ worth of CO2 at the current rate of release, or about half of all the CO2 released since 1960 (at least as of 2019). But once those trees matured we would reach a new steady state and further sequestration would stop. This is at least better than continuing to cut down forests and reducing their store of carbon, and tree planting can still be a useful strategy to help buy time as we further decarbonize our technology.
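As a rough sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. The half-trillion-tree and 20-year figures come from the study as described above; the ~37 Gt of CO2 per year is my assumed round number for current global emissions, so treat the output as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope check on the tree-planting numbers above.
annual_emissions_gt = 37     # Gt CO2 per year -- assumed round number, not from the study
years_offset = 20            # the study's "about 20 years' worth of CO2"
trees = 0.5e12               # half a trillion trees

total_capture_gt = annual_emissions_gt * years_offset    # total CO2 the restored forests would hold
co2_per_tree_tonnes = total_capture_gt * 1e9 / trees     # convert Gt to tonnes, then divide by tree count

print(f"Implied total capture: ~{total_capture_gt:.0f} Gt CO2")
print(f"Implied lifetime capture per tree: ~{co2_per_tree_tonnes:.1f} tonnes CO2")
```

That works out to roughly one to two tonnes of CO2 locked up per tree over its lifetime, which is at least the right ballpark for a large, mature tree – consistent with forests acting as a one-time reservoir rather than an ongoing sink.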
Continue Reading »
Sep 26, 2024
Of all the worlds known to humans besides Earth, Mars is likely the most habitable. We have not found any genuinely Earth-like exoplanets – they are almost sure to exist, but we just haven’t found any yet. The closest so far is Kepler-452b, a super-Earth about 60% larger than Earth. It is potentially in the habitable zone, but we don’t know what its surface conditions are like. Within our own solar system, Mars is by far more habitable for humans than any other world.
And still, that’s not very habitable. Its surface gravity is 38% that of Earth, it has no global magnetic field to protect against radiation, and its surface temperature ranges from about -225°F (-143°C) to 70°F (21°C), with a median of about -85°F (-65°C). But things might have been different – and they were, in the past. Once upon a time Mars had a more substantial atmosphere; today its atmosphere is less than 1% as dense as Earth’s. That early atmosphere was not breathable, but it contained CO2, which warmed the planet and allowed liquid water to exist on the surface. A human could likely have walked on the surface of Mars 3 billion years ago with just a face mask and an oxygen tank. But then the atmosphere mostly went away, leaving Mars the dry, barren world we see today. What happened?
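Those parenthetical figures are just the standard Fahrenheit-to-Celsius conversion; a trivial Python check, purely for reference:

```python
def f_to_c(f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

# The Mars surface temperatures quoted above
for f in (-225, -85, 70):
    print(f"{f}°F ≈ {f_to_c(f):.0f}°C")   # -143, -65, and 21 °C respectively
```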
It’s likely that the primary factor was the lack of a global magnetic field like the one we have on Earth. Earth’s magnetic field acts as a protective shield against the solar wind, which is made of charged particles that are mostly diverted away from the Earth or drawn toward the magnetic poles. On Mars the solar wind did not encounter a magnetic field, and it slowly stripped away the Martian atmosphere. If we were somehow able to reconstitute a thick atmosphere on Mars, it too would slowly be stripped away, although that would take thousands of years to become significant, and perhaps millions of years in total.
Continue Reading »
Sep 24, 2024
When we talk about reducing carbon release in order to slow down, and hopefully stop, anthropogenic global warming, much of the focus is on the energy and transportation sectors. There is a good reason for this – the energy sector is responsible for 25% of greenhouse gas (GHG) emissions, while the transportation sector is responsible for 28% (if you separate out energy production rather than folding it into the end-user categories). But that is just over half of GHG emissions, and we can’t ignore the other half. Agriculture is responsible for 10% of GHG emissions, industry for 23%, and residential and commercial activity for 13%. Further, the transportation sector has many components, not just cars and trucks – it includes mass transit, rail, and aviation.
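To keep the sector shares straight, here they are collected in one place, with a quick check that energy plus transportation really is just over half (a minimal Python sketch using only the percentages quoted above):

```python
# GHG emissions by sector, as quoted in the paragraph above (percent of total)
ghg_by_sector = {
    "Transportation": 28,
    "Energy production": 25,
    "Industry": 23,
    "Residential & commercial": 13,
    "Agriculture": 10,
}

energy_and_transport = ghg_by_sector["Energy production"] + ghg_by_sector["Transportation"]
print(f"Energy + transportation: {energy_and_transport}%")        # 53% -- just over half
print(f"All listed sectors:      {sum(ghg_by_sector.values())}%")  # 99% -- remainder is other sources and rounding
```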
Any plan to deeply decarbonize our civilization must consider all sectors. We won’t get anywhere near net zero with just green energy and electric cars. It is tempting to focus on energy and cars because at least there we know exactly what to do, and we are, in fact, doing it. Most of the disagreement is about the optimal path to take and what the optimal mix of green energy options would be in different locations. For electric vehicles the discussion is mostly about how to make the transition happen faster – do we focus on subsidies, infrastructure, incentives, or mandates?
Industry is a different situation, and has been a tough nut to crack, although we are making progress. There are many GHG-intensive processes in industry (like steel and concrete), and each requires different solutions and difficult transitions. Also, the solution often involves electrifying some aspect of industry, which works only if the energy sector is green, and which will increase the demand for clean energy. Conservative estimates are that energy demand will increase by 50% by 2050, but if we are successful in electrifying transportation and industry (not to mention powering all those data centers for AI applications), this estimate may be way off. This is yet another reason why we need an all-of-the-above approach to green energy.
Continue Reading »
Sep 19, 2024
On the SGU we recently talked about aphantasia, the condition in which some people have a decreased or entirely absent ability to form mental images. The term was coined recently, in 2015, by neurologist Adam Zeman, who described the condition of “congenital aphantasia” – being born without mental imagery. After we discussed it on the show we received numerous e-mails from people with the condition, many of whom were unaware that they were different from most other people. Here is one recent example:
“Your segment on aphantasia really struck a chord with me. At 49, I discovered that I have total multisensory aphantasia and Severely Deficient Autobiographical Memory (SDAM). It’s been a fascinating and eye-opening experience delving into the unique way my brain processes information.
Since making this discovery, I’ve been on a wild ride of self-exploration, and it’s been incredible. I’ve had conversations with artists, musicians, educators, and many others about how my experience differs from theirs, and it has been so enlightening.
I’ve learned to appreciate living in the moment because that’s where I thrive. It’s been a life-changing journey, and I’m incredibly grateful for the impact you’ve had on me.”
Perhaps more interesting than the condition itself, and what I want to talk about today, is that the e-mailer was entirely unaware that most of the rest of humanity has a very different experience of their own existence. This makes sense when you think about it – how would they know? How can you know the subjective experience happening inside someone else’s brain? We tend to assume that other people’s brains function similarly to our own, and therefore that their experience must be similar. This is partly a reasonable assumption, and partly projection. We do this psychologically as well – when we speculate about other people’s motivations, we are generally just projecting our own motivations onto them.
Projecting our neurological experience, however, is a little different. What the aphantasia experience demonstrates is a couple of things, beginning with the fact that whatever you experience is simply your normal. We don’t know, for example, if we have a deficit, because we cannot detect what is missing. We can only really know by comparing our experiences with other people’s.
Continue Reading »
Sep 17, 2024
In my book, which I will now shamelessly promote – The Skeptics’ Guide to the Future – my coauthors and I discuss the incredible potential of information-based technologies. As we increasingly transition to digital technology, we can leverage the increasing power of computer hardware and software. This is not just increasing linearly, but geometrically. Further, there are technologies that make other technologies more information-based or digital, such as 3D printing. The physical world and the virtual world are merging.
With current technology this is perhaps most profound when it comes to genetics. The genetic code of life is essentially a digital technology. Efficient gene-editing tools, like CRISPR, give us increasing control over the genetic code. Arguably two of the most dramatic science and technology news stories over the last decade have been advances in gene editing and advances in artificial intelligence (AI). These two technologies also work well together – the genome is a large complex system of interacting information, and AI tools excel at dealing with large complex systems of interacting information. This is definitely a “you got chocolate in my peanut butter” situation.
A recent paper nicely illustrates the synergistic power of these two technologies – Interpreting cis-regulatory interactions from large-scale deep neural networks. Let’s break it down.
Cis-regulatory interactions refer to several regulatory functions of non-coding DNA. Coding DNA, which is contained within genes (genes contain both coding and non-coding elements), directly codes for amino acids, which are assembled into polypeptides and then folded into functional proteins. Remember the ATCG four-letter base code, with each set of three bases coding for a specific amino acid (or a control function, like a stop signal) – this is coding DNA. Non-coding DNA regulates how coding DNA is transcribed and ultimately expressed as proteins.
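To make the coding idea concrete, here is a minimal Python sketch of reading a sequence three bases (one codon) at a time. The tiny codon table covers only a handful of the 64 codons, and the example sequence is invented, purely for illustration:

```python
# Minimal illustration of coding DNA: each three-base codon specifies an amino
# acid (or a control function like "stop"). Only a few of the 64 codons are
# listed here, and the example sequence is made up.
CODON_TABLE = {
    "ATG": "Met (start)",
    "TTT": "Phe",
    "GGC": "Gly",
    "GCA": "Ala",
    "TGA": "STOP",
}

def translate(coding_dna: str) -> list:
    """Read a coding-strand DNA sequence codon by codon until a stop codon."""
    peptide = []
    for i in range(0, len(coding_dna) - 2, 3):
        residue = CODON_TABLE.get(coding_dna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("ATGTTTGGCGCATGA"))   # -> ['Met (start)', 'Phe', 'Gly', 'Ala']
```

Cis-regulatory elements sit outside this triplet-reading machinery – they help determine when, where, and how much a gene is transcribed in the first place.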
Continue Reading »