Sep 20, 2022
Carbon is an extremely useful element. Carbon-containing compounds can be food, fuel, fertilizer, or building material. We also have an overabundance of carbon in the form of CO2 in the atmosphere, with industry producing over 34 billion tons per year. This is why one of the current technological “holy grails” is to develop a cost- and energy-efficient method of recapturing that carbon and feeding it into a useful production stream at industrial scale. That way a pollutant can be turned into a product.
The problem is that CO2 is a stable molecule, and so it costs a lot of energy to break it apart – reversing the reactions that produced the energy in the first place. Specifically, we need to split one oxygen off the CO2 to make CO (carbon monoxide). CO can be used in a variety of useful chemical reactions, making hydrocarbons, for example. The way to make reactions happen on useful industrial scales is with catalysts – molecules that make a reaction go faster (often by orders of magnitude). Of course the reaction also requires energy, because we want to go from a low-energy molecule (CO2) to higher-energy molecules (CO and O2). The challenge has been bringing all these elements together.
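To put a rough number on that energy cost (a standard textbook value, not a figure from the study): splitting one oxygen off CO2 is just the reverse of burning CO, so it requires about

\[
\mathrm{CO_2 \;\rightarrow\; CO + \tfrac{1}{2}\,O_2}, \qquad \Delta H^{\circ} \approx +283\ \mathrm{kJ/mol}.
\]

A catalyst lowers the activation barrier so the reaction proceeds quickly, but that thermodynamic bill still has to be paid – in this case by the electrode.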
A new study introduces a new element into the equation – DNA. This may seem counterintuitive at first, but it makes sense when you put the whole picture together. Researchers at MIT were trying to crack this very specific problem – how do you bring together CO2 dissolved in liquid with a catalyst on the surface of an electrode that will be providing the energy? All these elements need to come together in the most efficient way. Further, catalysts tend to break down with use, so we also need to get the old catalyst off the electrode and replace it with fresh catalyst. You can do this just by diffusing the CO2 and catalyst in the liquid with the electrode and letting randomness get it done, but this is highly inefficient, and efficiency is the game.
Sep 19, 2022
One of the things I enjoy about writing this blog is that it is a conversation. My essay is often just the opening salvo in what turns into an interesting exchange on the topic, and I often learn new facts, gain deeper insight, and if nothing else get better at communicating my ideas. This is why I have a high tolerance for commenters with very different views. I do get rid of the worst trolls that I find are destructive to the conversation, but as my regular commenters know, I set a pretty high bar. I do recommend everyone try to engage meaningfully with other commenters and not just try to “win” with snark and insults. If we all agreed here, the comments would be pretty boring.
Sometimes, however, I feel I have enough to say in response to the comments that a follow-up post is warranted. The conversation about AI art is one of those times, partly because the conversation focused on elements of my post that I feel were ancillary. My post was not really about art. It was about how we respond to disruptive technology, and one way in which some technologies are disruptive. Specifically, some technologies automate the technical aspects of creation, rendering entire sets of skills obsolete (or at least relegating them to a much diminished role). My three examples were woodworking, photography, and the recent AI algorithms that can generate art.
In response, some commenters noted that crafting a chair from wood is not art. Unfortunately this led to a discussion about “what is art”, which is interesting but entirely misses the point. That was not the analogy, and crafting furniture does not have to be art for my analogy to hold. The point was that a profession of skilled artisans was essentially rendered obsolete by modern technology. Sure, there are people who keep the craft alive, and there is a high-end market for hand-crafted items. But the industry has fundamentally changed. A 19th century woodworker would have a hard time finding employment outside a historical village.
Sep 16, 2022
It is difficult to project costs into the future, because there are many variables and small errors magnify over time. But still, statistical modeling can be done and validated to produce reliable estimates that can at least inform our discussion. There have been many methods of modeling the cost of global warming vs the cost of transitioning to net-zero carbon. In general they find that, while there will be costs to transitioning to green technology, there will also be overall savings from reducing global warming.
A new study takes a different approach from previous ones – they do not consider the effects of global warming at all, but rather only the cost of energy itself. This is basically an ROI approach – we will need to invest a lot of money in new infrastructure, but as a result we will have cheaper electricity, so how does that net out? The bottom line is that under every scenario they consider, transitioning to green energy technologies will save billions of dollars per year in energy costs, and trillions over the entire transition. But let’s look at some of the variables they have to consider.
One thing they did differently from prior economic analyses was to try to more accurately model the future costs of green technologies (wind, solar, batteries). Other studies take a conservative approach, but they have all underestimated how quickly the costs of these technologies fall. So the researchers in the new study compared past predictions to actual costs and built a model of cost decline that they validated against historical data. More accurately modeling the likely future decrease in the costs of these technologies increases the likely savings from transitioning to them.
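The heart of that kind of modeling is an experience curve (sometimes called Wright’s law): costs fall by a roughly constant fraction every time cumulative production doubles. Here is a minimal sketch of the idea in Python – the learning rate and capacity figures are illustrative assumptions of mine, not numbers from the study:

# Minimal sketch of an experience-curve ("Wright's law") cost projection.
# The learning rate and capacity figures below are illustrative assumptions,
# not values taken from the study.
import math

def projected_cost(initial_cost, initial_capacity, future_capacity, learning_rate):
    """Cost falls by `learning_rate` for every doubling of cumulative installed capacity."""
    doublings = math.log2(future_capacity / initial_capacity)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Example: if cumulative solar capacity grows 8-fold and costs fall ~20% per doubling,
# a $1.00-per-watt cost drops to about $0.51 per watt (1.0 * 0.8**3).
print(projected_cost(1.00, 1.0, 8.0, 0.20))

Fitting the learning rate to decades of historical price data, rather than assuming a conservative one, is what pushes the projected savings up.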
Sep 15, 2022
Recently an artist named Jason Allen won the Colorado State Fair’s art competition in the category of digital art with a picture (shown) that was created by an AI, the Midjourney software. This has triggered another round of angst over computers taking our jobs. Some have declared it the end of art, or that it will destroy the jobs of working artists. This development can certainly be a job-killer, but we have to get over it. This, in my opinion, is just an extension of the advance of technology, which ruthlessly destroys jobs while creating new jobs and opportunities. We should not waste a moment shedding tears over those lost jobs, but rather put our energy into adapting to the new reality.
I do think it is reasonable to consider AI artists as just another form of automation, using tools to enhance our ability to create stuff. We can go back to just before the industrial revolution, when, for example, a highly skilled woodworker would make a chair entirely by hand. Even then, automation had had an effect – a productive shop would likely have an assembly line where specialists focused on different aspects of making the chair. Lathes and other tools were used to speed the process and improve precision, but a great deal of technical skill, developed over years, was still required. Soon, however, the job of the highly skilled woodworker would be destroyed (outside of historical theme parks) by machines. A high-quality wooden chair can be made without the need of a single skilled woodworker, assembled by people who only need the skill to operate the machinery. At the time such products were denigrated as cheap knockoffs for the masses.
There are countless such examples. Getting closer to the artistic realm – do you take photographs with either your phone or a dedicated camera? Do you manually set the ISO or the aperture (f-stop), or measure ambient light levels? Unless you are a professional photographer, the answer is likely no. Computer chips in the camera can do all of that for you. Even professional photographers will use these automated features – they will rarely measure light levels, for example, but let the camera do it. The point is that technology has reduced the technical skill necessary to take a good picture. Now, all you have to do is focus on the composition – the more artistic and creative aspects of photography.
Sep 13, 2022
There is ongoing debate as to the extent to which a skeptical outlook is natural vs learned in humans. There is no simple answer to this question, and human psychology is complex and multifaceted. People demonstrate natural skepticism toward many claims, and yet seem to accept other claims with abject gullibility. For adults it can also be difficult to tease out how much skepticism is learned vs innate.
This is where developmental psychology comes in. We can examine children of various ages to see how they behave, and this may provide a window into natural human behavior. Of course, even young children are not free from cultural influences, but it can at least provide some interesting information. A recent study looked at two related questions – do children (ages 4-7) accept surprising claims from adults, and how do they react to those claims? A surprising claim is one that contradicts common knowledge that even a 4-year-old should know.
In one study, for example, an adult showed the children a rock and a sponge and asked them whether the rock was soft or hard. The children all believed the rock was hard. The adult then either told them that the rock was hard, or that the rock was soft (or in one iteration that the rock was softer than the sponge). When the adult confirmed the children’s beliefs, they continued in their belief. When the adult contradicted their belief, many children modified their belief. The adult then left the room under a pretense, and the children were observed through video. Unsurprisingly, they generally tested the adult’s surprising claims through direct exploration.
This is not surprising – children generally like to explore and to touch things. However, the 6-7 year-olds engaged in (or proposed, during online versions of the testing) more appropriate and efficient methods of testing surprising claims than the 4-5 year-olds. For example, they wanted to directly compare the hardness of the sponge and the rock.
Sep 12, 2022
Research into conspiracy beliefs reveals that there are basically two kinds of people who believe in conspiracies. One type is the dedicated conspiracy theorist. For them, the conspiracy is what they are interested in. They never met a conspiracy theory they didn’t like, and they believe pretty much all of them. It’s part of their cognitive makeup. Others, however, are opportunistic conspiracy theorists – they believe one or two conspiracies that align with their ideology or tribe. Rosie O’Donnell is a 9/11 truther probably because it aligns with her politics. (As an aside, I can’t help thinking of her “fire melt steel” quote every time I see someone burn their steel on Forged in Fire.)
We are now facing a new conspiracy that largely follows the opportunistic paradigm – the notion that the 2020 election was stolen from Trump through massive coordinated voter fraud. Surveys persistently show that about 70% of Republicans feel that Biden was not legitimately elected. This is still a minority of Americans, about 30% in total, but it represents a substantial political movement. The reasons for the popularity of this conspiracy theory are complex and debated, including a general rise in conspiracy claims surrounding elections (on both sides), the closeness of the election, the “red mirage” that was later wiped away, and of course the fact that Trump himself has been vehemently promoting the “big lie”.
I would note, however, that belief in conspiracies itself is not increasing over time. A recent study shows that conspiracy belief is essentially flat over long periods of time. The stolen election is a blip, an anomaly caused by the factors I listed above. I also note that while doubt in election results has been increasing over the last two decades, the 2020 stolen election belief is of an entirely different order of magnitude. This is not just some whining on the fringe – this is now a core political movement.
Sep 09, 2022
Neanderthals (Homo neanderthalensis) are the closest evolutionary cousins of modern humans (Homo sapiens). In fact they are so close that there has been some debate about whether they are truly a separate species or a subspecies of humans (Homo sapiens neanderthalensis), though the consensus seems to have moved toward the former recently. They are not our ancestors – humans did not evolve from Neanderthals (any more than we evolved from modern chimps). Rather, we share a common ancestor with Neanderthals from about 700,000 years ago.
Neanderthals dominated Europe from about 400,000 to 40,000 years ago, with their close relatives, the Denisovans, in Asia. They existed alongside modern humans for a long time, but then disappeared. There is probably no single simple reason why this occurred. There were likely many factors – some competition, some interbreeding, and independent causes of Neanderthal decline that perhaps had nothing to do with humans. But part of this question is a distinct but related one – were modern humans somehow inherently superior to Neanderthals? Did we outcompete them because we were better?
This is a difficult question to answer from fossil evidence alone. Neanderthals were more robust than humans, and had brains that were just as large relative to body weight (meaning, in absolute terms, a bit bigger). Perhaps the replacement of Neanderthals by humans was a lateral move. Or perhaps Neanderthals were better adapted to the European ice age, and modern humans had the edge in warmer climates.
But there is a more direct question than ultimate evolutionary forces – were modern humans smarter than Neanderthals? To answer this question we can use biological evidence or cultural evidence. I will get to the biological evidence second, discussing a recent study that may shed significant light on the question. But first let’s look at the cultural evidence.
Sep 08, 2022
As a SciFi fan, I have a lot of pet peeves (as most hard-core fans probably do). While I love the freedom and imagination of speculative fiction, it’s easy to fall into common tropes that have emerged to facilitate storytelling or simply from a lack of imagination. The problem is worse with science fiction in film and TV because of budgetary concerns (although CG is making this less of a constraint). For example, aliens tend to be far too human and generally have a monolithic culture.
Alien worlds are also too frequently Earth-like (mostly because the filming takes place on Earth). It’s one thing if your starship is visiting a world because it is habitable, but often our heroes come upon an apparently random planet that is unrealistically Earth-like. There are many exceptions to this in science fiction film and literature, but it still happens frequently enough to be considered a trope. Think about all the ways in which the environment of even an Earth-sized planet in its habitable zone could vary. There’s always not only oxygen (even on apparently barren worlds) but enough oxygen. The gravity is always about 1G, the sun is always a nice yellow sun, and while the temperature may vary it’s always within survivable range.
The reality is that, by chance alone, something would be off. Now that we have the ability to actually discover exoplanets, that is exactly what we are finding. Astronomers estimate that there are likely between 300 million and 6 billion Earth-like planets in the Milky Way. That’s a big number, but there are 100-400 billion stars in the Milky Way, so it means that somewhere on the order of 1% of star systems contain an Earth-like planet. Also, what do we consider “Earth-like”? Generally that is any planet that is rocky and is in its star’s habitable zone, which means there can be stable liquid water on the surface. But that allows for a great deal of variability.
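Dividing through the extremes of those two estimates shows how wide that figure really is:

\[
\frac{3\times 10^{8}\ \text{planets}}{4\times 10^{11}\ \text{stars}} \approx 0.08\% \qquad\text{to}\qquad \frac{6\times 10^{9}\ \text{planets}}{1\times 10^{11}\ \text{stars}} = 6\%,
\]

so “about 1%” is a rough middle of a range spanning nearly two orders of magnitude.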
Sep 06, 2022

As you are likely aware, NASA’s latest big project is the Space Launch System (SLS), the rocket that will be used by the Artemis program to return astronauts to the Moon. The SLS also carries the Orion capsule, a deep-space craft capable of holding four crew for missions of up to 21 days. It is currently the only deep-space capsule capable of the high-speed reentry required for return from the Moon.
Artemis I, an uncrewed test mission, was scheduled to launch on Monday, August 29th. This launch had to be scrubbed because the main engines were not at the right operating temperature. The problem turned out to be a faulty sensor. However, the launch windows are limited (only a couple of hours), and the problem could not be identified and fixed within the window, so the launch was scrubbed. It was then rescheduled for Saturday, September 3rd. This time the problem was a real leak in the hydrogen fuel tanks, likely a problem with one of the seals. They failed to fix the problem on the launch pad, so again the launch had to be scrubbed. Leaking hydrogen is a serious problem; beyond a certain point there is a risk of the leaked hydrogen exploding on launch, and they were well beyond that safety point.
This shows how delicate this whole process is. It may be possible to fix the seal and the leak with the rocket still on the launch pad. However, the batteries used for the abort system are reaching the end of their optimal readiness window, and those batteries have to be swapped out in the engineering building. So the rocket has to be taken off the pad and brought there to reset everything for launch. This puts the next earliest launch date about six weeks off, in mid-October.
Are these launch delays routine and expected or are they evidence that the SLS is a boondoggle, as its harshest critics maintain? I think it’s a little of both.
Sep 02, 2022
Why do societies collapse? This is an interesting question, and as you might imagine the answer is complex. There are multiple internal and external reasons, but a core feature seems to be a combination of factors simultaneously at work – a crisis that the society failed to deal with adequately because of dysfunctional institutions and political infrastructure. All societies face challenges, but successful ones solve them, or at least make significant adjustments. There are also multiple ways to define “collapse”, which does not have to involve complete extinction. We can also add political or institutional collapse, where, for example, a thriving democracy collapses into a dictatorship.
There are many people concerned that America is facing a real threat that could collapse our democracy. The question is – do we have the institutional vigor to make the appropriate adjustments to survive these challenges? Sometimes, by the time you recognize a serious threat it’s too late. At other times, the true causes of the threat are not recognized (at least not by a majority) and therefore the solutions are also missed. So the question is, to the extent that American democracy is under threat, what are the true underlying causes?
This is obviously a complex question that I am not going to be able to adequately address in one blog post. I would like to suggest, however, that social media algorithms are at least one factor contributing to the destabilizing of democracy. It would be ironic if one of the greatest democracies in world history were brought down in part by YouTube algorithms. But this is not implausible.