Dec
09
2024
As I predicted, the controversy over whether or not we have achieved general AI will likely persist for a long time before there is a consensus that we have. The latest round of this controversy comes from Vahid Kazemi of OpenAI. He posted on X:
“In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”. Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.”
I will set aside the possibility that this is all publicity for OpenAI’s newest O1 platform. Taken at face value – what is the claim being made here? I am actually not sure (part of the problem with short-form venues like X). In order to say whether or not the OpenAI O1 platform qualifies as an artificial general intelligence (AGI), we need to operationally define what an AGI is. Right away, we get deep into the weeds, but here is a basic definition: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.”
That may seem straightforward, but it is highly problematic for many reasons. Scientific American has a good discussion of the issues here. But at its core, two features pop up regularly in various definitions of general AI – the AI has to have wide-ranging abilities, and it has to equal or surpass human-level cognitive function. There is also debate about whether how the AI achieves its ends matters, or should matter. Does it matter if the AI is truly thinking or understanding? Does it matter if the AI is self-aware or sentient? Does the output have to represent true originality or creativity?
Continue Reading »
Dec
05
2024
What is Power-to-X (PtX)? It’s just a fancy marketing term for green hydrogen – using green energy, like wind, solar, nuclear, or hydroelectric, to make hydrogen from water. This process does not release any CO2, just oxygen, and when the hydrogen is burned back with that oxygen it creates only water as a byproduct. Essentially hydrogen is being used as an energy storage medium. This whole process does not create energy; it uses energy. The wind and solar etc. are what create the energy. The “X” refers to all the potential applications of hydrogen, from fuel to fertilizer. Part of the idea is that intermittent energy production can be tied to hydrogen production, so when there is excess energy available it can be used to make hydrogen.
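To see why “energy storage medium” is the key framing, it helps to look at round-trip efficiency – how much of the input electricity you get back after electrolysis, storage, and reconversion. Here is a minimal sketch; the stage efficiencies are my own rough, illustrative assumptions, not figures from this post:

```python
# Illustrative round-trip efficiency of hydrogen as an energy store.
# The stage efficiencies below are rough assumed values for illustration only.
stages = {
    "electrolysis": 0.70,          # electricity -> H2 (typical electrolyzer range)
    "compression_storage": 0.90,   # losses compressing/storing the gas
    "fuel_cell": 0.55,             # H2 -> electricity again
}

round_trip = 1.0
for name, eff in stages.items():
    round_trip *= eff   # losses multiply through the chain

print(f"round-trip efficiency ~ {round_trip:.0%}")
```

The point of the sketch is just that losses compound multiplicatively, which is why hydrogen storage makes the most sense for electricity that would otherwise be curtailed.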
A recent paper explores the question of why, despite all the hype surrounding PtX, there is little industry investment. Right now only 0.1% of the world’s hydrogen production is green. Most of the rest comes from fossil fuels (gray and brown hydrogen), and in many cases producing it is actually worse than just burning the fossil fuel directly. Before I get into the paper, let’s review what hydrogen is currently used for. Hydrogen is essentially a high-energy molecule and it can be used to drive a lot of reactions. It is mostly used in industry – making fertilizer, reducing the sulfur content of gas, producing industrial chemicals, and making biofuel. It can also be used for hydrogen fuel cell cars, which I think is a wasted application as BEVs are a better technology and any green hydrogen we do make has better uses. There are also emerging applications, like using hydrogen to refine iron ore, displacing the use of fossil fuels.
A cheap abundant source of green hydrogen would be a massive boost to multiple industries and would also be a key component to achieving net zero carbon emissions. So where is all the investment? This is the question the paper explores.
Continue Reading »
Dec
03
2024
Astrophysicists come up with a lot of wacky ideas, some of which actually turn out to be possibly true (like the Big Bang, black holes, accelerating cosmic expansion, dark matter). Of course, all of these conclusions are provisional, but some are now backed by compelling evidence. Evidence is the real key – often the challenge is figuring out a way to find evidence that can potentially support or refute some hypothesis about the cosmos. Sometimes it’s challenging to figure out even theoretically (let alone practically) how we might prove or disprove a hypothesis. Decades may go by before we have the ability to run relevant experiments or make the kinds of observations necessary.
Black holes fell into that category. They were predicted by physics long before we could find evidence of their existence. There is a category of black hole, however, that we still have not confirmed through any observation – primordial black holes (PBH). As the name implies, these black holes may have formed in the early universe, even before the first stars. In the early dense universe, fluctuations in the density of space could have led to the formation of black holes. These black holes could theoretically be of any size, since they do not depend on a massive star collapsing to form them. This process could lead to black holes smaller than the smallest stellar remnant black holes.
In fact, it is possible that there are enough small primordial black holes out there to account for the missing dark matter – matter we can detect through its gravitational effects but that we cannot otherwise see (hence dark). PBHs are considered a dark matter candidate, but the evidence for this so far is not encouraging. For example, we might be able to detect black holes through microlensing. If a black hole happens to pass in front of a more distant star (from the perspective of an observer on Earth), then gravitational lensing will cause that star to appear to brighten, until the black hole passes. However, microlensing surveys have not found the number of microlensing events that would be necessary for PBHs to explain dark matter. Dark matter makes up 85% of the matter in the universe, so there would have to be lots of PBHs for them to be the sole explanation of dark matter. It’s still possible that longer observation times would detect larger black holes (brightening events can take years if the black holes are large). But so far the results have been negative.
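The claim that brightening events can take years for large black holes follows from the standard microlensing timescale: the time for the lens to cross its own Einstein radius, which grows as the square root of the lens mass. Here is a rough sketch; the lens/source distances and transverse velocity are assumed, typical bulge-survey values, not numbers from this post:

```python
import math

# Physical constants (SI)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg
KPC = 3.086e19     # kiloparsec in meters
YEAR = 3.156e7     # seconds per year

def einstein_crossing_time(mass_msun, d_lens_kpc=4.0, d_src_kpc=8.0, v_kms=200.0):
    """Rough microlensing event duration: time for a lens of the given mass
    to cross its full Einstein diameter, assuming a lens halfway to a source
    star ~8 kpc away and a transverse velocity of ~200 km/s (all assumed)."""
    M = mass_msun * MSUN
    d_l, d_s = d_lens_kpc * KPC, d_src_kpc * KPC
    # Physical Einstein radius projected at the lens distance
    r_e = math.sqrt(4 * G * M / C**2 * d_l * (d_s - d_l) / d_s)
    return 2 * r_e / (v_kms * 1e3) / YEAR   # full-diameter crossing, in years

# Duration scales as sqrt(mass): heavier lenses give much longer events
for m in (1e-6, 1.0, 100.0):
    print(f"{m:g} Msun -> {einstein_crossing_time(m):.3g} yr")
```

Under these assumptions a solar-mass lens gives an event lasting a couple of months, while a 100-solar-mass lens gives one ten times longer – which is why surveys for massive PBHs need years of observation.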
Continue Reading »
Dec
02
2024
Climate change is a challenging issue on multiple levels – it’s challenging for scientists to understand all of the complexities of a changing climate, it’s difficult to know how to optimally communicate to the public about climate change, and of course we face an enormous challenge in figuring out how best to mitigate climate change. The situation is made significantly more difficult by the presence of a well-funded campaign of disinformation aimed at sowing doubt and confusion about the issue.
I recently interviewed climate scientist Michael Mann about some of these issues and he confirmed one trend that I had noticed – that climate change denier rhetoric has, to some extent, shifted to what he called “doomism”. I have written previously about some of the strategies of climate change denial, specifically the motte and bailey approach. This approach refers to a range of positions, all of which lead to the same conclusion – that we should essentially do nothing to mitigate climate change. We should continue to burn fossil fuels and not worry about the consequences. However, the exact position shifts based upon current circumstances. You can deny that climate change is even happening, when you have evidence or an argument that seems to support this position. But when that position is not rhetorically tenable, you can back off to more easily defended positions: that while climate change may be happening, we don’t know the causes, and it may just be a natural trend. When that position fails, you can fall back to the notion that climate change may not be a bad thing. And finally, even if forced to admit that climate change is happening, that it is largely anthropogenic, and that it will have largely negative consequences, you can claim there isn’t anything we can do about it anyway.
This is where doomism comes in. It is a way of turning calls for climate action against themselves. Advocates for taking steps to mitigate climate change often emphasize how dire the situation is. The climate is already showing dangerous signs of warming, the world is doing too little to change course, the task at hand is enormous, and time is running out. That’s right, say the doomists, in fact it’s already too late and we will never muster the political will to do anything significant, so why bother trying. Again, the answer is – do nothing.
Continue Reading »
Nov
26
2024
The world of science communication has changed dramatically over the last two decades, and it’s useful to think about those changes, both for people who generate and consume science communication. The big change, of course, is social media, which has disrupted journalism and communication in general.
Prior to this disruption the dominant model was that most science communication was done by science journalists backed up by science editors. Thrown into the mix was the occasional scientist who crossed over into public communication, people like Carl Sagan. Science journalists generally were not scientists, but would have a range of science backgrounds. The number one rule for such science journalists is to communicate the consensus of expert opinion, not substitute their own opinion.
Science journalists are essentially a bridge between scientists and the public. They should understand enough about science, and have a high enough degree of scientific literacy, to communicate directly with scientists and understand what they have to say. They then repackage that communication for the general public.
Continue Reading »
Nov
22
2024
It’s been a while since I discussed artificial intelligence (AI) generated art here. What I have said in the past is that AI art appears a bit soulless and there are details it has difficulty creating without bizarre distortions (hands are particularly difficult). But I also predicted that it would get better fast. So how is it doing? In brief – it’s getting better fast.
I was recently sent a link to this site, which tests people on their ability to tell the difference between AI-generated and human-generated art. Unfortunately the site is no longer taking submissions, but you can view a discussion of the results here. These pictures were highly selected, not random choices, so they are not representative – any AI pictures with obvious errors were excluded. People were 60% accurate in determining which art was AI and which was human, which is only slightly better than chance. Also, the most liked picture in the line-up (the one above the fold here) was AI-generated. People had the hardest time with the impressionist style, which makes sense.
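For scale, how unusual is 60% accuracy if someone were just guessing? The post does not say how many images the quiz used, so the quiz length below is purely hypothetical, but a quick binomial tail calculation makes the “only slightly better than chance” point concrete:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting k or more correct out of n trials
    when guessing with per-trial success probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical quiz length (not from the post): 50 images, 60% = 30 correct.
n = 50
k = int(0.60 * n)
print(f"P(>= {k}/{n} correct by pure guessing) = {p_at_least(k, n):.3f}")
```

Under this assumption a single guesser scores 60% or better roughly one time in ten, so an individual at 60% is not far from chance – though averaged over many participants, a consistent 60% would still be a real (if modest) signal.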
Again – these were selected pictures. So I can think of three reasons, other than improvements in the AI models themselves, that it may be getting harder to tell the difference between AI- and human-generated art in these kinds of tests. First, people may be getting better at using AI as a tool for generating art. This would yield better results, even without any changes in the AI. Second, as more and more people use AI to generate art there are more examples out there, so it is easier to pick the cream of the crop, which is very difficult to tell from human art. This includes picking images without obvious tells, but also just picking ones that don’t feel like AI art. We are now familiar with AI art, having seen so many examples, and that familiarity can be subverted by picking examples of AI art that are atypical. Finally, people are figuring out what AI does well and what it does not do so well. As mentioned, AI is really good at some genres, like impressionism. This could also fall under getting better at using AI as a tool, but I thought it was distinct enough for its own mention.
Continue Reading »
Nov
19
2024
Humans (assuming you all experience roughly what I experience, which is a reasonable assumption) have a sense of self. This sense has several components – we feel as if we occupy our physical bodies, that our bodies are distinct entities separate from the rest of the universe, that we own our body parts, and that we have the agency to control our bodies. We can do stuff and affect the world around us. We also have a sense that we exist in time, that there is a continuity to our existence, that we existed yesterday and will likely exist tomorrow.
This may all seem too basic to bother pointing out, but it isn’t. These aspects of a sense of self also do not flow automatically from the fact of our own existence. There are circuits in the brain receiving input from sensory and cognitive information that generate these senses. We know this primarily from studying people in whom one or more of these circuits are disrupted, either temporarily or permanently. This is why people can have an “out of body” experience – disrupt those circuits which make us feel embodied. People can feel as if they do not own or control a body part (such as so-called alien hand syndrome). Or they can feel as if they own and control a body part that doesn’t exist. It’s possible for there to be a disconnect between physical reality and our subjective experience, because the subjective experience of self, of reality, and of time are constructed by our brains based upon sensory and other inputs.
Perhaps, however, there is another way to study the phenomenon of a sense of self. Rather than studying people who are missing one or more aspects of a sense of self, we can try to build up that sense, one component at a time, in robots. This is the subject of a paper by three researchers – a cognitive roboticist, a cognitive psychologist who works with robot-human interactions, and a psychiatrist. They explore how we can study the components of a sense of self in robots, and how we can use robots to do psychological research about human cognition and the sense of self.
Continue Reading »
Nov
18
2024
It’s interesting that there isn’t much discussion about this in the mainstream media, but the Biden administration recently pledged to triple US nuclear power capacity by 2050. At COP28 last year the US was among 25 signatories who also pledged to triple world nuclear power capacity by 2050. Last month the Biden administration announced $900 million to support startups of Gen III+ nuclear reactors in the US. This is on top of the nuclear subsidies in the IRA. Earlier this year they announced the creation of the Nuclear Power Project Management and Delivery working group to help streamline the nuclear industry and reduce cost overruns. In July Biden signed the bipartisan ADVANCE Act, which includes sweeping support for the nuclear industry and streamlines regulations.
What is most encouraging is that all this pro-nuclear action has bipartisan support. In Trump’s first term he was “broadly supportive” of nuclear power, and took some small initial steps. His campaign has again signaled support for “all forms of energy” and there is no reason to suspect that he will undo any of the recent positive steps.
Continue Reading »
Nov
15
2024
The world produces 350-400 million metric tons of plastic waste each year. Less than 10% of this waste is recycled, while 25% is mismanaged or littered. About 1.7 million tons ends up in the ocean. This is not sustainable, but whose responsibility is it to deal with this issue?
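Putting those percentages into absolute numbers helps convey the scale. This is a back-of-envelope sketch using the midpoint of the quoted range; the derived figures are my own arithmetic, not data from the post:

```python
# Back-of-envelope conversion of the quoted shares into tonnages.
total_mt = 375.0                  # midpoint of the 350-400 Mt/year range
recycled_mt = 0.10 * total_mt     # "less than 10%" -> upper bound on recycling
mismanaged_mt = 0.25 * total_mt   # share mismanaged or littered
ocean_share = 1.7 / total_mt      # fraction of the yearly total reaching the ocean

print(f"recycled (at most): {recycled_mt:.1f} Mt")
print(f"mismanaged/littered: {mismanaged_mt:.1f} Mt")
print(f"ocean share: {ocean_share:.2%}")
```

Even the “small” ocean fraction – well under one percent of the total – still works out to millions of tons per year.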
The debate about responsibility is often framed as personal responsibility vs systemic responsibility (at the government policy level). Industry famously likes to emphasize personal responsibility, as a transparent way to shield itself from regulations. The Keep America Beautiful campaign (the “crying Indian” one) was actually an industry group using an anti-littering campaign to shift the focus away from the companies producing the litter to the consumer. It worked.
This is not to say we do not all have an individual responsibility to be good citizens. There are hundreds of things adults should or should not do to care for their own health, the environment, the people around them, and their fellow citizens. But a century of research shows a very strong and consistent signal – campaigns to influence mass public behavior have limited efficacy. Getting most people to remember and act upon best behavior consistently is difficult. This likely reflects the fact that doing so is cognitively demanding for individuals. As a general rule we tend to avoid cognitively demanding behavior and follow pathways of least resistance. We likely evolved an inherent laziness as a way of conserving energy and resources, which can make it challenging for us to navigate the massive, complex technological society we have constructed for ourselves.
There is a general consensus among researchers who study such things that there are better ways to influence public behavior than shaming or guilting people. We have to change the culture. People will follow the crowd and social norms, so we have to essentially create ever-present peer pressure to do the right thing. While this approach is more effective than shaming, it is still remarkably ineffective overall. Influencing public behavior by 20%, say, is considered a massive win. What works best is to make the optimal behavior the pathway of least resistance. It has to be the default, the easiest option, or perhaps the only option.
Continue Reading »
Nov
12
2024
On September 11, 2001, as part of a planned terrorist attack, commercial planes were hijacked and flown into each of the two towers of the World Trade Center in New York. A third plane was flown into the Pentagon, and a fourth crashed after the passengers fought back. This, of course, was a huge world-affecting event. It is predictable that after such events, conspiracy theorists will come out of the woodwork and begin their anomaly hunting, breathing in the chaos that inevitably follows such events and spinning their sinister tales, largely out of their warped imaginations. It is also not surprising that the theories that result, just like any pseudoscience, never truly die. They may fade to the fringe, but will not go away completely, waiting for a new generation to bamboozle. In the age of social media, everything also has a second and third life as a YouTube or TikTok video.
But still I found it interesting, after not hearing 9/11 conspiracy theories for years, to get an e-mail out of the blue promoting the same old 9/11 conspiracy that the WTC towers fell due to planned demolition, not the impact of the commercial jets. The e-mail pointed to this recent video, by dedicated conspiracy theorist Jonathan Cole. The video has absolutely nothing new to say, but just recycles the same debunked line of argument.
The main idea is that experts and engineers cannot fully explain the sequence of events that led to the collapse of the towers and also explain exactly how the towers fell as they did. To do this Cole uses the standard conspiracy theory playbook – look for anomalies and then insert your preferred conspiracy theory into the apparent gap in knowledge that you open up. The unstated major premise of this argument is that experts should be able to explain, to an arbitrary level of detail, exactly how a complex, unique, and one-off event unfolded – and they should be able to do this from whatever evidence happens to be available.
Continue Reading »