Dec
09
2024
As I predicted, the controversy over whether we have achieved general AI will likely persist for a long time before there is a consensus that we have. The latest round of this controversy comes from Vahid Kazemi of OpenAI, who posted on X:
“In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”. Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.”
I will set aside the possibility that this is all for publicity of OpenAI’s newest O1 platform. Taken at face value – what is the claim being made here? I actually am not sure (part of the problem with short-form venues like X). In order to say whether OpenAI’s O1 platform qualifies as an artificial general intelligence (AGI), we need to operationally define what an AGI is. Right away, we get deep into the weeds, but here is a basic definition: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.”
That may seem straightforward, but it is highly problematic for many reasons. Scientific American has a good discussion of the issues here. But at its core, two features pop up regularly in various definitions of general AI – the AI has to have wide-ranging abilities, and it has to equal or surpass human-level cognitive function. There is also a discussion about whether how the AI achieves its ends matters, or should matter. Does it matter if the AI is truly thinking or understanding? Does it matter if the AI is self-aware or sentient? Does the output have to represent true originality or creativity?
Continue Reading »
Dec
05
2024
What is Power-to-X (PtX)? It’s just a fancy marketing term for green hydrogen – using green energy, like wind, solar, nuclear, or hydroelectric, to make hydrogen from water. This process does not release any CO2, just oxygen, and when the hydrogen is burned back with that oxygen it creates only water as a byproduct. Essentially, hydrogen is being used as an energy storage medium. This whole process does not create energy; it uses energy. The wind, solar, and other sources are what create the energy. The “X” refers to all the potential applications of hydrogen, from fuel to fertilizer. Part of the idea is that intermittent energy production can be tied to hydrogen production, so when there is excess energy available it can be used to make hydrogen.
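To make the energy-storage idea concrete, here is a minimal back-of-the-envelope sketch of storing surplus electricity as hydrogen and then converting it back into electricity. The surplus amount and the efficiency figures are illustrative assumptions for the sake of the arithmetic, not measurements from any specific electrolyzer or fuel cell.

```python
# Back-of-the-envelope: hydrogen as an energy storage medium.
# All figures below are illustrative assumptions, not data from any real system.

SURPLUS_KWH = 1_000            # assumed surplus wind/solar electricity available
ELECTROLYZER_KWH_PER_KG = 50   # assumed electricity needed to make 1 kg of H2
H2_ENERGY_KWH_PER_KG = 33.3    # approximate lower heating value of hydrogen
FUEL_CELL_EFFICIENCY = 0.55    # assumed efficiency converting H2 back to electricity

kg_h2 = SURPLUS_KWH / ELECTROLYZER_KWH_PER_KG                     # hydrogen produced
recovered_kwh = kg_h2 * H2_ENERGY_KWH_PER_KG * FUEL_CELL_EFFICIENCY
round_trip = recovered_kwh / SURPLUS_KWH

print(f"{kg_h2:.0f} kg of hydrogen produced")
print(f"{recovered_kwh:.0f} kWh recovered, round-trip efficiency {round_trip:.0%}")
```

Under assumptions like these, only a third or so of the original electricity comes back out. That loss matters less when the alternative is curtailing the excess energy entirely, and it is beside the point for applications that need the hydrogen molecule itself rather than the electricity.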
A recent paper explores the question of why, despite all the hype surrounding PtX, there is little industry investment. Right now only 0.1% of the world’s hydrogen production is green. Most of the rest comes from fossil fuel (gray and brown hydrogen) and in many cases is actually worse than just burning the fossil fuel. Before I get into the paper, let’s review what hydrogen is currently used for. Hydrogen is essentially a high-energy molecule and it can be used to drive a lot of reactions. It is mostly used in industry – making fertilizer, reducing the sulfur content of gas, producing industrial chemicals, and making biofuel. It can also be used for hydrogen fuel-cell cars, which I think is a wasted application as BEVs are a better technology and any green hydrogen we do make has better uses. There are also emerging applications, like using hydrogen to refine iron ore, displacing the use of fossil fuels.
A cheap, abundant source of green hydrogen would be a massive boost to multiple industries and would also be a key component of achieving net-zero carbon emissions. So where is all the investment? This is the question the paper explores.
Continue Reading »
Dec
02
2024
Climate change is a challenging issue on multiple levels – it’s challenging for scientists to understand all of the complexities of a changing climate, it’s difficult to know how to optimally communicate to the public about climate change, and of course we face an enormous challenge in figuring out how best to mitigate climate change. The situation is made significantly more difficult by the presence of a well-funded campaign of disinformation aimed at sowing doubt and confusion about the issue.
I recently interviewed climate scientist Michael Mann about some of these issues and he confirmed one trend that I had noticed, that the climate change denier rhetoric has, to some extent, shifted to what he called “doomism”. I have written previously about some of the strategies of climate change denial, specifically the motte and bailey approach. This approach refers to a range of positions, all of which lead to the same conclusion – that we should essentially do nothing to mitigate climate change. We should continue to burn fossil fuels and not worry about the consequences. However, the exact position shifts based upon current circumstances. You can deny that climate change is even happening when you have evidence or an argument that seems to support this position. But when that position is not rhetorically tenable, you can back off to a more easily defended position: that while climate change may be happening, we don’t know the causes and it may just be a natural trend. When that position fails, you can fall back to the notion that climate change may not be a bad thing. And then, even if forced to admit that climate change is happening, that it is largely anthropogenic, and that it will have largely negative consequences, you can insist there isn’t anything we can do about it anyway.
This is where doomism comes in. It is a way of turning calls for climate action against themselves. Advocates for taking steps to mitigate climate change often emphasize how dire the situation is. The climate is already showing dangerous signs of warming, the world is doing too little to change course, the task at hand is enormous, and time is running out. That’s right, say the doomists, in fact it’s already too late and we will never muster the political will to do anything significant, so why bother trying. Again, the answer is – do nothing.
Continue Reading »
Nov
22
2024
It’s been a while since I discussed artificial intelligence (AI) generated art here. What I have said in the past is that AI art appears a bit soulless and there are details it has difficulty creating without bizarre distortions (hands are particularly difficult). But I also predicted that it would get better fast. So how is it doing? In brief – it’s getting better fast.
I was recently sent a link to this site which tests people on their ability to tell the difference between AI and human-generated art. Unfortunately the site is no longer taking submissions, but you can view a discussion of the results here. These pictures were highly selected, so they are not representative – they were not random choices, and any AI pictures with obvious errors were excluded. People were 60% accurate in determining which art was AI and which was human, which is only slightly better than chance. Also, the most liked picture in the line-up (the one above the fold here) was AI generated. People had the hardest time with the impressionist style, which makes sense.
Again – these were selected pictures. So I can think of three reasons that it may be getting harder to tell the difference between AI and humans in these kinds of tests other than improvements in the AI themselves. First, people may be getting better at using AI as a tool for generating art. This would yield better results, even without any changes in the AI. Second, as more and more people use AI to generate art there are more examples out there, so it is easier to pick the cream of the crop – examples that are very difficult to tell from human art. This includes picking images without obvious tells, but also just picking ones that don’t feel like AI art. We are now familiar with AI art, having seen so many examples, and that familiarity can be used to subvert expectations by picking examples of AI art that are atypical. Finally, people are figuring out what AI does well and what it does less well. As mentioned, AI is really good at some genres, like impressionism. This can also just fall under getting better at using AI as a tool – but I thought it was distinct enough for its own mention.
Continue Reading »
Nov
19
2024
Humans (assuming you all experience roughly what I experience, which is a reasonable assumption) have a sense of self. This sense has several components – we feel as if we occupy our physical bodies, that our bodies are distinct entities separate from the rest of the universe, that we own our body parts, and that we have the agency to control our bodies. We can do stuff and affect the world around us. We also have a sense that we exist in time, that there is a continuity to our existence, that we existed yesterday and will likely exist tomorrow.
This may all seem too basic to bother pointing out, but it isn’t. These aspects of a sense of self also do not flow automatically from the fact of our own existence. There are circuits in the brain, receiving sensory and cognitive input, that generate these senses. We know this primarily from studying people in whom one or more of these circuits are disrupted, either temporarily or permanently. This is why people can have an “out of body” experience – disrupt the circuits that make us feel embodied. People can feel as if they do not own or control a body part (such as so-called alien hand syndrome). Or they can feel as if they own and control a body part that doesn’t exist. It’s possible for there to be a disconnect between physical reality and our subjective experience, because the subjective experience of self, of reality, and of time is constructed by our brains based upon sensory and other inputs.
Perhaps, however, there is another way to study the phenomenon of a sense of self. Rather than studying people who are missing one or more aspects of a sense of self, we can try to build up that sense, one component at a time, in robots. This is the subject of a paper by three researchers: a cognitive roboticist, a cognitive psychologist who works on robot-human interactions, and a psychiatrist. They explore how we can study the components of a sense of self in robots, and how we can use robots to do psychological research about human cognition and the sense of self.
Continue Reading »
Nov
18
2024
It’s interesting that there isn’t much discussion about this in the mainstream media, but the Biden administration recently pledged to triple US nuclear power capacity by 2050. At COP28 last year the US was among 25 signatories who also pledged to triple world nuclear power capacity by 2050. Last month the Biden administration announced $900 million to support the startup of Gen III+ nuclear reactors in the US. This is on top of the nuclear subsidies in the IRA. Earlier this year they announced the creation of the Nuclear Power Project Management and Delivery working group to help streamline the nuclear industry and reduce cost overruns. In July Biden signed the bipartisan ADVANCE Act, which provides sweeping support for the nuclear industry and streamlines regulations.
What is most encouraging is that all this pro-nuclear action has bipartisan support. In Trump’s first term he was “broadly supportive” of nuclear power, and took some small initial steps. His campaign has again signaled support for “all forms of energy” and there is no reason to suspect that he will undo any of the recent positive steps.
Continue Reading »
Nov
08
2024
Australia is planning a total ban on social media for children under 16 years old. Prime Minister Anthony Albanese argues that it is the only way to protect vulnerable children from the demonstrable harm that social media can do. This has sparked another round of debates about what to do, if anything, about social media.
When social media first appeared, there wasn’t much discussion or recognition of the potential downsides. Many viewed it as one way to fulfill the promise of the web – to connect people digitally. It was also viewed as the democratization of mass communication. Now anyone could start a blog, for example, and participate in public discourse without having to go through editors and gatekeepers or invest a lot of capital. And all of this was true. Here I am, two decades later, using my personal blog to do just that.
But the downsides also quickly became apparent. Bypassing gatekeepers also means that the primary mechanism for quality control (for what it was worth) was also gone. There are no journalistic standards on social media, no editorial policy, and no one can get fired for lying, spreading misinformation, or making stuff up. While legacy media still exists, social media caused a realignment in how most people access information.
In the social media world we have inadvertently created, the people with the most power are arguably the tech giants. This has consolidated a lot of power in the hands of a few billionaires with little oversight or regulation. Their primary tool for controlling the flow of information is computer algorithms, which are designed to maximize engagement. You need to get people to click and to stay on your website so that you can feed them ads. This also created a new paradigm in which the user (that’s you) is the product – apps and websites are used to gather information about users, which is then sold to other corporations, largely for marketing purposes. In some cases, like the X platform, an individual can favor their own content and perspective, essentially turning a platform into a propaganda machine. Sometimes an authoritarian government controls the platform and can push public discourse in whatever direction it wants.
Continue Reading »
Nov
04
2024
I was away last week, first at CSICON and then at a conference in Dubai. I was invited to give a 9-hour seminar on scientific skepticism for the Dubai Future Foundation. That sounds like a lot of time, but it isn’t. It was a good reminder of the vast body of knowledge that is relevant to skepticism, from neuroscience to psychology and philosophy. Just the study of pseudoscience and conspiracy thinking could have filled the time. It was my first time visiting the Middle East and I always find it fascinating to see the differences and similarities between cultures.
What does all this have to do with alternating vs direct current? Nothing, really, except that I found myself in a conversation about the topic with someone deeply involved in the power industry in the UAE. My class was an eclectic and international group of business people – all very smart and accomplished, but also mostly entirely new to the concept of scientific skepticism and without a formal science background. It was a great opportunity to gauge my American perspective against an international group.
I was struck, among other things, by how similar it was. I could have been talking to a similar crowd in the US. Sure, there was a layer of Arabic and Muslim culture on top, but otherwise the thinking and attitudes felt very familiar. Likely this is a result of the fact that Dubai is a wealthy international city. It is a good reminder that the urban-rural divide may be the most deterministic one in the world, and if you get urban and wealthy enough you tend to align with global culture.
Back to my conversation with the power industry exec – the power mix in the UAE is not very different from the US. They have about 20% nuclear (same as the US), 8% solar, and the rest fossil fuel, mostly natural gas. They have almost no wind and no hydropower. Their strategy to shift to low carbon power is all in on solar. They are rapidly increasing their power demand, and solar is the cheapest new energy. I don’t think their plan for the future is aggressive enough, but they are moving in the right direction.
Continue Reading »
Oct
22
2024
I am always sniffing around (pun intended) for new and interesting technology, especially anything that I think is currently flying under the radar of public awareness but has the potential to transform our world in some way. I think electronic nose technology fits into this category.
The idea is to use electronic sensors that can detect chemicals, specifically those that are abundant in the air, such as volatile organic compounds (VOCs). Such technology has many potential uses, which I will get to below. The current state of the art is advancing quickly with the introduction of various nanomaterials, but at present these sensing arrays require multiple antennas coated with different materials. As a result they are difficult and expensive to manufacture and energy-intensive to operate. They work, and are often able to detect specific VOCs with 95% or greater accuracy. But their utility is limited by cost and inconvenience.
A new advance, however, is able to reproduce and even improve upon current performance with a single antenna and a single coating. The technology uses a single graphene-oxide-coated antenna driven with ultrawide-band microwave signals to detect specific VOCs. These molecules reflect different wavelengths differently depending on their chemical structure – that is how the device “sniffs” the air. The results are impressive.
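To make the “sniffing” step more concrete, here is a minimal sketch of the kind of pattern matching involved: a measured reflection signature is compared against stored signatures of known VOCs, and the closest match wins. The compound names, frequencies, and numbers are all invented for illustration; the real system works with much richer spectra and more sophisticated classification.

```python
import numpy as np

# Hypothetical reference signatures: relative reflected signal strength at a
# few microwave frequencies for each VOC. All values are invented for
# illustration only.
reference_signatures = {
    "ethanol": np.array([0.82, 0.44, 0.31, 0.67]),
    "acetone": np.array([0.25, 0.71, 0.58, 0.12]),
    "toluene": np.array([0.49, 0.33, 0.77, 0.41]),
}

def identify_voc(measured: np.ndarray) -> tuple[str, float]:
    """Return the closest known VOC and its distance from the measurement."""
    best_name, best_dist = None, float("inf")
    for name, signature in reference_signatures.items():
        dist = float(np.linalg.norm(measured - signature))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# A noisy measurement that should be identified as acetone.
measurement = np.array([0.27, 0.69, 0.55, 0.15])
print(identify_voc(measurement))  # ('acetone', ~0.05)
```

In practice the hard part is not the matching but producing signatures that are distinct and repeatable from a single coated antenna in the first place.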
Continue Reading »
Oct
21
2024
At a recent event Tesla showcased the capabilities of its humanoid autonomous robot, Optimus. The demonstration has come under some criticism, however, for not being fully transparent about its nature. We interviewed robotics expert Christian Hubicki on the SGU this week to discuss the details. Here are some of the points I found most interesting.
First, let’s deal with the controversy – to what extent were the robots autonomous, and how transparent was this to the crowd? The first question is easier to answer. There are basically three types of robot control: pre-programmed, autonomous, and teleoperated. Pre-programmed means the robot is following a predetermined set of instructions. Often, if you see a robot dancing, for example, that is a pre-programmed routine. Autonomous means the robot has internal real-time control. Teleoperated means that a human in a motion-capture suit is controlling the movement of the robots. All three of these types of control have their utility.
These are humanoid robots, and they were able to walk on their own. Robot walking has to be autonomous or pre-programmed; it cannot be teleoperated. This is because balance requires real-time feedback of position and other information to produce the moment-to-moment adjustments that maintain balance. A teleoperator would not have this (at least not with current technology). The Optimus robots walked out, so the walking was autonomous.
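To illustrate why balance cannot be teleoperated, here is a minimal sketch of the kind of feedback loop involved: a proportional-derivative (PD) controller correcting the tilt of a simple inverted pendulum every few milliseconds. The gains, time step, and physical constants are arbitrary illustrative choices and have nothing to do with how Optimus is actually controlled.

```python
import math

# Minimal inverted-pendulum balance loop. The controller reacts to the measured
# tilt every 5 ms; a remote human operator, with reaction times measured in
# hundreds of milliseconds, could not close a loop this fast.
# All constants are arbitrary illustrative values.

DT = 0.005            # control period: 5 ms (200 Hz)
GRAVITY = 9.81        # m/s^2
LENGTH = 1.0          # effective pendulum length in meters
KP, KD = 60.0, 12.0   # proportional and derivative gains (hand-tuned guesses)

angle = 0.05          # initial tilt in radians (about 3 degrees)
velocity = 0.0        # angular velocity in rad/s

for _ in range(400):  # simulate 2 seconds
    correction = -KP * angle - KD * velocity             # feedback: push back and damp
    acceleration = (GRAVITY / LENGTH) * math.sin(angle) + correction
    velocity += acceleration * DT
    angle += velocity * DT

print(f"tilt after 2 seconds: {math.degrees(angle):.3f} degrees")  # close to zero
```

The point of the sketch is the loop rate: the correction has to be recomputed from fresh sensor data many times per second, which is why balance has to be handled by the robot itself even when other movements are teleoperated.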
Continue Reading »