Archive for April, 2018

Apr 30 2018

Keeping Brains Alive Outside the Body

Published under Neuroscience

Researchers at Yale report (at a meeting – not yet published) that they were able to keep pig brains alive for up to 36 hours after the pigs were decapitated. They acquired the pig heads from a slaughterhouse and experimented on them about 4 hours after death. This research is a long way from a living “brain in a jar,” but it does raise some early ethical questions.

First the technical stuff, with the caveat that the study is not yet published in the peer-reviewed literature so some details are sparse. We know the researchers experimented on pig heads. The report does not say explicitly whether the brains were completely removed from the skulls or not, but they did have access to the brain itself so it was at least exposed if not removed. They attached a series of pumps to the blood vessels and pumped oxygenated blood through them. They also used drugs to prevent the brains from swelling, and the researchers say these drugs would also prevent some brain cell activity (they are channel blockers).

They used brain-surface EEG to record electrical brain activity and – there was none. The pig brains had flat-lined. But when they later dissected the brain tissue, there was cellular activity for up to 36 hours.

This is clearly a baby step in the direction of maintaining a living brain outside of a body. Four hours after death is a long time, and there would certainly already be a lot of cellular death by that point. If the goal (and this wasn’t their goal) is to maintain a fully functional extracorporeal brain, then it would need to be hooked up to external blood flow within minutes of death, not hours. You can’t just get pig heads from the slaughterhouse.

But there is no theoretical reason why this would not work. If the brains were kept oxygenated throughout the process, and they were hooked up to an external system that fed oxygenated blood with managed CO2 levels and a supply of glucose (basically normal arterial blood), there is no reason why the cells could not survive for a long time. There are likely to be many technical hurdles here, but as a thought experiment it seems plausible.

Continue Reading »


Apr 27 2018

Prioritizing Sustainability in Research

Published under Technology

According to the OECD the world spends about 1.15 trillion dollars a year on research and development. The US spends the most at 0.46 trillion, with China second at 0.41 (but rapidly catching up and likely to exceed the US soon). That is about 2.3% of worldwide GDP. That includes all sources, public and private.

From any perspective, that is a lot of money. It is also a good thing – the world is investing a significant amount of its activity and resources in the future. I don’t know what the optimal percentage for such investment is, and I am sure someone could make a reasonable argument that it should be higher. What I want to discuss here, however, is not how much we invest but how we invest in research. In broad brushstrokes – what should be our research priorities?

If you think about it, where we invest in research essentially determines the path forward that our civilization takes in terms of science and technology. So it’s worth thinking about how that trillion-plus dollars gets spent. Right now we have a decentralized R&D infrastructure, with many different facets. There is a lot of “bottom up” research, meaning that individual researchers, companies, labs, and other institutions are determining for themselves what to research based upon their own priorities. There is also some “top down” research in which large funders, mostly governments, determine research priorities through their granting process. There are strengths and weaknesses to a diffuse system like this, but I think overall it’s pretty good. Essentially free-market forces are at work with some nudges here and there.

Private researchers are largely going to prioritize R&D that makes them and their investors money. That’s fine, as this also produces incentives to make things faster, better, cheaper. All governments can do in this situation is look for perverse incentives and mitigate them through regulations. For example, I strongly believe we should not allow industry to simply externalize the costs of their profit-making enterprises. That amounts to a hidden subsidy, and also creates a perverse incentive to externalize costs (for example, by dumping into the environment).

Continue Reading »


Apr 26 2018

The 3D Printing Revolution

Published under Technology

We are in an interesting phase of developing 3D printing technology – the ability to print real objects in three dimensions. The technology clearly works and has applications. The question is – will 3D printing remain a niche technology, or will it revolutionize manufacturing, how consumers obtain certain items, and even introduce new possibilities in medicine and wearable tech?

It’s easy to get carried away with the possibilities, and I think they are all plausible. But often we confuse the mere ability to do something with the cost-effectiveness and practicality of doing so. We may be technologically capable of 3D printing certain consumer goods, but it may just be cheaper and easier to use more traditional methods of manufacturing. We always have to see how a technology works out in the real world before we know whether it will truly be the transformative tech proponents promise, or whether it will go the way of the Segway.

There is a steady stream of advances in 3D printing technology, which makes me think we are a long way from seeing its full potential. Let’s take a look at a couple of recent ones.

Thermorph Printing

Researchers at Carnegie Mellon University have demonstrated the ability to print flat plastic objects that, when heated, will fold into a predetermined three-dimensional shape. The advantage to this approach is that it is cheaper and faster to print the flat pieces than the solid objects. Also, there are many contexts in which it is more practical to store and ship the flat objects.

Continue Reading »


Apr 24 2018

Ehrlich and the Collapse of Civilization

In 1968, 50 years ago, Paul Ehrlich and his wife published The Population Bomb, which famously predicted mass starvation by the end of the next decade. Ehrlich’s predictions failed largely because of the green revolution, the dramatic increase in agricultural productivity. You would think that being famous for a dramatically failed prediction would bring humility, but Ehrlich is still at it. In a recent interview he argues that the collapse of civilization is a “near certainty” within decades.

Let’s examine some of the logic at work here. First, just because Ehrlich was wrong before, that does not mean he is wrong now. It is certainly cause for skepticism about his current claims, because he may be laboring under the same false premises that drove his previous false predictions. We need to take a look at his claims and see if they hold water.

Ehrlich basically argues in the interview that he was mostly right 50 years ago. He may have gotten the details wrong, but his basic point that overpopulation and overconsumption will eventually doom us is still valid. While this interpretation is transparently self-serving, he is not alone in this opinion. A 2015 opinion piece in the NYT also argued that Ehrlich was essentially right. Paul Murtaugh writes:

Ehrlich’s argument that expanding human populations cannot be sustained on an Earth with finite carrying capacity is irrefutable and, indeed, almost tautological. The only uncertainty concerns the timing and severity of the rebalancing that must inevitably occur.

Well, sure. If you reduce Ehrlich’s argument to – the Earth has finite resources, and so we cannot expand our population without limit – then of course he is correct. But that is a trivialism that adds no real insight. The parts that Ehrlich did add were clearly wrong.

Continue Reading »


Apr 23 2018

What Were You Expecting?

Published under Neuroscience

The art of magical illusion is partly exploiting people’s expectations. Our brains encode a model of how we expect the world to work. When we let go of something, it should fall to the ground. If it doesn’t, we are surprised. This also means that our behavior is predictable – we will tend to look for the object to fall to the ground, meaning the magician will know where we are going to look and can take advantage of that to do things out of our sight.

In our effort to better understand how the brain works, neuroscientists are looking at how the brain reacts to unexpected stimuli. This type of research can have a dual function – looking at the anatomical correlates of a mental phenomenon, and also validating the mental phenomenon itself (because it has an anatomical correlate). This also makes the research tricky, because the questions we ask (how we conceptualize mental phenomena) will dramatically affect the outcome.

For this reason no one study is ever going to tell a complete story. It can, at best, be one piece to a very large and complex puzzle.

The recent study, however, is looking at a fairly straightforward phenomenon – what happens in the brain when it is confronted with unexpected events? The researchers exposed subjects to visual and olfactory stimuli, pictures paired with specific odors. The odors were of food, either sweet or savory. After the subjects had been exposed to the pairs enough to form a memory of the correlations, the researchers then showed them a picture with the “wrong” odor. They did this while monitoring the subjects’ brain activity with fMRI scans. (Functional MRI scans look at blood flow in the brain, which is a good way to infer brain activity.)
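To make the design concrete, here is a minimal sketch of how an expectation-violation paradigm of this sort might be structured. The stimulus names, trial counts, and violation rate are hypothetical placeholders for illustration, not details taken from the study itself.

```python
import random

# Hypothetical picture-odor pairings (placeholders, not the study's actual stimuli)
pairings = {
    "cake_picture": "sweet_odor",
    "pizza_picture": "savory_odor",
}

def build_trials(n_learning=40, n_test=20, violation_rate=0.25):
    """The learning phase establishes the picture-odor associations; the test
    phase occasionally presents the 'wrong' odor to violate that expectation."""
    trials = []

    # Learning phase: each picture is always paired with its matching odor
    for _ in range(n_learning):
        picture = random.choice(list(pairings))
        trials.append({"phase": "learning", "picture": picture,
                       "odor": pairings[picture], "expected": True})

    # Test phase: on a fraction of trials, swap in the mismatched odor
    for _ in range(n_test):
        picture = random.choice(list(pairings))
        if random.random() < violation_rate:
            odor = next(o for p, o in pairings.items() if p != picture)
            expected = False  # surprise trial: the contrast of interest for the fMRI analysis
        else:
            odor = pairings[picture]
            expected = True
        trials.append({"phase": "test", "picture": picture,
                       "odor": odor, "expected": expected})

    return trials

trials = build_trials()
print(sum(1 for t in trials if not t["expected"]), "violation trials out of", len(trials))
```

The analysis then essentially comes down to comparing brain activity on the expected trials with activity on the surprise trials.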

What they found is that the midbrain became more active when the subject was confronted with a surprising stimulus. The midbrain is part of the brain stem, which is a very primitive part of the brain, shared with all vertebrates. For this reason it is sometimes referred to as our “lizard brain.” The brain stem is a relay center, and controls many basic functions, like breathing. The midbrain in particular is involved in relaying sensory information.

What neuroscientists have found over the years is that even the most primitive parts of the brain are highly involved in processing sensory information. Much of the basic processing of vision and sound occurs before the signals even get to the cortex. The midbrain specifically appears to be involved in filtering sensory input, and determining what we should pay attention to.

Continue Reading »


Apr 20 2018

Update on Mandatory GMO Labeling

Published under Technology

A recent commentary in RealClear Science makes a simple but important point – it is difficult for the government to properly regulate what it does not understand. That observation can apply to many things, as we recently saw with the questioning of Facebook CEO Mark Zuckerberg. The display was quite embarrassing, leading to countless parodies of the aged congress critters asking clueless questions of the young tech giant. It called to mind the infamous comment by Senator Ted Stevens, who described the internet as a “series of tubes.” For all the mockery it earned, that comment was part of important testimony regarding net neutrality.

The broader issue here is – how can our elected leaders hope to regulate cutting edge science and technology that they don’t understand? This is not limited to internet technology, but also to things like genetic engineering, CRISPR, cloning, stem cells and other biotech. How about artificial intelligence and robotics, or issues related to our energy infrastructure and climate change?

More and more, scientific literacy is a critical virtue we should demand of our politicians. Yet questions about important scientific topics hardly rate during elections.

Just one such important topic is the regulation of genetically modified organisms – GMOs. In 2016 Vermont became the first state to implement a law requiring labeling of foods that contain GMOs. This prompted a federal law, signed by Obama, that supersedes the state law. The federal law also requires labeling, but is less strict, allowing for scannable codes or telephone numbers that consumers can call to get more information. The USDA’s guidelines on this law are due this summer.

I am strongly against mandatory GMO labeling for several reasons, but the primary reason is that the very concept of “GMO” is vague and imprecise. You cannot regulate something that you cannot define. You can, of course, simply make up an operational definition (like the USDA did for “organic”) but if there is no real scientific meaning behind that definition, what exactly are you regulating?

The current working definition of genetic modification is “rearranging, eliminating, or introducing genes in order to get a desired trait.” Of course, by that definition all hybrids are GMOs. When you crossbreed two varieties you are introducing new genes. What about mutation breeding, the use of radiation or chemicals to create mutations in the hope that the occasional mutation will be beneficial? Why isn’t that genetic modification?

Continue Reading »


Apr 19 2018

The Real Problem with Echochambers

Published under Logic/Philosophy

It has rapidly become conventional wisdom that the widespread use of social media has resulted in an increase in the “echochamber effect.” This results from people only consuming media that is already in line with their existing beliefs and ideology. This is nothing new – psychologists have long documented that people are much more likely to access information that reinforces their existing beliefs and biases, and much less likely to engage with information that directly challenges their beliefs.

One of the hopes for the internet was that it would help people break out of their self-imposed echochambers of thought, by making a greater diversity of information, opinions, and perspectives a mouse click away. That dream was thwarted, however, by the real world. Social media giants, like YouTube and Facebook, trying to maximize their own traffic, developed algorithms that placed information in front of people that they were most likely to click – meaning the kind of information they were already consuming. Watch one video about dog shows, and you will find a helpful list of popular dog show videos on the right of your browser. The next thing you know your mild interest in dog shows becomes a fanatical obsession. OK, maybe not, but that is the concern – self-reinforcing algorithms will tend to have a radicalizing effect.
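To illustrate the dynamic being described, here is a minimal, hypothetical sketch of a click-maximizing recommender. The catalog, scoring rule, and names are invented for illustration – this is not how YouTube’s or Facebook’s actual algorithms work – but it shows how optimizing for what a user already clicks feeds back on itself.

```python
from collections import Counter

# A toy catalog where each item is tagged with a single topic (hypothetical data)
catalog = [
    {"id": 1, "topic": "dog_shows"},
    {"id": 2, "topic": "dog_shows"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "cooking"},
    {"id": 5, "topic": "science"},
]

def recommend(click_history, n=3):
    """Rank items by how often the user has already clicked their topic.
    Past clicks make similar items more prominent, which makes them more
    likely to be clicked again - the self-reinforcing loop described above."""
    clicks_per_topic = Counter(item["topic"] for item in click_history)
    ranked = sorted(catalog,
                    key=lambda item: clicks_per_topic[item["topic"]],
                    reverse=True)
    return ranked[:n]

# One click on a dog show video...
history = [catalog[0]]
for _ in range(5):
    history.append(recommend(history)[0])  # the user clicks the top recommendation

print(Counter(item["topic"] for item in history))
# Counter({'dog_shows': 6}) - the simulated feed converges on a single topic
```

Real recommendation systems are vastly more sophisticated, but the basic dynamic – serving more of whatever already gets clicked – is the one that raises the echochamber concern.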

There are also virtual networks on the web clearly developed to function like echochambers. There are blogs and channels dedicated to one specific world view, or to one opinion on a specific topic. These sites are curated to be friendly to those with the same view, who are welcomed as compatriots. If you disagree with the approved view of the site, you are a troll. Your comments are likely to be blocked and you may even be banned. Of course, people have the right to protect their sites from truly disruptive and counterproductive behavior, but what makes a troll is in the eye of the beholder.

There are also metasites that curate multiple other sites, as well as news items, that cater to one world view, whether it be a political faction, specific activism, or ideology.

Supporting this echochamber narrative is the fact that people are becoming more polarized, tribal, and emotional over time. People hold more negative views of their political opponents, and are less likely to think that their opponents, even though they disagree with them, have a valid perspective.

The hope of the internet seems to have backfired. Rather than bringing people together, the internet has facilitated people separating themselves into multi-layered factions. The web is tribalism on steroids.

Continue Reading »


Apr 17 2018

The Rise of the Dinosaurs

Published under Evolution

It’s clear from the fossil record that at times in the history of Earth there have been massive and geologically rapid changes in the assemblage of life. For each such change, however, there is the question of what caused it. Perhaps the most famous is the K-Pg extinction (Cretaceous-Paleogene extinction) about 66 million years ago, known for the near extinction (except for birds) of perhaps the most iconic prehistoric clade, the dinosaurs.

Although there is still some legitimate debate about the relative contribution of various factors, the coup de grace for the non-avian dinosaurs appears to have been a large impact.

Mass extinctions, however, are not the only type of rapid change that needs explaining. There is also the rapid proliferation of new evolutionary groups. The two types of events – mass extinctions and the proliferation of new groups to dominance – may often be linked. It makes sense that a mass extinction will leave many niches unfilled. Mass extinctions leave an evolutionary vacuum, and surviving species get sucked in, rapidly adapting to fill all the voids.

That does not mean, however, that all instances of new groups coming to dominance are in response to a recent mass extinction. There could be other causes, such as climate changes that do not result in mass extinction, or a tipping point in the ecosystem in which previously stable relationships are disrupted. Perhaps a group just hits upon a new ability or strategy that gives it a significant advantage over competitors.

That is arguably what happened with humans. When our ancestors discovered fire and cooking, it enabled them to more efficiently extract nutrients from their food. This turned out to be a huge innovation, supporting larger brains and larger populations. With hunting and cooking, humans spread throughout the world, causing probably the greatest ecological disruption by one species in the history of the Earth.

Continue Reading »


Apr 13 2018

Free Speech Crisis Revisited

Three weeks ago I wrote about a recent survey of attitudes on college campuses regarding free speech. I and many other bloggers used the new data as an opportunity to make a few skeptical points.

First, the data does not support the popular narrative that there is a free speech “crisis” on college campuses. The long term trends show that support for free speech is increasing, and that college education and being liberal both correlate with more support for free speech. These trends directly contradict the standard narrative that liberal college professors have “run amok” with their political correctness.

In response Sean Stevens and Jonathan Haidt wrote a couple of articles arguing that the skeptics were wrong on this issue. To be as fair as possible, I do think they have one small point to make, but overall I think they are tilting at a straw man of their own making. I also think they are making the exact kind of errors of biased interpretation that they are accusing the skeptics of making.

The legitimate point they make is that while the long term trends are positive toward free speech, recent data suggests that the current generation (iGen) entering college may be reversing that trend. At least, we should consider this recent data in formulating any opinions about the current state of affairs.

Continue Reading »


Apr 12 2018

A Spaceplane Update

Published under Technology

There are some technologies that I have been reading about essentially my entire life. It’s interesting to now read breathless articles about a new exciting technology, and realize that I read similar articles in the 1980s.

Of course, many technologies have materialized in the last 30 years, but some seem frustratingly difficult, and it’s hard to even know if we are really any closer now than when I was 20.

One of those frustrating technologies is the single-stage-to-orbit spaceplane. A recent article from the BBC announces a new initiative to develop a spaceplane engine, and everyone seems very enthusiastic about what will result (as I was in the 1980s when I first read about such engines).

For a little background, the term “spaceplane” can refer to several types of vehicles. Essentially the term is used for any vehicle that flies like a plane in the atmosphere but can also go into space. Atmospheric flight, however, might only include the return to Earth phase. So the Space Shuttle is a spaceplane. It takes off vertically using rockets, but when it comes back to the ground it uses its wings for lift and lands like a plane.

So far the Space Shuttle is the only manned spaceplane to have reached orbit. The Soviet Buran made a single orbital flight, but it flew unmanned, and SpaceshipOne carried a pilot only on suborbital flights. Other spaceplanes, such as the X-15 and the X-37, also fall short of the full package – the X-15 was air-launched from a carrier aircraft and never reached orbit, while the unmanned X-37 does reach orbit, but only by launching vertically on a conventional rocket.

Continue Reading »

