Aug 23 2018

The Superconductivity Hubbub

A recent paper, posted to the arXiv preprint server, claims a method for making a superconducting material at ambient temperature and pressure. The authors, Dev Kumar Thapa and Anshu Pandey, are from the Indian Institute of Science, and their paper has garnered a lot of attention. Recently, however, Brian Skinner of MIT posted another paper on arXiv. Skinner noticed a suspicious pattern of repeating noise in two sets of data from the Thapa-Pandey paper. This could be a signature of fabricated data.

Scientific American has a good summary of the whole story, but that’s the quickie version. Skinner did not make any accusations; he simply published his analysis with a question asking the original authors to explain the repeating pattern. Thapa and Pandey have responded only to say their results are being replicated. The rest of the physics community is not satisfied with this response and is calling for them to send their material to outside labs for completely independent testing.

Another wrinkle to the story is that Pratap Raychaudhuri, a physicist at the Tata Institute of Fundamental Research in India, floated a hypothesis that perhaps the noise is not noise, but a signal resulting from “the natural rotation of particles within a magnetic field.” If that’s the case, then the pattern should replicate. So we are still left with the need to independently replicate the experiments.

The stakes here are high because so-called room temperature superconductivity is one of the holy grails of materials science. Superconductivity means that electricity can flow through a medium without resistance, and therefore with no loss of power. A room-temperature superconductor could therefore transform electronics, the power grid, and anything using super powerful magnets (like maglev trains and MRI scanners).

The current dominant theory of how superconductivity works in certain materials is that two electrons can come together to form what’s called a Cooper pair. This pair of electrons can then travel long distances through the material without resistance. However, Cooper pairs can only exist at very low temperatures. So the quest has been to find materials that allow Cooper pairs to exist at higher and higher temperatures.

All of this suggests to me that Thapa and Pandey probably did not commit deliberate fraud. It would become immediately apparent that they had not created an ambient-temperature superconductor. There are immediate practical applications just waiting to happen, and of course they wouldn’t materialize if the science did not actually work. I know that fraud still happens. Any faked science should eventually be found out, because if it’s not real it won’t replicate, and anyone trying to build on the results will find contradictory outcomes. I suspect most scientists who fake data think they are still correct – they are just cutting corners because science is hard and takes a lot of resources. Either that or they are so desperate they are only looking at the short term.

What happens more often is that scientists simply make mistakes, or they take shortcuts they think are valid but which actually constitute p-hacking or bogus methods that render the data useless. This could be more of a Pons and Fleischmann situation – the scientists who prematurely announced achieving cold fusion. They were sincere, just wrong, and their mistake was announcing to the press before peer-review. Again, any fame from falsely claiming to have produced cold fusion would be short-lived. In 10 years we either will or won’t be running the world on this new form of energy.

So I guess the lesson here is – don’t fake data for science that has immediate practical applications.

I am not familiar enough with the technical details of the Thapa and Pandey study to know if it is plausible that the repeating noise was an honest mistake. But I can give them the benefit of the doubt until the study is replicated and the meat-grinder of peer-review works its way through to some consensus. At the least it will be an interesting scientific drama to follow.

The other point I wanted to emphasize here is the bigger issue of using data analysis to screen for error and fraud. Journals now use software to screen submitted manuscripts for plagiarism. That’s great, and a good way to prevent embarrassing episodes before they occur. The same approach could theoretically be extended so that science journals screen not only for plagiarism (which is already done – even self-plagiarism, or trying to publish the same paper multiple times under different titles), but also for data manipulation. Part of peer-review is to screen for signs of fraud, but I don’t think science journals routinely put all submitted data through software algorithms designed to screen for manipulation.

Again – some of these techniques and software programs already exist, but I wonder how extensively they are used and how comprehensive they are.

The idea is that genuine-looking data is hard to fabricate. A scientific experiment may generate massive amounts of data. In order to properly fake data you would need to mimic the process that generates the data, but with a shift in the desired direction. That is really hard to do without leaving signs of the manipulation behind. People, for example, are really bad at faking randomness. We also don’t have the intuitive statistical chops to know what a genuine set of data should look like. Therefore, statistical examination of faked data should be able to reveal non-random patterns where there should be randomness, repeats in the data, or other signs of manipulation.
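To make this concrete, here is a minimal sketch (not Skinner’s actual analysis – the function name, window size, and simulated data are my own assumptions) of one way such a screen could work: detrend two supposedly independent measurement series and check whether their noise residuals correlate. Genuinely independent noise should show near-zero correlation; copied noise will correlate strongly.

```python
import numpy as np

def noise_correlation(series_a, series_b, window=5):
    """Detrend two equal-length measurement series with a moving
    average, then return the Pearson correlation of the residuals.
    Independent noise correlates near 0; duplicated noise near 1."""
    a = np.asarray(series_a, dtype=float)
    b = np.asarray(series_b, dtype=float)
    kernel = np.ones(window) / window
    # Residual = raw data minus its smooth trend (moving average)
    res_a = a - np.convolve(a, kernel, mode="same")
    res_b = b - np.convolve(b, kernel, mode="same")
    # Trim convolution edge effects before correlating
    res_a, res_b = res_a[window:-window], res_b[window:-window]
    return np.corrcoef(res_a, res_b)[0, 1]

# Simulated illustration (hypothetical data, not from the paper):
rng = np.random.default_rng(0)
trend = np.linspace(0.0, 1.0, 200)
shared_noise = rng.normal(0, 0.05, 200)
original = trend + shared_noise            # first "measurement"
duplicated = trend + shared_noise          # noise copied verbatim
independent = trend + rng.normal(0, 0.05, 200)  # honest re-measurement

print(noise_correlation(original, duplicated))   # ~1: red flag
print(noise_correlation(original, independent))  # near 0: expected
```

A real screening tool would be more sophisticated (cross-correlating at every offset, across all figure pairs in a submission), but the principle is the same: the signal may legitimately repeat between runs, the noise should not.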

That is basically what happened with the Thapa and Pandey paper. Skinner noticed a repeating pattern in what should have been independent random noise. He noticed it by accident – he was not looking for it. It seems that a quick run through a program designed to find such patterns would have easily picked up the anomaly. The arXiv is a non-peer-reviewed preprint server – a place for scientists to put their work up for their peers to look at while it is still in the publication process. It’s a great way to quickly share data, but it is not peer-reviewed prior to posting.

Skinner finding an anomaly is exactly why the arXiv is a great idea. Now we have to let the self-corrective aspect of science play itself out. I agree with the scientists calling for completely independent replication. That is usually the final arbiter in science – real phenomena replicate, errors and bad methods do not.
