Y2Krazy

January 2001
By Robert Novella

We survived the arrival of a new millennium without significant incident, despite numerous predictions of societal collapse from a host of doomsayers and self-styled prophets of apocalypse. What were the origins of the anticlimactic Y2K bug and the fear it inspired?

Much to the surprise of many people, the world did not end this past New Year’s Eve. Despite all the dire predictions, the power grids did not fail, planes did not fall out of the sky, and nuclear missiles were not accidentally launched. Doomsayers preaching about these inevitabilities were ubiquitous. People were so convinced of these predictions that many liquidated their bank accounts, stocked up on water and canned goods, and headed for the hills. Unless you’ve been off planet for the past few years, the term Y2K has already flashed through your mind. It is, of course, more than just a clever shorthand for “Year 2000”; it has become synonymous with the king of computer bugs, the one that was supposed to bring modern civilization to its knees at the stroke of midnight on December 31st, 1999. What is the origin of this computer problem, and why did so many people so firmly believe that a worldwide apocalypse was imminent?

A Brief History of Y2K

The Y2K or Millennium bug panic can be traced back to something as innocuous-sounding as the date representation used by modern computers. There are many ways to represent a date in computer software. The practice that became standard uses six-digit dates (dd/mm/yy) instead of eight-digit dates (dd/mm/yyyy). Two digits are fine for calculating dates that fall within the same century, but once the century or millennial boundary is crossed the output can become nonsensical, because the years 2000 and 1900 become indistinguishable. For example, if it’s 1999 and I was born in 1959, the software will calculate my age as 40 (99 – 59). In the year 2000, however, the result would be negative 59 (00 – 59).
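For the programming-inclined, here is a toy sketch of that arithmetic. The code and its names are mine and purely illustrative; real legacy systems stored dates in many different formats.

```python
# A toy illustration of two-digit year arithmetic. Purely illustrative;
# not taken from any actual legacy system.

def age_two_digit(current_yy, birth_yy):
    """Compute an age the way a naive two-digit-year program would."""
    return current_yy - birth_yy

# In 1999 the calculation works as expected...
print(age_two_digit(99, 59))   # 40

# ...but after the rollover, "00" is indistinguishable from 1900,
# and the very same code reports a negative age.
print(age_two_digit(0, 59))    # -59
```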

A less well-known date misrepresentation that is also considered part of the millennium bug involves leap year calculations. Our Gregorian calendar uses three simple rules to determine whether a year is a leap year. The one almost everyone knows is that the year must be evenly divisible by four. The second rule states that if the year is also divisible by 100, then it is not a leap year. Thus double-zero years like 1900 and 1800 were not leap years, even though they are divisible by four. The third rule is the exception to the exception: if the year is also divisible by 400, then it is a leap year after all. Programmers typically failed to take this last rule into account, and therefore on February 29, 2000 some computers might believe it’s March 1st. Now if someone starts talking about Leap2K, you’ll know what he or she is talking about. (Unrelated geek fact: due to the minuscule, gradual slowing of the Earth’s rotation, every now and then a leap second is added to the end of a month to make up the difference. Don’t worry, this won’t affect your computer.)
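Again for the curious, here is an illustrative sketch of the full rule alongside the truncated version many old programs used. The code is my own example, not drawn from any particular affected system.

```python
# The three Gregorian rules, plus a "buggy" version that omits the
# divisible-by-400 exception, as many old programs did.

def is_leap_correct(year):
    """Divisible by 4, except centuries, except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_buggy(year):
    """Applies only the first two rules."""
    return year % 4 == 0 and year % 100 != 0

print(is_leap_correct(2000))  # True  -- 2000 really did have a February 29
print(is_leap_buggy(2000))    # False -- a buggy system would jump to March 1
print(is_leap_correct(1900), is_leap_buggy(1900))  # False False -- both agree here
```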

How a computer would handle such date problems was the crux of the entire apocalyptic scenario. Crucial computer systems that rely on date calculations were portrayed as time bombs that would explode on January 1st, taking modern civilization with them.

The emergence of 2-digit dates as the de facto standard is usually attributed to the scarce and expensive computer memory and storage of the ’50s and ’60s. My home computer has a 10-gigabyte hard drive and 128 megabytes of memory. This is not uncommon today, and it cost me a fraction of what it would have cost just five years ago. This trend has continued unabated for many years, but back in the early days of electronic computing a minuscule amount of RAM (random access memory, the working memory of a computer) and hard drive space cost many thousands of dollars. The repeated use and storage of any superfluous data therefore had to be kept to a minimum. Consequently, the seemingly unnecessary century digits of the year were dropped; a year simply became “65,” for example, instead of 1965. This explanation is the most common, but it might be apocryphal. Economist Edward Deak believes that the seeds of the Y2K fiasco were truly sown in the early 20th century by the pioneer of modern information processing, Herman Hollerith.

Herman Hollerith (1860-1929), a statistician and inventor, inadvertently started a new industry in the 1880’s because of a weaving loom. Jacquard looms, automated textile machines, employed a series of punched cards to quickly and inexpensively produce intricately patterned cloth. The pattern of holes in each card controlled how the loom engaged other parts of the machine, thereby determining the cloth’s pattern. Hollerith co-opted this concept and produced machines and cards to process information instead of cloth. (The first application was the collation of data for the 1890 census, which probably would have taken more than a decade to complete had it not been for this invention.) The company he created from this inspiration would eventually, in 1924, become one of the most influential and powerful computer companies, IBM. Hollerith punch cards were popular for many decades, but it is their limited capacity that might ultimately be responsible for the Y2K panic of the late 1990’s. Early programmers, faced with only 80 columns of information per card, had to economize as much as possible. Truncated dates on punch cards soon became the norm for the industry.

Ultimately, the Y2K computer bug can be attributed to three causes. As described above, limited computer memory (or space on punch cards) forced programmers to store four-digit years as two digits. The second most commonly cited reason is the unexpected longevity of the countless programs written years ago. A surprising number of programs written in the 1960’s and 1970’s are still in use today. I’m sure that most of the programmers responsible for this code were more concerned about finishing their current projects than about what would happen if their programs were still running in the year 2000. The amount of such code that needed to be examined has been estimated at 250 billion to well over 1 trillion lines. It is this sheer quantity that proved so daunting. A good analogy I recently came across compares going through all this code to fixing all the chairs in the United States: fixing one chair is not difficult, but there are so many of them that the task would require a truly herculean effort. The final reason, one I had not previously considered, is blamed on the once ubiquitous preprinted forms. Today we are spoiled by programs that make it relatively easy to format and print forms, documents, letters and so much more in this still far-from-paperless society. Early computers, however, required laborious preparation, and each element of the output had to be carefully accounted for. Since forms commonly arrived with the number “19” preprinted in the date section, it was easier to simply store the year as two digits.

Doomsayers and Naysayers

Who in this country is not familiar with the image of a rather odd-looking person walking down the street carrying a sign that says “The End of the World is Near” or some equally dire pronouncement? If you’ve never actually seen someone like that, then you’ve certainly seen the scene played out in some movie or on television. In the past year or so it seemed that these odd people had somehow duplicated themselves over and over and spread throughout the world to preach their apocalyptic beliefs. What follows are a few of my favorite quotes regarding the end of the world on 1/1/00.

“We must also prepare ourselves for the very real possibility that the outcome of this situation might well be the total extinction of the entire human race. It really could be worse than I am predicting and I really am being optimistic. First, I would like to assure you that I am not some kind of nut anxiously waiting for the end of the world.…”
—Consultant Cory Hamasaki’s newsletter, November 1998

“At 12 midnight on January 1, 2000… most of the world’s mainframe computers will either shut down or begin spewing out bad data. Most of the world’s desktop computers will also start spewing out bad data. Tens of millions — possibly hundreds of millions — of pre-programmed computer chips will begin to shut down the systems they automatically control. This will create a nightmare for every area of life, in every region of the industrialized world.”
—Christian Reconstructionist Gary North, early 1997

“You have a month to live. Are you comfortable? Got enough food in the house? Electricity working? Got fuel for the car? Well, come January 1, 2000 all that is going to change.
Well that little Y2K problem is going to collapse all the economies in the world. No more food, electricity, fuel, communications etc. all those things are going to shut down like that, ‘snap’ (maybe a week or so) with them multi-warheads flying around and everybody eating everybody and the survival of the fittest will become the law until the antichrist takes over the world.”
— Robert Lavelle, Internet Doomsayer

This view was not promulgated only by self-promoting, send-me-money-for-my-book hucksters or end-of-the-world nutcases. Common, ordinary people were genuinely alarmed that disaster was imminent and unavoidable. My own sister was utterly convinced that the proverbial excrement would hit the fan. When all the electricity went dead in my house moments after midnight, she shouted triumphantly, “I knew it, I knew it!” (It turns out the brief power outage was the result of a practical joke played by my brother, who slipped out unnoticed and threw the main power switch in the garage.)

A January 1999 TIME/CNN poll reported that 59 percent of Americans were somewhat to very concerned about the Y2K computer problem. Astoundingly, almost ten percent expected nothing less than the end of the world as we know it. How can such apocalyptic beliefs propagate so quickly through society and thoroughly take hold of so many people?

Thoughts and beliefs with favorable reproductive qualities can spread from person to person like colds in the winter, or the way red hair runs in families. Successful ideas out-compete others in a long-term-memory survival of the fittest. Zoologist and author Richard Dawkins refers to these replicating units of cultural information as memes in his intriguing book “The Selfish Gene”; the term “meme” itself was coined as an analogue to the biological gene (Dawkins ’76). In Aaron Lynch’s recent Skeptical Inquirer article “The Millennium Thought Contagion,” he describes this concept as “thought contagion” to denote the virus-like ability of ideas to infect and spread throughout the populace. The field of memetics tries to express this concept mathematically and to model how memes spread and evolve.
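As a rough illustration of the kind of modeling memeticists have in mind, here is a toy simulation of my own in which an idea spreads by contact and is abandoned at a steady rate. The parameters are invented for illustration only; this is not Lynch’s actual model.

```python
# A minimal, epidemic-style sketch of meme spread (illustrative only).
# "Susceptible" people can be converted by contact with current believers;
# believers also abandon the idea at a fixed rate.

def simulate_meme(population=1_000_000, believers=100,
                  transmission_rate=0.4, dropout_rate=0.05, days=60):
    susceptible = population - believers
    history = []
    for _ in range(days):
        # new conversions are proportional to contact between the two groups
        new_believers = transmission_rate * believers * susceptible / population
        dropouts = dropout_rate * believers
        believers += new_believers - dropouts
        susceptible -= new_believers
        history.append(round(believers))
    return history

if __name__ == "__main__":
    curve = simulate_meme()
    print(curve[::10])  # believer counts sampled every 10 days
```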

The Y2K bug as a cause of the millennial apocalypse evolved into a virulent and powerful meme because of its quick transmission, a receptive public, and its persistence in people already infected (Lynch ’99). The world-girdling internet was infested with Y2K doomsday web pages, chat rooms, and newsgroups; a quick search for the term “Y2K” recently netted me 25,027 responses. The press, too, fell in love with the Y2K bug, inundating us with news stories, articles and commentaries about the possible dangers of Y2K and people’s reactions to it. In a short period of time, few people remained unexposed. The effect was like a flu spreading through a day care center.

In the waning months of 1999, however, this popularity and ease of transmission worked against the millennial madness. It gave the skeptics and voices of reason a platform to spread their own memes about the unrealistic nature of the impending doomsday predictions and the successful worldwide measures being taken to ameliorate any possible problems. People who had already embraced the doomsday meme, though, tended to keep believing because of its built-in persistence: the perceived price of non-belief was just too high. The prospect of famine, riots, and the death of loved ones is a powerful motivator, and it is one of the reasons for the meme’s staying power.

Conclusion

It is now clear that the fear surrounding the rollover to the year 2000 was completely unfounded. To the surprise of many, as the new year swept from time zone to time zone on January 1st, the predicted wave of technological failure and societal collapse never arrived. Part of the credit must go to the massive effort that was exerted worldwide to combat the bug. Indeed, this effort and expense (hundreds of billions of dollars) allayed any fears I used to have and had a similar effect countrywide. I was therefore not surprised that America’s computer infrastructure survived the new year. I was surprised, however, that countries that virtually ignored the Y2K bug also experienced only minor glitches. This strongly suggests that the potential for serious computer problems resulting from Y2K was much smaller than even the optimists predicted. It also means that the predictions of the Y2K doomsayers were not merely off; they were completely and hopelessly wrong. Hopefully, now that the deadline has passed, many will look back at the hysteria and treat the next contagious meme with more skepticism.

References:

1) Dawkins, R. (1976). The Selfish Gene. Oxford University Press, New York.
2) Lynch, A. (1999). “The Millennium Thought Contagion: Is Your Mental Software Year 2000 Compliant?” Skeptical Inquirer, Vol. 23, No. 6.