Mar 21 2017

Y2K and the Year 2038 Problem

I was recently asked about the year 2038 problem as it relates to the Y2K bug. Specifically, since the Y2K bug seemed like a non-event, should we similarly not worry about the year 2038 problem?

Lessons from Y2K

At this point some of you may not know what I am talking about, so first some history. When the modern computing age was being developed back in the 1950s, memory was at a premium. For this reason dates were represented by six digits – MM/DD/YY. Just two digits were used for the year, on the assumption that all years had the prefix 19. So 01/01/80 was January 1st, 1980.

The first person to recognize that this was a potential problem was Robert Bemer, in 1958. Apparently he spent the next couple of decades trying to convince his fellow programmers it was a problem, but no one listened. Talk of the year 2000 bug, or millennium bug (often shortened to the Y2K bug), didn’t really spread until the 1980s, and no one took it seriously until the 1990s.

The potential problem was that once the date turned over to January 1st, 2000, computers would record it as 01/01/00 and treat it as 1900. This might cause systems to crash, and by 2000 much of our society was controlled by computers, from banking to air traffic control. In the 1990s the Y2K bug went from a non-problem to a mild panic, with the most dire warnings predicting the collapse of civilization.
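Just to make the failure mode concrete, here is a minimal sketch in C (my own illustration, not code from any actual system) of how a comparison on two-digit years goes wrong at the rollover:

#include <stdio.h>

/* A toy example: with only two digits stored, the year 2000 ("00")
   sorts as earlier than 1999 ("99"). */
int main(void) {
    int record_yy  = 99;  /* a record created in 1999 */
    int current_yy = 0;   /* the clock rolls over to 2000 */

    if (current_yy < record_yy)
        printf("The current year (19%02d) now looks earlier than the record's year (19%02d).\n",
               current_yy, record_yy);
    return 0;
}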

So the first lesson is that people tend to be short-sighted, thinking of now more than even the near future. It seems that computer programmers thought the year 2000 was so far off, and that computers and software would be so different by then, that they didn’t need to worry about it – until literally the 1990s. The immediate benefit of saving memory was more important than a potential problem decades away. Let future programmers worry about that.

Everyone, it seems, underestimated the longevity of software architecture and how deeply embedded it would become. As a society we ignored the problem, then panicked when it came close and tried to fix it in a mad rush.

It is estimated that government and private expenditure to fix the Y2K bug totaled about $100 billion. When the year 2000 rolled around – nothing happened. None of the dire warnings about the Y2K bug came to pass. This led many to believe that it was never a threat in the first place. The recent question I received was essentially asking just that: did we avert a Y2K crisis, or were the warnings overblown to begin with?

Expert consensus is that the $100 billion was well spent. There was no Y2K crisis because of all the efforts to fix the problem – changing millions of lines of code so that dates would be represented by eight digits instead of six, with four digits for the year. Of course, this creates a Y10K problem, but I guess we can let future programmers worry about that.

Here is the next lesson – successfully preventing problems results in a non-event. If you do your job properly, no one notices, or they may even question the need for your job in the first place. No one notices the terrorist attack that never occurred. No one notices all the diseases that are not happening because of vaccines.

Further, we sometimes have to prepare for the worst-case scenario and then hope it never occurs. This is what happened with the H1N1 flu. The CDC saw it coming, added H1N1 to the flu vaccines for that year, and then the H1N1 epidemic was a fizzle. Then they were criticized for exaggerating the risk, but that is not what happened. There was a range of possibilities, they prepared for a reasonable estimate of the more severe end of the spectrum, and we got lucky. Of course, if they underprepared and we had a severe epidemic, everyone would have blamed them.

That brings us to the next lesson – hindsight is 20/20. Everyone is a Monday-morning quarterback. Preparedness is about reasonably accounting for the range of possible outcomes. Our ability to extrapolate into the future is limited, and we should err on the side of being a little cautious. This is a rational application of the precautionary principle.

There are people whose job it is to do this, and we should not criticize them when nothing happens or accuse them of hysteria when the dire end of the spectrum does not manifest. They should be judged on how they allocated resources based upon what was known at the time.

So What About the 2038 Problem?

There is another computer glitch in the works, also based on how computers count time. This one is more limited in distribution than the Y2K bug, but there is still the potential for problems. Many computers count time as the number of seconds elapsed since an arbitrary start date, usually January 1, 1970 (called the “epoch”). Systems that store this count as a signed 32-bit integer can only count up to 2,147,483,647 seconds, which gets us to 03:14:07 UTC on Tuesday, 19 January 2038. At that point the counter will wrap around to the most negative value it can hold, which will be interpreted as December 13, 1901.
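If you want to see the arithmetic, here is a small illustration in C (again my own sketch, not any particular embedded system’s code) of a signed 32-bit seconds counter running out of room:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Decode a 32-bit seconds-since-epoch value as a UTC date. */
static void show(int32_t seconds_since_epoch, const char *label) {
    time_t t = (time_t)seconds_since_epoch;
    struct tm *utc = gmtime(&t);  /* may return NULL on platforms that reject pre-1970 dates */
    if (utc != NULL)
        printf("%s%s", label, asctime(utc));
}

int main(void) {
    /* 2,147,483,647 seconds after the epoch: Tue Jan 19 03:14:07 2038 UTC. */
    show(INT32_MAX, "Last representable second: ");
    /* One tick later the counter wraps to the most negative 32-bit value,
       which decodes as Fri Dec 13 20:45:52 1901 UTC. */
    show(INT32_MIN, "After the wraparound:      ");
    return 0;
}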

The code that uses this kind of time format tends to run in embedded systems rather than on desktop computers. Embedded systems are found in cars, transportation technology, communication devices, and other technology. Even though 2038 is more than 20 years away, it is possible that some of these systems will still be in use at that time.

The fix is to use 64-bit encoding, which can count enough seconds to last about 292 billion years. In fact, we could use 64-bit encoding and count milliseconds, or even microseconds, instead of seconds – even at microsecond resolution there is still enough room for roughly 300,000 years. This would give computers’ time stamps higher resolution. In any case, we should settle on a standard and use it. It seems to me that 300,000 years is a comfortable margin for any such technology.
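As a rough sanity check on those numbers (plain arithmetic, not a proposed implementation), a signed 64-bit counter holds about 9.2 quintillion ticks:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    const double seconds_per_year = 365.25 * 24 * 3600;
    const double max64 = (double)INT64_MAX;  /* about 9.2e18 ticks */

    /* Counting seconds: roughly 292 billion years. */
    printf("Seconds:      ~%.0f billion years\n", max64 / seconds_per_year / 1e9);
    /* Counting microseconds: roughly 292 thousand years. */
    printf("Microseconds: ~%.0f thousand years\n", max64 / 1e6 / seconds_per_year / 1e3);
    return 0;
}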

Since we have 20 years to phase out 32-bit time stamps, I am not worried that the 2038 problem will actually cause any crashes. It will also not take a herculean effort like fixing the Y2K bug did. But the issue should not be ignored, and 32-bit time systems should be phased out now to make sure there are no problems in 2038.

Conclusion

In a way the Y2K problem was a giant psychological experiment on our modern society. It demonstrated that collectively we tend to ignore problems that seem far off, even just a couple of decades away. We will trade short-term benefits for long-term problems, and let our future selves, or future generations, deal with the consequences.

This may also reflect an optimism about technology. I think people just figured that computer technology would advance enough to solve the problem on its own.

We can also be collectively lazy when it comes to evaluating risk and even recent history. We easily fall into hindsight bias and use motivated reasoning to argue that we were right to ignore the problem all along. We tend to blame those who are in charge of avoiding problems no matter what happens. If they fail to prepare for an adverse event, they didn’t do enough. If the adverse event does not occur, then they prepared too much, or it was a false problem to begin with.

Think about this as it applies to current issues, such as global warming. It is easy to accept the short term benefits of cheap energy and let future generations worry about the consequences. It is easy to have casual faith in future technology to somehow solve the problem for us. And, no matter what happens or doesn’t happen, people will rewrite history to suit their narrative.

Just look at vaccines. Over and over again vaccines solved major health problems. Now anti-vaxxers argue that the diseases were not a big deal in the first place, and that they would have gone away on their own without the vaccines. It’s easy to impose your narrative onto history.

So, will future generations criticize us for not doing enough about global warming, or will they believe that it was never a threat in the first place, and that it was all hysteria like Y2K? I’d rather the latter.
