Doomsday scenarios are so bracing, aren’t they? In the past week we’ve had the government’s chief scientist warning that there won’t be enough food for us all by 2050 and the European Union saying that we might have to fight Russia for minerals and other useful stuff in the Arctic when the going gets tough. And of course I suggested that the only sensible way to close our electricity generation gap would be to embark on a big nuclear power build. (Thanks for all your replies on that. Some were even polite. In answer to the uranium supply question, there are two points: one tends to find more of something once you go looking for it, and fast breeder reactors can produce up to 100 times more usable fuel than you start with. So I think uranium isn’t going to run out, at least until we can build a working fusion plant.)
While we’re on doomsdays, let’s not forget that the possibility of a flu pandemic – perhaps triggered by a mutation of the H5N1 bird flu virus, leading to an I Am Legend-style scenario where our cities are reclaimed by weeds and wild animals – hasn’t gone away; it’s just biding its time.
So let’s quickly examine another doomsday lurking in our not-so-distant future: that in 30 years’ time, the internet will stop working. Or at least, the bits of it that run on Unix. (For once, this is a tale where Microsoft comes out looking well-prepared.)
This is down to what’s being called the “2038 bug”. It arises because Unix-based systems store the time as a signed 32-bit integer: a count of the seconds since midnight UTC on January 1 1970. And the latest time that can be represented in that format, under the Posix standard, is 3.14am (and seven seconds) UTC on January 19 2038. (It’s a Tuesday. Better make sure your desk is clean on the Monday night.)
After that? “Times beyond this moment will ‘wrap around’ and be represented internally as a negative number, and cause programs to fail, since they will see these times not as being in 2038 but rather in 1901”, to quote Wikipedia (tinyurl.com/dzxca).
Early examples of problems have surfaced. The AOLserver web server software tries to ensure that database requests will never time out, not by assigning “0” to the timeout (which would have been sensible, programmatically speaking) but by setting the timeout to 1bn seconds (about 31 years) in the future. It crashed on May 13 2006.
But, you say, fretting about this now is like worrying about the millennium bug back in 1970 – when we were far too busy writing the software to bother about fixing it; we fixed it anyway, in a couple of years, and no harm was done. And it’s true that there are a couple of possible solutions, such as changing the counter to an unsigned 32-bit integer, doubling its potential lifespan (and shrugging the problem off until 2106). But that would mess up programs that try to calculate time differences – which is most of them.
The rise of 64-bit systems, with 64-bit counters, puts the problem off a little – about 290bn years, in fact. Yes, lots of us are getting 64-bit machines, and even operating systems. Unfortunately, as with the millennium bug, the risk lies in embedded systems – routers, petrol pumps, even 32-bit file formats that get used by 64-bit systems. Just as with the millennium bug, it will take a lot of expensive investigation to find out just how widespread the problem is.
I dropped an email to Paul Sheer, whose 2038bug.com site watches out for any related news. In 2003 he had wondered if 35-year bonds might show some sort of problem (because financial companies often run Unix systems, and such bonds would appear to mature in the past). Anything to report, I asked?
“No reports of problems in a long while,” he replied briefly.
Phew. Perhaps we can all relax for another 25 years or so. Now, what’s this I hear about an asteroid on a collision course with Earth…?
Via The Guardian