By Andrea Peterson
WASHINGTON (WP-BLOOM) – Microsoft recently patched a critical bug affecting Windows that researchers say could allow hackers to remotely control users’ machines.
But the bug wasn’t some recent mistake.
The IBM researchers who found it say it has been around for nearly two decades, highlighting the difficulty of spotting and fixing bugs even in code that has gone through extensive review.
“Significant vulnerabilities can go undetected for some time,” wrote IBM X-Force research manager Robert Freeman in a blog post on the problem. “In this case, the buggy code is at least 19 years old and has been remotely exploitable for the past 18 years.”
The bug was present as far back as the original release code for Windows 95, he says.
The IBM team says it hasn’t found any evidence that the bug has been exploited.
Still, there’s a whole market for previously unknown computer software bugs where cybercriminals and even governments bid for ways to hack into computer systems.
IBM said that this newly discovered bug would have fetched six figures on this market, which occupies a legal grey area.
This isn’t the first time major flaws have taken years to uncover.
In 2010, a Google engineer uncovered a 17-year-old Windows bug that affected all 32-bit versions of the operating system and could be used to hijack PCs.
In September, another problem called “Shellshock” was discovered in Bash, a free software package built into some 70 per cent of all devices connected to the Internet. It could have been introduced as long as 22 years ago, says Chet Ramey, the long-time maintainer of the code.
And there are other examples, like the infamous Heartbleed bug, which came to light in April after going undiscovered for two years.
So why does it take so long for seemingly important problems in critical systems to be discovered and fixed? Part of it has to do with the process of software development and review.
Writing code is not like a traditional engineering task such as building a bridge, where there are clear definitions for whether a project meets technical specifications. Code is a far messier medium, and it can be hard to know how the individual pieces will work together when combined into a final product.
Developers also do their own assessments of products, and in many cases hire testers to look for obvious flaws. But the true test of the security of a piece of software often comes after it has been released. That’s when code is exposed to outside security researchers and hackers who start to pick it apart, looking for weaknesses.
Many companies, including Microsoft, offer financial incentives through bug bounty programs to make the process go faster. (There are people who make a living searching for bugs and collecting these bug bounties.)
But despite all these efforts, no one knows just how many bugs are out there, waiting to be discovered. And sometimes, it takes decades to find them.