New Year’s Eve. The tangible electricity in the air. The unbridled optimism of a full year of new possibilities. The chance to get a little wild, and make a few mistakes. Who cares? It will all be wiped clean the next day.
Well, unless you make the mistake of unleashing the most calamitous open source security vulnerability ever created. That might take a little longer to fix.
How It All Began
On December 31st, 2011, Robin Seggelmann, a German Ph.D. candidate and veteran programmer, submitted the Heartbeat Extension to the world’s most widely used SSL/TLS library: OpenSSL. Years later, the world would learn that the extension contained a bug, later dubbed ‘Heartbleed’, which had slipped past the open source project’s gatekeepers: two guys named Steve.
The bug’s moniker derives from the Heartbeat mechanism, in which a device sends a small data payload to a server to confirm the connection is still alive. The OpenSSL software running on the server responds by echoing that payload back, up to a prescribed limit (its ‘buffer’). Seggelmann’s code trusted the length field supplied in the request without checking it against the size of the payload actually sent, opening a window for malicious hackers to trigger a buffer over-read. A duped server would then return, or ‘bleed’, an inordinately large amount of adjacent memory — up to 64 kilobytes per request — including passwords, private keys, and other sensitive information. The security exploit could be compared to a particularly bad case of TMI. You ask someone for their name, and instead they tell you their name… and their social security number… and their ATM PIN code… and intimate details of their sex life… and, well, you get the picture.
Heartbleed was not identified — or, according to some, publicly reported — until April 2014, a jaw-dropping two years after its integration into the OpenSSL project. By this time OpenSSL was used by an estimated two-thirds of web servers, leaving the majority of the world’s web traffic compromised. Major businesses and even government organizations scrambled to regenerate keys, reissue SSL certificates, and urge their users to change passwords immediately. OpenSSL released a software patch within a week of the bug’s disclosure, but the damage had been done, and it appears to be ongoing.
Not Enough Eyeballs
In the aftermath of the bug’s outbreak, many tech pundits opined on what exactly could be done to prevent similar debacles in the future. Even staunch open source advocates repeatedly threw shade at the small OpenSSL team for its perceived lack of oversight. Others blamed OpenSSL’s reliance on the C programming language, which lacks the built-in restrictions on buffer access that other languages enforce.
But the most widely echoed response was a resounding criticism of a glaring contradiction: the largest corporations in the world, including software giants like Microsoft and Google, were relying on open source projects developed by volunteers, without making a commensurate investment in the volunteer-heavy teams that build them.
As a result, numerous nonprofits sprang up with the goal of channeling more money and human capital into OSS development, the largest being the Core Infrastructure Initiative (CII). The funding pays for additional resources to review and improve open source projects, since even the most seasoned programming expert is, after all, only human and prone to error. Herr Seggelmann can attest to that.
But, unfortunately, the industry missed an important lesson. Today, three years after the Heartbleed vulnerability was disclosed, hundreds of thousands of software products still use vulnerable versions of OpenSSL — most likely because their makers are completely unaware that they are doing so. Tech journals have noted this lingering phenomenon, but few have taken the opportunity to reflect on what the industry is doing wrong.
The answer, in my opinion, is that many companies using open source components have yet to understand that they must manage their open source usage and ‘keep in touch’ with the open source community to learn about new vulnerabilities, bugs, and new versions of the components they have integrated into their own products. This can easily be achieved with automated open source management platforms, like WhiteSource.
After all, what is the point of streaming more funding to open source teams to improve the security and quality of their projects if the companies using those components never learn about the vulnerabilities and remediations being discovered and released?
One thing is certain: if the software development teams using open source components (and between us – who isn’t?) managed that usage properly, the number of programs still containing the bug today would be insignificant, not 200,000, because those teams would have been alerted to the vulnerability and its remediation. Software teams need to understand that with great power comes great responsibility, and it is time to own up to it.