Volume 29 Number 6
Editor's Note: Bleeding Heart
Michael Desmond | June 2014
The Heartbleed vulnerability in the OpenSSL implementation has been decried as perhaps the greatest code security flaw the Internet has ever seen. The flaw potentially made secure connections created using OpenSSL an open book to third parties. And like so many software calamities, it was the result of a simple gaffe.
The Heartbeat Extension feature of OpenSSL was implemented in 2011 with a parameter that let clients specify the size of the packet the server would send back in response to a Heartbeat Request message. But the code failed to check that claimed size against the actual size of the payload. A client could claim a payload far larger than the one it actually sent, and because of the way OpenSSL allocated memory, the returned message could include the payload plus whatever else happened to reside in the adjacent memory at the time.
The result was a kind of reverse buffer overrun—what's known as a buffer over-read—where memory that wasn't supposed to be sent got transmitted, unencrypted, back to the client. And, often, that excess memory contained the private keys for connected Web sites.
This is a Homer Simpson-level gaffe, and for two years nobody noticed the gaping hole left in supposedly secure OpenSSL connections. Alex Papadimoulis, founder of consultancy Inedo and creator of the popular software blog The Daily WTF (thedailywtf.com), has for years chronicled the sad, frustrating and sometimes hilarious tales of software development gone wrong. He compared Heartbleed to a national chain of banks forgetting to lock the vaults at night ... for a month.
“It takes a whole new level of bad to surprise me anymore, but that’s exactly what Heartbleed delivered,” Papadimoulis told me in an e-mail interview. “It wasn’t your typical WTF, in that it was bad code created by a clueless coder. It was more the perfect illustration of compounding consequences from a simple bug.”
The Heartbleed flaw serves as an urgent and humbling reminder that the history of software development is riddled with bonehead mistakes. I'm tempted to set a scale for astonishingly avoidable software flaws, with the top-most range set at Mars Climate Orbiter (MCO). MCO was a mission costing half a billion dollars or so to put an advanced satellite in orbit around Mars. It failed.
The $328 million craft burned up in the Martian atmosphere because the NASA spacecraft team in Colorado and the mission navigation team in California used different units of measure (one English, the other metric) to calculate thrust and force. Space exploration may not be a game of inches, but it is a game of newton-seconds, and an astonishingly simple mismatch doomed the mission to failure.
Where on the MCO scale might Heartbleed fall? Somewhere around 0.85 MCOs, I think. The MCO failure cost half a billion dollars and denied us a decade of priceless scientific exploration, but the actual costs of Heartbleed may never be fully disclosed or realized. While quick action likely prevented catastrophic damage, we do know that the Canada Revenue Agency reported the theft of critical data for some 900 taxpayers. And I have little doubt that additional disclosures are in the offing.
“The scale of this vulnerability is what makes it so remarkable, but this type of bug is unavoidable,” Papadimoulis says. “The best thing we can do is make our applications and systems easy to maintain and deploy, so that when bugs like this are discovered they can be patched without issue.”
It’s good advice. As long as humans create software, software will have flaws. So best to be ready to address those flaws when they emerge. Where do you think Heartbleed sits in the annals of botched software development? Email me at firstname.lastname@example.org.
Michael Desmond is the Editor-in-Chief of MSDN Magazine.