It appears that the NSA will need some better coders

Aug 19, 2016 19:20 GMT

Stephen Checkoway, an Assistant Professor at the Department of Computer Science at the University of Illinois at Chicago, has analyzed some of the exploit code included in the recent Equation Group leak, and his verdict is "not impressed."

Over the past weekend, a person or group named The Shadow Brokers published a set of hacking tools they claim to have stolen from the Equation Group, a name given by security vendors to a cyber-espionage group believed to be linked to the US National Security Agency (NSA).

The hackers dumped a small sample of the tools so that security researchers could verify the dump's validity. The rest of the data is available in a password-protected encrypted archive.

The Shadow Brokers are currently holding an auction to sell the rest of the data to the highest bidder.

Infosec experts rush to analyze The Shadow Brokers' leaked exploits

In the meantime, Kaspersky has confirmed the leaked data is authentic, and companies like Cisco, Fortinet, and WatchGuard have issued advisories to let customers know how to protect their products.

But it's not just big-name businesses. Independent security researchers like Mustafa Al-Bassam, aka tFlow, co-founder of the LulzSec hacking crew, have also analyzed some of the exploits and discovered that the NSA-linked group could extract VPN keys from certain devices.

The most recent person to take a look at the code is Stephen Checkoway, who teaches Software Vulnerability Analysis and Advanced Computer Security at the University of Illinois at Chicago.

Prof. Checkoway set aside a few hours to look at the source code of BANANAGLEE, an exploit that targets Juniper firewalls. He picked this one because he is familiar with Juniper devices, being the lead researcher on "A Systematic Analysis of the Juniper Dual EC Incident," a research paper set to be presented in October 2016 at the ACM Conference on Computer and Communications Security.

Checkoway: This is ridiculous!

The professor didn't look at the entire codebase, only at the key generation routine and the code that handles IP packet redirection.

"This is ridiculous," Checkoway writes regarding the random key generation system. "There’s no reason to read 32 bytes from /dev/urandom. There’s no benefit to calling rand(3) so many times. It’s a little ridiculous to be seeding with srandom(3) and calling rand(3), but in this particular implementation, rand(3) does nothing but call random(3)."

That alone is reason enough not to be impressed, and the professor doesn't stop there. "But worst of all, rather than having 2^128 possible 128-bit keys, this procedure can only produce 2^64 distinct keys!" In other words, the key generation routine draws from a vastly smaller pool of keys than it should, purely because of poor coding. "It's a 1/18446744073709551616 fraction of the total 340282366920938463463374607431768211456 possible 128-bit keys," he added via email.
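To see how such a collapse can happen, here is a minimal C sketch of the general anti-pattern Checkoway describes. It is an illustration, not the leaked BANANAGLEE routine: if half of a 128-bit key is produced by a PRNG seeded from the other half, the key is a deterministic function of only 64 random bits, so at most 2^64 distinct keys can ever be produced.

```c
/* Hypothetical sketch, NOT the leaked code: a 128-bit key whose second
 * half is derived from the first via srandom()/random() can take at
 * most 2^64 distinct values, however the bytes are post-processed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Fills key[0..15]; returns 0 on success, -1 on error. */
static int generate_key(uint8_t key[16]) {
    uint8_t fresh[8];
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f) return -1;
    if (fread(fresh, 1, sizeof fresh, f) != sizeof fresh) { fclose(f); return -1; }
    fclose(f);

    memcpy(key, fresh, 8);            /* first 64 bits: genuinely random  */

    unsigned int seed;
    memcpy(&seed, fresh, sizeof seed);
    srandom(seed);                    /* PRNG seeded from those same bits */
    for (int i = 8; i < 16; i++)
        key[i] = (uint8_t)(random() & 0xff);  /* second 64 bits: a pure
                                                 function of the first   */

    /* Distinct keys <= distinct first halves = 2^64, a factor of 2^64
     * short of the 2^128 a uniformly random 128-bit key would allow.    */
    return 0;
}

int main(void) {
    uint8_t key[16];
    if (generate_key(key) != 0) return 1;
    for (int i = 0; i < 16; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```

Post-processing cannot add the entropy back; drawing all 16 key bytes directly from /dev/urandom would have given the full 2^128 keyspace.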

There are some good parts, but the cryptography is really bad

Prof. Checkoway was a little more impressed with the process of hiding the attack source through multiple IP redirections, which he called "kinda neat." But the praise stopped there. "[B]oth the code and the crypto are bad. Very bad," he says.

The professor adds that the code has some "boring memory leaks," but what really bothered him was the mechanism that encrypts the IP packets sent through this redirection scheme.

"I’m no cryptographer, but this seems crazy," Checkoway explains. "An IV [cipher initialization vector] should never be reused for a given key. And yet identical messages will produce identical IVs, even if the keys are different. Perhaps there’s something that guarantees a message will never be sent twice, but if I were designing this, I sure wouldn’t rely on that property."
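Why does IV reuse matter so much? For a stream- or counter-mode cipher, the keystream depends only on the key and the IV, so two messages encrypted under the same (key, IV) pair reveal the XOR of their plaintexts to anyone watching the traffic. The sketch below uses a toy xorshift keystream as a stand-in for a real cipher (the leaked code's cipher is not reproduced here); the leak it demonstrates is generic.

```c
/* Hypothetical illustration, NOT the leaked code: reusing an IV under
 * the same key with a stream/CTR-style cipher leaks the XOR of the
 * plaintexts. keystream() is a toy stand-in for any real keystream.   */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy keystream: a deterministic function of (key, iv) only. */
static void keystream(uint64_t key, uint64_t iv, uint8_t *out, size_t len) {
    uint64_t x = key ^ (iv * 0x9E3779B97F4A7C15ULL);
    if (x == 0) x = 0x9E3779B97F4A7C15ULL;      /* xorshift must not start at 0 */
    for (size_t i = 0; i < len; i++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17; /* xorshift64 step */
        out[i] = (uint8_t)x;
    }
}

static void xor_encrypt(uint64_t key, uint64_t iv,
                        const uint8_t *pt, uint8_t *ct, size_t len) {
    uint8_t ks[256];
    if (len > sizeof ks) return;                 /* keep the toy simple */
    keystream(key, iv, ks, len);
    for (size_t i = 0; i < len; i++)
        ct[i] = pt[i] ^ ks[i];
}

int main(void) {
    const uint8_t p1[16] = "ATTACK AT DAWN!";
    const uint8_t p2[16] = "HOLD POSITIONS!";
    uint8_t c1[16], c2[16];

    /* Two different messages, same key, same IV. */
    xor_encrypt(0x1122334455667788ULL, 42, p1, c1, 16);
    xor_encrypt(0x1122334455667788ULL, 42, p2, c2, 16);

    /* An eavesdropper who XORs the ciphertexts gets p1 XOR p2 without
     * knowing the key; if either plaintext is guessed, the other falls
     * out immediately (here we recover p1 from c1, c2, and p2).       */
    for (size_t i = 0; i < 15; i++)
        putchar(c1[i] ^ c2[i] ^ p2[i]);          /* prints "ATTACK AT DAWN!" */
    putchar('\n');
    return 0;
}
```

Checkoway's observation that identical messages produce identical IVs makes things worse in another way: under a fixed key, a repeated message yields a byte-for-byte identical ciphertext, so an eavesdropper can at least tell when the same payload is being sent again.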

To sum up, Prof. Checkoway found that the 128-bit keys are generated with only 64 bits of entropy instead of the intended 128, that the "supposed" NSA coders reuse cipher IVs, that the encrypted communications channel has no authentication, and that there was "sloppy and buggy code."

"Maybe I simply picked bad tools and the others are all fantastic, but I kind of doubt it. Overall, I’m not impressed by what I’ve seen here," the professor concludes on his blog.

Comparing NSA code with student code makes no sense, Checkoway says

Nevertheless, in an email to Softpedia, Prof. Checkoway explained why the exercise was still worth his while.

"I am interested in analyzing this sort of malware. Such analysis gives us insight into the practices and methods of the malware's authors. As the NSA is widely considered to employ many of the best hackers and cryptographers, there may be a lot we can learn from studying their code," Prof. Checkoway wrote.

Prof. Checkoway also explains why it makes little sense to compare the NSA code with something his students would write.

"I don't think it makes sense to compare the code I examined with student code for several reasons. One, I only examined a tiny fraction of the leaked code and I have no idea if that is representative of the rest of the code.

"Two, computer security is generally regarded as an advanced topic so undergraduates who take my courses generally have little or no security background coming into the course which means they are still learning the material.

"Three, students write code to solve assignments. That code is frequently expected to work one time (when we grade it) in a known and controlled environment. In contrast, the leaked code was written by professionals. I would expect it to work in a hostile environment," the professor said.

Overall, though, the professor remains disappointed, particularly with the cryptography.

"I would expect relatively bug-free code. And I would expect minimal cryptographic competence. None of those were true of the code I examined which was quite surprising."