
Published on June 1st, 2019


NSA hacking tools released by Shadow Brokers spark new security debate – Axios



Security experts are drawing differing lessons from the latest report of the alleged use of secret NSA hacking tools by a criminal group. Some argue the NSA needs more oversight, while others say that organizations need to be more vigilant about updating the systems the NSA tools target.

The big picture: These two remedies aren't mutually exclusive. But neither is easy to achieve. 

Driving the news: The debate flared after the New York Times reported that attackers responsible for Baltimore's recent ransomware incident used a program believed to be created by the NSA.

  • The same program was at the center of WannaCry, a landmark global malware disaster in 2017.
  • All it takes to stop that program's line of attack is to update Windows.

Background: The NSA code, known as EternalBlue, leaked in 2017 as part of a year-long dump of agency files online by a cryptic hacker group called the Shadow Brokers.

  • EternalBlue can be used to turn Windows malware into worms — malicious code that spreads by itself from machine to machine.
  • By the time of the WannaCry outbreak, Microsoft had already released a patch that protects Windows systems from EternalBlue.
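Some context on the "line of attack" above: EternalBlue exploits a flaw in Microsoft's SMBv1 file-sharing service, which listens on TCP port 445. As a rough illustration of the kind of exposure check administrators run when triaging this, here is a minimal Python sketch. The `smb_port_open` helper is our own illustration, not part of any tool named in the article, and it only tests network reachability, not patch level.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if the host accepts TCP connections on the SMB port.

    An open port 445 does not by itself mean the machine is vulnerable --
    fully patched systems also listen there -- but a closed or filtered
    port means an EternalBlue-style attack has no transport to ride on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is illustrative):
# print(smb_port_open("file-server.example.internal"))
```

Actually confirming the MS17-010 patch is installed requires checking the system's update history, which varies by Windows version; this sketch only answers the narrower question of whether the vulnerable service is reachable at all.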

Between the lines: Whether the NSA needs more oversight in developing tools has no bearing on whether people should patch, and vice versa. And fully achieving either solution alone might not be possible.

  • While there are a ton of bad reasons organizations delay patching systems, there are good reasons, too. Installing untested updates can create chaos for niche software and hardware.
  • And there's already more oversight in place for agencies than most people realize.

Details: The executive branch does have an oversight structure in place, known as the vulnerabilities equities process. Any time agencies want to keep a vulnerability they discover secret so it can be used for surveillance, they have to make their case in front of a special interagency panel.

  • "The VEP is meant to be a risk minimizing process, but that doesn't mean there is no risk," said Michael Daniel, current president and CEO of the Cyber Threat Alliance and the former cybersecurity coordinator at the Obama White House when the VEP was created.
  • The process takes into account the possibility that a vulnerability might be leaked, stolen or discovered, but that will always be a risk, since there's always a chance a target will intercept a tool.
  • Nonetheless, Daniel argues, most Americans wouldn't want to place limits on the use of such tools so severe that the intelligence community couldn't do its job.

Where it stands: After WannaCry, it's likely that the VEP has already adopted a stricter approach toward approving "wormable" tools.

  • We know from WannaCry and subsequent attacks that organizations are slow to apply patches. That's a consideration in the process.
  • When the Trump administration posted the criteria for the VEP in 2017, one of them read: "Will enough [U.S. systems] actually install [a] patch to offset the harm to security caused by [adversaries using a] vulnerability?"
  • Daniel notes that even before WannaCry, wormable tools didn't mesh well with the U.S. intelligence philosophy. Security researchers outside the government often remark on the relative restraint built into modern U.S. government malware to avoid hitting unintended targets.

The bottom line: Ultimately, there may be less room to build out oversight than critics hope and a ceiling to how much applying updates can improve security.
