
Published on July 31st, 2019


New Laws on Data Privacy and Security Are Coming. Is Your Company Ready?



Executive Summary

Governments are in the process of passing and implementing new laws to ensure higher standards for software security and data privacy. This means the era in which tech companies inadequately test their software for security and privacy vulnerabilities is coming to an end. While these new laws will not be quick in coming — regulations are notoriously slow to adapt to new technological challenges — they are indeed on their way, and software companies and their corporate customers shouldn’t wait to take action. To start with, they should gauge their level of security in terms of not just the patches they install or the incidents they respond to, but also the labor-intensive, ongoing processes they devote to preventing privacy and security vulnerabilities. That means a central metric in securing enterprise data will become time: the time devoted to testing the software that companies create and purchase, and the time devoted to maintaining that software once it is deployed.

Max Oppenheim/Getty Images

On March 26, Jonathan Leitschuh, a 20-something engineer, emailed Zoom to let the company know about a flaw he’d discovered in its software — one that allowed malicious actors to secretly access the cameras of anyone who’d ever used the popular videoconferencing service. It took Zoom nearly three months to resolve the issue. Leaving nothing to chance, Apple released its own software update to remedy the problem.

Three weeks after Leitschuh first emailed Zoom, the company went public in one of the most successful IPOs of 2019 so far. In other words, at the same time that the market was crowning Zoom with financial accolades, a young software engineer was discovering a vulnerability that jeopardized the privacy and security of nearly all of its users.

How could these two realities exist at once? The answer: Companies are not equipped to sell secure software, because they are not incentivized to do so, and consumers on their own are in no position to demand it.

But this market failure will not last long. Governments are in the process of passing and implementing new laws to ensure higher standards for software security and data privacy, meaning that the era in which tech companies inadequately test their software for security and privacy vulnerabilities is coming to an end. Late last year, for example, California became the first U.S. state to enact basic standards for software used in the internet of things. A laundry list of proposed legislation on privacy and security at the state level, along with a host of proposals at the federal level, would raise the penalties for harms caused by privacy or security failures. Similar efforts are ongoing around the world, from India to Brazil and elsewhere.

While these new laws will not be quick in coming — regulations are notoriously slow to adapt to new technological challenges — they are indeed on their way, and software companies and their corporate customers shouldn’t wait to take action. To start with, they should gauge their level of security in terms of not just the patches they install or the incidents they respond to, but also the labor-intensive, ongoing processes they devote to preventing privacy and security vulnerabilities. That means a central metric in securing enterprise data will become time: the time devoted to testing the software that companies create and purchase, and the time devoted to maintaining that software once it is deployed.

This would mark a dramatic departure from all-too-common practices: leaving it to under-resourced quality assurance teams to discover flaws and outsourcing testing to largely ineffective bug bounty programs.

This new approach is, in some sense, the unavoidable consequence of our widespread adoption of digital technologies: The more time we spend using software-based systems, the more effort we collectively require to ensure that these systems are secure. That will translate to more privacy and security personnel spending more time and resources securing all the software we use.

Companies that create and deploy software can ready themselves by adopting two strategies.

First, they must focus on embedding security processes into the software design and deployment life cycle as early and as often as possible. There are a number of existing methods they can draw upon to do so. Software vendors can look to examples like the so-called DevSecOps movement — a cousin of the more widely known DevOps — which inserts security personnel directly into the ongoing course of development and operations (hence the name). Companies that purchase software can continuously track their attack surface and ensure personnel such as “red teams,” which simulate attackers, are actively probing their networks and monitoring their security posture.
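As a concrete illustration of the DevSecOps idea of embedding security checks directly into the deployment pipeline, the sketch below shows a minimal CI gate that refuses to proceed when a scanner reports high-severity findings. The report format, severity labels, and function names here are invented for the example, not drawn from any particular tool.

```python
# Hypothetical DevSecOps gate: a CI step that blocks deployment when a
# security scanner flags any high-severity finding. The report format
# and severity labels are invented for this sketch.

SCAN_REPORT = """\
LOW  outdated TLS cipher allowed in test config
HIGH hardcoded credential in build script
"""

def gate(report: str) -> bool:
    """Return True if the pipeline may proceed (no HIGH findings)."""
    high = [line for line in report.splitlines() if line.startswith("HIGH")]
    for finding in high:
        print("blocking finding:", finding)
    return not high

allowed = gate(SCAN_REPORT)
```

The point of such a gate is that security review runs on every build, continuously, rather than as a one-off audit before release.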

Regardless of what method they choose, companies will have to demonstrate that security and privacy controls are not simply an afterthought but are a core requirement in and of themselves. Companies will, as a result, be required to carefully track the time and resources spent testing and securing all the software they create or manage.

Second, companies will also need to connect the resources they spend on privacy and security to the volume and complexity of the code they seek to protect. As the number of lines of code in any given software system grows, or as its user base expands, organizations will have to increase their efforts to protect the privacy and security of their users as well.

Tying the intensity of data protection programs to the volume and complexity of underlying security needs is precisely what enacted and proposed laws are calling for, many of which — including the Data Protection Act in Ohio and one proposed bill in New Jersey — mandate concrete, evidence-based, and adaptive data-protection programs. This approach aligns with a large body of research connecting the probability of software defects to the complexity and volume of code.

This type of regulatory response should not come as much of a surprise to the software industry. After major security breaches, for example, regulators already force this approach on some of the worst-offending vendors, requiring highly labor-intensive auditing, testing, and review. This was demonstrated by the settlement that the global networking behemoth D-Link reached in early July with the Federal Trade Commission, in which the former agreed to a process-oriented, audit-heavy “software security program” that will last 20 years. The recent settlement between Facebook and the FTC, which requires a huge new privacy oversight program, is another illustration of this approach.

Soon, the very idea of leaving security testing to the Jonathan Leitschuhs of the world will become more than a public relations liability; it will become a serious legal vulnerability as well.




