Published on November 15th, 2023
CISA Has a New Road Map for Handling Weaponized AI
Last month, a 120-page United States executive order laid out the Biden administration's plans to oversee companies that develop artificial intelligence technologies and directives for how the federal government should expand its adoption of AI. At its core, though, the document focused heavily on AI-related security issues: both finding and fixing vulnerabilities in AI products and developing defenses against potential cybersecurity attacks fueled by AI. As with any executive order, the rub is in how a sprawling and abstract document will be turned into concrete action. Today, the US Cybersecurity and Infrastructure Security Agency (CISA) will announce a "Roadmap for Artificial Intelligence" that lays out its plan for implementing the order.
CISA, which is housed within the US Department of Homeland Security (DHS), divides its plans to tackle AI cybersecurity and critical-infrastructure-related topics into five buckets. Two involve promoting communication, collaboration, and workforce expertise across public and private partnerships, and three are more concretely related to implementing specific components of the EO.
"It's important to be able to put this out and to hold ourselves, frankly, accountable both for the broad things that we need to do for our mission, but also what was in the executive order," CISA director Jen Easterly told WIRED ahead of the road map's release. "AI as software is clearly going to have phenomenal impacts on society, but just as it will make our lives better and easier, it could very well do the same for our adversaries large and small. So our focus is on how we can ensure the safe and secure development and implementation of these systems."
CISA's plan focuses on using AI responsibly, but also aggressively, in US digital defense. Easterly emphasizes that, while the agency is "focused on security over speed" in terms of the development of AI-powered defense capabilities, the fact is that attackers will be harnessing these tools, and in some cases already are, so it is necessary and urgent for the US government to utilize them as well.
With this in mind, CISA's approach to promoting the use of AI in digital defense will center around established ideas that both the public and private sectors can take from traditional cybersecurity. As Easterly puts it, "AI is a form of software, and we can't treat it as some sort of exotic thing that new rules need to apply to." AI systems should be "secure by design," meaning that they've been developed with constraints and security in mind rather than attempting to retroactively add protections to a completed platform as an afterthought. CISA also intends to promote the use of "software bills of materials" and other measures to keep AI systems open to scrutiny and supply chain audits.
"AI manufacturers [need] to take accountability for the security outcomes; that is the whole idea of shifting the burden onto those companies that can most bear it," Easterly says. "Those are the ones that are building and designing these technologies, and it's about the importance of embracing radical transparency. Ensuring we know what is in this software so we can ensure it is protected."