At today’s Workday Federal Forum in Washington, D.C., Federal and industry artificial intelligence (AI) experts discussed how government agencies are navigating the “double-edged sword” that is AI.

According to the panelists, Federal agencies are increasingly turning to AI because it holds the promise of streamlined processes, data-driven decision-making, and enhanced citizen services. Yet despite its transformative potential, AI is a double-edged sword, and Federal agencies must harness its power while safeguarding against its pitfalls.

Ryan Higgins, the acting chief information officer and chief artificial intelligence officer for the Department of Commerce, explained that a key factor in navigating the duality of AI is clear messaging and communication.

“Acknowledge that there is a lot of excitement around AI, but there is also a lot of anxiety,” Higgins said.

He said that acknowledging this reality, and communicating to the workforce how to manage both the anxiety and the excitement, calms apprehensions and discourages unnecessary risk-taking when leveraging AI for operational use.

Dr. Cyril Taylor, chief technology officer for communications systems at the U.S. Special Operations Command, said that a key factor in navigating the duality of AI is giving the workforce a space to grow into the tech.

“We need the time, we need the space, and we need the grace to go out there and try our best and run down the hall with scissors and fall and get fixed and get picked back up, and then realize how to fix it,” Taylor said.

“You have to grow into that. So, we have to come up with the best plan based [on] the information we have today, and we have to have the ability to iterate and change over time and continue to course correct,” he added.

Higgins echoed Taylor’s comments, adding that part of growing into the technology is “educating the workforce on where we want to go with AI.”

Additionally, Taylor explained that to navigate the duality of AI – especially at the Pentagon – agencies need to follow three steps.

Step one: security.

Step two: find a relevant need or use case for the AI solution.

Step three: back to security.

“Before we can get to AI, it requires a security-first mindset … Then we need to find relevance in how we’re going to implement AI solutions. Is that AI just there as a trope … or is it something that I can gain and articulate direct value? Identify the use cases … then trying to use AI to make our lives easier … And once you have that AI tool, what is the identity of their security?” Taylor said.

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.