How Trust & Safety Execs Are Managing AI’s Security Threats

At Fortune’s Brainstorm Tech conference on Tuesday, safety officers from major companies discussed the importance of caution when integrating AI tools like ChatGPT into business operations.

Salesforce’s chief trust officer, Brad Arkin, emphasized that while there is high demand for cutting-edge AI services, ensuring these tools do not introduce vulnerabilities is crucial. “Trust is more than just security,” Arkin stated, noting the company’s focus on creating features that benefit users without compromising their interests.

Even as adoption accelerates, AI presents new risks. The technology can facilitate criminal activity, such as social engineering scams and phishing emails, by removing language barriers and enabling attacks at scale.

The longstanding issue of “shadow IT”—employees using unmanaged hardware and software—now extends to “shadow AI,” which can multiply vulnerabilities if left unmanaged. Arkin advised treating AI like any other tool: understand its risks and leverage its benefits through proper training.

On the panel, Cisco’s chief security and trust officer, Anthony Grieco, provided practical advice for using generative AI platforms like ChatGPT:

“If you wouldn’t tweet it, if you wouldn’t put it on Facebook, if you wouldn’t publish it publicly, don’t put it into those tools.”

The widespread use of AI necessitates a rethinking of IT strategies. An October working paper from the National Bureau of Economic Research highlighted rapid AI adoption across the U.S., especially at large firms: more than 60% of companies with over 10,000 employees reported using AI.

Wendi Whitmore, senior vice president at cybersecurity giant Palo Alto Networks, stressed the need for employees to be vigilant about potential phishing and related attacks, as cybercriminals are well-versed in business operations and vendor interactions. “You can be concerned about the technology and put some limitations around it,” Whitmore said. “But the reality is that attackers don’t have any of those limitations.”

Accenture’s global security lead, Lisa O’Connor, underscored the importance of “responsible AI,” advocating for governance principles to guide AI adoption. She noted that Accenture has embraced large language models, including developing a custom-trained LLM.

As AI tools become more integrated into business processes, companies must balance innovation with security, ensuring that new technologies enhance operations without opening new avenues of attack.