Here are dozens of ways AI could be used for harm — and some too scary to test

Since ChatGPT’s release last year, the Twitterverse has done a great job of crowd-sourcing nefarious uses for generative AI. New chemical weapons, industrial-scale phishing scams — you name it, someone's suggested it.

But we’ve only scratched the surface of how large language models (LLMs) like GPT-4 could be manipulated to cause harm.
