OpenAI: Using ChatGPT to Make Fake Social Media Posts Backfires on Bad Actors
OpenAI claims cyber threats are easier to detect when attackers use ChatGPT.
Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report titled "Influence and cyber operations: an update," which analyzes emerging trends in how AI is amplifying online security risks.
Not only do ChatGPT prompts expose what platforms bad actors are targeting—and in at least one case enabled OpenAI …