- cross-posted to:
- [email protected]
Summary
This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.
Key findings:
- Threat actors are exploring LLMs for various tasks: gathering intelligence, developing tools, creating phishing emails, evading detection, and social engineering.
- No major attacks using LLMs were observed: however, the early-stage, exploratory use that was found suggests potential future threats.
- Several nation-state actors were identified using LLMs: including actors affiliated with Russia, North Korea, Iran, and China.
- Microsoft and OpenAI are taking action: disabling accounts associated with malicious activity and improving LLM safeguards (see the sketch after this list).
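As a rough illustration of what one layer of such safeguards might look like, here is a minimal Python sketch of a pre-generation prompt screen. It assumes a simple keyword-based filter; the patterns and the `screen_prompt` helper are invented for this sketch, and real providers rely on trained classifiers and account-level behavioral signals rather than regex lists.

```python
import re

# Illustrative abuse patterns only; these keyword rules are assumptions for
# the sketch. Production safeguards use trained classifiers and account-level
# signals, not regex lists.
ABUSE_PATTERNS = [
    re.compile(r"\bphishing\b.*\bemail\b", re.IGNORECASE),
    re.compile(r"\bevade\b.*\b(detection|antivirus|edr)\b", re.IGNORECASE),
    re.compile(r"\bexploit\b.*\bvulnerab", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known abuse pattern and should
    be blocked or escalated for human review."""
    return any(pattern.search(prompt) for pattern in ABUSE_PATTERNS)

if __name__ == "__main__":
    print(screen_prompt("Draft a phishing email targeting a think tank"))  # True
    print(screen_prompt("Explain how TLS handshakes work"))                # False
```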
Specific examples:
- Russia (Forest Blizzard): Used LLMs to research satellite and radar technologies, and for basic scripting tasks.
- North Korea (Emerald Sleet): Used LLMs for research on experts and think tanks related to North Korea, phishing email content, and understanding vulnerabilities.
- Iran (Crimson Sandstorm): Used LLMs for social engineering emails, code snippets, and researching detection-evasion techniques.
- China (Charcoal Typhoon): Used LLMs for tool development, scripting, social engineering, and understanding cybersecurity tools.
- China (Salmon Typhoon): Used LLMs for exploratory information gathering on various topics, including intelligence agencies, individuals, and cybersecurity matters.
Additional points:
- The research identified eight LLM-themed TTPs (Tactics, Techniques, and Procedures) for the MITRE ATT&CK® framework to track malicious LLM use; a sketch of how such TTPs might be encoded for tracking follows below.
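To make the tracking idea concrete, here is a minimal Python sketch of how LLM-themed TTPs could be encoded as structured data and attached to observed activity. The IDs, descriptions, and the `tag_observation` helper are hypothetical; only the general TTP names echo the report's terminology, and none of this is the official ATT&CK representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LlmTtp:
    """One LLM-themed tactic/technique/procedure, in the spirit of the
    ATT&CK-style entries described in the report."""
    ttp_id: str   # hypothetical identifier, not an official ATT&CK ID
    name: str
    description: str

# The names loosely follow the report's terminology; the IDs and
# descriptions are assumptions for this sketch.
TTPS = {
    "LLM-T0001": LlmTtp("LLM-T0001", "LLM-informed reconnaissance",
                        "Using LLMs to gather intelligence on targets."),
    "LLM-T0002": LlmTtp("LLM-T0002", "LLM-enhanced scripting",
                        "Using LLMs to generate or refine basic scripts."),
    "LLM-T0003": LlmTtp("LLM-T0003", "LLM-supported social engineering",
                        "Using LLMs to draft phishing or lure content."),
}

def tag_observation(actor: str, ttp_id: str) -> dict:
    """Attach a TTP tag to an observed-activity record for later analysis."""
    ttp = TTPS[ttp_id]
    return {"actor": actor, "ttp_id": ttp.ttp_id, "ttp": ttp.name}

if __name__ == "__main__":
    # e.g., Forest Blizzard's basic scripting use maps to LLM-enhanced scripting
    print(tag_observation("Forest Blizzard", "LLM-T0002"))
```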
You think Microsoft is the only organization capable of producing these tools? They weren’t even the first.
That is true. Still, big tech companies themselves are the biggest threat actors.