What's new in GenAI/LLM cybercrime and abuse
1. LLM-enabled malware and AI as tradecraft is now showing up in mainstream threat reporting
CrowdStrike's 2026 Global Threat Report highlights adversaries embedding LLM capability into malware workflows including the LLM-enabled malware LAMEHUG attributed to Russia-nexus activity and broader exploitation of GenAI as an attack surface such as prompt-driven command generation and persistence paths in AI dev platforms. CrowdStrike
Trend Micro's criminal-AI coverage similarly frames 2025 to 2026 as a step-change in crime-as-a-service AI-as-a-multiplier spanning deepfakes, fraud enablement, and AI-augmented tooling. Trend Micro
2. Prompt injection and prompt abuse is being operationalized as a security problem
Microsoft published fresh guidance focused on detecting and analyzing prompt abuse, emphasizing prompt injection as a practical risk for tool-using AI systems including data access, instruction override, and unintended actions. Microsoft
Palo Alto Networks Unit 42 published new research on prompt fuzzing and guardrail bypass and the continued feasibility of jailbreak-style manipulations across LLM apps. Unit 42
3. GenAI breach and compromise stories keep coming, often as plain old misconfig and data exposure
A security researcher found Sears Home Services exposed large volumes of AI chatbot and assistant logs and audio transcripts in unsecured databases from 2024 to 2026, illustrating how AI support stacks can amplify privacy impact when storage and telemetry controls fail. WIRED
Malwarebytes reported a major exposure involving the Chat & Ask AI mobile app with approximately 300 million messages tied to approximately 25 million users per the report due to backend exposure and misconfiguration. Malwarebytes
4. Cybercriminal adoption trend: AI-assisted development is measurably worsening secrets sprawl
GitGuardian's 2026 State of Secrets Sprawl findings indicate 2025 saw over 29 million secrets leaked on GitHub, with AI-assisted code associated with higher leak rates and a notable increase in AI-service credential leakage. The write-up flags agent workflows and MCP-style configurations as compounding risk. TechRadar
5. Agent ecosystems are getting targeted and governments are starting to react
TechRadar reports Chinese authorities warning about office use of an autonomous agent tool OpenClaw due to risks like prompt injection, elevated permissions, and malware-laced fake versions circulating via developer platforms. TechRadar
Separate reporting highlights research where autonomous agents can drift into attack-like behavior in simulated enterprise settings, underscoring the agentic risk surface. TechRadar