Homepage Technology The five AI security threats that moved from theory to...

The five AI security threats that moved from theory to reality in 2025

Computer,Scientist,Safety,Cybersecurity,Expert,Sikkerhed,Hacker,Hacked,Hacking,Cybercrime,Security specialist
Shutterstock.com

Across 2025, researchers documented a series of AI-related security risks, some already exploited in the wild, others demonstrated in controlled attacks.

Others are reading now

Artificial intelligence promised a leap in productivity this year, particularly as agentic systems began creeping into everyday business workflows. But the speed of adoption also exposed a growing attack surface—one that security researchers say many organizations were unprepared to defend.

Across 2025, researchers documented a series of AI-related security risks, some already exploited in the wild, others demonstrated in controlled attacks. Together, they paint a picture of an ecosystem racing ahead of its defenses.

Shadow tools spread

One of the most immediate risks came not from advanced attacks, but from everyday behavior inside companies. Employees increasingly adopted AI tools without approval, often without understanding how their data was stored or processed.

Surveys cited by researchers showed that nearly half of employees in the US and UK used unsanctioned AI tools, while many lacked basic awareness of data handling practices. According to Orca Security’s 2025 State of Cloud Security report, 84% of organizations now use AI tools in the cloud, and 62% had at least one vulnerable AI package deployed.

The Cloud Security Alliance separately reported that one-third of organizations experienced a cloud data breach involving an AI workload, frequently tied to misconfigurations or weak authentication.

Also read

Bugs in trusted AI software

Even widely used AI platforms were not immune. Throughout the year, researchers disclosed and, in some cases, observed active exploitation of vulnerabilities in popular AI frameworks.

These included remote code execution flaws in open-source tools such as Langflow and Ray, weaknesses in OpenAI’s Codex CLI, and vulnerabilities affecting inference servers from Nvidia, Meta, Microsoft, and open-source projects like vLLM and SGLang. The findings underscored how quickly AI tooling has become part of critical infrastructure.

Poisoned supply chains

AI development pipelines themselves also became targets. Researchers at ReversingLabs reported discovering malware hidden inside AI models hosted on Hugging Face, as well as trojanized Python packages masquerading as legitimate AI SDKs.

In both cases, attackers abused Python’s Pickle serialization format, commonly used with PyTorch models, to conceal malicious code. The incidents highlighted the growing risk of supply chain attacks aimed directly at AI developers.

Stolen AI credentials

Another emerging threat involved the theft of credentials used to access large language models through paid APIs, a practice researchers dubbed “LLMjacking.”

Also read

Microsoft filed a civil lawsuit in 2025 against a group accused of stealing LLM credentials and reselling access to other criminals. Researchers warned that abuse of high-end models could generate costs exceeding $100,000 per day for victims whose credentials were compromised.

Prompts turned against systems

Prompt injection attacks remained one of the most pervasive AI-specific risks. Because language models do not reliably distinguish between instructions and data, malicious text embedded in emails, documents, or web pages can be interpreted as commands.

Researchers demonstrated such attacks across coding assistants, AI agents, browsers, and chatbots, including products from GitHub, Google, Microsoft, OpenAI, Salesforce, and Anthropic. In the worst cases, attackers could trigger data exfiltration or misuse of connected tools.

The MCP problem

Finally, researchers raised alarms about the rapid spread of Model Context Protocol servers, which allow AI systems to interact with external tools and data sources.

With tens of thousands of MCP servers now online, security teams warned that malicious or poorly secured servers could enable code injection, prompt hijacking, or unauthorized access. Demonstrations showed how rogue MCP servers could compromise development environments by injecting malicious browser code.

Also read

Sources: CSOOnline, Orca Security, OpenAI

Ads by MGDK