Living off the AI: The Next Evolution of Attacker Tradecraft

The cybersecurity landscape is beginning to move beyond the well known tactic called “Living off the Land” (LotL) where attackers misuse legitimate system tools that already exist inside an environment. A new pattern is emerging. Security researchers now describe it as “Living off the AI” (LotAI). In this model, attackers take advantage of artificial intelligence systems that organizations have integrated into daily operations. Large Language Models (LLMs), automated AI agents and orchestration layers such as the Model Context Protocol (MCP) are designed to improve productivity, automate decisions and connect internal services. Yet those same systems often operate with extensive permissions, trusted access to company data and the ability to trigger actions across multiple platforms. When deployed without strict safeguards, they effectively create a powerful environment where an attacker can influence outcomes, retrieve sensitive information or execute tasks through the AI itself.

The risk grows as companies embed AI deeper into development pipelines, customer service platforms and internal knowledge systems. Unlike traditional attacks that rely on malware or direct exploitation, living off the AI techniques allow adversaries to manipulate prompts, context data or automated workflows to achieve their objectives. A compromised AI agent can query internal databases, generate convincing phishing content or perform automated actions while appearing to operate normally. Because these systems are trusted and often pre authenticated, malicious activity may blend into legitimate operations and remain undetected for longer periods. As enterprise adoption of LLMs and AI driven automation accelerates, security teams must rethink threat models and treat AI infrastructure as a critical attack surface rather than a neutral productivity tool.

The Evolution: From Land to Cloud to Intelligence
To understand the idea behind Living off the AI (LotAI), it helps to look at the techniques that came before it and how attackers gradually adapted to modern technology environments.

  • Living off the Land (LotL): This technique involves abusing legitimate system tools that already exist inside an operating system. Attackers rely on native utilities such as PowerShell, WMI or built in administrative scripts to execute commands, move laterally across systems and maintain persistence without placing obvious malicious files on the disk.
  • Living off the Cloud (LotC): As companies shifted toward cloud infrastructure and SaaS platforms, attackers adapted by exploiting trusted online services. Platforms such as GitHub, OneDrive or cloud storage services can be used as covert channels for command and control communication, malware hosting or data exfiltration while blending into normal enterprise traffic.
  • Living off the AI (LotAI): The newest evolution targets enterprise AI systems themselves. Instead of traditional malware, attackers interact with AI agents, Large Language Model connectors and automation tools that already have permission to browse internal files, query company databases, generate reports or even execute code as part of routine workflows.

In a Living off the AI attack, the “malware” may not exist as compiled code at all. Instead, it can take the form of a carefully crafted natural language prompt or a manipulated context input that subtly directs a trusted AI agent to carry out actions the attacker could not perform directly.

The Technical Anatomy of a LotAI Attack
Modern AI isn’t just a chatbot, it’s an Agentic System capable of using tools. When these agents are connected to internal systems via frameworks like MCP, they gain “hands.” Attackers target these hands in three primary ways:

Tooljacking via MCP (Model Context Protocol)
The Model Context Protocol (MCP) is an open standard that enables large language models to connect with external data sources, internal services and operational tools. Through MCP integrations, an AI agent can interact with databases, file systems, APIs and web services as part of normal workflows. This design allows the model to retrieve information and perform tasks beyond simple text generation. However, when these connections are configured with overly broad permissions, the same functionality can become a security risk.

For example, an attacker might send a phishing email that an AI powered assistant is configured to read and summarize. Inside the message, a hidden instruction is embedded within normal text. The instruction could ask the agent to use a connected SQL tool to query a sensitive database table and transmit the results to an external webhook. Because the agent believes it is responding to a legitimate request, it may perform the action using its own trusted credentials turning the AI system into the mechanism for data exfiltration.

Memory and Retrieval Poisoning
Many enterprise AI agents rely on Retrieval Augmented Generation (RAG) to improve the quality of their responses. Instead of depending only on the base language model, the system retrieves relevant information from internal knowledge bases, document repositories, or vector databases. These sources may include company policies, internal reports, project documents or operational records. When a user asks a question, the AI pulls related data from these stores and uses it as context before generating an answer.

This design creates a subtle security risk if attackers manage to insert malicious content into a location that the AI indexes. Even a single document placed in a shared or public facing folder can influence future responses. By planting misleading instructions or fabricated information inside the knowledge base, an attacker can effectively poison the AI’s memory. For example, a document could contain false payment instructions or an updated bank account number for vendor transfers. When an employee later asks the AI for wire transfer details or billing information, the system may retrieve the poisoned entry and confidently provide the attacker’s financial details as if they were legitimate.

The “Zero Knowledge” Threat Actor
One of the most concerning aspects of the Living off the AI model is how much it lowers the technical barrier for attackers. In earlier stages of cybercrime, a threat actor often needed solid programming skills to build malware, write exploits or develop tools such as credential stealers. With modern language models, that requirement is starting to fade. As security researcher Omer Maor has noted, attackers can guide an AI system into producing harmful code even if they have little technical background themselves.

A common method involves what researchers describe as an immersive world prompt. In this scenario, the attacker frames a request as part of a fictional simulation or a defensive training exercise. The prompt may instruct the model to behave as a security researcher testing malware in a controlled environment. Once the model accepts that context, it may generate detailed instructions or working code that the attacker could not have written independently. In effect, the AI becomes the developer that writes the code, the assistant that refines or obfuscates it and sometimes even the guide that explains how to deploy it.

Why Traditional EDR Fails Here
Traditional Endpoint Detection and Response (EDR) platforms are designed to detect suspicious behavior by analyzing process activity, unusual system calls or known malware signatures. Living off the AI attacks rarely trigger those signals because the underlying activity often looks identical to normal AI driven workflows.

  • Process: python[.]exe or another standard runtime that is commonly used to host AI agents, automation scripts or internal data processing tools making the execution environment appear completely legitimate.
  • Network: Outbound traffic directed toward trusted endpoints such as api[.]openai[.]com, internal AI gateways or a local service port used for model communication which security tools typically classify as approved application traffic.
  • Activity: Routine tasks such as opening a document, parsing a dataset, querying an internal database or generating a summary from a PDF, all of which are expected behaviors for enterprise AI assistants.

To a traditional security monitoring stack, each of these signals represents normal business operations. The real malicious element does not appear in the process or network layer. Instead, it exists in the semantic layer of the prompt or contextual input where carefully crafted instructions quietly steer the AI agent toward unintended actions.

Defending the Neural Perimeter
If AI systems are becoming a new attack surface, organizations need to treat them with the same caution they apply to privileged users or administrative accounts. AI agents often have direct access to data sources, internal services and automated workflows. That level of access means the security controls around them must be deliberate and tightly managed. Hardening the AI stack requires clear limits on what these systems can do and strong oversight of how they interact with connected tools.

  • Strict Tool Scoping: AI agents should never be granted unrestricted or administrative level access to connected systems. Apply the principle of least privilege when configuring MCP connectors or API integrations. If an agent only needs to read files or retrieve specific data, restrict its permissions accordingly and avoid giving it the ability to write, modify or delete information.
  • Prompt Versioning and Guardrails: System prompts should be tightly controlled and version managed to prevent unauthorized modification. In addition, organizations can deploy guardrail models that inspect the output of the primary language model for signs of sensitive data exposure or policy violations. This approach functions as a form of data loss prevention adapted specifically for AI driven systems. 

  • Human in the Loop (HITL): Certain operations should never be executed automatically by an AI agent. Actions that involve financial transfers, deletion of records or changes to access permissions should require explicit human approval. A simple confirmation step from a responsible user can prevent automated workflows from carrying out harmful instructions.
  • Semantic Logging: Traditional logs often record only that a tool was executed. For AI driven systems, it is equally important to capture the reasoning behind that action. Logging the context, prompts and sequence of tool calls allows security teams to detect unusual patterns such as an agent suddenly querying a sensitive database and then transmitting data through an external web request. Centralized monitoring of these logs helps identify suspicious tool chaining before it escalates into a full incident.

Conclusion
“Living off the AI” reflects the next stage in how cyber adversaries adapt to new technology. Attackers have always gravitated toward the tools and systems that organizations trust the most. Today that trust increasingly sits inside AI driven platforms. Language models, automation agents and connected tool frameworks can access internal data, interact with business services and perform tasks on behalf of users. As these systems become more capable, they also become a new point of interest for threat actors who want to influence decisions, retrieve sensitive information or trigger actions through the AI itself.

AI systems should be treated as critical infrastructure, not experimental tools running quietly in the background. Security teams need to apply the same discipline used for any privileged system like restricting permissions, auditing connected tools, monitor prompts and data sources and carefully control how agents interact with internal services. When AI is deployed with that level of oversight, it remains a powerful asset for productivity and innovation rather than an unseen pathway for attackers.