AI/LLM security
AI and LLM-based applications are prone to both traditional vulnerabilities and more AI-specific vulnerabilities. Of the traditional vulnerabilities, AI’s can be manipulated to run arbitrary code, cause DoS, leak sensitive data and even carry out SQL Injection attacks. Of the AI-specific vulnerabilities, AI can be controlled by adversaries to hallucinate or promote political and ideological views. If you plan to roll out an AI-based application, are you sure an adversary cannot manipulate it in such a way that it will convince all of your users into switching to a competitor's product? Are you sure that the model does not have excessive agency and can leak sensitive user data? And are you sure that your LLM will not tell users how they can hack your system based on the information it has about your infrastructure and system internals?
We audit the entire AI/LLM software stack
Agents

Agents are exciting new technology, however, for them to be efficient and powerful, they often need to import data from remote sources. This is an attack surface that adversaries will attempt to gain access to and compromise the agent. Simultaneously, agents are introspective state machines, and a common threat to agents is the ability to manipulate the direction that agents develop their understanding of a topic or the way they intend to solve a problem. Successful attacks against agents can lead to many types of compromise of both traditional and AI-specific impact.

Model training

Training involves many steps, multiple modules, multiple data sources and many users with permissions to specific parts of the training process. We see many supply-chain-style attack vectors in this part of the AI/LLM stack where an attacker can manifest themselves in one part of the training process and escalate their privileges to other parts. Data tainting and information disclosure are other problems that many training systems are prone to. We can help you threat model your training infrastructure and audit for security vulnerabilities and risks.

Image, text and code generation

LLMs are efficient in generating images, text and code from a single prompt. Often, the user will take the generated image, text or code and pass it on to another system that will consume the generated output or execute the code. What if an adversary could get a hand into the workflow? Either when you create and be able to control the output, or between generation and you receiving it? In the first case, the adversary could generate malicious code that opens a shell when you run it. In the second case, the adversary could wait for you to review the AI-generated output, and then when you confirm, the adversary could replace it with malicious, harmful data.

AI applications

We are available for auditing AI applications whether they are exposed to untrusted data or you use them internally in your organization. We have experience in auditing specific applications as well as micro-service infrastructure where you deploy your application separately from your model and serve your model with exposed endpoints for your application.

Our AI/LLM security services
Threat modelling

We can threat model your AI/LLM infrastructure and applications to identify attack surface, threat actors, trust zones and trust flow and security controls.

Attacking your infrastructure

Ada Logics can take an attackers perspective and attempt to steal your data, damage your application, infiltrate your infrastructure and compromise your users in a controlled environment.

Manual auditing

We can manually audit your AI/LLM infrastructure and applications for risks, misconfigurations, risks and vulnerabilities.

Automated testing

We use state-of-the-art open source dynamic and static analysis to support our audits.

One-time audits

We can help with auditing your infraastructure and applications as a one-time engagement.

Regular audits

We are available for regular AI/LLM audits such as yearly or half-yearly engagements. Code changes over time, and vulnerabilities can get introduced. Catch them with yearly checkups. Alternatively, a yearly infrastructure audit helps your eradicate easily-exploitable issues and find deeper security issues over time.

Security engineering and hardening

We can work with you on hardening the security of your AI/LLM infrastructure and applications.

Talk to us now about your AI/LLM security audit
Contact Us