Skip to Main Content
All Trimarc services are now delivered through TrustedSec! Learn more

Large Language Model (LLM) Assessment

Simulating AI Attacks and Providing Strategic Roadmaps

AI Penetration Testing Services

Artificial Intelligence (AI) can be used to do many helpful things in everyday life but what happens when it’s prompted to perform a task that attacks security systems? TrustedSec is excited to offer an LLM Assessment service to fill in any gaps that might be overlooked in a traditional penetration test. By using a manual testing process, we provide strategic recommendations to home in on the foundation of the issue and correct the cause of exposure.

Methodology

TrustedSec's Large Language Model (LLM) Assessment is a primarily manual approach targeting the technical security challenges presented by LLMs. The assessment focuses on LLM interactions with web application and web API components.

TrustedSec's approach is to gain an understanding of the attack surface. Manual investigation of the application is performed to identify what inputs the LLM or machine learning (ML) system relies on, including training data, prompts, and reference resource inclusion. TrustedSec continues the discovery process by investigating API interaction and in-band and OOB response behavior and then searches for exposures related to connected business logic and information disclosures. The manual testing process is informed by NIST AI 600-1, and the OWASP LLM Top 10 typically includes, but is not limited to, the following:

  • Data poisoning or attempts to inject malicious data into the training or retraining to attempt to control model outputs
  • Model inversion or use of response correlation to derive sensitive data that may have been present in system prompt inputs or training data
  • Discovery of confidentiality violations through adversarial prompting
  • Attempting to trigger authorization violations through adversarial prompting, including resource manipulation, targeted attacks leveraging excessive agency, and API integration weaknesses
  • Indirect prompt and inline markup attacks, which attempt to insert attacker-controlled tokens via included resources that will be treated as part of a victim's prompt inputs
  • Evasion attacks, sometimes referred to as jailbreaks, that include the crafting of inputs specifically designed to bypass model restrictions, such as generating negative content about first-party products and services
  • Request forgery and related web technology attacks designed to identify weaknesses in API integrations

Strategic Recommendations

Organizations typically focus efforts on the Technical Findings from an LLM Assessment. Although Technical Findings are important, fixing the root cause of the issue is a crucial outcome of the assessment. If the root cause of an exposure can be corrected, the longer-term security program will be significantly more successful than if only the Technical Findings are remediated. TrustedSec outlines the root causes in the form of Strategic Recommendations. Where applicable, the Strategic Recommendations outline long, mid, and short-term actions to provide a roadmap for remediating the underlying root cause.

Get Started with an LLM Assessment

Ready to see how your defenses hold up against Artificial Intelligence? Let's talk.

Our research on Artificial Intelligence

TrustedSec experts provide up-to-date information, tools, and tips on the latest in AI.

Podcasts June 16 2025

Security Noise - Episode 7.18

This week on Security Noise, we are hacking with AI! Listen as we discuss how AI can accelerate workflows, the pros and cons of using automation in penetration…

Read about this article
Blog June 12 2025

Hunting Deserialization Vulnerabilities With Claude

In this post, we are going to look at how we can find zero-days in .NET assemblies using Model Context Protocol (MCP).SetupBefore we can start vibe hacking, we…

Read about this article
Blog June 03 2025

Teaching a New Dog Old Tricks - Phishing With MCP

As AI evolves with MCP, can a new “dog” learn old tricks? In this blog, we test Claude AI’s ability to craft phishing pretexts—and just how much effort it…

Read about this article
Blog March 28 2025

MCP: An Introduction to Agentic Op Support

1.1      IntroductionAgents and Large Language Models (LLMs) offer a powerful combination for driving automation. In this post, we’ll explore how to implement…

Read about this article
Blog February 18 2025

Getting Started Using LLMs in Application Testing With an MVP

Are you interested in incorporating Large Language Models (LLMs) into app tests yet lack the tooling to get you there? This blog walks through how to start…

Read about this article
Blog February 11 2025

From RAGs to Riches: Using LLMs and RAGs to Enhance Your Ops

1.1      IntroductionIn this blog, I will explore Retrieval-Augmented Generation (RAG) and how it can be applied to research capabilities. RAG is a framework…

Read about this article