May 07, 2026

GRC in an AI World - Staying in the Fast Lane Without Losing the Race!

Written by Stephanie Saunders
Artificial Intelligence (AI) Risk Assessment

Artificial Intelligence (AI) is the new buzzword on the streets. It’s becoming “the best thing since sliced bread” in the IT world and is being used by everyone from executives to employees, students, and even young kids. Some of the top questions we get in consulting are, “How can we leverage AI?” and “How do we stay secure?” For those in Governance, Risk, and Compliance (GRC), it’s “What’s the risk?”

Just like in Talladega Nights, many companies want to be the Ricky Bobbys of IT and use AI to say “I wanna go fast” to improve productivity and maintain that cutting edge. However, GRC is a bit different from the SDLC or marketing world, where AI is used more prevalently.

While there is no secret sauce to security or compliance, the answer is often the simplest: go back to basic cyber hygiene. Whether you’re driving a NASCAR race car or a smart car, the basic components remain. Regardless of whether your organization is a small startup or a large-scale corporation, security and compliance practices have not changed drastically. The standards get updated, but day-to-day best practices haven’t evolved as much as you might think, though they do require a tune-up now and then. Here are some simple reminders to help you “Shake and Bake” and stay compliant, all while using AI. Fasten your seatbelts!

Dos - Prepping for the track:

  • Perform an AI risk assessment or impact assessment prior to implementation and map use cases to business needs. Examples include:
    • What is the risk if AI is or is not implemented?
    • What problem or use case does it help the company solve?
    • Would it be more productive for identified scenarios or would it hinder progress?
    • What impact would implementation have on relationships with third parties or customers the company does business with?
  • Evaluate risks identified, such as data leakage, insecure output, system compromise, or third-party risk.
  • Develop policies and procedures around AI toolsets and usage and communicate requirements to all users and interested parties.
  • Be aware of industry-specific restrictions prior to implementation.
    • Does the company have specific regulatory and compliance needs it must adhere to, such as HIPAA, PCI, or GDPR, that would prohibit access to sensitive data or require “minimum exposure” protections against compliance violations?
  • Inventory and classify all data within the environment.
    • Specifically, identify what data will be collected and processed for AI-specific tasks.
  • Train all users on what data is and is not permitted for use with AI toolsets.
    • Prohibited use could include customer data, internal company data, and data that could potentially include personally identifiable information (PII) or intellectual property (IP).
    • Permitted use could include summarizing publicly available documents or frameworks for understanding or aggregating data sets that do not include customer or sensitive information.
  • Use only authorized “closed” AI systems with enterprise-level protections.
  • Ensure proper technical controls are deployed to allow “minimum exposure”, especially if using sensitive data sets or unstructured data.
  • Maintain human oversight for all output.
    • This can be critical for coding, policies, procedures, playbooks, and legal documents, as many legal documents must now include AI addendums to ensure humans have reviewed the output for accuracy.
  • Verify and validate AI models for performance.
  • Document output, specifically when, how, and approved usage of AI technology within the environment. Ensure accountability and transparency between the system and user.
  • Be able to interpret decisions or output made by AI toolsets, using human oversight, specifically for development or intellectual property. Explainability is essential for functionality and impact to systems.
  • Log AI tools when able.
    • Treat AI toolsets the same way as any other software in the environment. From an IR and GRC perspective, all actions should be tied back to a specific user or system and logged.
  • Set up alerting for unauthorized use or data loss.
  • Maintain safety of AI systems and continue to perform impact assessments and improvements.
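Several of the items above — data classification, logging tied to a specific user, and alerting on prohibited data — can be sketched as a minimal pre-prompt gate. This is an illustrative assumption, not a complete DLP solution: the regex patterns and the `ai_usage` logger name are hypothetical, and a real deployment would rely on the organization’s own classification rules and tooling.

```python
import logging
import re

# Illustrative patterns only -- a real deployment would use a proper
# DLP engine and the organization's own data-classification rules.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

logger = logging.getLogger("ai_usage")  # hypothetical audit-log name

def screen_prompt(user: str, tool: str, prompt: str) -> bool:
    """Log every AI request and block prompts containing likely PII.

    Returns True if the prompt may be sent to the approved AI tool.
    """
    hits = [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]
    if hits:
        # Alert path: tie the violation back to a specific user and tool.
        logger.warning("blocked user=%s tool=%s matched=%s", user, tool, hits)
        return False
    logger.info("allowed user=%s tool=%s chars=%d", user, tool, len(prompt))
    return True
```

A gate like this supports the “minimum exposure” technical controls mentioned above: every request is attributable to a user and tool, and prohibited data is stopped before it leaves the environment.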

Avoid getting disqualified! Here are some key items to keep you from crashing, burning down your compliance department, and being removed from the race.

DQs - Don’t get caught cutting corners:

  • Deploy toolsets without developing, updating, and distributing policies and procedures on proper AI usage for both work and home, if remote work is in scope. Examples include:
    • Use of AI recording during meetings internally and externally without proper consent.
    • Relying on meeting translations to create company documents or reports.
    • Dismissing the use of AI tools while working at home, e.g., artificial intelligence of things (AIoT) such as smart home devices (Amazon Echo or Google Nest), home security devices, wearables (smart watches), etc.
    • Integrating Industrial IoT (IIoT) devices or sensors for company use without proper testing or approval.
  • Omit risk assessment or impact assessment processes prior to deployment.
  • Permit toolset deployment without proper data classification and technological controls in place to protect against accidental data disclosure.
  • Allow Shadow AI such as applications, SaaS tools, browser extensions, or open-source AI technology.
  • Share confidential, sensitive, or privileged information with public AI tools.
    • Ensure that information is only shared with prior authorization in company approved toolsets.
  • Forget to set up logging and monitoring.
  • Allow output from AI to be used externally or in a production setting without proper human oversight and following of system lifecycle review practices.
  • Set it and forget it, rather than continuing to monitor, improve, and mitigate AI risks.
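One way to surface the Shadow AI problem above is to compare outbound destinations in proxy logs against an approved AI-tool allowlist. This is a minimal sketch under stated assumptions: the domain names, the approved set, and the “user domain” log format are all illustrative, not a real product’s output.

```python
# Hypothetical approved set -- only the enterprise-sanctioned tool.
APPROVED_AI_DOMAINS = {"copilot.example-enterprise.com"}

# Hypothetical catalog of known AI service domains seen in the wild.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "copilot.example-enterprise.com",
}

def find_shadow_ai(proxy_lines):
    """Yield (user, domain) pairs for AI traffic outside the approved set.

    Assumes each proxy-log line starts with "user domain ...".
    """
    for line in proxy_lines:
        user, domain = line.split()[:2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain
```

Flagged pairs feed the same alerting pipeline as any other unauthorized-software finding, keeping Shadow AI inside the existing IR workflow rather than inventing a new one.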

AI is an exciting and rapidly growing technology, not just for security, programming, or IT, but also for public-facing areas of business such as marketing. There are many pros and cons to using these tools to be more productive, but implementing the right safety protocols will make for a safe and fun race to the finish line, whatever project is chosen. Being prepared and knowledgeable on industry best practices ensures the organization has the right pit crew for the job!

For assistance preparing for race day, whether with Risk Assessments, Program Development, or other questions regarding AI integration with NIST or ISO, please reach out to see how TrustedSec can assist your business’s needs.