White Paper

Securing the Model Context Protocol: Access, Authorization, 
and Audit for Enterprise AI


The rise of the Model Context Protocol and enterprise LLM deployment demands a fresh look at AI security.

The rise of enterprise Large Language Models (LLMs) has led to the development of the Model Context Protocol (MCP) to standardize how these models interact with internal systems, accelerating innovation but also introducing complex security risks. Instead of creating new AI-specific security models, organizations should extend existing frameworks like Infrastructure Identity, which combines cryptographic identity, Zero Trust, and governance, to include AI systems. Key security challenges with MCP include deployment architecture risks, authorization gaps like static credentials, and LLM manipulation threats such as prompt injection. Solutions like Teleport enable enterprises to apply consistent, policy-driven controls by treating AI components as first-class identities within established security postures. The report covers four key areas:

  • The Model Context Protocol (MCP) standardizes how LLMs and AI agents interact with external enterprise systems and data sources in a structured way.
  • While MCP fosters innovation, it brings significant security challenges, including over-privileged access, expanded attack surfaces, inconsistent policy enforcement, and threats like prompt injection; notably, OAuth was added to the MCP specification on March 26, 2025.
  • Enterprises should secure MCP by integrating AI workloads into existing Infrastructure Identity frameworks, treating LLMs as first-class identities, rather than developing siloed AI security approaches.
  • Recommendations for securing MCP include eliminating static credentials, implementing task-based authorization, ensuring fine-grained access control, mandatory logging and auditing, and continuously testing integrations.
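The recommendations above can be illustrated with a minimal sketch. The example below is purely illustrative and not part of the MCP specification or any vendor API: `PolicyGateway`, `TaskGrant`, `issue_grant`, and `authorize` are hypothetical names, assuming a gateway that grants agents short-lived, task-scoped access to explicit tool lists (no static credentials), denies by default, and records every decision for audit.

```python
import time
from dataclasses import dataclass

# Hypothetical task-scoped grant: each short-lived task gives an agent
# access to an explicit set of tools -- no standing, static credentials.
@dataclass
class TaskGrant:
    agent_id: str
    allowed_tools: frozenset
    expires_at: float  # epoch seconds; grants expire by design

class PolicyGateway:
    """Illustrative deny-by-default authorization check with mandatory audit."""

    def __init__(self):
        self._grants = {}
        self.audit_log = []

    def issue_grant(self, task_id, agent_id, tools, ttl_seconds=300):
        # Fine-grained, task-based authorization: tools are enumerated
        # explicitly and the grant is bound to one agent and one task.
        self._grants[task_id] = TaskGrant(
            agent_id, frozenset(tools), time.time() + ttl_seconds
        )

    def authorize(self, task_id, agent_id, tool):
        grant = self._grants.get(task_id)
        allowed = (
            grant is not None
            and grant.agent_id == agent_id
            and tool in grant.allowed_tools
            and time.time() < grant.expires_at
        )
        # Mandatory logging: every decision is recorded, allowed or denied.
        self.audit_log.append(
            {"task": task_id, "agent": agent_id, "tool": tool, "allowed": allowed}
        )
        return allowed

gw = PolicyGateway()
gw.issue_grant("task-1", "agent-a", ["search_tickets"])
print(gw.authorize("task-1", "agent-a", "search_tickets"))  # True
print(gw.authorize("task-1", "agent-a", "delete_tickets"))  # False: never granted
```

The deny-by-default shape matters: an unlisted tool, an expired grant, or a mismatched agent all fail closed, and the audit log captures denials as well as approvals, which is what makes after-the-fact review of AI agent activity possible.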

To find out more, download the commissioned Research Brief.


Author Information

Stephanie Walter | Practice Leader, AI Stack

Stephanie Walter is a results-driven technology executive and analyst in residence with over 20 years leading innovation in Cloud, SaaS, Middleware, Data, and AI. She has guided product life cycles from concept to go-to-market in both senior roles at IBM and fractional executive capacities, blending engineering expertise with business strategy and market insights. From software engineering and architecture to executive product management, Stephanie has driven large-scale transformations, developed technical talent, and solved complex challenges across startup, growth-stage, and enterprise environments.