Security in an AI world—Inflection AI panel insights
Photo credit: Amit Manjhi

by Debabrata Dash, Amit Manjhi, Shruti Prakash

Inflection AI, in partnership with Key.ai, recently hosted an engaging panel discussion on LLM security at their office. The panel featured Huseni Saboowala (Daxa), Ritesh Ahuja (Wald), and Inflection AI’s own Prasad Vellanki and Balraj Singh, and was moderated by Kapil Chhabra from WisdomAI.

The main topic of discussion was data leakage, both to external parties and across internal organizational boundaries, stemming from concerns that LLM providers might inadvertently use user-uploaded data for training purposes—as Amazon previously discovered. Although some providers have since addressed this issue, enterprises continue to seek stronger assurances by implementing additional layers of security around LLM usage, which is why sensitive information disclosure remains #2 on the OWASP Top 10 for LLM Applications, just behind #1, Prompt Injection.

Here are our key takeaways from the panel discussion:

1. End-to-End Encryption: The Key to Prioritizing Privacy with Custom Assistants

For security-sensitive enterprises, balancing security and functionality is critical. One effective strategy is end-to-end encryption for custom AI assistants. While this approach introduces friction—such as limited shareability and the need to encrypt/decrypt data per user—it ensures that no data crosses user boundaries unencrypted, maximizing privacy and control.
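The per-user encrypt/decrypt flow described above can be sketched as follows. This is a toy illustration using only the standard library (a SHA-256 counter-mode keystream with encrypt-then-MAC); a real deployment would use a vetted authenticated cipher such as AES-GCM from an audited crypto library, and the function names here are hypothetical:

```python
import hashlib
import hmac
import os

def derive_user_key(user_secret: bytes, salt: bytes) -> bytes:
    # Each user gets their own key, so assistant data never crosses
    # user boundaries unencrypted.
    return hashlib.pbkdf2_hmac("sha256", user_secret, salt, 100_000)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (toy only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed or wrong user key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The friction mentioned in the panel shows up directly: data encrypted under one user's key cannot be shared with or decrypted by another user without an explicit re-encryption step.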

2. RAG: The Key to Contextual Security in Enterprise Access Control

While careful contextual extraction matters less for public data now that context windows reach up to 10 million tokens, those same large contexts surface unique challenges for enterprise solutions. In these environments, strict access controls are paramount: context retrieval must be restricted to documents the querying user is authorized to see. This requirement underscores the irreplaceable role of embedding access-control mechanisms directly within RAG systems.
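A minimal sketch of the pattern, assuming ACL metadata (here, a hypothetical `allowed_groups` set) is stored alongside each document and a simple term-overlap score stands in for vector similarity. The key point is ordering: authorization filtering happens before ranking, so unauthorized documents can never enter the retrieved context:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL metadata

def retrieve(query_terms, corpus, user_groups, k=2):
    # Filter FIRST on ACLs, then rank: documents the user cannot read
    # never enter the candidate set, so they cannot leak into the prompt.
    authorized = [d for d in corpus if d.allowed_groups & user_groups]
    scored = sorted(
        authorized,
        key=lambda d: sum(term in d.text for term in query_terms),
        reverse=True,
    )
    return scored[:k]
```

In a production RAG stack the same idea is typically expressed as a metadata filter pushed down into the vector store query rather than a post-hoc Python loop.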

3. Data Semantics: The Key to Unifying Access Controls for Unstructured Data

Propagating access control to AI systems can be achieved in multiple ways. While centralized permissions in RAG systems are critical, replicating access controls for unstructured data from other systems is often poorly managed, leading to potential data leaks. By employing semantic analysis and cross-referencing content with intended user permissions, organizations can prevent unauthorized access and maintain data integrity.
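The semantic cross-referencing idea can be sketched as an audit step: classify a document's content, then compare the implied sensitivity labels against the permissions the intended audience actually holds. The keyword table below is a deliberately crude stand-in for a real semantic classifier, and all label names are hypothetical:

```python
# Toy keyword map standing in for a semantic classification model.
SENSITIVE_MARKERS = {
    "salary": "hr_confidential",
    "acquisition": "exec_only",
    "patient": "phi",
}

def classify(text: str) -> set:
    """Return the sensitivity labels the content implies."""
    lowered = text.lower()
    return {label for kw, label in SENSITIVE_MARKERS.items() if kw in lowered}

def audit_document(text: str, granted_labels: set) -> list:
    # Flag any label the content implies that the share target is not
    # cleared for -- the mismatch that causes silent data leaks.
    return sorted(classify(text) - granted_labels)
```

An empty audit result means the content's semantics are consistent with the intended permissions; a non-empty one flags the document before it reaches an unauthorized index or context window.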

4. Query Adjustment: The Key to Implementing Multi-Layer Control for Structured Data

Structured data often demands more granular access controls than what native database mechanisms provide, where user roles are typically coarse-grained. To address this, access policies must be enforced at the application layer. A powerful technique is query rewriting—injecting additional predicates into SQL queries to enforce row- and column-level constraints. Though complex to manage, this approach bridges the gap between detailed application-level permissions and database limitations, increasing protection fidelity.
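The query-rewriting technique can be sketched as below. Column and parameter names (`tenant_id`, `visibility`) are invented for illustration; a real implementation would rewrite a parsed SQL AST rather than splice strings, and this naive version assumes `WHERE` is the final clause:

```python
def inject_row_policy(sql: str, role: str) -> str:
    """Append row-level security predicates to a SELECT statement.

    Values are left as bind parameters (:tenant_id) so the database
    driver handles escaping; never splice user values into SQL text.
    """
    predicates = ["tenant_id = :tenant_id"]
    if role != "admin":
        # Non-admin roles additionally get a column-derived restriction.
        predicates.append("visibility = 'public'")
    clause = " AND ".join(predicates)
    keyword = "AND" if " where " in f" {sql.lower()} " else "WHERE"
    return f"{sql} {keyword} {clause}"
```

Because the predicates come from the application's own policy store, this closes the gap between fine-grained application permissions and the coarse roles the database natively understands.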

5. Agentic Systems: The Key Frontier in AI Security

As trust in AI assistants grows, agentic systems—where AI agents perform autonomous actions—present a new frontier in security. Unlike hallucinations, which are easier to detect and contain, autonomous behavior can have significant, unpredictable consequences. The challenge is exponentially more complex than LLM output control. Active research into using restrictive DSLs to constrain agent behavior is paving the way toward securing agentic systems in an evolving AI landscape.
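One simple form of the restrictive-DSL idea is an action allowlist with schema and approval checks, validated before any agent-proposed action executes. The action names and policy tables below are hypothetical, and real agent frameworks express richer policies, but the gatekeeping shape is the same:

```python
# Policy: which actions an agent may take, and with which arguments.
ALLOWED_ACTIONS = {
    "search_docs": {"query"},
    "send_email": {"to", "subject", "body"},
}
# Irreversible or external-facing actions require a human in the loop.
APPROVAL_REQUIRED = {"send_email"}

def validate_action(action: dict, approved: bool = False) -> bool:
    """Reject any agent-proposed action outside the declared policy."""
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        raise PermissionError(f"action not in allowlist: {name}")
    extra = set(action.get("args", {})) - ALLOWED_ACTIONS[name]
    if extra:
        raise PermissionError(f"unexpected arguments: {sorted(extra)}")
    if name in APPROVAL_REQUIRED and not approved:
        raise PermissionError(f"{name} requires human approval")
    return True
```

Unlike output filtering, this check constrains what the agent can *do*, which is precisely where autonomous behavior becomes riskier than hallucination.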

6. On-Prem LLMs: The Key to Data Sovereignty

Offering LLMs that run on customers' private data centers or within VPCs ensures that data remains fully under customer control, eliminating concerns about external access or data leakage. Additionally, carefully curating training data reduces the risk of model poisoning. For large enterprises that prioritize both cost efficiency and security, this strategy is foundational to achieving true data sovereignty.
