- AI Circle of Trust
- Posts
- Inside the AI Circle of Trust
Inside the AI Circle of Trust
Tumeryk’s AI Circle of Trust is a monthly newsletter for those shaping safe, observable, and trusted AI featuring frontline lessons, real-world case studies, and breaking news.

Shadow AI’s Silent Leak: 77% of Employees Share Sensitive Data via ChatGPT
A new report from eSecurity Planet finds that 77% of employees admitted to uploading or sharing sensitive corporate data in generative AI tools like ChatGPT—creating a major compliance and DLP risk for enterprises.
Agentic AI Is More Risk Than Reward — For Now
The EMEA CISO at Palo Alto Networks warns that agentic AI projects — systems that reason and act autonomously — face a far higher failure rate than many realise. With poor governance, opaque ownership, and unchecked identities, these agents could expose enterprises to data leaks, privilege escalation, and “objective drift.” Governance, identity control, and security should lead deployment, not follow it.
Synthetic Data by GenAI Raises Serious Ethics Risks
New coverage from NIEHS highlights how synthetic data generated by GenAI isn’t just a technical shortcut—it can pose real risks to research integrity and data trust. According to bioethicist David Resnik, fake data may infiltrate scientific workflows unless controlled rigorously.

We’re building out powerful new features to give security and dev teams even deeper control, especially across Agentic AI, coding assistants, and pre-deployment safety.
✅ DNS Logs Observability: See what domains AI tools are hitting
✅ RAG expansion: Now supports Cosmos DB as a document source
✅ Agentic AI support: Secure agents built on Crew and Autogen frameworks
✅ Guardrails Testing Agent: Test policy impact before pushing to production
✅ Shadow AI Chrome Extension: Block unsafe prompts and redirect to enterprise AI
✅ Cursor integration: Auto-detect and guard against unsafe code via coding assistants

One Platform. Four Products. Built for Real-World AI Risk.
We’ve streamlined the Tumeryk platform into four powerful products giving enterprise teams focused control over every layer of GenAI risk:
Real-time enforcement of context-aware policies to stop jailbreaks, bias, and shadow AI across LLM agents.
Simulate adversarial attacks, expose vulnerabilities, and benchmark models, prompts, and guardrails — all mapped to our 9-dimension AI Trust Score™.
Passive monitoring for audit readiness — track prompt behavior, token responses, and model violations, without slowing anything down.
Govern AI usage across teams — with role-based access, prompt visibility, and fine-grained scoring of employee interactions with chatbots and copilots.
Whether you’re building copilots or securing enterprise AI systems, Tumeryk now offers purpose-built tools to score, protect, and monitor your entire AI stack.
AI Trust & Security Glossary
Not sure what prompt injection really means? Confused between observability and guardrails? We’ve just launched the Tumeryk Glossary, a living reference for 50+ terms across AI risk, GenAI governance, and enterprise security. Whether you’re a CISO, developer, red teamer, or just AI-curious, this page helps decode the language of modern AI risk. Updated regularly as the field evolves.