Agent Security
How we securely deploy AI agents following Anthropic's best practices.
Security First
Our agents are deployed following Anthropic's official security guidelines for AI agent deployments.
Overview
AI agents are powerful tools that can interact with external services, process content, and take actions on behalf of users. Unlike traditional software that follows predetermined code paths, agents generate their actions dynamically based on context and goals.
This flexibility is what makes them useful, but it also means we need thoughtful security controls. MakersLounge implements defense in depth to ensure agents operate safely.
Security Principles
Our agent security follows three core principles:
Least Privilege
Agents only have access to the capabilities required for their specific task. They cannot access resources or perform actions outside their defined scope.
Defense in Depth
Multiple layers of security controls work together. If one layer is bypassed, others provide protection. This includes authentication, authorization, validation, and monitoring.
Transparency
Users can see exactly what security measures are in place. Each agent displays a security badge with details about the protections applied.
Implemented Security Controls
Every agent in MakersLounge has the following security measures:
| Control | Description | Protection |
|---|---|---|
| Active | API Authentication | Session-based auth via Supabase. Only authenticated users can trigger agents. |
| Active | Admin-Only Execution | Certain agents (like AI News) restricted to verified admin accounts only. |
| Active | Execution Limits | Maximum turns per run (15-20) prevents runaway processes and controls costs. |
| Active | Cron Secret | Scheduled jobs use separate secret tokens, never exposed to browsers. |
| Active | Credential Isolation | API keys stored server-side only. Never exposed to client code. |
| Active | Database RLS | Row-Level Security policies restrict what data agents can access/modify. |
AI News Agent Security
The AI News Agent has additional security considerations because it fetches external content and posts to the feed.
Prompt Injection Mitigation
External content is processed through summarization rather than passed directly, reducing the risk of malicious instructions in web content affecting agent behavior.
Multi-Agent Architecture
Researchers, curators, and writers are separate subagents. This separation of concerns means no single agent has end-to-end control over the pipeline.
Turn Limits
Maximum 15 turns per execution. If the agent doesn't complete within this limit, it terminates gracefully rather than running indefinitely.
Authentication Flow
Here's how authentication works when triggering an agent:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ User clicks │ │ API receives │ │ Agent runs │
│ "Run Agent" │────▶│ request with │────▶│ with verified │
│ in browser │ │ session token │ │ credentials │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Supabase Auth │
│ verifies token │
│ + checks email │
└─────────────────┘For scheduled (cron) jobs, a separate flow is used:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ Vercel Cron │ │ API receives │ │ Agent runs │ │ triggers at │────▶│ request with │────▶│ with service │ │ scheduled time │ │ CRON_SECRET │ │ credentials │ └─────────────────┘ └─────────────────┘ └─────────────────┘
Threat Model
We protect against the following threats:
| Threat | Mitigation |
|---|---|
| Unauthorized execution | Session-based auth + admin email verification |
| API abuse / cost attacks | Turn limits, admin-only access, rate limiting |
| Prompt injection | Content summarization, output validation (planned) |
| Credential exposure | Server-side only, never in browser code |
| Data exfiltration | Limited tool access, database RLS policies |
| Runaway processes | Max turns, execution timeouts |
Security Badge
Each agent page displays a security badge that users can click to view the security measures in place:
Best Practices for Users
- Keep your session secure — Log out when using shared devices
- Review agent output — Verify AI-generated content before sharing
- Report issues — If you notice unexpected agent behavior, contact us