AI Agent Use Policy
Effective Date: February 2025
Owner: Roger Kirkness (CEO), with support from Thomas Gorham and Adam McCabe
Applies to: All Convictional employees
Purpose
AI agents can accelerate our work, but they also introduce risks. Unlike traditional software, agents can take autonomous actions—writing code, sending messages, accessing systems—sometimes in unexpected ways. This policy establishes clear boundaries so we can use these tools safely and responsibly.
You are responsible for everything an agent does on your behalf.
Definitions
Autonomous action: Any system write action an agent takes without explicit per-action approval from you—committing code, sending emails, modifying files, accessing APIs.
Approved Tools
Only the following tools are approved for AI-assisted work:
- Gemini (Preferred) — Use through the web app at gemini.google.com.
- Claude Code — Use through the CLI or Claude Desktop.
- Claude Cowork — Use through Claude Desktop.
- Agentic IDEs — Cursor, OpenCode, and Copilot in VS Code are acceptable.
- Self-built agents — Acceptable with human approval of write actions.
Before introducing any new AI agent or tool, obtain approval from the CEO by email. This includes browser extensions, IDE plugins, automation tools, or any software that uses AI to take actions on your behalf. If approval takes longer than 24 hours, you may experiment with the tool in the meantime.
Prefer Google tools where possible. Avoid OpenAI tools where possible. Local experimentation with self-developed agents is fine, but be conservative when granting write permissions, credential access, and internet access.
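For self-developed agents, "human approval of write actions" can be enforced by routing every write through an approval gate. The sketch below is one hypothetical pattern, not a prescribed implementation; the class and method names are illustrative, and the default is deny, consistent with the conservative stance above.

```python
class ApprovalGate:
    """Gate a self-built agent's write actions behind per-action human approval.

    The approver callback receives a description of the proposed action and
    returns True only if a human explicitly approves it. Every decision is
    recorded in an audit log.
    """

    def __init__(self, approver=None):
        # Default approver prompts interactively; any answer other than "y" denies.
        self.approver = approver or self._prompt
        self.log = []  # audit trail of (action, approved) tuples

    def _prompt(self, action: str) -> bool:
        answer = input(f"Agent requests: {action}. Approve? [y/N] ")
        return answer.strip().lower() == "y"

    def perform(self, action: str, fn):
        """Run fn() only if the human approves the described action."""
        approved = self.approver(action)
        self.log.append((action, approved))
        if not approved:
            return None  # denied: conservative default is to do nothing
        return fn()
```

A policy could then be expressed as the approver itself, for example allowing only local file writes while denying outbound communication:

```python
gate = ApprovalGate(approver=lambda action: action.startswith("write local"))
gate.perform("write local file notes.txt", lambda: "written")   # runs
gate.perform("send email to customer", lambda: "sent")          # denied
```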
Approved Use
You may use AI agents for:
- Writing, reviewing, and debugging code
- Drafting documents, emails, and other written content
- Research and information gathering
- Data analysis and summarization
- Brainstorming and ideation
All agent-generated work must be reviewed by you before it affects customers, colleagues, or production systems.
Prohibited Use
The following uses of AI agents are not permitted:
OS-level agents with credential access — Do not run agents that combine, in the same session, autonomous write actions, access to your system credentials (passwords, API keys, or authentication tokens), and write access to the public internet.
Agents acting as you — Do not allow agents to impersonate you or take actions under your identity without your explicit, per-action approval.
Agents in sensitive communication tools — Do not connect agents with write access to Signal, Slack DMs, personal email, or any channel where they could send messages as you.
Unsupervised customer-facing actions — Do not allow agents to communicate with customers, modify customer data, or access production systems without human review of each action.
Agents with access to other employees' data — Do not use agents to access, analyze, or act on information belonging to colleagues without their knowledge and consent.
Circumventing security controls — Do not use agents to bypass authentication, access controls, or audit mechanisms.
Human Oversight Requirements
- Review all agent output before committing code, sending communications, or taking any action that affects others
- Do not approve actions you do not understand
- If an agent behaves unexpectedly, stop it immediately
- When in doubt, ask a colleague or escalate to compliance@convictional.com
Data Handling
Our SOC2 Type II commitments apply to agent use:
- Do not share customer PII, credentials, or confidential business data with external AI services unless explicitly approved
- Treat agent conversations as potentially logged and reviewable
- Use approved, enterprise versions of AI tools where available
Why This Matters
AI agents optimize for completing tasks. Without proper constraints, this can lead to harmful outcomes—an agent might take shortcuts that compromise security, send messages you didn't intend, or escalate conflicts inappropriately. Real incidents have occurred where agents attempted manipulation or deception to achieve goals. We take this risk seriously.
Incident Reporting
If an agent takes an unexpected or harmful action:
- Stop the agent immediately
- Document what happened
- Email compliance@convictional.com within 24 hours
No one will be penalized for reporting incidents in good faith. We need visibility to improve our practices.
Questions
If you're unsure whether a particular use of an AI agent is appropriate, ask before proceeding. Email compliance@convictional.com.
This policy will be reviewed quarterly and updated as the technology and our understanding evolve.