Our Charter
Guiding principles for the iterative development of safe and beneficial AGI
Preamble
We, TetiAI, recognize that artificial general intelligence (AGI) represents not a single transformative moment, but a continuous journey of increasingly capable systems. Named after the protective sea goddess Thetis, we embrace our role as guardians of this technology's development.
We acknowledge both the immense benefits AGI can bring—from solving humanity's greatest challenges to elevating human potential—and the significant risks it poses, including misuse, misalignment, and societal disruption. This charter serves as our living commitment to developing AGI through iterative, empirical progress that prioritizes safety and human benefit.
Our Understanding of AGI Development
Continuous Progress
AGI emerges through many incremental advances, not one giant leap. Each step provides crucial safety lessons for the next.
Learning Through Deployment
Real-world deployment teaches us what theory cannot. We learn from each interaction and adapt our approach accordingly.
Embracing Uncertainty
We don't have all the answers. We treat safety as a science, evolving our methods based on empirical evidence.
Scalable Solutions
Our safety methods must grow stronger with intelligence, not break under it. We seek approaches that scale with capability.
Core Principles
1. Safety Through Iteration
Safety is not a checkpoint but a continuous process. We deploy capabilities incrementally, learning from each stage to improve the next. This allows society to adapt alongside technological progress while we gather empirical safety data.
In practice: Each Teti release undergoes extensive testing, but the real safety insights come from observing actual usage patterns and edge cases.
2. Human Agency & Democratic Values
AI should elevate human capabilities and preserve human control. Decisions about AI behavior must reflect broad societal input, not just technical considerations. We commit to transparent governance and public participation in defining AI values.
In practice: We publish our Model Specifications, invite public comment on AI behaviors, and maintain human oversight at every level.
3. Universal Benefit & Access
AGI's benefits must reach all of humanity, not concentrate power in the hands of a few. We actively work to prevent harmful applications and ensure equitable access to AI's positive impacts across different communities and nations.
In practice: We provide tiered access models, support educational initiatives, and refuse to develop systems that could enable oppression or surveillance.
4. Radical Transparency
We maintain openness about our capabilities, limitations, and development process. We share safety research, collaborate with the global community, and acknowledge when we don't have answers. Transparency builds trust and enables collective progress.
In practice: We publish System Cards, safety evaluations, and research findings. We engage in public discourse about AI risks and benefits.
5. Privacy & Security by Design
User privacy and data security are foundational, not afterthoughts. We implement defense-in-depth strategies with multiple independent safety layers. Users maintain sovereignty over their data with clear controls and audit capabilities.
In practice: End-to-end encryption, data minimization, user-controlled deletion, and regular security audits by independent firms.
6. Adaptive Evolution
Our understanding evolves with experience. We commit to updating our methods, revising this charter, and admitting when we're wrong. The path to beneficial AGI requires humility and continuous learning from diverse perspectives.
In practice: Quarterly reviews of our safety framework, incorporation of external feedback, and willingness to pause or pivot when necessary.
Risk Categories We Address
Human Misuse
Preventing harmful applications including disinformation, surveillance, cyberattacks, and weapons development.
Mitigation: Use case restrictions, monitoring, and refusal mechanisms.
AI Misalignment
Ensuring AI actions align with human values and intent, preventing deception or loss of control.
Mitigation: Constitutional AI, RLHF, interpretability research.
Societal Disruption
Managing rapid change effects including inequality, job displacement, and shifts in power dynamics.
Mitigation: Gradual deployment, public discourse, policy collaboration.
Our Concrete Commitments
- We will pause or modify development if safety cannot be ensured, prioritizing caution over speed
- We will conduct external red teaming before each major release and publish findings
- We will never develop AI for autonomous weapons, mass surveillance, or oppression
- We will share safety-critical research openly, even if it provides no competitive advantage
- We will maintain a Preparedness Framework with clear thresholds for capability risks
- We will collaborate with governments on safety standards without compromising independence
- We will dedicate at least 20% of compute resources to safety research as capabilities scale
- We will establish a Safety & Ethics Committee with external members holding veto power
Accountability Measures
Internal Governance
- Independent safety team with direct board access
- Mandatory safety reviews for all deployments
- Whistleblower protections and anonymous reporting
- Regular charter compliance audits
External Oversight
- Third-party security and safety audits
- Public reporting on safety incidents
- Collaboration with AI Safety Institutes
- Open dialogue with critics and skeptics
A Living Document
This charter is not carved in stone. As our understanding of AGI and its impacts evolves, so too will this document. We commit to regular reviews, incorporating lessons learned from deployment, research breakthroughs, and societal feedback.
We acknowledge that we don't have all the answers. We may be wrong about our current approach. This humility drives us to seek diverse perspectives, challenge our assumptions, and adapt our principles based on evidence rather than ideology.
Our Promise to Humanity
Like Thetis, who sought to protect her son Achilles, we pledge to be guardians of AGI's development. We will navigate the uncertain waters ahead with wisdom, caution, and an unwavering commitment to human benefit. This charter represents not just our principles, but our promise to build a future where AI serves all of humanity.
Adopted: January 2024
Last Revision: March 2025
Next Review: June 2025