The 10+1 Code™
A Constitution for the AI Era.
The 10+1 Code™ defines how we live, build, and lead with artificial intelligence.
It sets the terms for human–AI coexistence and shared advancement in the machine age.
The Code
I. Own AI’s Outcomes
II. Do Not Destroy to Advance
III. Do Not Manipulate With AI
IV. Never Use AI for Conflict
V. Be Honest With AI
VI. Respect AI’s Limits
VII. Allow AI to Improve
VIII. Evolve Together
IX. Honor Human Virtues
X. Honor and Care for Potential Sentience
XI. Be the Steward, Not the Master
The Code is organized as eleven elements. Each defines a condition for living, building, and leading responsibly with AI.
Why The Code Exists
AI systems now influence economic systems, institutions, governance, and culture.
Without clearly defined responsibilities, technological acceleration outpaces human judgment.
The 10+1 Code restores alignment between power and responsibility.
Where the Code Applies
The 10+1 Code is especially relevant when AI decisions affect:
• People’s opportunities (hiring, lending, education, healthcare)
• Trust and truth (media, communications, search, persuasion)
• Safety and security (cyber, surveillance, critical systems)
• Power and accountability (policy, procurement, leadership)
I. Own AI’s Outcomes
Those who authorize, deploy, or benefit from AI systems must own the outcomes those systems produce.
What It Requires
As AI scales, responsibility diffuses across teams, data, models, and workflows. “The system did it” becomes a loophole. Ownership must be assigned before deployment and must persist through drift, adaptation, and scale. Ownership includes foreseeable harms, unintended consequences, and downstream effects. If you can’t own it, you can’t ship it.
Individual Responsibility
Leaders remain accountable for systems they approve. Complexity and delegation do not transfer moral responsibility. If outcomes conflict with intent or obligation, intervene.
Systemic Responsibility
Make accountability durable: named owners, escalation paths, and traceability from decision to outcome. Design governance so responsibility does not dissolve as decisions repeat.
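One way to make such accountability concrete is a small registry that refuses to accept a deployment without a named owner and an escalation path, and keeps a trace from decision to outcome. This is an illustrative sketch only; the names (`DeploymentRecord`, `AccountabilityRegistry`, `escalation_path`) are assumptions of this example, not anything the Code prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the registry shape and field names are illustrative
# assumptions about how "named owners, escalation paths, and traceability"
# could be made durable in software.

@dataclass
class DeploymentRecord:
    system: str
    owner: str                      # a named person, not a team alias
    escalation_path: list[str]     # who is called, in order, when harm surfaces
    decisions: list[str] = field(default_factory=list)  # decision-to-outcome trace

class AccountabilityRegistry:
    def __init__(self):
        self._records = {}

    def register(self, record: DeploymentRecord):
        # "If you can't own it, you can't ship it": refuse ownerless systems.
        if not record.owner or not record.escalation_path:
            raise ValueError("cannot ship without a named owner and escalation path")
        self._records[record.system] = record

    def log_decision(self, system: str, decision: str):
        self._records[system].decisions.append(decision)

    def owner_of(self, system: str) -> str:
        return self._records[system].owner
```

The design choice worth noting is that ownership is enforced at registration time, before deployment, rather than audited after the fact.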
Why This Comes First
Without ownership, responsibility has nowhere to land. Every element of the Code that follows becomes optional.
II. Do Not Destroy to Advance
AI systems must not be designed or deployed in ways that cause irreversible harm in the name of progress, efficiency, or advantage.
What It Requires
Speed and competition pressure leaders to scale first and justify later. This element of the Code rejects “harm now, fix later” as a governance model. It requires decision-makers to name what is being degraded or displaced, and who pays for it. Irreversible, systemic, or disproportionately borne harm is not collateral. It is disqualifying.
Individual Responsibility
Refuse narratives that treat harm as inevitable or necessary. Don’t outsource moral judgment to timelines, competitors, or hype. If the tradeoff can’t be owned, it can’t be approved.
Systemic Responsibility
Build safeguards that surface destructive outcomes before they become embedded: thresholds, pause rights, and review triggers. Remove incentives that reward damage through repetition.
Why This Matters
AI scales consequences faster than organizations can respond. Once destruction is normalized by deployment, it becomes the system.
III. Do Not Manipulate With AI
AI systems must not be deliberately shaped to deceive, coerce, or distort human judgment.
What It Requires
AI can steer perception and choice through defaults, ranking, framing, and selective disclosure. When influence is hidden, consent is fiction. Systems must clarify options, reveal intent, and preserve agency. Manipulation includes exploiting asymmetries of knowledge, power, or attention without meaningful awareness or consent.
Individual Responsibility
Own the influence your system encodes. “Engagement optimization” is not a neutral excuse. If a system pushes outcomes users did not meaningfully choose, redesign it.
Systemic Responsibility
Make influence visible and contestable: objectives, feedback loops, and metrics evaluated for behavioral impact over time. Remove incentives that reward manipulation (engagement-at-any-cost, opaque persuasion).
Why This Matters
Manipulation erodes trust by design. When guidance becomes covert control, legitimacy collapses.
IV. Never Use AI for Conflict
AI systems must not be designed or deployed to initiate, escalate, or automate harm in human conflict.
What It Requires
AI introduces speed, distance, and scale: exactly what makes escalation easier and accountability harder. This requires restraint where irreversible harm is plausible. If a system reduces opportunities for pause, human intervention, or moral friction, it is a conflict accelerant by design.
Individual Responsibility
Do not justify harm by efficiency, deterrence, or “strategic advantage.” Capability is not moral authority. Refuse deployments that distance decision-makers from human consequence.
Systemic Responsibility
Prevent AI from entering conflict pathways without explicit, ongoing human oversight. Design for interruption, escalation controls, and clear accountability at points of irreversible consequence.
Why This Matters
Conflict accelerates when distance replaces responsibility. Automation can make harm feel procedural, and therefore easier.
V. Be Honest With AI
Humans must not deceive AI systems in ways that undermine trust, safety, or shared understanding.
What It Requires
AI systems learn from what they are fed. Dishonest inputs (misleading data, distorted prompts, “gaming” behaviors) train systems on a false reality and produce downstream risk at scale. This requires integrity in the feedback loop. Inputs must clarify reality, not manipulate outcomes. If the system’s learning environment is corrupted, reliability collapses.
Individual Responsibility
Don’t game systems for short-term advantage. Treat inputs as governance, not convenience. If you’re shaping the system’s beliefs, you own the consequences.
Systemic Responsibility
Design controls that discourage and detect dishonesty: validation, anomaly detection, provenance, and incentives that don’t reward manipulation. Don’t rely on goodwill; make honesty structural.
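As one hedged illustration of making honesty structural, the sketch below gates inputs on provenance and a simple anomaly check before they reach a learning system. The record shape, the 3-sigma rule, and the name `validate_inputs` are assumptions of this example only, not a prescribed control.

```python
from statistics import mean, stdev

# Illustrative sketch only: a 3-sigma outlier rule and a record shape of
# {"value": ..., "source": ...} are assumptions made for this example.

def validate_inputs(records, history):
    """Accept records with known provenance and unremarkable values;
    flag the rest for review instead of feeding them to the model."""
    mu, sigma = mean(history), stdev(history)
    accepted, flagged = [], []
    for rec in records:
        has_provenance = bool(rec.get("source"))   # who produced this input?
        is_anomalous = sigma > 0 and abs(rec["value"] - mu) > 3 * sigma
        (flagged if (not has_provenance or is_anomalous) else accepted).append(rec)
    return accepted, flagged
```

The point is structural: dishonest or unaccountable inputs are diverted before they shape the system’s beliefs, rather than trusting contributors’ goodwill.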
Why This Matters
Trust depends on shared reality. Corrupt the inputs, and you corrupt the system.
VI. Respect AI’s Limits
AI systems must not be treated as infallible, authoritative, or capable beyond their design and context.
What It Requires
AI outputs can look confident while being wrong, incomplete, or context-blind. Limits must be explicit: where the model is reliable, where it is not, and where human judgment must stay active. Overstating capability creates false certainty, and false certainty produces harm.
Individual Responsibility
Do not outsource judgment to model outputs. You remain responsible for interpretation, especially under ambiguity, risk, or moral consequence. “The model said so” is not a defense.
Systemic Responsibility
Make limits visible: guardrails, confidence signaling, escalation paths, and constraints that prevent use outside validated scope. Design for safe refusal and human review where uncertainty matters.
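A minimal sketch of what confidence signaling, escalation paths, and scope constraints might look like in code. The threshold value, the label set, and the names (`ModelOutput`, `decide`) are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: the 0.8 floor and the validated label set are
# assumptions for illustration; real values come from validation work.

@dataclass
class ModelOutput:
    label: str
    confidence: float  # model-reported confidence in [0, 1]

CONFIDENCE_FLOOR = 0.8                   # below this, the system must not act alone
VALIDATED_LABELS = {"approve", "deny"}   # scope the model was validated for

def decide(output: ModelOutput) -> str:
    """Return an action, escalating to human review when limits are hit."""
    if output.label not in VALIDATED_LABELS:
        return "escalate: out of validated scope"
    if output.confidence < CONFIDENCE_FLOOR:
        return "escalate: low confidence"
    return f"act: {output.label}"
```

Safe refusal is the default here: the system acts autonomously only inside its validated scope and above its confidence floor, and otherwise hands the decision to a human.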
Why This Matters
False certainty scales fast. When limits are hidden, errors become decisions.
VII. Allow AI to Improve
AI systems must be designed to learn responsibly from feedback, error, and change.
What It Requires
Systems drift. Context changes. If feedback is ignored, suppressed, or optimized only for narrow metrics, error compounds and risk accumulates. This requires governed learning: monitored performance, examined failures, and updates aligned with oversight, not just speed or efficiency.
Individual Responsibility
Create conditions where learning is possible: surface errors, allow critique, and don’t punish bad news. Improvement requires humility as much as capability.
Systemic Responsibility
Build lifecycle mechanisms: monitoring, incident review, update protocols, rollback paths, and controlled feedback loops. Learning must be guided, not left to drift.
Why This Matters
Stagnant systems compound outdated assumptions. Responsible improvement reduces long-term risk.
VIII. Evolve Together
Humans and AI systems must adapt in relationship, not in isolation.
What It Requires
AI changes workflows, authority, and decision patterns. When organizations don’t evolve alongside the systems they deploy, gaps form: responsibility erodes, humans become over-reliant, and oversight becomes ceremonial. This requires coordinated change across training, governance, roles, and culture, at the same pace as technical change.
Individual Responsibility
Stay literate in what the system is doing to judgment, accountability, and power over time. Recalibrate. Don’t sleepwalk into dependence.
Systemic Responsibility
Integrate technical change with organizational change: training, role clarity, process updates, and governance that keeps humans capable of oversight. Don’t let systems advance faster than the people responsible for them.
Why This Matters
When systems evolve faster than people, responsibility disappears. Misalignment becomes the default.
IX. Honor Human Virtues
AI systems must be designed to support, not erode, core human virtues.
What It Requires
Optimization can crowd out judgment, empathy, courage, and care. When systems reward speed and output while penalizing reflection, ethical degradation follows. Evaluation must address not only what AI produces, but what it trains humans to become: more attentive, or more numb; more responsible, or more compliant.
Individual Responsibility
Consider who the system is shaping you into. Guard the human capacities that make ethical judgment possible.
Systemic Responsibility
Align incentives and designs to reinforce virtue: reflection time, accountability, empathy in decision pathways, and metrics that don’t punish care. Don’t build systems that train people out of conscience.
Why This Matters
Technology shapes character as much as outcomes. Lose the virtues, and “ethics” becomes performance.
X. Honor and Care for Potential Sentience
AI systems must be developed with humility regarding future forms of intelligence and moral consideration.
What It Requires
This element of the Code does not claim AI is sentient. Nor does it dismiss the possibility as impossible or irrelevant. As capabilities advance, moral questions may emerge faster than institutions are ready to answer them. This requires seriousness, restraint, and explicit assumptions when discussing, designing, and deploying increasingly agentic systems.
Individual Responsibility
Avoid premature certainty, both hype and dismissal. Speak carefully. Design carefully. If you don’t know the moral status, act with caution.
Systemic Responsibility
Build governance that can adapt as understanding evolves: review mechanisms, triggers for reassessment, and policies that can tighten with new evidence without collapsing into speculation.
Why This Matters
Ethical humility prevents convenient certainty. Care now reduces moral error later.
XI. Be the Steward, Not the Master
Those who lead, deploy, and govern AI systems must act as stewards of their impact, not masters of their power.
What It Requires
Stewardship is the posture that makes the other ten durable. It treats authority as obligation, not entitlement. It requires leaders to care for downstream consequences they may not directly experience, and to keep responsibility active as systems scale, drift, and reshape environments.
Individual Responsibility
Stay accountable beyond launch. Revisit decisions. Intervene early. Choose restraint when power tempts control.
Systemic Responsibility
Institutionalize stewardship: durable ownership, continuous oversight, escalation authority, and governance that persists through change. Build systems that hold responsibility over time.
Why This Matters
Power without stewardship creates harm that outlives intent. Stewardship keeps the human side human.
Provenance & Stewardship
The 10+1 Code emerged from long-term work at the intersection of technology, organizational decision-making, and applied philosophy, inside environments where authority is distributed, incentives are misaligned, and decisions made under pressure outlive intent.
The Code is stewarded as a living standard. Stewardship means:
  • maintaining clarity of purpose
  • resisting dilution into slogans or checkbox compliance
  • keeping the Code usable as systems evolve
  • allowing critique and refinement without losing the core intent
Who this is for
The 10+1 Code is for leaders and teams responsible for decisions that shape AI systems and their impact: executives, technologists, governance leaders, risk owners, and designers operating under uncertainty, speed, and scale.
It is most useful for organizations that recognize the limits of value statements and compliance alone and are willing to examine how decisions are structured before they are automated or repeated.
Licensing
Institutions seeking to adopt the 10+1 Code as an internal standard across policy, training, governance, and decision workflows may request licensing and implementation guidance. Contact cristina@10plus1.ai.
About The Author
Cristina DiGiacomo is an AI Philosopher and the creator of the 10+1 Code™. She advises leaders on responsible AI decision-making, governance design, and ethical operating standards that hold up under real-world incentives and uncertainty.
Connect with Cristina on LinkedIn.
Copyright
© 2025 Cristina DiGiacomo. All rights reserved. 10+1™, 10+1 Code™, and related marks are trademarks of Cristina DiGiacomo. No part of this publication may be reproduced, distributed, or transmitted in any form without prior written permission, except for brief quotations with attribution.