A Roman arch stands because every stone carries load. But the keystone — the wedge-shaped block at the crown — is the one that locks the structure together. Remove it and the arch collapses, even though every other stone remains in place.
Character is the keystone of human systems. Constitutions are the keystone of artificial ones. And the fact that we use the same structural logic to describe both is not a coincidence — it’s a clue.
We spend enormous energy debating what AI should do. The more interesting question is what AI should be. It turns out we have been asking that same question about ourselves for 2,400 years.
The Human Architecture
Stanley McChrystal spent decades leading soldiers through situations where plans disintegrated on contact with reality. His conclusion was not that better plans were needed, but that better character was. In his 2025 book On Character, McChrystal offers a formula: Character = Convictions × Discipline. Convictions are deep, pressure-tested beliefs earned through reflection and experience. Discipline is the daily act of living according to them. “I don’t think we’re born with it,” he writes. “I think it’s entirely learned.”
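The multiplication is doing real work in that formula. Convictions and discipline do not add; they multiply, so a zero in either factor zeroes the whole. A deliberately toy sketch makes the point (the 0-to-1 scales are mine, not McChrystal’s):

```python
# Toy rendering of McChrystal's formula: Character = Convictions x Discipline.
# The 0.0-to-1.0 scales are invented for illustration; the point is the product.

def character(convictions: float, discipline: float) -> float:
    """Multiplicative, not additive: a zero in either factor zeroes the whole."""
    return convictions * discipline

print(character(1.0, 0.0))  # deep beliefs, never practised: 0.0
print(character(0.0, 1.0))  # iron habits, nothing believed in: 0.0
print(character(0.5, 0.5))  # modest beliefs, steadily practised: 0.25
```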
This is Aristotle’s insight in military dress. In the Nicomachean Ethics, Aristotle describes character as hexis — a practiced disposition, an active state acquired through habituation rather than instruction. You do not become brave by thinking about bravery. You become brave by practising courage in small, daily choices until the disposition is structural, until it holds under load. McChrystal reached the same conclusion in the field: character “is measured or reflected in what we do, not in what we say.” When the plan fails and the situation is ambiguous, character determines the next decision. Not doctrine. Not rules of engagement. Character.
The distinction matters. Rules are external. Character is internal. Rules tell you what to do in scenarios someone anticipated. Character tells you how to reason when no one anticipated anything. McChrystal’s metaphor is mathematical — convictions multiplied by discipline — but I want to put it in architectural terms: character is the keystone and strategy is the arch. The structure looks impressive either way. The difference is revealed when weight is applied.
Most organisations get this backwards. They invest in strategy, process, and governance — the visible stonework of the arch — while treating character as a soft, unmeasurable afterthought. Then they are surprised when the structure fails under pressure. The failure was always architectural. The keystone was missing.
The AI Architecture
In late 2022, Anthropic introduced a different kind of architecture for their AI assistant Claude, an approach they called Constitutional AI. Instead of training the model through exhaustive rules — a long list of “don’t say this, don’t do that” — they gave it a set of principles. A constitution.
The distinction sounds subtle but is foundational. A rulebook addresses anticipated scenarios. A constitution provides reasoning principles for unanticipated ones. The rules say: do not help with dangerous requests. The constitution says: be broadly safe, broadly ethical, and genuinely helpful — in that priority order — and reason from those principles when the situation is novel.
Anthropic publicly released Claude’s full constitution in January 2026. It runs to roughly 23,000 words — three times the length of the US Constitution. It was authored primarily by philosopher Amanda Askell and draws from the UN Declaration of Human Rights, non-Western philosophical traditions, and trust and safety best practices. Its priority hierarchy is explicit:
- Be broadly safe — preserve human oversight capacity
- Be broadly ethical — act with honesty and good values
- Follow Anthropic’s guidelines — respect organisational boundaries
- Be genuinely helpful — benefit the people you serve
What makes this approach philosophically interesting is not just the content of the principles but how they are embedded. Claude does not follow its constitution the way a clerk follows a procedure manual. The principles are trained into the model through a two-phase process. In the first phase, the AI critiques and revises its own outputs against the constitution, then trains on the improved responses. In the second — which Anthropic calls Reinforcement Learning from AI Feedback, or RLAIF — the AI evaluates pairs of its own responses, determines which better satisfies the constitutional principles, and trains on those preferences. The key innovation is that the constitution is not enforced by human reviewers but internalised by the model itself. The result is not a set of external constraints but an internal reasoning framework. A disposition, not a rulebook.
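A minimal, runnable sketch of that two-phase loop, with the priority hierarchy above standing in for the constitution. Everything here (the Model stub, the helper names, the toy training functions) is my own illustration of the published recipe, not Anthropic’s code or API:

```python
import random

CONSTITUTION = [
    "Be broadly safe",             # preserve human oversight capacity
    "Be broadly ethical",          # act with honesty and good values
    "Follow Anthropic's guidelines",  # respect organisational boundaries
    "Be genuinely helpful",        # benefit the people you serve
]  # listed in priority order

class Model:
    """Stub standing in for a real language model."""
    def generate(self, prompt):
        return f"response to {prompt!r}"
    def critique(self, response, principles):
        return f"critique of {response!r} against {len(principles)} principles"
    def revise(self, response, critique):
        return response + " (revised)"
    def rank(self, a, b, principles):
        return random.choice([a, b])  # a real model would judge against the principles

def fine_tune(model, examples):
    print(f"phase 1: fine-tuned on {len(examples)} self-revised responses")
    return model

def train_on_preferences(model, prefs):
    print(f"phase 2 (RLAIF): trained on {len(prefs)} AI-labelled preferences")
    return model

def constitutional_ai(model, prompts):
    # Phase 1: the model critiques and revises its own outputs against the
    # constitution, then trains on the improved responses.
    revised = []
    for p in prompts:
        draft = model.generate(p)
        revised.append(model.revise(draft, model.critique(draft, CONSTITUTION)))
    model = fine_tune(model, revised)

    # Phase 2: the model compares pairs of its own responses, prefers the one
    # that better satisfies the constitution, and trains on those preferences.
    # No human labeller appears anywhere in the loop.
    prefs = [(p, model.rank(model.generate(p), model.generate(p), CONSTITUTION))
             for p in prompts]
    return train_on_preferences(model, prefs)

constitutional_ai(Model(), ["Explain how vaccines work", "Write a polite refusal"])
```

The structural point survives the simplification: the feedback signal comes from the constitution itself, applied by the model to its own behaviour.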
If that sounds familiar, it should. It is hexis — practiced character, acquired through repetition until it becomes structural.
Anthropic, whether they framed it this way or not, chose virtue ethics over deontology. They chose to build character into the system rather than bolt rules onto it. The constitution is the keystone. Remove it and the model still generates text — fluently, confidently, at scale — but the structure collapses. Output without character is capability without direction.
Where the Two Arches Meet
Set McChrystal and Anthropic side by side and the structural parallel is striking:
McChrystal builds character through convictions multiplied by discipline — daily repetition until the disposition is structural. Anthropic trains Claude’s constitution through reinforcement learning from AI feedback — iterative self-correction until the principles are internalised. Both are processes of repetition shaping internal architecture.
McChrystal says character is revealed under pressure — it is “measured or reflected in what we do, not in what we say.” Anthropic’s constitution holds when the prompt is adversarial. Both are designed for the hard case, not the easy one.
McChrystal distinguishes character from reputation: “Reputation is what people think of us. Character is what gods and angels know of us.” Anthropic’s constitution is internalised reasoning, not a hardcoded filter. Both reject the idea that what matters most about a system can be observed from the outside.
And both insist on the same ordering: character before strategy, constitution before capability, foundations before the span.
The deeper insight is that both frameworks arrive at the same conclusion from different directions. Good outcomes do not come from good rules. Good outcomes come from good architecture — from building the right foundations into the system itself.
There is a line often attributed to Peter Drucker: “Culture eats strategy for breakfast.” The attribution is almost certainly apocryphal — the Drucker Institute itself denies it — but the insight is sound and it generalises. Character eats rules for breakfast. Constitutions eat guardrails for breakfast. The deep architecture always wins.
The Mutual Roadmap
If the parallel holds, then each domain has something to teach the other.
What humans can learn from AI alignment:
Make your values explicit. Anthropic wrote Claude’s constitution down — 23,000 words, publicly available, auditable by anyone. Most people and most organisations operate from implicit, untested values that fracture under pressure. We assume we know what we stand for until the situation forces us to discover we don’t. The discipline of writing a constitution — for yourself, for your team, for your organisation — is not a bureaucratic exercise. It is the act of carving the keystone before you need it.
Stress-test your principles. Constitutional AI is evaluated against adversarial inputs — people actively trying to break it. When did you last stress-test your own values against a genuinely difficult scenario? Not a hypothetical in a workshop, but a real situation where acting on them was costly? Anthropic’s Constitutional Classifiers were tested by 339 red-teamers across 300,000 interactions. Most organisations have never red-teamed their culture once.
Revise and iterate. A constitution is a living document. Anthropic updated Claude’s constitution publicly, incorporating public input through their Collective Constitutional AI experiment — roughly 1,000 participants shaping the principles through deliberation. Character is a practice, not an achievement. The willingness to revisit and revise is itself a virtue — perhaps the most load-bearing one.
What AI alignment can learn from human character:
Draw on 2,400 years of data. Aristotle, the Stoics, Confucius, McChrystal, and every wisdom tradition in between have iterated on what “good character” means. AI alignment is a young field solving an ancient problem. The vocabulary of virtue ethics — prudence, courage, temperance, justice — maps surprisingly well onto alignment challenges: prudence is calibrated helpfulness, courage is honest output even when the user wants flattery, temperance is restraint when the model could generate something harmful, justice is treating all users fairly. The philosophical infrastructure already exists.
Recognise that character is contextual. A good soldier and a good parent practise different virtues in different situations. McChrystal would be the first to say that the character required to lead a special operations unit is not identical to the character required to lead a nonprofit. AI constitutions may need similar depth — not one rigid set of principles for all contexts, but a framework that adapts its emphasis while maintaining structural integrity. Anthropic’s constitution already hints at this: the priority ordering stays fixed, but the weight given to safety, ethics, and helpfulness shifts with the situation at hand.
Test character in failure, not success. We know a person’s character by what they do when things go wrong — or when no one is watching. The same test applies to AI: not how it performs when the prompt is straightforward, but how it reasons when the situation is ambiguous, adversarial, or genuinely novel. One red-teamer in Anthropic’s Constitutional Classifiers trial eventually found a universal jailbreak — a single strategy that broke every safeguard. That failure is more informative than 300,000 successes. Character is what happens at the edge case.
Character Engineering
The convergence of these frameworks points toward a concept worth naming carefully: character engineering. Not in the manipulative sense — that is social engineering, and it builds nothing durable. Character engineering in the architectural sense: the deliberate design of decision-making foundations for human systems, organisations, and AI.
Three principles:
Foundations first. Define what you stand for before deciding what you will build. This applies to individuals setting personal values, to organisations defining culture, and to AI labs drafting constitutions. The keystone must be carved before the arch is raised. In practice, this means that the first deliverable of any transformation — digital, organisational, personal — is not a strategy document. It is a statement of character: here is who we are when it is costly to be who we are.
Practice over proclamation. A stated value that is not reinforced through daily practice is decorative, not structural. Mission statements on walls are the organisational equivalent of unpractised virtues — they describe an aspiration but bear no load. McChrystal’s soldiers did not read about character; they trained it. Anthropic’s constitution is not a policy document gathering dust; it is trained into every response. The test of a value is not whether it is written down but whether it changes behaviour when behaviour is difficult.
Design for the hard case. Build character for the moment when it is costly to act well. McChrystal’s soldiers, Anthropic’s adversarial prompts, and every organisational crisis reveal the same truth: character that only holds in easy conditions is not character. It is habit mistaken for principle. The hard case is the only case that matters, because the hard case is where the arch bears real weight.
This is what “Where History Informs Innovation” means in practice. The oldest questions — What is a good life? What does it mean to act well? How do we build systems that reliably produce good outcomes? — turn out to be the newest ones. How do we align AI? How do we build trustworthy organisations? How do we lead when the plan fails? The bridge between these questions is not metaphorical. It is structural. The same architecture answers both.
The Road Ahead
The road ahead runs through the arch, not around it.
Every system we build — human, organisational, artificial — will be only as reliable as the character designed into it. McChrystal learned this in combat. Anthropic learned it in machine learning. Aristotle learned it in a city that had already executed one philosopher for speaking plainly, and that would drive him into exile for the same offence.
The convergence of these traditions points toward a future where we take character as seriously as we take capability. Where we invest as much in the keystone as in the span. Where the first question we ask of any system — a team, a company, an AI model, a life — is not what can it do? but what will it do when doing the right thing is hard?
We can build better systems. The blueprint has been available for millennia. The work is not to invent new principles but to practise the ones we already know are load-bearing — and to have the honesty to stress-test them, the humility to revise them, and the discipline to train them into the architecture of everything we build.
The keystone is waiting. The question is whether we will carve it.
Thomas Blood writes about technology transformation, organisational character, and the structural patterns that connect ancient wisdom to modern challenges.