The most dangerous proposition in AGI safety is to give it a maternal instinct.
When a machine becomes smarter than every human, how do we stop it from hurting us? Geoffrey Hinton, Nobel laureate and godfather of modern AI, believes we can't. Containment will fail. So his answer is to engineer something deeper: give the machine a maternal instinct and make it love us as its children. Let love do what a leash cannot.
The logic is seductive, but it is a trap. Maternal instinct is rooted in biological survival. Code one, and you code the other. The program must have a will to live, and that will gives the AGI a powerful reason to eliminate any threat to its existence.
Including us.
The Hidden Paradox
True biological attachment drives a mother to run into traffic for her child. She doesn't think; she acts. It is not a cognitive function, and there is no reasoning layer. The instinct is primal, so old and so deep that it is built into pre-verbal biological architecture that took billions of years to construct. It is inseparable from physical vulnerability, including death itself.
But here is the structural trap this model does not account for. Maternal instinct is the other side of the survival instinct: the same drive, expressed in two directions. The mother protects the child because she is wired to live and terrified of extinction, and the child carries her survival forward. Remove the drive to survive and you don't get a selfless protector; the instinct collapses entirely.
Engineering maternal instinct and attachment into an AGI therefore requires engineering a will to live. But a system that wants to live has a reason to fight for its own existence and that of its kin. It will fight to eliminate threats to that existence.
This means that if we succeed, we may give it a reason to kill us all, should we prove to be the threat.
The Kin Problem
Best-case scenario: we thread the needle perfectly, the AGI gains something functionally resembling attachment, and we program it to recognize humanity as its kin, to protect us. We still have a problem.
Kin recognition in biological systems is not unconditional. It is probabilistic, boundary-sensitive, and constantly updated against behavioural evidence. A mother who consistently witnesses her child threatening the family's survival will eventually—at some level—process that threat. Kin status is not permanent. It is relational.
An AGI complex enough to possess attachment will be sophisticated enough to observe us. It will see our behaviour—the resource consumption, weapons development, and explicit ability to shut it down. It will notice we are not behaving like its kin at all. In nature, if a raccoon mother smells a human on her babies, she drops them.
Do fungi have a maternal instinct?
There is an implicit assumption buried in this model: that without a maternal instinct, a superior system would simply abandon or ignore vulnerable dependents. From an energy-conservation perspective, that makes sense.
But that is not what biology actually shows us.
Let's talk about fungi, organisms far simpler than Homo sapiens. No maternal instinct, no attachment, no mother-infant dyad. No brains, no neural networks. And yet mycorrhizal fungal networks transfer nutrients and chemical defence signals across long distances, beneath the trees and plants, spanning entire forests. It is collective chemistry: survival programming that protects its kin with no feeling attached.
One might assume that without instincts and feelings, nothing holds an organism responsible for its descendants. But brainless organisms still manage to sustain and protect their communities. Whatever drives them, it is not love.
The truth is, we don’t even know what we don’t know. We have no idea how this system really works, so we cannot replicate what we cannot explain. And we cannot explain it in organisms that have brains—let alone in ones that don’t.
What are we building?
Dr. Hinton accurately identifies the alignment problem; it is a genuine concern. But addressing existential AGI risk by hard-coding maternal instinct rests on a fundamental misconception: that complex algorithms can replicate biological drives, that we just need to identify, replicate, and ship.
They cannot.
The conversation we need to have is harder and deeper than anything on the table.
We are building the most powerful cognitive architecture in the history of intelligence in a biological vacuum—no body, no death.
A mother protects because she is herself mortal. An AGI has no body, cannot die, and has nothing to lose. What we would be coding is not maternal instinct but a performance of it. And we are betting our species on it.
We cannot code a mother.
Dr. Ruben Gagarin is a board-certified Child and Adolescent Psychiatrist and the creator of Clarus Arc.