Column: Image to Image: Musings on Faith, Media, and Story
Column Description: Image to Image: Musings on Faith, Media, and Story is a monthly column that illuminates old and new ideas about media ecology from a Christian perspective. Dr. Mitchell will explore what it means to bear God’s image and Christian witness in a mediated world, with a particular focus on the relationships between theology, media, and orthopraxy across different Christian traditions.
By Chase Mitchell, Ph.D.
Assistant Professor of Media and Communication, East Tennessee State University
The Moral Catechism of Sam Altman
September 2025
I recently watched an interview with Sam Altman, CEO of OpenAI and one of the biggest players in the tech/AI industry. The host pressed Altman on the question of alignment: Who is making decisions about the moral weights that drive AI algorithms? That is, who decides which human values shape AI outputs? How are those decisions made? Since tools like ChatGPT are increasingly ubiquitous in our lives, shouldn’t we be privy to their “moral source code”?
Altman explains that the company’s Model Spec is the constitutive document that shapes GPT’s moral reasoning. The Model Spec functions much like other meaning-making texts—such as the Catechism of the Catholic Church or the United States Constitution—in that it establishes boundaries, articulates principles, and provides interpretive guidance for behavior. Yet unlike those documents, which derive authority either from divine revelation or from the juridical structures of a polity, the Model Spec is not meant to clarify tenets or protect precepts. Instead, it is designed to reflect broader cultural shifts as they occur over time. In this sense, the Model Spec is accessible but not foregrounded, public yet less clear or definitive than a catechism or a constitution, because its authority lies in iterative adaptation rather than in transcendent or legally binding norms. Its technical framework embodies a kind of social constructivism: meaning and constraint are not discovered or revealed but continuously negotiated through human agency and collective autonomy.
In explaining the Model Spec, Altman admits, in a roundabout way, that according to OpenAI meaning is ultimately a mere social construct: Man makes his own meaning; thus meaning is always subject to change; and so, as the arbiter of his own values, Man is unbeholden to God or to any other transcendent standard of truth or morality. Altman and others like him are intentionally baking this worldview into the technology. In the interview, he explains how this “works” in the context of AI, and he does so in the most intelligent, articulate, agreeable, calming, and (thus) chilling way.
The OpenAI ethos sounds very democratic, until you realize that a) their products insinuate and propagate a worldview that says everything, ultimately, is negotiable; and b) although the Model Spec is ostensibly meant to integrate and reflect popular sentiment, the technologists can—at any time and without fanfare—tweak the alignment criteria to render AIs that inculcate their preferred ideologies.
Altman is either unaware of—or aware but unmoved by—the logical conclusion of his secular, techno-utopian vision. His is a world in which nothing is, finally, sacred. Tech lords can wrap pleasant and even inspiring bows around the AI project, but if it’s detached from objective truth (and, as a Christian, I would argue, divine revelation), our already-broken world will devolve further into chaos and fragmentation.
I’m no Luddite. I carefully use ChatGPT and other tools, but only because I trust in God’s good will and providential means. I subscribe to Jacques Ellul’s view, as he puts it in The Meaning of the City:
By this means God gets a foothold in man’s world. He chooses a city, or rather he lets man choose a city for him (after all the city belongs to man!), and by accepting from David’s hands the consecration of man’s counter-creation, God intervenes in the world where man wanted to refuse him entrance. And it is by the hand of man himself that it happens. God does not act as a master able to break down the barriers set up by man, to bring down the walls of Jericho, or to break the gates of Damascus. He does not act as a judge, far above every effort of man to revolt against him […] God meets man on his own ground, on his own terms. As he meets Satan and his spiritual powers where they are.[i]
In other words, the Cruciform God accounts for all our vainglorious enterprises in His good plan, including our attempts to make idols of ourselves and—in the case of AI—in our own image. Ellul was writing about the City as an icon of man’s idolatrous pursuit of technological progress, but his claim applies just as well to artificial intelligence. Even as we co-opt AI for redemptive ends, assured of Christ’s ultimate victory, we must be wary of AI creators’ spiritual bent. They veil their ideologies well, but sophistication is often just the mask of deception.
Notes
[i] J. Ellul (1970). The Meaning of the City. Grand Rapids, MI: William B. Eerdmans Publishing Co., p. 101.