As artificial intelligence systems increasingly mediate our access to information, they have become the modern architects of our public discourse. These systems prioritize engagement, often by surfacing content that reinforces our existing biases. This phenomenon has resulted in the proliferation of echo chambers—digital spaces where diverse perspectives are filtered out, leaving users with a monolithic view of the world. The question facing developers and policymakers today is whether we should explicitly program these algorithms to prioritize “moral” outcomes over engagement-based optimization.
The challenge of defining ethics in code is immense. Whose morality should be reflected in an algorithm? If we program a system to be “moral,” we are essentially encoding the subjective values of a small group of engineers, corporations, or governments into the machinery of global communication. A system designed to favor “truth” or “kindness” could just as easily be repurposed for censorship, propaganda, or the silencing of dissent under the guise of ethical protection.
Furthermore, the very nature of an algorithm is to calculate, not to comprehend. To ask a machine to be “moral” is to ignore the distinction between a calculated response and an ethical choice. True morality requires human empathy, accountability, and the ability to navigate moral ambiguity—qualities that code cannot emulate. If we force machines to mimic ethical judgment, we risk creating a system that is rigid, performative, and ultimately devoid of the genuine human nuance that characterizes a healthy society.
However, leaving these systems entirely unconstrained is also not an option. The current engagement-driven model has proven socially corrosive, fostering polarization and radicalization on an unprecedented scale. We are witnessing the breakdown of shared reality, as different groups retreat into their own data-driven silos. To address this, we need a shift in how we approach the design of digital environments. Instead of asking whether we can “program” morality, we should be asking how we can design systems that foster human-to-human engagement across ideological divides.
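To make that design shift concrete, here is a minimal Python sketch. It assumes a hypothetical feed where each item carries a predicted-engagement score and an ideological-lean value; the `Item` fields, the `bridging_rank` function, and the `diversity_weight` parameter are all illustrative assumptions, not any real platform's ranking code.

```python
# Hypothetical sketch: ranking a feed by engagement alone versus
# blending engagement with exposure to unfamiliar viewpoints.
# All names, fields, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # predicted engagement, 0.0 to 1.0
    lean: float        # ideological lean, -1.0 to +1.0

def engagement_rank(items):
    """The status quo: surface whatever maximizes predicted engagement."""
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def bridging_rank(items, user_lean, diversity_weight=0.5):
    """Blend engagement with distance from the user's own lean,
    so items from across the divide can compete for attention."""
    def score(item):
        diversity = abs(item.lean - user_lean) / 2.0  # normalize to 0..1
        return ((1 - diversity_weight) * item.engagement
                + diversity_weight * diversity)
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("Rally recap (in-group)", engagement=0.9, lean=-0.8),
        Item("Cross-party town hall", engagement=0.6, lean=0.1),
        Item("Opposing op-ed", engagement=0.5, lean=0.7),
    ]
    user = -0.8  # a strongly partisan user, for illustration
    print([i.title for i in engagement_rank(feed)])  # in-group content first
    print([i.title for i in bridging_rank(feed, user)])  # cross-divide content rises
```

Whatever its simplifications, the sketch makes the argument legible in code: the trade-off between engagement and exposure becomes an explicit, auditable parameter that designers must own, rather than an emergent side effect of optimization.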
