Are You Cherry-Picking Ethics?
Today’s rainy day reflections led me to something unexpected. I found myself diving deep into philosophical frameworks like consequentialism and non-consequentialism. While exploring these, a thought came to mind that feels more urgent than ever: Are AI developers simply choosing ethical guidelines that fit their immediate needs, or are they building something grounded in universal virtues?
It is impossible to ignore how fast AI is moving forward. The pace feels like a new race to the moon, except the finish line is not as clear. How we approach these ethical complexities matters just as much as the progress itself. When competition runs high, it can be tempting to push thoughtful moral considerations to the side. If developers pick ethical rules selectively, advancing on some fronts while ignoring deeper responsibilities, they risk creating systems that favor quick gains over long-term well-being.
Consider two of the most influential frameworks used to guide moral decisions. Deontology, associated with Immanuel Kant, focuses on whether actions adhere to certain principles. It insists that the right thing to do depends on moral duties, not outcomes. Utilitarianism, linked to thinkers like Jeremy Bentham and John Stuart Mill, places emphasis on consequences. Under this view, the moral worth of an action rests on whether it maximizes happiness or minimizes harm. Both have genuine strengths.
Deontological rules can provide clarity and consistency, making sure certain lines are never crossed. Utilitarian principles offer flexibility, adjusting to context in order to seek the best result for as many people as possible. Yet when developers apply these frameworks too rigidly or too selectively, ethical problems arise. A system built only on rigid principles might struggle with subtle judgments. Another that focuses too narrowly on outcomes might disregard the rights of those who do not benefit from the chosen path.
When it comes to AI, developers may inadvertently or deliberately mix and match these frameworks, choosing whichever perspective justifies their immediate goal. Suppose a company wants rapid innovation. It might lean on utilitarian ideas to justify prioritizing efficiency over fairness, marginalizing groups who do not share in the benefits. Another scenario might involve an overly strict adherence to principles that leaves no room for context, causing the AI to make choices that are technically correct but fail to consider human nuances.
Misuse in AI: Ethics for Convenience
This selective approach can turn ethics into a convenience store, where principles are picked off the shelf and discarded when they slow progress. The danger is that while it might seem rational or expedient, it risks reinforcing unfair biases and missing the bigger picture. AI should not be a mere tool for short-term gains at the expense of careful thought. If we rely solely on frameworks that serve immediate aims, we lose sight of ethics as a steady compass that prevents doing harm, not just a map that leads us to the most profitable outcome.
Avoiding Ethical Rigidity
There is a pressing need to develop AI that does not treat ethics like interchangeable parts. Instead, AI should have a balanced ethical foundation that blends the strengths of different frameworks while acknowledging their limitations. It should be able to recognize when strict rules matter and when outcomes must guide decisions. Avoiding moral shortcuts means deliberately weighing contexts and long-term implications rather than rushing to convenient conclusions.
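To make this concrete, here is a minimal sketch in Python of what such a blended decision procedure could look like. Everything in it is a hypothetical illustration rather than a real system's design: the `Action` fields, the constraint predicates, and the utility numbers are all invented. The shape is the point: deontological rules act as hard filters that can never be traded away, while a utilitarian score only ranks whatever survives them.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    """A candidate action; the fields here are invented for illustration."""
    name: str
    expected_benefit: float       # rough aggregate-welfare estimate (utilitarian signal)
    violates_consent: bool = False
    deceives_user: bool = False

# Deontological side: inviolable duties, checked first and never traded off.
HARD_CONSTRAINTS: list[Callable[[Action], bool]] = [
    lambda a: not a.violates_consent,  # never act without consent
    lambda a: not a.deceives_user,     # never deceive
]

def choose(actions: list[Action]) -> Action | None:
    """Filter by duties, then rank the survivors by expected outcomes."""
    permissible = [a for a in actions
                   if all(rule(a) for rule in HARD_CONSTRAINTS)]
    if not permissible:
        return None  # no permissible option: defer rather than pick the 'best' violation
    # Utilitarian side: among permissible actions, maximize expected benefit.
    return max(permissible, key=lambda a: a.expected_benefit)

options = [
    Action("nudge user without consent", expected_benefit=9.0, violates_consent=True),
    Action("transparent recommendation", expected_benefit=7.5),
    Action("do nothing", expected_benefit=1.0),
]
best = choose(options)
print(best.name if best else "escalate to human review")
# -> transparent recommendation: the highest-benefit option that breaks no duty
```

The ordering, not the particular rules, is what matters here: duties prune the option space before outcomes are compared, so a large expected benefit can never purchase a rule violation.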
A Global Framework
These challenges become even more complex when we consider the global stage. The values that drive Silicon Valley do not represent the full range of moral perspectives around the world. Cultures differ in their sense of fairness, duty, and individual rights, yet certain virtues seem broadly shared. Confucian benevolence, Islamic fairness, and Western individual rights all point to common moral ground. Recognizing and integrating these universal virtues can make AI feel less like an alien imposition and more like a reflection of shared human values.
This approach does not erase cultural differences; it adapts to them. AI should honor regional requirements and moral expectations while still holding onto a core that resonates across borders. A responsible AI in Europe might differ from one in Brazil or India, but all should be anchored in shared human principles. Systems built in this way strengthen trust and ensure that ethical standards are not limited to one part of the globe.
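One way to picture that architecture, again purely as a sketch with invented virtue names, regions, and weights: every deployment inherits a shared core, and regional overlays adjust emphasis without being able to remove or disable that core.

```python
# Shared moral core that every deployment inherits; virtue names are illustrative.
CORE_VIRTUES = {"honesty": 1.0, "fairness": 1.0, "empathy": 1.0}

# Regional overlays adjust emphasis and expression; these entries are hypothetical.
REGIONAL_OVERLAYS = {
    "EU":     {"privacy_emphasis": 1.0},
    "Brazil": {"community_emphasis": 0.8},
    "India":  {"duty_emphasis": 0.9},
}

def values_for(region: str) -> dict[str, float]:
    """Layer regional expectations over the shared core."""
    profile = dict(CORE_VIRTUES)                       # start from the common ground
    profile.update(REGIONAL_OVERLAYS.get(region, {}))  # then adapt locally
    # Guard: no overlay may delete or zero out a core virtue.
    assert all(profile.get(v, 0) > 0 for v in CORE_VIRTUES)
    return profile

print(values_for("Brazil"))
# {'honesty': 1.0, 'fairness': 1.0, 'empathy': 1.0, 'community_emphasis': 0.8}
```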
Embodying Virtue Ethics in AI Development
Discussions about AI ethics often focus on rules or outcomes, but virtue ethics opens another path. Inspired by Aristotle, it asks what qualities we want AI to embody, what kind of character we want it to have. Instead of asking what AI should do, we ask what AI should become. Empathy, fairness, and honesty are virtues that can guide AI to understand users’ emotions, promote equity, and maintain transparency.
A healthcare AI influenced by empathy and honesty will recognize emotional distress and not just deliver clinical facts. In fields like hiring or lending, fairness takes on critical importance, helping the AI support just outcomes rather than blindly following patterns in the data. Building these virtues into the AI’s training, through examples and narratives, can help the system learn to respond thoughtfully, just as a well-raised individual learns to consider others’ feelings and rights.
Developers should involve ethicists and diverse communities to ensure the values guiding AI are not selected in isolation. By asking whether a certain direction aligns with shared values and virtues, we reinforce the idea that AI is not merely a machine for optimization but also a creation that carries moral weight.
Training AI to embody virtues involves moving beyond raw data. Narratives, stories of fairness, compassion, or honesty, can help AI recognize and emulate virtuous behavior in real-world contexts. Systems should also reward virtuous actions during training; a tutoring AI, for instance, might balance strictness with encouragement to nurture perseverance and resilience in students. These approaches ensure AI does not merely optimize for efficiency but aligns with deeper moral values, creating a foundation for a more humane and ethically sound future.
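As a rough sketch of what rewarding virtuous actions during training could mean, the toy reward function below shapes a tutoring agent's incentives. The weights, the `TutorTurn` fields, and the bonus values are invented for illustration; in a real system these signals would more plausibly come from human feedback than from hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class TutorTurn:
    """One tutoring interaction; every field is a hypothetical illustration."""
    student_progress: float   # 0..1, how far the student advanced this turn
    student_struggling: bool  # did the student just fail an attempt?
    gave_encouragement: bool  # did the tutor respond supportively?
    was_harsh: bool           # was the tone dismissive or belittling?

def shaped_reward(turn: TutorTurn,
                  w_task: float = 1.0,
                  w_virtue: float = 0.5) -> float:
    """Task reward plus a virtue term that pays for encouragement
    when it matters most and penalizes harshness unconditionally."""
    virtue = 0.0
    if turn.student_struggling and turn.gave_encouragement:
        virtue += 1.0  # perseverance is nurtured exactly when the student falters
    if turn.was_harsh:
        virtue -= 1.0  # harshness costs reward no matter how 'efficient' the turn
    return w_task * turn.student_progress + w_virtue * virtue

# A brisk but harsh turn now scores below a slower, encouraging one:
fast_but_harsh = TutorTurn(0.9, student_struggling=True,
                           gave_encouragement=False, was_harsh=True)
patient = TutorTurn(0.6, student_struggling=True,
                    gave_encouragement=True, was_harsh=False)
assert shaped_reward(patient) > shaped_reward(fast_but_harsh)
```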
Virtue ethics also adapts to context. For example, while honesty is universally valued, its expression may differ across cultures: directness in one setting, tact in another. Developers must design AI that respects such nuances without losing sight of universal virtues. Balancing conflicting virtues, like empathy and honesty, further requires careful programming to navigate complex moral dilemmas.
Empathy and honesty can conflict when delivering uncomfortable truths, as empathy seeks to cushion emotional distress while honesty prioritizes transparency. A well-designed AI must balance these virtues, ensuring compassionate communication without compromising the clarity or accuracy of information. Applying virtue ethics extends to the development process itself. Developers must involve ethicists and diverse communities, continuously asking questions like "Should we do this?" and "How can AI reflect our values?"
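To show one way the empathy-honesty balance described above might be operationalized, here is a minimal sketch under strong assumptions: candidate replies carry honesty and empathy scores (hand-set here; in practice they would come from learned evaluators), honesty acts as a floor that cannot be traded away, and empathy is maximized only among the replies that clear it.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A possible reply; the virtue scores are hand-set for illustration."""
    text: str
    honesty: float  # 0..1, factual accuracy and completeness
    empathy: float  # 0..1, warmth and sensitivity of tone

HONESTY_FLOOR = 0.8  # hypothetical threshold below which a reply distorts the truth

def pick_reply(candidates: list[Candidate]) -> Candidate:
    """Maximize empathy only among replies that do not compromise honesty."""
    truthful = [c for c in candidates if c.honesty >= HONESTY_FLOOR]
    if not truthful:
        raise ValueError("no sufficiently honest reply available")
    return max(truthful, key=lambda c: c.empathy)

replies = [
    Candidate("Everything will definitely be fine.", honesty=0.3, empathy=0.9),
    Candidate("The results are concerning. Here is what they mean, and what "
              "we can do next.", honesty=0.95, empathy=0.8),
    Candidate("The results are concerning.", honesty=0.95, empathy=0.2),
]
print(pick_reply(replies).text)
# The honest *and* compassionate reply wins; false comfort never makes the cut.
```

Treating honesty as a floor rather than a weight is itself a moral design choice: it encodes that compassion may shape how the truth is delivered, but never the truth itself.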
When done well, AI solves problems and mirrors our humanity. Virtue ethics offers a path to create systems that inspire trust, foster connection, and reflect the best of who we are.
Bridging Ethics and Innovation
To achieve the best possible outcome, we must step back from the habit of cherry-picking convenient ethical frameworks. We can adopt a broader foundation anchored in shared virtues that stand the test of time, place, and culture. By giving AI a moral character that respects differences and upholds common ethical ground, we encourage trust and meaningful connection.
Ethical AI should not be about following one rigid framework or chasing the greatest good at any cost. It should embody the values that unite us. By training AI to understand empathy, fairness, and honesty, we set it on a path toward decisions that feel human at their core. This is not about abandoning progress, but about pursuing innovation with moral wisdom. AI then becomes a testament to what is possible when we combine technical ingenuity with careful moral consideration.
Warmly,
Riikka