Are You Cherry-Picking Ethics?

Today’s rainy day reflections led me to something unexpected. I found myself diving deep into philosophical frameworks like consequentialism and non-consequentialism. While exploring these, a thought came to mind that feels more urgent than ever: Are AI developers simply choosing ethical guidelines that fit their immediate needs, or are they building something grounded in universal virtues?

It is impossible to ignore how fast AI is moving forward. The pace feels like a new race to the moon, except the finish line is not as clear. Yet how we approach these ethical complexities matters just as much as the progress itself. When competition runs high, it can be tempting to push thoughtful moral considerations to the side. If developers pick ethical rules selectively, advancing on some fronts while ignoring deeper responsibilities, they risk creating systems that favor quick gains over long-term well-being.

Cherry-Picking Ethics

Deontology vs. Utilitarianism

Consider two influential frameworks often used to guide moral decisions. Deontology, associated with Immanuel Kant, focuses on whether actions adhere to certain principles. It insists that the right thing to do depends on moral duties, not outcomes. Utilitarianism, linked to thinkers like Jeremy Bentham and John Stuart Mill, places emphasis on consequences. Under this view, the moral worth of an action rests on whether it maximizes happiness or minimizes harm. Both have genuine strengths.

Deontological rules can provide clarity and consistency, making sure certain lines are never crossed. Utilitarian principles offer flexibility, adjusting to context in order to seek the best result for as many people as possible. Yet when developers apply these frameworks too rigidly or too selectively, ethical problems arise. A system built only on rigid principles might struggle with subtle judgments. Another that focuses too narrowly on outcomes might disregard the rights of those who do not benefit from the chosen path.

When it comes to AI, developers may inadvertently or deliberately mix and match these frameworks, choosing whichever perspective justifies their immediate goal. Suppose a company wants rapid innovation. It might lean on utilitarian ideas to justify prioritizing efficiency over fairness, marginalizing groups who do not share in the benefits. Another scenario might involve an overly strict adherence to principles that leaves no room for context, causing the AI to make choices that are technically correct but fail to consider human nuances.

Ethics for Convenience

This selective approach can turn ethics into a convenience store, where principles are picked off the shelf and discarded when they slow progress. While it may seem rational or efficient, it risks reinforcing unfair biases and missing the bigger picture. AI should not be a mere tool for short-term gains at the expense of careful thought. If we rely solely on frameworks that serve immediate aims, we lose sight of ethics as a strategic compass that prevents harm, not just a map leading to the most profitable outcome.

Implementing these frameworks without genuine concern for the well-being of customers or the broader public is risky, both reputationally and operationally. A purely self-serving approach to AI ethics can lead to loss of consumer trust, legal challenges, negative media coverage, and employee dissatisfaction. This kind of damage is difficult to repair once it occurs, which makes it crucial to embed ethical principles into company culture from the start. Companies that proactively adopt strong ethical practices will not need to scramble to adjust when regulations tighten. Moreover, leading in AI ethics offers a competitive advantage, as consumer and stakeholder demand for responsible AI is steadily increasing.

The challenge lies in integrating ethical frameworks with nuance and precision, rather than treating them as a menu of convenient, interchangeable options. AI needs a balanced ethical foundation that blends the strengths of different frameworks while acknowledging their limitations. It should be able to recognize when strict rules matter and when outcomes must guide decisions. Avoiding moral shortcuts means deliberately weighing context and long-term implications rather than rushing to convenient conclusions.

A Global Framework

These challenges become even more complex when we consider the global stage. The values that drive Silicon Valley do not represent the full range of moral perspectives around the world. Cultures differ in their views of law, fairness, duty, and individual rights, yet certain virtues seem broadly shared. Confucian benevolence, Islamic fairness, and Western individual rights all point to common moral ground. Recognizing and integrating these universal virtues can make AI feel less like an alien imposition and more like a reflection of shared human values.

These principles converge to form a foundation of shared virtues that transcends cultural boundaries. This approach does not erase cultural differences; it adapts to them. AI should honor regional requirements and moral expectations while still holding onto a core that resonates across borders. A responsible AI in Europe might differ from one in Brazil or India, but all should be anchored in shared human principles. Systems built in this way strengthen trust and ensure that ethical standards are not limited to one part of the globe.

Embodying Virtue Ethics in AI Development

AI ethics discussions often focus on rules or outcomes, but virtue ethics offers another approach. Inspired by Aristotle, it asks what kind of character AI should have. Instead of focusing on what AI should do, we ask what it should become. Empathy, fairness, and honesty can guide AI to understand emotions, promote equity, and ensure transparency.

A healthcare AI shaped by empathy and honesty recognizes emotional distress rather than just delivering clinical facts. In hiring or lending, fairness ensures just outcomes rather than reinforcing biased patterns. Training AI with examples and narratives helps it respond thoughtfully, much like a well-raised individual considers others' feelings and rights.

Developers should work with ethicists and diverse communities to ensure AI reflects shared values, not isolated decisions. Just as we guide children to be compassionate members of society, we can shape AI with stories of fairness, honesty, and empathy.

Virtue ethics also adapts to context. Honesty, for instance, varies across cultures: blunt in one, tactful in another. AI must navigate such nuances while balancing conflicting virtues, like empathy and honesty, to communicate with both compassion and clarity.

Ethical AI is not just about optimization; it should inspire trust and reflect our humanity. Virtue ethics provides a framework to create systems that align with deeper moral values, fostering connection and responsibility.

Bridging Ethics and Innovation

To achieve the best possible outcome, we must step back from the habit of cherry-picking convenient ethical frameworks. We can adopt a broader foundation anchored in shared virtues that stand the test of time, space, and culture. By giving AI a moral character that respects differences and upholds common ethical ground, we encourage trust and meaningful connection. Ethical AI should not be about following one rigid framework or chasing the greatest good at any cost. The technology should embody the values that unite us. By training AI to understand empathy, fairness, and honesty, we set it on a path toward decisions that feel human at their core. Then AI becomes a testament to what is possible when we combine technical ingenuity with careful moral consideration.

Warmly,

Riikka
