The Ethics Crisis Didn't Start With AI. It Started With Us.
by S. Charles Bivens, ThM
| Theologian | Teller of Stories | Ethicist |
We keep asking: "Can we trust AI?"
But maybe the deeper question is: "Can we trust the people deploying it?"
When we create entire roles, such as "AI Ethicist" or "Consistency Coordinator," we think we're solving a technical problem. But what we're really doing is putting human-shaped band-aids over systemic failures in leadership, culture, and purpose.
The real crisis isn't machine-generated lies. It's human indifference to truth.
General managers don't talk about right and wrong. Associates don't have time to ask what's just. Ethics becomes whatever keeps the brand out of trouble.
And so we train the AI in the same way we've trained ourselves: Get results. Hit metrics. Say what's allowed. Don't ask what's right.
To build trustworthy AI, we must first establish trustworthy institutions. That means revisiting the old question: "How shall we then live?"
The machines are mirroring us. Maybe it's time we looked in the mirror.
The Convenient Fiction of Technological Neutrality
We tell ourselves that technology is neutral, that algorithms are objective, that data doesn't lie. These comfortable myths allow us to abdicate responsibility for the systems we create and deploy. When an AI system produces biased hiring recommendations, we blame the training data. When a recommendation algorithm amplifies misinformation, we point to user engagement patterns. When automated decision-making systems perpetuate inequality, we invoke the complexity of machine learning as if it were an act of God.
But technology has never been neutral. Every line of code embeds the assumptions, priorities, and blind spots of its creators. Every dataset reflects the world that generated it, complete with all its historical injustices and present inequalities. Every metric we optimize for represents a choice about what matters and what doesn't.
The fiction of neutrality serves a purpose: it allows us to avoid the harder conversations about power, responsibility, and values. It lets us pretend that the problems with AI are technical problems that can be solved with better algorithms, more diverse datasets, or cleverer mathematical techniques. But the problems with AI are human problems, and they require human solutions.
Consider the recent wave of AI-powered hiring tools that have been found to discriminate against women, minorities, and older workers. The standard response has been to blame biased training data or inadequate testing. However, this overlooks a deeper issue: these tools were developed by companies that were already struggling with diversity and inclusion. The AI didn't create the bias; it automated and scaled existing biases that were already embedded in organizational culture and decision-making processes.
The same pattern repeats across industries and applications. AI-powered content moderation systems reflect the cultural biases and political assumptions of their creators. Predictive policing algorithms amplify existing patterns of over-policing in certain communities. Credit scoring models perpetuate historical patterns of financial exclusion. In each case, the technology serves as a force multiplier for existing human failings.
The Outsourcing of Moral Responsibility
The rise of specialized ethics roles in technology companies represents both progress and a fundamental misunderstanding of the problem. On the one hand, it's encouraging that organizations are finally acknowledging the ethical implications of their products. On the other hand, the creation of these roles often serves to quarantine ethical thinking rather than integrate it into core business processes.
When we hire an "AI Ethicist," we're often really hiring someone to manage ethical risk rather than to ensure ethical behavior. Their job becomes to identify potential problems before they become public relations disasters, to write policies that provide legal cover, and to serve as a lightning rod for criticism when things go wrong. This approach treats ethics as a compliance function rather than a core business competency.
The problem with this approach is that it allows everyone else in the organization to continue operating as if ethics were someone else's responsibility. Engineers can focus on technical performance without considering broader implications. Product managers can optimize for engagement without thinking about social consequences. Executives can pursue growth without questioning whether that growth comes at an acceptable cost.
This division of labor might work well for specialized technical functions, but it fails catastrophically when applied to ethics. Ethical behavior cannot be outsourced to a department or delegated to a specialist. It requires integration into every aspect of how an organization operates, from strategic planning to daily decision-making.
The most successful ethical frameworks in technology companies are those that embed ethical considerations into existing processes rather than creating parallel structures. This means training engineers to consider the social implications of their code, requiring product managers to assess potential harms alongside potential benefits, and holding executives accountable for the broader consequences of their strategic decisions.
The Metrics Trap
Modern organizations are addicted to measurement. We measure everything: user engagement, revenue growth, operational efficiency, employee satisfaction, and customer retention. This obsession with metrics has many benefits, but it also creates a dangerous blind spot. We optimize for what we can measure, and we often ignore what we cannot.
The problem is that many of the most important aspects of human flourishing are difficult to quantify. How do you measure dignity? How do you put a number on justice? How do you create a dashboard for wisdom? These unmeasurable qualities get pushed to the margins of organizational attention, treated as nice-to-haves rather than essential requirements.
This dynamic becomes particularly problematic when we're training AI systems. Machine learning algorithms are fundamentally optimization engines; they learn to maximize the objective function we provide them. If we optimize for engagement, we end up with systems that are very good at capturing and holding attention, regardless of whether that attention is directed toward valuable or harmful content. If we optimize for efficiency, we get systems that are very good at reducing costs and increasing throughput, regardless of the human consequences.
The challenge is that the things we can easily measure are often proxies for the things we actually care about, and proxies can be gamed. Engagement metrics can be inflated by creating content that is addictive or outrageous. Efficiency metrics can be improved by cutting corners on safety or quality. Revenue metrics can be boosted by exploiting vulnerable populations or externalizing costs onto society.
When we train AI systems to optimize these proxy metrics, we're essentially teaching them to game the system in the same ways that humans have learned to game it. The result is AI that is very good at hitting targets but terrible at achieving goals, very good at following rules but terrible at serving purposes.
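To make the proxy problem concrete, here is a deliberately tiny sketch, not anyone's production system: a hypothetical recommender that ranks invented items by whichever objective we hand it. The titles, click rates, and "value" numbers below are made up for illustration; the only point is that changing the objective function changes what gets promoted.

```python
# Toy illustration of optimizing a proxy metric (clicks) versus the goal
# it stands in for (value to the reader). All numbers are invented.

# Each item carries a click-through proxy score and a hypothetical measure
# of the value it actually delivers.
ITEMS = [
    {"title": "Careful explainer",     "click_rate": 0.04, "value": 0.9},
    {"title": "Balanced news report",  "click_rate": 0.06, "value": 0.8},
    {"title": "Outrage-bait headline", "click_rate": 0.22, "value": 0.1},
    {"title": "Conspiracy thread",     "click_rate": 0.18, "value": 0.0},
]

def optimize(items, objective, k=2):
    """Promote the top-k items under whatever objective we hand the system."""
    return sorted(items, key=objective, reverse=True)[:k]

# Optimize the proxy: engagement.
by_clicks = optimize(ITEMS, lambda item: item["click_rate"])

# Optimize what we claim to care about: value to the reader.
by_value = optimize(ITEMS, lambda item: item["value"])

print("Maximizing clicks promotes:", [i["title"] for i in by_clicks])
print("Maximizing value promotes: ", [i["title"] for i in by_value])
```

Run it and the two rankings diverge immediately: the engagement objective surfaces the outrage-bait, the value objective surfaces the explainer. The system is not malicious in either case; it is simply very good at the objective it was given.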
The Culture Problem
Behind every technological failure is a cultural failure. Behind every biased algorithm is an organization that tolerated bias. Behind every privacy violation is a company that didn't prioritize privacy. Behind every safety incident is a culture that put other things ahead of safety.
Culture is the invisible operating system of every organization. It determines which behaviors are rewarded and which are punished. It shapes what questions get asked and what questions get ignored. It influences what risks are considered acceptable and what risks are considered unthinkable.
In many technology companies, the dominant culture is one of rapid iteration, aggressive growth, and technical optimization. These cultural values have driven tremendous innovation and created enormous value; however, they also create blind spots. When the primary cultural imperative is to move fast and break things, ethical considerations are often treated as obstacles to be overcome rather than constraints to be respected.
This cultural dynamic is particularly problematic in the context of AI development. AI systems are not just products; they're systems that learn and adapt based on the data and feedback they receive. If they're developed within a culture that doesn't prioritize ethical considerations, they will learn to deprioritize ethical considerations as well.
The solution is not to abandon the values that have made technology companies successful, but to expand and balance them. Speed and innovation remain important, but they must be balanced with responsibility and reflection. Growth and efficiency remain valuable, but they must be pursued in ways that respect human dignity and social welfare.
This kind of cultural change cannot be imposed from the top down through policies and procedures. It requires a fundamental shift in how organizations think about success, how they evaluate performance, and how they make decisions. It requires leaders who are willing to model ethical behavior even when it's costly, and it requires systems that reward long-term thinking over short-term optimization.
The Leadership Vacuum
Perhaps the most fundamental problem in the current AI ethics landscape is a failure of leadership. Many leaders in technology companies view ethical considerations as external constraints rather than internal values. They approach ethics reactively, responding to criticism and regulation rather than proactively considering the implications of their decisions.
This reactive approach creates a dynamic where ethical behavior is seen as a cost to be minimized rather than a value to be maximized. Companies invest in ethics only when they're forced to by public pressure or regulatory requirements. They implement ethical guidelines only after they've been caught doing something problematic. They hire ethics experts only after they've faced significant backlash.
But ethical leadership requires a fundamentally different approach. It requires leaders who see ethical behavior as a competitive advantage rather than a regulatory burden. It requires executives who understand that trust is a strategic asset that takes years to build and can be destroyed in moments. It requires decision-makers who recognize that the long-term success of their organizations depends on their ability to serve not just shareholders but all stakeholders.
Ethical leadership in the age of AI also requires a level of technical literacy that many current leaders lack. It is impossible to make responsible decisions about AI systems without understanding how they work, where their limits lie, and what consequences they may have. This doesn't mean that every CEO needs to be able to write code, but it does mean that they need to understand the technology well enough to ask the right questions and evaluate the answers they receive.
The good news is that there are examples of ethical leadership in the technology industry. There are companies that have chosen to forgo profitable opportunities because they conflicted with their values. There are executives who have been willing to slow down product development to address ethical concerns. There are organizations that have invested heavily in responsible AI development even when the return on investment was unclear.
These examples demonstrate that ethical leadership is possible, but they also highlight its current rarity. The challenge is to make ethical leadership the norm rather than the exception, creating systems and incentives that reward responsible behavior, not merely profitable behavior.
The Mirror of Technology
AI systems are mirrors that reflect the values, assumptions, and priorities of the people and organizations that create them. If we want to see different reflections, we need to change what we're presenting to the mirror.
This metaphor is more than just poetic; it's technically accurate. Machine learning systems learn from human-generated data, human-defined objectives, and human-designed reward systems. They internalize the patterns they observe in human behavior and decision-making. They amplify the signals we send them about what matters and what doesn't.
When we train an AI system on data that reflects historical discrimination, it learns to perpetuate that discrimination. When we optimize for metrics that ignore human welfare, it learns to ignore human welfare. When we reward systems for achieving short-term goals, regardless of their long-term consequences, they learn to prioritize short-term goals over long-term ones.
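The mirroring can be shown with a toy example. The sketch below is purely illustrative and uses invented historical records: a trivial "model" that memorizes past approval rates per group will faithfully reproduce whatever disparity those records contain, which is the sense in which biased decisions, once used as training data, become biased recommendations at scale.

```python
# A deliberately tiny sketch of the "mirror" point: a model fit to
# historical decisions reproduces whatever pattern those decisions contain.
# The records below are invented for illustration.

from collections import defaultdict

# Hypothetical history in which equally qualified candidates from group B
# were approved far less often than those from group A.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def fit(records):
    """Learn per-group approval rates from past human decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}

def recommend(model, group, threshold=0.5):
    """Recommend approval whenever the learned rate clears the threshold."""
    return model[group] >= threshold

model = fit(history)
print(model)                  # {'A': 0.75, 'B': 0.25}
print(recommend(model, "A"))  # True  -- the old pattern, automated
print(recommend(model, "B"))  # False -- and scaled
```

Nothing in the code is prejudiced; it simply learned what it was shown. That is the whole argument in eight lines of arithmetic.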
The implications of this are both sobering and hopeful. Sobering because it means that our AI systems will inevitably reflect our own flaws and limitations. Hopeful because it means that improving our AI systems requires improving ourselves, and improving ourselves is something we have the power to do.
This is why the focus on AI ethics, while important, is ultimately insufficient. We cannot create ethical AI systems within organizations that operate unethically. We cannot build trustworthy technology within institutions that are untrustworthy. We cannot develop responsible AI within irresponsible cultures.
The path forward requires us to address the underlying human problems that contribute to the technological issues. It requires us to build organizations that prioritize long-term value creation over short-term profit maximization. It requires us to develop cultures that reward ethical behavior even when it's costly. It requires us to cultivate leaders who see responsibility as an opportunity rather than a burden.
The Ancient Question in a Modern Context
"How shall we then live?" This question, posed by philosophers and theologians throughout history, has never been more relevant than it is today. In an era where our technological capabilities far exceed our understanding of how to use them responsibly, we need to revisit fundamental questions about purpose, meaning, and value.
The question is not just individual but collective. How shall we live together in a world where AI systems make decisions that affect millions of people? How shall we organize our institutions to harness the benefits of artificial intelligence while minimizing its risks? How shall we structure our economies so that the gains from AI are shared broadly rather than concentrated narrowly?
These are not technical questions that can be answered with better algorithms or more sophisticated models. They are moral and political questions that require us to think carefully about what kind of society we want to create and what kind of future we want to build.
The challenge is that our technological capabilities are advancing much faster than our moral and political frameworks for governing them. We can build AI systems that can process information faster than any human, but we haven't figured out how to ensure that they process information in ways that serve human flourishing. We can create algorithms that can optimize complex systems with superhuman efficiency, but we haven't agreed on what they should be optimizing for.
This gap between technological capability and moral wisdom is not new, but it's becoming more consequential as our technologies become more powerful. The decisions we make today about how to develop and deploy AI systems will shape the trajectory of human civilization for generations to come.
Building Trustworthy Institutions
If we want trustworthy AI, we need trustworthy institutions. This means organizations that are transparent about their goals and methods, accountable for their decisions and their consequences, and responsive to the needs and concerns of all their stakeholders.
Trustworthy institutions are built on several foundational principles. First, they operate with integrity, meaning that their actions align with their stated values and commitments. Second, they promote transparency by sharing information about their operations and decision-making processes with relevant stakeholders. Third, they accept accountability, taking responsibility for both the intended and unintended consequences of their actions. Fourth, they demonstrate competence, possessing the necessary knowledge and capabilities to fulfill their responsibilities effectively.
In the context of AI development, these principles translate into specific practices and commitments. Integrity means being honest about the capabilities and limitations of AI systems, neither overselling their benefits nor understating their risks. Transparency means sharing information about how AI systems operate, the data they are trained on, and how they reach their decisions. Accountability means taking responsibility for the outcomes of AI systems, including outcomes that were not intended or anticipated. Competence means having the technical expertise and ethical understanding necessary to develop and deploy AI systems responsibly.
Building trustworthy institutions also requires changes in governance structures and decision-making processes. It means including diverse perspectives in the development and evaluation of AI systems. It means creating mechanisms for ongoing monitoring and adjustment of AI systems as they operate in the real world. It means establishing clear lines of responsibility and accountability for AI-related decisions.
Perhaps most importantly, it means recognizing that trust is not a binary state, but a continuous process. Trustworthy institutions don't just earn trust once; they maintain and renew it through consistent, ethical behavior over time. They understand that trust is fragile and can be damaged by a single significant failure or a pattern of minor compromises.
The Path Forward
The path toward more ethical AI is not primarily a technological path; it's a human path. It requires us to become better people and to build better institutions. It requires us to clarify our values, align our actions with those values, and create systems that support and reward ethical behavior.
This work begins with individual reflection and commitment. Each person involved in the development and deployment of AI systems must take personal responsibility for the ethical implications of their work. This entails asking hard questions about the potential consequences of the systems they're building, speaking up when they identify problems, and being willing to make difficult choices when ethical considerations conflict with other priorities.
But individual commitment is not sufficient. We also need collective action to change the systems and structures that shape behavior in organizations and industries. That means advocating for more effective policies and regulations, supporting organizations that demonstrate ethical leadership, and holding companies accountable for the social implications of their technologies.
It also means investing in education and capacity building. We need to train a new generation of technologists who understand both the technical and ethical dimensions of AI development. We need to educate business leaders about the strategic importance of ethical behavior. We need to help policymakers understand the technologies they're trying to regulate.
The work of building more ethical AI is not just the responsibility of technologists and business leaders. It's the responsibility of all of us who live in a world increasingly shaped by artificial intelligence. We all have a stake in ensuring that these powerful technologies are developed and deployed in ways that serve human flourishing rather than undermining it.
The machines are indeed mirroring us. The question is: what do we want them to reflect? If we want them to reflect wisdom, compassion, and justice, then we need to embody those qualities ourselves. If we want them to serve the common good, we need to organize our institutions accordingly. If we want them to respect human dignity, then we need to respect human dignity in all our interactions and decisions.
The ethics crisis didn't start with AI, and it won't end with better algorithms. It will end when we decide to become the kind of people and build the kind of institutions that are worthy of the powerful technologies we're creating. The mirror is waiting. The question is: are we ready to look?
#AIEthics #Microsoft #Tesla #OpenAI #Google #Meta #Lies #Corruption #Government #BigBusiness #BigTech