Who is Shaping the Future? 8 Billion People or 8 Billionaires?

In CPR Connects

by: Rev. Cameron Trimble, CEO of Convergence

In 2012, I was asked to sit on a panel at a “Tech for Good” conference in Amsterdam, where I was invited to respond to ethical questions raised by the audience about artificial intelligence (AI). At that point, AI was still little more than a figment of our imagination. We were more obsessed with questions about blockchain and digital IDs. However, as a UCC pastor, theologian, and leadership consultant, my curiosity was piqued by the questions we were (and were not) asking, even then. Echoing in my mind was the call of the Dalai Lama, who said, “Our prime purpose in this life is to help others. And if you can’t help them, at least don’t hurt them.”

Fast forward 12 years, and here we are in the early years of mass deployment of AI tools that will forever change how we work, live and relate as humans on earth. I don’t say that lightly. 

The rise of AI is rapidly reshaping the world—affecting how we understand what it means to be human. As with many transformative technologies, AI’s power is concentrated in the hands of a few. It’s hard to ignore the fact that much of the development and ownership of AI is controlled by an elite group of billionaires and corporations. They have given this gift to the rest of us for free, in exchange for using our questions, data, and ideas as food for algorithmic learning. Once again, we are the product.

This raises an urgent question: Will the future be shaped by the needs of the 8 billion people on this planet—or by the priorities of 8 billionaires?

AI offers incredible opportunities to improve lives, but it also presents profound challenges, especially around issues of equity, ethics, and ownership. In this moment, people of faith and moral conviction—those we might call Advocates for the Common Good—have a unique responsibility to ask different kinds of questions. Who is benefiting from these technologies? Who is excluded from the conversation? And most importantly: How does AI shape the common good?


The Problem of Ownership: Who Controls the Future?

At the heart of the AI revolution is a troubling imbalance: a small number of tech giants hold immense power over its development and deployment. These companies—often led by a handful of billionaire entrepreneurs—are driving the direction of AI in ways that reflect their values, assumptions, and interests. The issue isn’t just the accumulation of wealth but the accumulation of influence. What happens to democracy when technology that shapes public life is controlled by so few?

This is particularly concerning when we consider the biases embedded in AI systems. Much of the data that trains AI reflects white, Western worldviews, reinforcing existing social inequalities. As AI increasingly becomes part of everything from hiring decisions to healthcare delivery, the systems risk marginalizing those already on the edges—communities of color, non-Western cultures, and people with fewer resources.

Advocates for the Common Good: Asking the Right Questions

This moment demands the voices of Advocates for the Common Good—those who can offer moral clarity and long-term perspective. Faith communities are uniquely positioned to raise questions beyond the bottom line and technological innovation. They are called to ask:

  • Who benefits from this technology?
  • What values are being encoded into these algorithms?
  • What is the environmental impact of AI energy needs? 
  • Does AI serve the flourishing of all, or only the privileged few?
  • How do these technologies shape relationships, justice, and care for the most vulnerable?

Thomas Aquinas reminds us that, “The good of the human community is greater and more divine than that of the individual.” This insight compels us to ask whether AI is advancing the collective well-being or entrenching systems of inequality. The future belongs not to those who build technology for their own gain, but to those who ensure it serves the flourishing of all.

Practical Steps Congregations Can Take to Shape the Future of AI

In the rapidly evolving world of artificial intelligence, congregations and individuals within them can play an important role in ensuring that AI development aligns with justice, equity, and the common good. Here are some practical actions that faith communities can take to engage thoughtfully with AI and make a positive impact.

  • Learn and Reflect Together
    • Host study groups, forums, or workshops on the ethical and social implications of AI.
    • Explore resources and books by thought leaders like Shoshana Zuboff (The Age of Surveillance Capitalism) or Cathy O’Neil (Weapons of Math Destruction).
    • Reflect on theological questions about AI, such as: How does this technology reflect or distort our values? What does it mean to love our neighbor in a world shaped by AI?
  • Advocate for Fair and Transparent Technology
    • Join or support advocacy efforts that promote transparency and accountability in AI development. Encourage congregants to stay informed about public policy efforts around AI and technology regulation.
    • Advocate for laws that prevent algorithmic discrimination and ensure equitable access to technology for all communities.
  • Support Ethical Companies and Technologies
    • Use purchasing power intentionally by choosing products and services from companies committed to ethical technology development.
    • Encourage ethical practices within the congregation, such as avoiding technologies that rely on surveillance or exploitative data collection.
  • Promote Diversity in Technology Spaces
    • Advocate for the inclusion of underrepresented voices—particularly women, people of color, and non-Western perspectives—in conversations about AI.
    • Support initiatives that offer technology education to marginalized communities, helping them gain access to careers in tech and AI.
  • Use AI Thoughtfully in Congregational Life
    • Congregations can explore using AI tools for administrative tasks, communication, or outreach—but with intention and care. For example, automation tools might enhance efficiency, but personal connections should remain at the heart of community life.
    • Discuss as a congregation what boundaries to set around technology use, ensuring that it supports rather than replaces meaningful relationships.
  • Engage Your Community
    • Partner with local schools, universities, and nonprofits that are exploring ethical AI. Participate in community conversations about the role of technology in education, healthcare, and social services.
    • Offer to host events where the wider community can discuss the future of AI and explore ways to ensure it serves the public good.
  • Ask the Hard Questions Together
    • Who benefits from this technology, and who is left behind?
    • How can AI enhance our commitment to justice and equity?
    • What does responsible use of technology look like within our community?
    • How does this technology affect our relationships with one another?

By engaging with AI thoughtfully and aligning technology use with shared values, congregations can become moral leaders in shaping the future of technology. The goal isn’t to reject AI but to ensure that it serves the well-being of all people—helping to build a future where technology promotes justice, compassion, and human flourishing.

A Call for Ethical Leadership

Faith communities and their leaders are uniquely positioned to advocate for AI that promotes the common good. This involves demanding transparency, accountability, and fairness in how AI is designed, developed, and used. It also requires challenging the misconception that technology is neutral—AI systems are shaped by the values, priorities, and biases of those who create them. Without careful oversight, these technologies risk perpetuating inequality and injustice.

As Pope Francis reminds us in Laudato Si’: On Care for Our Common Home, “Technology, when detached from ethics, will not easily be able to limit its own power. We need a humanism capable of bringing together the different fields of knowledge, including economics, for the sake of a more integral and integrating vision.”

Advocates for the Common Good have a vital role in guiding these efforts. They must insist that people—not profits—remain at the heart of technological progress. AI’s potential should serve the well-being of all, not just the interests of a privileged few. By ensuring that technology aligns with human values—such as justice, equity, and compassion—faith communities can help shape a future that uplifts and includes everyone.

Shaping a Future That Belongs to All of Us

The future is being written by the technologies we create today. But who writes that future matters. Will it be shaped by billionaires and tech giants whose priorities may not align with the common good? Or will it reflect the hopes and needs of the many—8 billion people whose lives are impacted by these innovations?

As Advocates for the Common Good, people of faith are called to ask better questions—not just about what AI can do, but about what it should do. How can it be a tool for justice, connection, and care? How can it help build a world where every person’s dignity is honored?

The future of AI isn’t set in stone. It can be shaped by the wisdom, compassion, and courage of those who care deeply about the common good. The question isn’t just what AI can achieve—but who we will choose to become as we build the future together.

If you are interested in talking with me about how Convergence can help you step into futuring with your congregation, I encourage you to reach out. We have accompanied many, just like you, through this impactful process.

Comments

  1. Hi Cameron,

    Please stop trying to evade moral injury with your “clean hands” approach. Responsible AI will require viable whistleblower disclosure and protection systems – which don’t exist and to which the Church takes no exception.

    What would you say, other than “good luck, you’ll get your reward in heaven,” to any AI worker in your faith community who came to you concerned about the difference between what his billionaire employer was saying for public consumption about “safe” AI and what was actually happening to maximize profit?

    My unresolved whistleblower disclosures, which I have been making for decades to you and many other Christian religious professionals, have troubling implications for nuclear weapon material security and much else relevant to American health, safety, security, and welfare. My efforts, now including four trips to the Supreme Court, have been futile, in essential part, due to the willful bystanding of Christian (and other) religious professionals and the institutions they lead. I’m told it is because it is too risky for their professional or institutional interests to do other than bystand and say “good luck, you’ll get your reward in heaven.”
