Just because we can
A few newsletters ago I promised to share some of my ponderings about the ethics of AI.
A historical parallel I've kept coming back to is the technological progress brought by the advent of steam power and the upheaval that followed. If we hadn't dug all that coal and oil out of the ground in the first place, we wouldn't have a warming climate, but nor would we have advanced so quickly.
Weighing up progress versus consequences, speed versus sustainability, and individual advantage versus collective impact is easier to do in hindsight, but humans do have the option to apply the lessons of the past to the choices of the present and future.
Two particular issues emerge when I talk to charity leaders about AI:
Sensitive data privacy. When you feed beneficiary information into ChatGPT or Copilot, where does that data go? For charities holding information about vulnerable people, this isn't abstract. This is core business. It is safeguarding. It is trust.
Being left behind. But if everyone else is using AI to work faster, produce more content, and respond more quickly, won't we become irrelevant?
The climate lesson we're in danger of ignoring
When coal powered the Stockton and Darlington Railway in 1825, no one was thinking about atmospheric carbon in 2025. They were thinking: faster production, economic growth, competitive advantage.
The consequences, if they were even considered back then, seemed distant, abstract, someone else's problem.
Right now, AI data centres are consuming staggering amounts of energy and water. Yet we're racing ahead because competitive pressure feels more immediate than climate consequences.
Did we learn nothing?
Progress for whom?
We never asked whether we should. We only asked whether we could.
And more importantly: we didn't pay attention to who benefits and who pays the price.
With coal and oil, industrialised nations and shareholders benefited. Colonised nations, unborn generations, and vulnerable communities paid.
With AI, tech companies and well-resourced organisations benefit. Communities whose data trains models without consent, gig economy workers identifying harmful imagery, junior workers whose jobs are automated, and regions facing climate impacts pay.
The choice is structured to make one option feel inevitable. Everyone else is doing it, it's faster and easier, you'll be left behind, the tool is free, technical complexity obscures ethical implications.
This is exactly how we ended up with a fossil fuel economy. I know all that, and yet I'm using AI tools too, though I am trying to use them thoughtfully. Is that enough, though?
A different way forward?
Some possible questions to ask:
What are we optimising for? Speed and efficiency? Or depth, quality, relationships, trust?
What are the potential harms? Data breaches. Erosion of critical thinking. Dependence on tools we don't control. Environmental costs. Perpetuating biases.
Who bears the risk? If something goes wrong, who suffers?
Are there alternatives? What if we improved human systems rather than added AI?
Can we start small and reversibly? Can we stop if we discover harms?
Some practical steps
Stay in experiment mode. Small-scale pilots with clear evaluation criteria. Reversible decisions.
Decision criteria grounded in mission. Score each AI application against your values: data privacy, critical thinking, environmental sustainability, reducing inequalities, enhancing human relationships. Only proceed if it passes your threshold.
Invest in privacy-preserving approaches. Local models where data never leaves your systems. Privacy-focused tools. Clear governance policies. Transparency with beneficiaries.
Of course this is harder and more expensive. But what's the alternative? Compromise safeguarding for convenience?
Build ethics into your ongoing risk register and review it regularly. What unintended consequences have emerged? How is this changing our culture? Who benefits and who pays?
Connect with others choosing differently. There's power in charities collectively saying: "We're choosing a different definition of progress. We're prioritising depth over speed, relationships over efficiency, sustainability over scale."
The question that matters
Can we collectively choose restraint in the face of technological possibility and competitive pressure?
With fossil fuels, we couldn't. We kept burning right up until we'd irrevocably altered the planet's climate, and we're still burning now.
Now we're doing it again with AI.
The steam engine is done. The warming climate is here. But the choice about AI? That one is still being made.
If in 20 years we discover this created harms we didn't anticipate, will we be able to say we proceeded with appropriate caution? Or will we say we knew better but did it anyway?
Just because we can, doesn't mean we should
Any of this resonate for you? Let me know.
Useful links 🔗
Dignity at Work and the AI Revolution from the TUC.
Frequently asked questions about the use of AI across performing arts and entertainment from Equity union.
AI Equity: ensuring access for all from the Gates Foundation
AI Safety and the Global Majority from the Brookings Institution
Day of the Week 📆
The United Nations is 80 this year. Here's a little clip about its formation in the aftermath of the 1939-1945 war.
"We, the peoples of the United Nations, determined to save succeeding generations from the scourge of war..."
What am I watching? 👀
OpenAI made Sora. What's going to happen to creativity? Yikes!
What am I reading? 📚
I am reading another book about the dark side of AI, Code Dependent - Living in the Shadow of AI by Madhumita Murgia, which explores the impact of machines, and of the work of teaching machines to learn, on human beings, often the most vulnerable and easiest to exploit.
And some positivity to close with...
What am I listening to?👂
I was listening to Malian band Songhoy Blues' album Songs in Exile, as part of our journey through 1001 Albums You Must Hear Before You Die.
Here's a song from their new album, Heritage.
I'd really like to go to Mali some day.
Joy-giving things 😍
Here's the Raise the Love flag.
Have a great weekend
Lucy ❤️
P.S. The newsletter is taking a little break for a couple of weeks while I get going with my coaching qualification - hence the bumper edition today. See you next month x
ChangeOut is created by Lucy Caldicott. You can find more about my work at ChangeOut.org. If you’re looking to have a chat about culture, leadership, purpose, equity, or a facilitated team discussion about any of those things, get in touch. You can also find me on Bluesky, Instagram, and LinkedIn.
🎺🎺🎺 YouTube 🎺🎺🎺
If you like what you read and you'd like to show your appreciation in cash, you can do that here. I'd be very grateful!