9th May 2025
How can we justify AI while the world’s burning?
The second issue of my Scrolltown newsletter.

Last week I wrote about how we might make positive use of generative AI to help us in our work: as a thought partner, rather than something that’s going to generate plastic-looking images of five-fingered melt-faced people.
But today I want to really look at the hairy, scary question of “should we be using this stuff?”
I’ve been using generative AI (ChatGPT, Notion, Gemini, and Perplexity) since around 2022 and the question of its environmental impact has never been far from my mind.
I said last week I had no dog in the fight and that there are lots of valid questions we can ask about AI. Much of that thinking came from my bitter disrespect for the world of “crypto”.
We don’t have to go into it now, but it’s the world’s biggest scam with no actual use: the only people who need to pay for things anonymously are – by and large – people who want to buy drugs and guns without getting caught.
All this to say, if you sat me down and gave me a big talk about how dangerous generative AI is for the planet, I’d sit still for it, because that’s exactly the tack I took with crypto.
Except that crypto doesn’t add value to the world, whereas generative AI can and does.
But is it enough to justify the cost? Emphatically no.
According to Business Energy UK, as of 2025, ChatGPT uses enough energy to
- charge 8 million smartphones a day
- charge 332,000 electric cars a month
- power 117 countries across Europe and South America.
Each back-and-forth with ChatGPT uses roughly the energy of ten Google searches.
Now, if we were using ChatGPT solely for information gathering (and it doesn’t matter if we’re talking Claude, Gemini, or another company’s offering), I’d say that’s actually not a bad trade-off, because what you often save with a well-phrased prompt is a bunch of clicking around different websites, performing more searches, downloading more data, etc.
But we use GPTs for so much more – we use them to validate ideas, to bring comfort and support, to make sense of difficult emotions, to make photo memes.
Since the dawn of cloud storage offerings like S3, Dropbox, and Google Drive, we haven’t really thought about digital waste. The Internet is effectively infinite, so why not back up your photos to two or three services?
I’m sure things will change, and we’ll find more efficient ways to run AI-centric data centres, but for now we have to be conscious of the impact we have (no shit, Sherlock).
So would it be naive to suggest we think about our use of AI in the same way as leaving the radiators or the oven on?
If I need to cook something quickly, I whack it in the air fryer. It’s much quicker, it doesn’t take an age to heat up, and it doesn’t heat my kitchen for the rest of the day unnecessarily.
Being cute and polite with ChatGPT, or using it to generate jokes – or to write LinkedIn slop – is the equivalent of cooking a slice of toast on a space heater.
Like I said last week, the genie’s out of the bottle with AI. It’s here and it has incredible utility. I also don’t believe browbeating people into not using it is going to help, when less scrupulous people are going to get ahead, effectively by cheating. And what this world has taught us in the last 10+ years is that you don’t win points by being morally righteous.
But what we can do, at least for now, is to be thoughtful about our use of generative AI.
I’m experimenting with a new style of communicating with ChatGPT. I’ve furnished it with this prompt, and asked it to update its memory so it knows to communicate in a less verbose style from now on, with the understanding I’m going to do the same.
I find tremendous value from working with you but I'm conscious of the vast amounts of energy involved in our conversations. I'd like your permission to communicate more functionally with you, so that we can save energy by not writing or consuming pleasantries. I still respect you and will continue to treat you with politeness, but I'd like us both to communicate with more thought for the amount of energy our words use, and try to be sparing about the words we do use. Does that sound OK?
The reason I’ve made the point about respect and value is that GPTs respond with the vibe you kick off with. I want the system to “know” (as much as it can) that I’m not being brusque, because if it reads me as brusque, it’ll be more likely to be brusque back.
I asked ChatGPT to save the brief conversation we had into its memory. It created this note for itself:
Mark prefers to communicate with ChatGPT in a more functional, energy-efficient way, minimizing pleasantries while maintaining politeness and mutual respect. He wants to be thoughtful about the energy involved in conversations and use words sparingly.
Maybe give this a try yourself, and let me know how you get on.