AI Is the Best Opportunity to Fix Society, if We Do Not Blow It

Every major leap in technology comes with a promise. So far, those promises have mostly been broken. This time could be different, but only if we are paying attention.

There is a pothole near my house. It has been there for months. The council patched it last winter, cheaply and quickly, just good enough to tick a box. Now it is back, worse than before. Somewhere in a spreadsheet, this counts as a maintenance success.

This is a small, mundane example. But it captures something that runs through almost every public institution I can think of: the people making decisions are not the ones living with the consequences. The incentives are all wrong. They optimise for appearances, not outcomes. They think in four year cycles, not forty year ones.

I have been thinking a lot about whether AI changes this. Not just whether it makes us more productive, but whether it gives us the tools to build genuinely better systems where the incentives actually align with the outcomes we want.

I’m a software developer. I have been watching AI go from party trick to career threat in the space of about eighteen months, so I am more than a little invested in understanding where this goes. But this is not a developer’s newsletter. This is me trying to think through something I believe matters for everyone.

The broken promise of technology

When email arrived, the promise was time. Near instant communication instead of waiting days for a letter. Surely we would all get our Friday afternoons back? What actually happened was the opposite. The time savings got absorbed, the pace of work increased to fill them, and the benefits went to the few rather than the many.

This is the pattern. New technology arrives. Productivity jumps. The gains concentrate at the top while everyone else runs faster just to keep up. There is a reason that in 1970 a single income could support a family and get you onto the property ladder, and today two good incomes often cannot.

We do not need to agree on the exact cause of that shift. What is hard to argue with is the shape of it: technology keeps advancing, life for most people keeps getting harder, and something somewhere is not working.

AI is the biggest version of this pattern we have ever seen. And it is moving faster than anything before it.

Why this time is different, and more urgent

Previous waves of automation tended to displace specific industries over decades. The industrial revolution put handloom weavers out of work, but it also created entirely new industries that did not exist before. There was time to adapt.

What is different now is the breadth and speed. AI is not targeting one industry. It is touching nearly every knowledge based job simultaneously, and the pace of improvement is unlike anything we have seen. Eighteen months ago most businesses could not find a productive use for it. Now, if you are not using it, you are likely losing significant ground to competitors who are. The curve is steep and it is not flattening.

But here is where I want to offer something other than fear. If the gains from AI are vast enough that the basics of life (energy, food, manufactured goods and services) become genuinely cheap to produce, then the old zero sum fights over redistribution start to lose some of their force. In a world of material abundance, the argument shifts from “how do we divide a fixed pie” to “how do we make sure everyone actually gets a slice.” That is a more solvable problem. And it opens space for more voluntary, experimental ways of organising communities and public services, if we build the right foundations now.

A quick note on capitalism, because it always comes up

I grew up being told the rich did not deserve it, that they could not possibly have worked that much harder than everyone else. It was a seductive idea, and I held onto it for a while. But the more I looked at it, the more I ran into a problem: systems that try to enforce equality of outcome tend to remove the incentives that drive people to create and improve things in the first place. History has been pretty conclusive on this.

What most people actually resent, I think, is not free markets. It is the version we actually live under, where large corporations can lobby to change the rules in their favour, where monopolies form and strangle competition, where the powerful get to rewrite the game mid match. That is not capitalism failing; it is corruption winning.

“Show me the incentive and I will show you the outcome.” (Charlie Munger)

The question AI forces us to ask is: what are the incentives in the systems we are about to hand enormous power to? And can we design better ones?

What better systems might actually look like

Back to the pothole. The problem is not that nobody cares about roads. Plenty of people care; they drive over them every day. The problem is that the people who care most have no power, and the people with power have no skin in the game.

Imagine instead a local community that takes direct ownership of maintaining its roads. Not a council, not a contractor with a clipboard, but actual residents with transparent performance data, accountable to each other, and incentivised to invest for the long term rather than reaching for the cheapest short term patch. They might choose better materials. There are roads in countries with worse weather than ours that last decades longer. They might use sensors to catch surface deterioration before it becomes a pothole. They might find that a higher upfront investment saves significant money over time, with those savings shared back to the community.

The technology to do this exists today. It’s called a DAO, short for Decentralised Autonomous Organisation, which sounds intimidating but is really just a fancy name for a community group that runs itself using transparent software instead of a committee room.


What is a DAO, in plain English?

A DAO is a group that organises itself using transparent rules written in code. Decisions are made collectively, money is held in a shared treasury, and spending only happens when the group votes and agrees. Every transaction is recorded publicly so anyone can check it.
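The mechanics in that paragraph, a shared treasury, a collective vote, a public record, can be sketched as a toy model. This is illustrative Python only, not how any real DAO is implemented (those typically run as smart contracts on a blockchain), and the names here are made up for the example:

```python
# Toy model of DAO-style treasury governance. Illustrative only;
# real DAOs enforce these rules with on-chain smart contracts.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    amount: int            # requested spend from the shared treasury
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class Treasury:
    balance: int
    log: list = field(default_factory=list)  # public record of every payment

    def execute(self, proposal: Proposal) -> bool:
        """Pay out only if the vote passed and the funds exist."""
        if proposal.votes_for > proposal.votes_against and proposal.amount <= self.balance:
            self.balance -= proposal.amount
            self.log.append((proposal.description, proposal.amount))
            return True
        return False

# A road repair proposal passes seven votes to two and is paid from shared funds,
# leaving a public log entry anyone can inspect.
treasury = Treasury(balance=5000)
repair = Proposal("Resurface Elm Street", amount=1200, votes_for=7, votes_against=2)
treasury.execute(repair)   # balance drops to 3800, payment is logged
```

The point is not the code itself but the shape of the rule: money only moves when the vote passes, and every movement leaves a trace.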

Voting power is usually weighted by how many governance tokens you hold, similar to owning shares in a company. This gives more influence to those with more skin in the game. To stop one wealthy individual from simply buying control, well designed DAOs include safeguards. One common approach is quadratic voting, where the cost of extra votes rises sharply, making it expensive for a single person to dominate. Others use time locked tokens that reward long term commitment, or hybrid systems that also reward actual contributions and work done.
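The arithmetic behind quadratic voting is simple enough to sketch. In this toy version (one common formulation, not any specific DAO's rules), casting n votes costs n squared tokens, so doubling your influence quadruples your cost:

```python
# Toy sketch of quadratic voting. One common formulation; details vary by DAO.
import math

def vote_cost(num_votes: int) -> int:
    """Casting n votes costs n squared tokens."""
    return num_votes ** 2

def max_votes(token_budget: int) -> int:
    """The most votes a budget can buy: the integer square root."""
    return math.isqrt(token_budget)

# One wealthy holder with 100 tokens can cast only 10 votes...
whale_votes = max_votes(100)           # 10

# ...while ten people with 10 tokens each cast 3 votes apiece: 30 in total.
community_votes = 10 * max_votes(10)   # 30
```

This is why concentrated wealth buys less influence than the same wealth spread across many committed participants.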

It is not a perfect system, and many still end up with power drifting toward a small group. But the best ones continuously improve their own rules, openly and in public, which is more than most councils manage.


AI makes this practical in ways that were not possible before. Intelligent monitoring agents can watch road conditions in real time, suggest the best materials for local conditions, and predict where maintenance is needed before a pothole forms. The DAO handles the governance side: who approves spending, how savings are distributed, and what the performance targets are, all of it visible to anyone who wants to look. Successful approaches in one area can be copied freely by others. Failed experiments can be dropped without years of bureaucratic inertia holding them in place.

And this is not pure fantasy. The legal door is already opening. The English Devolution and Community Empowerment Bill, currently progressing through Parliament, is designed to push power back toward local communities. It strengthens neighbourhood governance and introduces an improved Community Right to Buy, giving community groups a genuine first refusal mechanism and a real route to take control of local assets. This creates practical legal cover for councils to run exactly the kind of local experiment a community road group could use to get started.

The same logic applies to almost any public function where the people affected by decisions are not the ones making them.

The real risk is not a robot uprising

People worry about AI in cinematic terms: the system that becomes self aware and turns on its creators, the algorithm that decides humans are the problem. These concerns are not entirely silly, but they distract from a more immediate one.

The real risk is that AI makes existing concentrations of power more powerful. That the same broken pattern plays out again, faster and at greater scale. That the productivity gains go to the few, the institutions that already dominate get smarter and more efficient at domination, and the window to build something different closes before most people realise it is open.

“Power tends to corrupt, and absolute power corrupts absolutely.” (Lord Acton)

The safer path forward is not racing toward a single all knowing AI. It is building AI as networks of narrow, specialised tools, coordinated by transparent and human written rules, with real human oversight at every meaningful decision point. Useful agents working together, not a digital god with no off switch.

The antidote to centralised AI power is the same as the antidote to centralised political power: spread it out. Transparent records. Genuine exit options. The ability for any group to take a working model, copy it, and adapt it for their own needs. Even Bitcoin fits this picture, not as a get rich scheme, but as a way for communities to hold and move money without needing to trust a bank or government intermediary. The more we make power narrow, local, and accountable, the less attractive it becomes as a prize for people with the worst intentions.

The window is open. It will not stay that way.

I do not have a complete blueprint. Nobody does. But I think the more people are involved in these conversations, not just technologists, not just policymakers, but ordinary people who will actually live with the outcomes, the better our chances of landing somewhere good.

The technology already exists. The ideas are being tested in small experiments around the world. What is missing is broader awareness of what is possible, and broader pressure to build it in ways that serve everyone rather than just the few who currently have the loudest voice in the room.

That is what this series is about. We will look at how communities are already running these experiments, how AI and community governance could work together in practice, and what it might take to get from here to somewhere genuinely better, without being naive about the obstacles.

Starting, as promised, with the pothole.

Next in the series: Network states and the archipelago model, how online first communities with physical meeting points might offer a genuinely new way to organise society without waiting for governments to catch up.