We live in an age where building products has never been easier. At tech events, I hear the same story repeatedly: "We built this in a couple of days," or "Three months from idea to launch." The generative AI boom has democratised creation in ways previous generations could barely imagine. But this ease of building has surfaced a more profound question: If anyone can build anything, what should we build?

The Question That Changed Everything

I attended an event where a founder presented their automation tool - slick, efficient, impressive. Then someone in the audience asked a question that has haunted me since:

"I know your tool automates workflows efficiently. But do you realise what your tool does might be someone's entire job? Have you considered the social impact you're creating?"

The room fell silent.

As a product manager, I'm trained to ask: What user needs does this serve? What problems does it solve? What value does it create? For a long time, I was unreservedly excited about AI's potential to build faster, ship smarter, iterate endlessly. But lately, I've found myself feeling something unexpected: sadness.

Too many products I encounter seem built purely for the sake of building. Not because they solve meaningful problems. Not because they serve a genuine mission. But because they can be built, and because building fast attracts capital.

The Capitalist Engine and the AI Fuel

Our world runs on capital, and investors reward polish and speed. Products built for marginal efficiency gains or that simply replicate what already exists can still attract significant funding and scale rapidly. I understand the logic: technology needs to be tested, popularised, adopted. Early movers reap the rewards.

But something nags at me. Products should ultimately create positive social value.

I'm not interested in ideological debates about whether capitalism is optimal - every system has trade-offs. What concerns me is that AI has become both the means and the end of capitalism. It's the tool for creating wealth and the justification for pursuing it, often without deeper consideration of impact. This simultaneity thrills and troubles me in equal measure.

History's Echo, Accelerated

When the First Industrial Revolution arrived, machinery replaced manual labour. Workers protested. Governments responded with harsh policies. There were fights, deaths, profound disruption. That era feels distant, especially to someone like me, born in the late 20th century. We live in what we call a "peaceful" world.

Yet wars rage on. Fascism resurfaces. Perhaps history returns faster than we think.

Eventually, humanity adapted. Trade unions formed. Legislation passed. Living standards rose - at least in aggregate, over time. But the pain of transition was real. Specific jobs vanished. People suffered. The fact that society ultimately thrived doesn't erase the cost borne by those caught in the transformation.

History moves in circles, and technology has shortened the distance between each turn. We're entering another loop now, but faster, more compressed. And I find myself standing near the front of it.

The Ethical Puzzle We Face

I don't have answers about what comes next - what economic adaptations will emerge, what political organising will take shape, what forms of retraining will develop, what cultural adjustments we'll be asked to make.

What I do know is this: We're choosing what to build. That choice matters.

Every product we create with AI sends ripples into the world. Some will genuinely help people. Others will displace livelihoods without offering meaningful alternatives. Some will solve real problems. Others will simply extract value because they can.

This isn't a call to stop building. It's a call to build with intention.

A Hope, and a Commitment

I remain cautiously optimistic that the same technology driving this disruption might also ease the pain of transition. Perhaps AI can help us retrain faster, identify new opportunities sooner, distribute benefits more broadly. Perhaps.

But optimism alone isn't enough. The "survival of the fittest" pressure is real, and I feel it.

So here's what I can offer: I will continue building, but I will ask harder questions. Not just "Can we build this?" but "Should we build this?" Not just "Will this scale?" but "Who will this serve, and who will it harm?"

I hope others building in this space will ask the same.

This isn't about declaring what's right or wrong. It's about recognising that we're at an inflection point, and the products we choose to build will shape the world we inherit. That responsibility deserves our deepest consideration.