How will AI change us?

David Vogel
8 min read · 3 days ago

--

About 20 years ago, Web 2.0, also known as the read/write web, arrived when blogging and social networks evolved from the static first iteration of the web. Many observers predicted a creative explosion — everyone could become a journalist, photographer, writer, or comedian with an audience that could grow to millions. In some ways, this came to pass. There are more self-published books today than ever before, more independent journalists and bloggers, and the appetite for user-generated content on YouTube rivals the professional output from Netflix. But as we all become aspiring creators, we’re having to deal with some unexpected side effects.

Digital echo chambers

It is well documented how Meta and Twitter’s algorithms, designed to blindly optimise for engagement — no matter what kind of engagement — have contributed to a more polarised society. People are stuck in echo chambers that not only reflect their own preferences and biases back at them but promote ever more radical versions of those preferences and biases.

Source: World Economic Forum

Because of these self-reinforcing algorithms, a parent who searches social media for the side effects of medicines may end up believing in a vast vaccine conspiracy designed to turn humans into robots. Or someone who sympathises with social justice causes may end up believing that the world consists only of oppressors and oppressed, with no escape and no agency. In this way, the social internet has changed our behaviour: turning us into radicalised and polarised micro-communities, causing us to lose our social anchor and become more cynical and less united.

Looking on the bright side

There’s a counter-narrative to this negative perspective, where you could argue that the social internet provides alternatives to the paternalistic uniformity of opinion that came out of the age of mass media. In this more hopeful story, we are in an ideological interregnum where alternative truths are competing and colliding in a messy way, but also creating more room for original thought.

In The Better Angels of Our Nature, Steven Pinker argues that humanity is, over many centuries but at ever greater speed, progressing from ignorance and religious dogma towards an enlightened future of individual morality and agency. If he’s right, then perhaps our current predicament is just a necessary step towards freeing us from the intellectual tyranny of ‘experts’ who have often done more harm than good while supposedly shepherding the stupid masses towards what is good for them. Our current confusion might be temporary but necessary for progress. In this scenario the internet is, in fits and starts, altering our behaviour for the better.

In his famous TED talk from 2006, Hans Rosling showed how much healthier and wealthier the world has become over the past 100 years or so.

We have been here before

A few hundred years ago, another novel technology, the printing press, gave just about anyone access to information and the ability to quickly disseminate new ideas, much as the internet is doing today. The parallel between the printing press and the internet is almost a cliché these days, but it’s still striking to consider the similarities and the impact on our behaviour: ordinary people had their core beliefs challenged by access to information (in their case, the translated and printed Bible), leading to powerful new dogmas being introduced (Protestantism) and people retreating into hostile factions based on religious or national identity. This was in many ways a truly horrible time, filled with mass hysteria, prejudice, terrorism, and bloodshed. But it ultimately gave way to the Enlightenment, where the power of observation and independent thought kickstarted the modern era. Whatever you think of the time we live in, it’s hard to argue that the period before the printing press, with its superstition, disease, lack of education, low life expectancy, and constant violence and repression, is worth returning to.

AI and human behaviour: Potential outcomes

So, we see that technology impacts humans and changes their behaviour. In addition, some would argue that despite the initial upheaval this can cause, these behavioural changes can lead to improved outcomes for humanity. In the dawning age of AI, an interesting question is: how will this disruptive technology impact our behaviour — and will this lead to outcomes of the witch-burning kind or the disease-curing kind?

As Niels Bohr famously noted, predictions are hard, especially about the future. Instead of trying to predict which way the chips will fall, I will try to set up some logical scenarios and leave it up to you to decide if you believe in the negative or positive potential outcome, or a mixture of both.

Lazy humans in WALL-E. Credit: Pixar

We will become less detail-oriented and better at ‘big picture thinking’/We will become dumber and lazier

The most obvious thing AI will do is automate away things that are (relatively) easy. We are already frustratingly familiar with autocomplete, both a blessing and a (sometimes hilarious) curse when the text seems to take on a life of its own. AI is basically autocomplete on steroids, with ever-increasing chunks of whatever we want to create taken over by AI based on an ever more sophisticated understanding of our intent.

But as anyone who has wrestled with autocomplete can attest, the lines between intent and suggestion can become blurred. Sometimes whatever your phone’s keyboard suggests is ‘good enough’ that we accept it, sacrificing accuracy for speed. This trade-off will be a central theme of human-AI interaction as Apple and others roll out their AI integrations. Communication, and increasingly any form of creation, will be a negotiation between what you set out to do and what the AI (based on generalised human behaviour) predicts you will do.

We will increasingly gesture at meaning and leave it to AI to fill in the details. The better the results, the more we will trust the system and the more autonomy we will give it, in the same way we have ceded spatial awareness and wayfinding to Google Maps, both individually and collectively. We think it is finding us the best route from A to B, but it’s actually managing traffic as it tries to optimise the routes of many different vehicles across the road network.

Google Maps’ benefits are clear to everyone. Freed from poring over paper maps, we can optimise for scenic routes, strategic stopovers, or fuel efficiency. With AI soon inside everything, not just our maps, we can focus on optimising outcomes, not details. Arguably this will allow us to think bigger and become more creative and vision-led, as we learn to trust AI both to give us instant feedback on feasibility and to manage many of the details of execution. On the flip side, perhaps it will encourage us to stop thinking for ourselves and trust AI decision-making more and more, until our collective decision-making collapses into a recursive blandness, with fewer and fewer individual desires driving outcomes.

We will learn to embrace ambiguity/We will further retreat into our bubbles

One of the current challenges of AI is its lack of universality. The current crop of large language models (LLMs) is trained on data derived from books and social media, which underrepresents a significant portion of the global population. This introduces bias and doesn’t account for the fact that some information (say, the Universal Declaration of Human Rights) is more ‘valuable’ than other information (a Reddit thread on human rights).

However, the problem of ‘bias’ isn’t easily solved: one person’s norm is another person’s bias. The vision of Meta’s Chief AI Scientist, Yann LeCun, is that the open-source AI movement becomes a viable alternative, or even a challenger, to proprietary LLMs. If he is right, there will be a proliferation of narrow AIs with specialised skills and specific audiences, each embedding its unique ‘bias’ into the model. This would create a diversity of perspectives across many different AIs, as opposed to a ‘sanitised, bias-free’ LLM along the lines of what Google is trying to create.

This dichotomy between a singular, neutral, godlike AI and messy, distributed, multi-faceted narrow AIs (plural) is a central theme of the current discussion about AI. Personally, I believe that the godlike AI is a fantasy based on a mix of religious and sci-fi themes and will never come to pass, or at least not in the foreseeable future. While it’s romantic to think about the former, it’s much more interesting to focus on the latter.

Kevin Kelly compared AI to electricity; Packy McCormick recently compared it to oil. They’re both making the same point: AI is a technology that, like all other technologies before it, enables an acceleration of human goals. As such, it will take human bias with it. The question is whether we will gradually learn to work with and around other people’s biases in order to achieve communal goals, or whether our current partisan retreat is only going to get worse.

We will become more materialistic/We will become more spiritual

If AI is electricity, or oil, it will make many things that are currently hard to do very easy to do. In turn, it will create opportunities to do things we currently don’t even consider doing. In a future where these new opportunities are not captured by a single country, government, or privileged group, this will lead to a vastly better quality of life for all of us. Even if that wealth is not evenly distributed, as today’s wealth isn’t, the rising tide will still lift all boats, just as the industrial and agrarian revolutions did. From the big to the small, AI will automate away an unimaginable amount of friction. It could transform medicine through a new wave of hyper-personalised bio-engineering, and combat climate change through materials design, climate modelling, and engineering solutions. AI can transform transport, including self-driving, which in turn will change the way we think about cities. But new compute hyperclusters could also create new WMDs that are more powerful (or more effective defensively) than nukes.

This is not the first time people have imagined grand futures based on a new technology that didn’t come to pass. In the 1920s, the Futurists and others imagined technology creating utopian societies; instead they got the devastation of two world wars. But it is still mind-blowing to think about how the world looks today compared to the pre-industrial, agrarian, feudal societies of less than 200 years ago. As the futurist Roy Amara said: ‘We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.’

A utopian future? Source: ChatGPT

Once we talk about changes of this magnitude, predictions about human behaviour become even harder to make. But when material wealth increases, the expression of social behaviour changes. Belonging to a group when resources are scarce can help you fight off others or collaborate to optimise calorific intake. Belonging to a group when resources are abundant opens up a more spiritual dimension of collaboration and play: creating art, engaging in ritual behaviour, forming complex relationships. Perhaps the abundance AI creates will make us less focused on the hardship of reality, softening our interactions and leaving room for more delicate forms of connection. Taking the negative side of the argument, we have also seen how much needier people can become when they have to deal less with hardship, and how much more entitled and hungry for ever higher levels of personal status, wealth, and comfort.

If AI helps us move towards a post-scarcity society, any prediction as to whether our behaviour becomes more decadent or more inclusive and empathic (or both) will depend on your assessment of how the past 200 years have impacted our behaviour. Expect whatever you think that trend is to accelerate.

--

David Vogel

Executive Experience Director at code d’azur. Loves tech, culture, politics, food.