UCOT 2018: A conference on the Unintended Consequences of Technology
It is time for the human economy
Imagine a world where all existing technology has been deployed specifically in order to improve human mental, physical and spiritual welfare, in line with the planet’s ecological limitations. What would it look like?
The UCOT conference was founded by Chris Ategeka, a TED Fellow, Young Global Leader and Forbes 30 Under 30 honouree, and a serial entrepreneur with contagious energy and incredible humility. A gathering of curious minds and pioneering leaders in the field, the conference sought to facilitate authentic discussions about the uncomfortable stuff: the ways in which exponential technology development is not serving humanity's best interests. While the conference was titled UCOT, for 'Unintended Consequences of Technology', it became clear over the course of the day that the term 'unintended' is problematic, and that there is a more fundamental question to be asked: are the intended consequences of exponential technology even getting us where we want to go?
Tech: the ‘double-edged sword’
Today's technology is a double-edged sword. I felt this especially after giving up social media for 10 weeks while travelling this summer: I felt calmer, less addicted to my phone, and more present with my surroundings. But I also missed my friends, and felt that I had lost a degree of connection with them. Most of us are probably aware of social media's ability to suck our attention away from daily tasks and to damage our sense of self-worth by tapping into our desire for immediate gratification. Yet my friendships and relationships, especially those maintained over long distances, wouldn't be the same without these platforms.
One issue highlighted at the conference was social media's revenue model: the attention economy. Social media companies sustain themselves by selling our 'attention', or time spent online, to advertisers. While Facebook may claim the broader aim of human connection, at an operational, or survival, level it is about the human consumption of adverts. Core to success in the attention economy is Aza Raskin's infinite scroll, which holds our attention for longer by removing the 'stop-signal' of reaching the end of a web page. He explained: our mind can only think of so many things at once. While chatting with friends over a glass of wine, we tend to drink fairly absent-mindedly until the glass is empty, at which point we consider: would I like another one? But if the wineglass kept refilling itself, we'd keep drinking…and so we keep scrolling.
Tech: an expression of human systems
In this way, among others, social media taps into the weaknesses of our human psychology, causing (or at least encouraging) the waste of millions of human hours in absent-minded scrolling. The purpose: making money. But this doesn't make the infinite scroll a problem in itself. Rather, as various speakers highlighted, it shows that the debate on the ethics of technology needs to move beyond the technological product to its producers. As Jacob Metcalf, tech/AI researcher and consultant, pointed out, technology is not 'ethical' or 'unethical'. Rather, it is an expression of its producer's own moral and social compass. Ethics accrue to human systems, not to technologies: a technology is simply an operationalisation of the existing human structure's model of fairness. Take nuclear energy: neither the technology nor its inventors were 'ethical' or 'unethical'. It has the potential both to kill in huge numbers and to help address climate change, and it is up to the producer, and perhaps society's checks and balances, to decide upon and enact its legitimate use. Suggesting that a technology has its own ethical system is therefore dangerous, as it absolves its creator of responsibility.
Returning to the example of social media: it is simply an expression of the neoliberal economic system in which we live, one that places profit, rather than human well-being, at its core. The negative impact of social media could perhaps be compared to the collateral damage caused by a drone strike: while not intentional, it is an accepted reality. As such, the deployment of technology for a certain task, and the indifference to its negative consequences, reveals our current ethical and practical priorities.
The technology behind targeted adverts supports this point. My own experience has been somewhat positive: I've discovered some of my favourite businesses as a result, including a money-saving FinTech app, ethical clothing and food brands, and impact investing conferences. Yet David Shenk, AI expert and author of The Genius in All of Us, highlighted its darker side. On his way to the conference, on the BART, he'd been scrolling social media and came across adverts promoting free online courses on machine learning. Looking across at a fellow passenger, an African-American man from a noticeably lower socio-economic background, he noticed the man was being served adverts for McDonald's and other fast-food chains instead. This is highly problematic, he explained, because the 'unintended consequence' of bucketing individuals into different groups for advertising purposes is the reinforcement of existing societal divides. Yet this societally damaging trend continues, precisely because the current economic system puts the profit incentive above human and societal well-being.
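The 'bucketing' Shenk describes can be sketched in a few lines. The sketch below is entirely hypothetical: real ad-targeting systems are learned models operating over thousands of signals, not hand-written rules, and the `Profile` fields and `pickAdBucket` function are invented for illustration. The point it makes is structural: two riders on the same train, seeing the same app, are sorted into different advertising worlds.

```typescript
// Hypothetical illustration of profile-based ad bucketing.
// Real systems use ML over many signals; these crude rules are a
// stand-in to show how segmentation can mirror existing divides.

type Profile = {
  interests: string[];
  estimatedIncome: "low" | "mid" | "high";
};

function pickAdBucket(p: Profile): string {
  // The bucket follows inferred traits, so existing advantages
  // (interest in tech, higher income) attract further opportunities.
  if (p.interests.includes("programming") || p.estimatedIncome === "high") {
    return "online-courses";
  }
  return "fast-food";
}

const riderA: Profile = { interests: ["programming"], estimatedIncome: "mid" };
const riderB: Profile = { interests: ["sports"], estimatedIncome: "low" };

console.log(pickAdBucket(riderA)); // "online-courses"
console.log(pickAdBucket(riderB)); // "fast-food"
```

Nothing in the rules is malicious; the divide-reinforcing outcome falls straight out of optimising each impression for expected revenue.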
Questioning the ‘intended’ consequences
While rarely stated explicitly, various speakers challenged the very intention behind the deployment of technology: to support, and create winners in, the global economic system. Indeed, Aza Raskin argued, 'Every piece of code we write is inherently political…we can't call them unintended consequences, as there is the systematic need to appreciate the tilt we are causing to society.' In doing so, he implied the need to balance writing code for profitable business with a responsible appreciation of, and care for, societal well-being and stability. John Boyd, meanwhile, argued that technology is changing us emotionally, psychologically and biologically at a rate faster than natural selection, and so puts us in danger of not becoming our most authentic selves; implicit here is the utmost sanctity of the 'authentic self', including over the profit motive. John Powell, professor at UC Berkeley, was more explicit, arguing that there is 'too much concern for human consumption, and too little concern for human welfare.'

At the heart of the issue, therefore, is that technology is being utilised within the context of, and in support of, a global economic framework in which profit is of the utmost importance. Tech giants can cast away the 'unintended' consequences of their technology, whether worsening mental health, wasted time or deepening societal divides, precisely because the economic system in which we live incentivises consumption and growth above all else. I doubt whether the unintended consequences of exponential technology can truly be addressed and alleviated without a fundamental transformation of human systems and structures, one in which human flourishing is placed at the centre. In the words of UCOT's founder: 'The economic system is driven by the "I" table. What happened to the "we" table?'
The incentives of technology are fundamentally misplaced: it is time for the human economy.