Some people might be surprised to learn that I have just published a book entitled Technology is Not the Problem. After all, I regularly point out problems with technology in our asymmetric world, where a few companies or organisations collect data about us all, aggregate and analyse that data and use it for their own purposes.
The fastest-growing area of the tech market, pre-pandemic, was smart speakers: devices such as Amazon’s Alexa-powered Echo that we install in our homes to listen out for our requests, play us music, or answer our questions, like attentive servants. The timesaving, effort-reducing qualities of this all-pervasive technology led some people to talk about ‘Digital Athens.’ They argue that, like the citizens of that first democracy, freed to practise politics and philosophy by the labour of slaves, we will be free to dedicate ourselves to higher things while AI takes on the chores of everyday life. But I think our world of ubiquitous technology is more like digital Wolf Hall. Your house is full of servants, but who are they really working for? They fetch what you command, but their true loyalty is to other masters, and they can filter what comes to you. They can even nudge you into doing things without your conscious awareness.

Technology affects where we focus our time and attention. The ease of following where the algorithms lead places us in the passenger seats of our own lives. Human judgment, responsibility, and initiative are increasingly being handed over to automated systems. Even when the systems are supposedly designed for our own good, I am deeply concerned that what it means to be an adult, a citizen, a person, is being eroded by our willingness to hand over control to technology.
Technology is increasingly personalising the world we live in. The results can sometimes feel uncanny. You may be one of the few people who have heard a television advert address them by name. “Welcome, Rebecca, to the Unlimited,” said the presenter of Skinny Mobile’s advertisement on New Zealand’s TVNZ—but only to viewers called Rebecca. Streaming customers with one of the 200 most popular names all heard the presenter say their own name. The UK’s Channel 4 has also experimented with advertising so personalised it addresses you by name. It didn’t go down well. We don’t like it when the attentive algorithms get too personal. But we don’t like it when our digital world isn’t personalised enough, either: when we get adverts for the thing we’ve already bought or are offered merchandise for the wrong sports team. That ambivalence towards personalisation pervades our relationship with the technology that collects our data, builds a profile of each of us, and targets us accordingly.
We conduct so much of our lives either through or accompanied by digital devices that there is no shortage of raw material. So, how do algorithms turn all that information into useful insights about what makes each of us unique? Can Big Data really know me better than I know myself? To find out, I tried out an online psychological test. The University of Cambridge Psychometrics Centre asked me to rate how well certain statements applied to me on a five-point scale from ‘Strongly Disagree’ to ‘Strongly Agree’: “I get excited by new ideas”; “I hold a grudge”; “I am filled with doubts about things”; “I believe that I am better than others”; “You are walking in the desert, and you find a tortoise…” OK, not that last one, that was the “Are you human?” test in Blade Runner.
As I tried to answer the questions honestly, I realised that what I was feeling about taking the test wasn’t just intellectual curiosity; I was experiencing a small thrill at the idea that I was going to learn more about myself. Clearly, I am an insufferable narcissist who never tires of hearing more about how unique I am. I got a near-erotic hit of pleasure from reading that I am 79 percent open to new experiences, 34 percent conscientious, 66 percent extraverted, 50 percent agreeable (workmates may disagree) and only 42 percent neurotic. Though I’m now worrying about whether I should be more conscientious, so make that 43 percent. I was also struck by how absurd it was to revel in my own uniqueness by letting a computer assign me a numerical value along just five measures (the Big Five psychological traits). Giving exact percentages makes it sound very scientific: I don’t just love new and interesting things—I am 79 percent open to novelty. But nobody can be that definite about my character from just a hundred questions. Being only 34 percent conscientious, and hence 66 percent impulsive and spontaneous, I bashed through the test, giving answers that were sometimes contradictory.
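If you are wondering how a hundred five-point answers become those suspiciously precise percentages, the basic mechanics are simple enough to sketch. Everything below is an illustrative guess rather than the Psychometrics Centre’s actual method: the items, the keying, and the rescaling are mine, and a real instrument would compare your raw scores against population norms rather than rescaling them directly.

```python
# A toy Big Five scorer. Item keying and the 0-100 rescale are assumptions
# made for illustration, not the Psychometrics Centre's actual method.

# Each item targets one trait; "reverse-keyed" items are ones where agreeing
# indicates LESS of the trait (e.g. "I hold a grudge" for agreeableness).
ITEMS = [
    ("I get excited by new ideas",             "openness",      False),
    ("I hold a grudge",                        "agreeableness", True),
    ("I am filled with doubts about things",   "neuroticism",   False),
    ("I believe that I am better than others", "agreeableness", True),
]

def score(responses):
    """responses: one 1-5 Likert answer per item, in ITEMS order."""
    totals, counts = {}, {}
    for (_, trait, reverse), answer in zip(ITEMS, responses):
        value = 6 - answer if reverse else answer  # flip reverse-keyed items
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    # Rescale each trait's mean from the 1-5 range onto 0-100.
    return {t: round((totals[t] / counts[t] - 1) / 4 * 100) for t in totals}

print(score([5, 2, 3, 1]))
# {'openness': 100, 'agreeableness': 88, 'neuroticism': 50}
```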

The University of Cambridge Psychometrics Centre hasn’t told me anything about myself that any of my friends couldn’t have told me already. Will I do that kind of online test again anyway? You bet I will. I will also do those quizzes that promise to reveal which zoo animal, eighteenth-century philosopher, or Disney villain I am (Ursula from The Little Mermaid, thanks for asking). I want to be recognised, if only by a machine, for the special person I feel myself to be.
After I had done their psychological test, I gave the Cambridge Psychometrics Centre access to my Twitter account and got different results. According to their analysis of my social media activity, I am less conscientious, less agreeable, and positively introverted. I am also male (I’m not) and aged around thirty (I’m not), which does explain the adverts I get for watches and beard care products. Predicting the Big Five personality traits from social media activity is not as accurate as you may have heard. If you have ever panicked about voters being tricked by the evil genius of psychometric micro-targeting, you can relax a little: those techniques are nowhere near as powerful as companies like Cambridge Analytica have claimed.

The Cambridge Psychometrics Centre was not involved in political campaigns that tried to influence voter behaviour. But the original research on which that later work was based was done at the Centre in 2012 by Michal Kosinski, David Stillwell, and Thore Graepel. According to their 2013 paper, “Private traits and attributes are predictable from digital records of human behavior,” not just the Big Five personality traits, but sexual orientation, intelligence, and more can all be predicted from what people ‘like’ on Facebook. “People may choose not to reveal certain pieces of information about their lives, such as their sexual orientation or age,” write the researchers, “and yet this information might be predicted in a statistical sense from other aspects of their lives that they do reveal.” For example, liking curly fries on Facebook was found to predict higher intelligence.
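Mechanically, the pipeline the paper describes is straightforward in outline: take the enormous, sparse matrix of who liked what, compress it with singular value decomposition into a modest number of latent dimensions, then fit a regression from those dimensions to a trait measured by an ordinary survey. Here is a minimal sketch of that kind of pipeline; the data is random placeholder noise (so this model can only perform at chance) and the component count is arbitrary.

```python
# A minimal sketch of the Kosinski et al. (2013) approach: SVD-compress a
# sparse user x like matrix, then regress a trait onto the components.
# The data here is random placeholder noise, not real Facebook likes.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 5_000

# likes[u, p] = 1 if user u 'liked' page p; almost all entries are zero.
likes = sparse_random(n_users, n_pages, density=0.01, format="csr",
                      random_state=0)
likes.data[:] = 1.0

# Compress thousands of pages into a few dozen latent dimensions.
components = TruncatedSVD(n_components=40, random_state=0).fit_transform(likes)

# Fit against users whose trait is already known from a survey. With these
# random labels, the fitted model can only perform at chance.
trait = rng.integers(0, 2, n_users)  # e.g. 1 = 'scored high on intelligence'
model = LogisticRegression(max_iter=1000).fit(components, trait)

# The output is a probability of resembling a group, not a fact about you.
print(model.predict_proba(components[:1]))
```

Swap the random placeholders for real likes and real survey scores and you have, in outline, the kind of model behind the curly-fries headline.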
But let’s examine what ‘predict’ means here. In everyday language, predicting something means you think it will happen. You see a person standing by a puddle, and you predict that they will be soaked when the approaching bus drives through the puddle (based on your previous observations of water and buses). Or, using a dating app, you might predict that, in person, your date will probably be shorter, older, fatter, and poorer than their profile claimed. You cynic. That prediction is based on your previous experience of online dating, or what you’ve read about the most common lies people tell about themselves. Let’s hope that this person is the exception that makes dating apps worthwhile.
Kosinski and his co-researchers are using the statistical sense of ‘predict,’ which is closer to anticipating how an internet date will turn out than the horrible inevitability of getting soaked by a passing bus. There’s no guarantee that any one person they tag as intelligent has that quality in real life, but they think it is more likely, statistically, compared to a random person who has expressed no love for curly fries. Not much more likely, but better than pure chance.
When statisticians talk about one quality predicting another, they are comparing groups and saying how much more likely one group is to include what they’re looking for. Suppose you’re looking for people with beards, to advertise your beard care products. You can save yourself a lot of money and time by omitting women and children from your target audience because hardly any of them have beards. Not all men have beards either, but you have improved your odds by targeting them exclusively. Neither data nor the computer programs that analyse it can know anybody in the way one person knows another person. All the programs can do is compare your data with previous data, and assign a probability that some other data will also be observed. That’s why the Cambridge Psychometrics Centre’s algorithm places me well into the masculine side of the gender axis, and thinks I am around the age of thirty—because other people who previously participated in their research, whose tweets resemble mine, were male and aged around thirty. Or said they were.
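Put as arithmetic, with made-up numbers: suppose 40 percent of the men in an audience of a thousand have beards, against 1 percent of everyone else. Targeting men alone raises your hit rate from roughly 17 percent to 40 percent, a lift of about 2.4 times over advertising at random.

```python
# The beard example as arithmetic. The audience figures are invented for
# illustration; only the logic matters.
audience = {
    "men":   {"count": 400, "with_beards": 160},  # 40% bearded (hypothetical)
    "other": {"count": 600, "with_beards": 6},    # 1% bearded (hypothetical)
}

total = sum(g["count"] for g in audience.values())
bearded = sum(g["with_beards"] for g in audience.values())

base_rate = bearded / total                                      # P(beard)
target_rate = audience["men"]["with_beards"] / audience["men"]["count"]

print(f"advertise at random: {base_rate:.1%} hit rate")    # 16.6%
print(f"target men only:     {target_rate:.1%} hit rate")  # 40.0%
print(f"lift: {target_rate / base_rate:.1f}x")             # 2.4x
```

Swap in real audience counts and the same three lines give the lift for any trait you care to target.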
Being assigned a character portrait like this is almost the opposite of being intimately known and understood by another person. Humans have theory of mind. We use our imaginations to understand what it might be like to be someone else, and what we would do if we were them. The Psychometrics Centre, by contrast, is engaged in a mass sorting exercise, looking for statistical relationships in large populations and then applying the results to each of us.

On average, I am a thirty-year-old man. That mathematical model of a person, constructed by automated systems, is my avatar in this personalised world. Technology steers my life along tracks chosen for that avatar, not for me. Most options are removed without my ever knowing they existed, whether that’s the dating profile of the man who might otherwise have become the father of my children, or the job that might have been my first step towards leading a technology company.
Instead of me choosing the direction of my life and seeking out the opportunities, people, or ideas that I think will help me on my way, I get to choose from a limited menu designed for my digital double. Unlike me, that persona cannot think for itself or decide to do anything I haven’t done before. There is no “who” at the centre of my personalised world, only a “what.” The first casualty of personalisation is the person.
But this proxy personalisation wouldn’t work at all if I didn’t play my part. The more I looked into how technology creates this world of digital profiles and algorithmic targeting, the more I asked myself a new question: Why are we living in this personalised world?
Ten years ago, I would have said that most people have no idea how intrusive the technology is, and that, if they knew, they would opt out. Today, most people have at least a general idea that their online experience is shaped by data gathered from their own activity. Nevertheless, however much we complain about the creepiness, and adjust our device settings to block ads and cookies, we keep going back for more. We embrace this personalised world because it meets human needs that are emotional, psychological, even existential. We need to be recognised. We need to feel that we are seen by others as our authentic selves.
Identity is the dominant way we understand ourselves in the world today. It seems to meet our needs, both to be recognised for the unique person we are, and to be recognised by others as belonging to them, and with them. Today, we are encouraged to play roles that feel authentic to us, bringing to a public stage elements of our personality that would once have been private, if not secret. Singer Sam Smith expressed this very eloquently on social media in 2019:
Today is a good day so here goes. I’ve decided I am changing my pronouns to THEY/THEM ❤ after a lifetime of being at war with my gender I’ve decided to embrace myself for who I am, inside and out. … I’ve been very nervous about announcing this because I care too much about what people think but fuck it! I understand there will be many mistakes and mis gendering but all I ask is you please please try. I hope you can see me like I see myself now. Thank you.
As our freedom to choose how to live increases, our freedom to explore and express who we are expands with it. What we are—which social categories we fit into—is less of a barrier than ever before, so we might expect that question to matter less and less. Instead, identity has grown into a powerful but contradictory idea: it is both a unique, inner kernel and an outward projection; both an essential, defining intersection of characteristics and a performed role for which we write our own script.
We can be whoever we want, but who we are is now the most important thing, from which everything else must follow. The problem is that our sense of self is fragile, driving us to want affirmation and reassurance from others. Why else would people—including me, obviously—care so much how strangers respond to them on social media? The problem is not technology, but our own obsession with how others see us, and our insatiable need to be reassured that we are the person we want others to see. Technology offers a constant menu of small choices and small social rewards, tiny affirmations that we are recognised, designed to keep us coming back for more. Our screens are more seductive than the pool into which Narcissus gazed at his own reflection in the Greek myth. We don’t just get our online persona reflected back to us, in all its edited perfection. We get instant assurance, in the form of likes, replies, and personalised feeds, that others also find our reflection irresistible.
Instead of pointing the finger at technology, I argue that the problem is you, and me, and all of us living in this digital hall of mirrors. We don’t have a shrinking sense of who we are because of personalising technology. We have personalising technology because of our shrinking sense of who we are. How to turn our energy away from the digital mirror of Narcissus and out to the wider world is the challenge my book tries to meet. Although it doesn’t address you by name, it is addressed to you as a person. Not just the person you are now, but the person you don’t yet imagine you can become.
Adapted from Technology is Not the Problem, by Timandra Harkness. Copyright © 2024 by Timandra Harkness. Reprinted by permission of HarperCollins.