An all-out war over truth in the digital age

Tammy Gan considers the way technology targets our beliefs, feelings, and identities, and what makes us human. What are the implications of digital platforms and emerging technologies for our humanity? How do we create flourishing futures in the digital age?

I don’t believe technology is inherently “a bad thing”. For one, “technology” is an entire phenomenon that can’t be summarised or universalised. And obviously, it’s far more nuanced than that: there are plenty of people genuinely trying to do good things within it, and no doubt, there has been and continues to be plenty of good that comes out of the wonders of technological development, without which—to present a non-exhaustive list—cross-border solidarity for all manner of global, interconnected injustices wouldn’t be as facilitated; intercultural learning and listening wouldn’t be as empowered; communication with you, in this moment, wouldn’t be as simple…

In the same breath, it’s wholly possible to appreciate these while understanding that we, as humans who have been reduced and flattened into end users, have traded, for these freedoms, other freedoms. And that this may be a trade that, the more we look at it, the more we may feel that we’ve been hoodwinked at best, entirely exploited at worst. Disheartening and existential-crisis-inducing as this may be, nonetheless we are here, and the only way out is through.

“To what end? To what purpose? These are all entangled questions, with the idea of power and capital,” offers political strategist, writer, and activist Alnoor Ladha. This is a simple but useful starting point as we examine emerging digital technologies in hopes of reclaiming agency.

Serving as one possible answer, digital researcher and author Carl Miller writes: “The whole way that truths are validated and opinions formed and defended has completely changed.” In short: in the digital age we are witnessing an all-out war over truth.

“We’ve begun to call all of this ‘fake news’ or ‘disinformation’,” wrote Miller three years ago, when those weren’t yet overused and almost meaningless buzzwords, “but many of the techniques of illicit influence are far more subtle and clever than that. Far from trying to change your mind, this is a world often trying to confirm it [...] It is nothing less than the creation of synthetic ideational realities. What we see, what we think, who we are, are all being shaped from the shadows.”

So it seems we’ve traded our freedom to form our own identities, to sense-make in an increasingly complex world, and to arrive at our own truths. What Miller is pointing to isn’t “knowledge” or “information”. He’s pointing to increased polarisation, increased conflict—it’s about what we believe, and how we get to that point. It’s less about what we know, and more about what we say. He adds that it’s “a way of thinking that doesn’t just conceive information as a tool of conflict, but a theatre of war itself.”

It’s: mass cancel culture, one person’s words against another, the proliferation and voicing of opinions formed fast and hard, the abuse of speech and influence by way of amplification and echo chambers and platforming; and the theatre of it all, at such a dizzying speed that whiplash is almost guaranteed—before you get a chance to keep up with one thing or even catch a breath, you’re thrust into the next. “Really,” Miller concludes, “it is a struggle for who we are. [...] We’ve all been brought [to] a new kind of a frontline where our beliefs, even [our] feelings are now targets.”

Pause. Take a breath.

We—end users—are the resource, and we must be clear-eyed about this reality.

As data (created by us and about us) is being mined endlessly—truly, and terrifyingly, there is no end to the data that can be generated—systems and the corporate entities running them are getting better and better at guessing who we are and what we do. But crucially, more than that, the freedom to form our own identities is being taken away. They’re not just getting better and better at the guessing; they’re getting better and better at the determining.

Transdisciplinary designer and software engineer Ross Eyre explains: “If you spend time online today it’s likely that your activity is being recorded [...] it is used among potentially hundreds of thousands of other ‘sessions’ to train sophisticated AI models designed to do one thing better than anything else: predict your behaviour.” To what end? To influence our browsing and clicking, and in between: our purchases and consumption of services, then our inclinations, then our desires. Eyre asks: “At what point does influence become a matter of psychological manipulation and/or control?”

To answer this question, Eyre considers information asymmetry: the fact that machine learning systems have unimaginably, inconceivably more information about us than we have about them.

“While we might believe ourselves to be free agents, associating with and interacting within relatively neutral online spaces, it may be more appropriate to say that we are sitting in the living rooms of Mark Zuckerberg and Jeff Bezos. Only each time we visit, the furniture and decorations are changed to our personal liking. [...] In one sense, we are participants in a mass psychosocial experiment informed by the latest research at the intersection of artificial intelligence, big data and human psychology.”

What changed in your living room? Is it really a preference if it was predicted accurately?

Not only do these artificially intelligent, enhanced technologies endeavour to change how we prefer, they engineer other aspects of our human identity too: like how we make sense of the world. Eyre raises the example of the modern search engine—which most of us equate with Google, a company that rose to dominance in the late 1990s and has cemented its place ever since—and notes that its function is to “employ ways to ‘reduce’ the totality of information available online to a more digestible size”, without, hopefully, imposing any bias. But with more data about what we are searching for to understand the world, Google becomes ever more able to predict and model, and readily offers up more feedback loops for us to feed into, by way of direct answers.

Those who have grown up on the Internet will have noticed how much Google has changed in the short time it’s been alive and dominant: now questions are answered by Google without our even having to click on the links. Non-controversial questions and answers aren’t really a problem, but our complex world and messy shared realities, filled with layers of nuance and sociopolitical histories, can’t possibly be reduced to “direct answers”. As Eyre asks: “In practice, of course, these systems could be designed to limit bias [...] But how and by whom?” And, ultimately, “they touch upon the nature of subjectivity, knowledge and ethics – indeed, of the experience of being human. At some point, we will also have to decide whether it is a good idea to let our thinking machines navigate such territory on our behalf.”

It’s not just truths—information, facts, thoughts, behaviours—that are being blurred; the way we arrive at truths, our sense-making processes (human, though of course not limited to our species alone, I must add), is being wrested out of our hands too.

I feel it, and I know many of us do: the fear that the richly textured ways in which I feel my way into this world are being depleted. With the latest technology to emerge into our mainstream consciousness, ChatGPT, the concerns around “direct answers” seem only to be magnified. ChatGPT—a seemingly too-good-to-be-true Frankenstein of Google’s direct answers, Siri, and whatever previous iterations of those two came before it—is only going to get better, and more human-like, whatever that means, and whether we like it or not.

Writer Mariana Lin, who for many years sat down daily to write script lines for AI characters (like Siri), wondered, five years ago, “if meandering, gentle, odd human-to-human conversations will fall by the wayside as transactional human-to-machine conversations advance. As we continue to interact with technological personalities, will these types of conversations rewire the way our minds hold conversation and eventually shape the way we speak with each other?” Noting how human communication, filtered every day through our devices, has already undergone a flattening, binary, fibre-optic reducing process, she concluded: “I don’t want AI to reduce speech to function, to drive turn-by-turn dialogue doggedly toward a specific destination in the geography of our minds.”

It’s eerie how her words have only become more relevant. But warnings like these have been made for years now, by many people, across the entire spectrum of feelings about technological advancement in a profit-driven world. I don’t wish to merely paint a nightmarish, dystopic picture of a digitally augmented future, nor end on such a note—it would be rather trite. I wish to focus not on technology itself, but instead on humanity, itself an admittedly loaded word. The implications of technology for our humanity, or more accurately, our humanness, are far more interesting than the technology itself. So here are two possibly conflicting thoughts I’m sitting with.

The first is that, as Carl Miller writes in The Death of the Gods, algorithms have changed “from Really Simple to Ridiculously Complicated.” He says that they’re “able, really, to handle an unfathomably complex world better than a human can.” Taken with the context it came from, Miller is right—machines are able to handle a kind of complexity far beyond what a human can. But at the same time—and this is the second thought—as novelist Manda Scott says: “If we could predict it, it’s not emergence from the complex system.” Taken together, it’s algorithms and machine learning (as processes and entities that rely on predicting from data points) versus us, humans (as beings that can’t possibly ever be truly compressed into and represented through binary code). The complexity that machines process and the complexity that humans embody are both remarkable, but they are entirely different.

The human capacity to change our minds, to reject and make sense of, to feel and prefer and desire, to discern and hold (our) truths from nuanced scenarios, from real life, gives rise to a different kind of emergence than will ever come out of computer models and artificial intelligence.

In the ongoing, truly contemporary struggle to reclaim agency in this digital age, it would be a disservice to ourselves to believe that anything modelled by a computer—a tool that was constructed—could ever mirror, or come close to, the complex reality of the world that we share.

And this isn’t an argument for human exceptionalism: here I defer to ecologist and philosopher David Abram’s book, Becoming Animal, in which he makes the case for human consciousness, human sense-making, everything that we think makes us human—as things that actually emerge from the earth we humans are embedded within, the earth that we share.

Abram writes: “The belief in a thoroughly objective comprehension of nature, our aspiration to a clear and complete understanding of how the world works, is precisely the belief in an entirely flat world seen from above, a world without depth, a nature that we are not part of but that we stare at from outside—like a disembodied mind, or like a person gazing at a computer model. […] By renouncing our animal embodiment—by pretending to be disembodied minds looking at nature without being situated within it—we dispel the tricksterlike magic of the world.”

Somewhere in between the lines of what Abram says lies at least one of the keys to reclaiming our humanity in the digital age. While I don’t have a definitive concluding answer to the questions and dilemmas raised, I will say this: as much as Silicon Valley and the tech-utopians try, I don’t think they will ever be able to take away what makes us human. What makes us human is what makes us animal: what makes us part of the animate earth—all of it is embedded in the earth, beyond technological capture. And I’m interested in the technologies that seek not to mimic the human nor mirror our complex realities, not to flatten the human nor manipulate it into something predictable, but that emerge and co-evolve like everything else, every person, every animal, every more-than-human does, in this earth.

Are there technologies that converse with the earth, that are willing to surrender a desire to model and to control, that are more like Donna Haraway’s cyborgs instead?

Again, there are no answers. But in the asking, perhaps we’ll find ourselves on a path, not towards the doomsday and suffering of monocultures and the nihilistic singularity, but towards flourishing futures that we create together.

Contributors

Tammy Gan

Tammy (she/her) leads on content and storytelling at advaya.
