The AI merry-go-round continues... Should we ban it from "counterfeiting people"? Let it unravel capitalism? Build "resilient small tech" instead?

We’re doing our best to pick the gems out of the avalanche of current writing about artificial intelligence [our archive over six years is here]. We’re trying to connect it to constructive futures and community powers, as best we can. This means running both embracing and defensive viewpoints.

For example: what are the implications of AIs’ ever-more-plausible simulations of humans, whether visual, textual, or in modes yet to come? The philosopher Daniel Dennett, writing in The Atlantic, is adamant that AI’s manufacture of “counterfeit people” must be banned:

The philosopher and historian Yuval Noah Harari, writing in The Economist in April, ended his timely warning about AI’s imminent threat to human civilization with these words:

“This text has been generated by a human. Or has it?”

It will soon be next to impossible to tell. And even if (for the time being) we are able to teach one another reliable methods of exposing counterfeit people, the cost of such deepfakes to human trust will be enormous. How will you respond to having your friends and family probe you with gotcha questions every time you try to converse with them online?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.

Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.

The key design innovation in the technology that makes losing control of these systems a real possibility is that, unlike nuclear bombs, these weapons can reproduce. Evolution is not restricted to living organisms, as Richard Dawkins demonstrated in 1976 in The Selfish Gene.

Counterfeit people are already beginning to manipulate us into midwiving their progeny. They will learn from one another, and those that are the smartest, the fittest, will not just survive; they will multiply. The population explosion of brooms in The Sorcerer’s Apprentice has begun, and we had better hope there is a non-magical way of shutting it down.

More here. Dennett’s proposed solution is the kind of indelible watermarking that the world has long agreed on for currency, applied here to fake versions of Granny (or Trump).

But for animals that have been painting walls with simulations of reality for tens of thousands of years, would this freeze on mimesis also freeze something essentially creative in us? Might the better solution be more unsimulatable face-to-face encounters, a convivial democracy? Perhaps one enabled by the increases in free time and basic security that a progressive deployment of AI to routine work could bring us? That is: we need a better political economy, not a global digital policing job.

In an essay in Worldcrunch, the leftist thinker Slavoj Žižek goes all the way there, imagining that the capitalists exulting over AI are in fact hastening their own demise:

The future waiting on the horizon is nothing less than the end of capitalism as we know it: the prospect of a self-reproducing AI system that requires less and less human involvement – the explosion of automated trade on the stock exchange is the first step in this direction…

Many lonely (and also not so lonely) people spend their evenings having long conversations with chatbots, talking about new films and books, debating political and ideological questions, and so on. It’s not surprising that they find these exchanges relaxing and satisfying: to repeat an old joke of mine, what they get from this exchange is an AI version of decaffeinated coffee or a sugar-free drink – a neighbour with no hidden motives, an Other who perfectly meets their needs…

It is easy to see that the attempts to “take stock” of the threats posed by AI will tend to repeat the old paradox of forbidding the impossible: a true post-human AI is impossible, therefore we must forbid anyone from developing one… To find a path through this chaos, we should look to Lenin’s much-quoted question: Freedom for whom, to do what? In what way were we free until now? Were we not being controlled to a far greater extent than we realized?

Instead of simply complaining about the threat to our freedom and intrinsic value, we should also consider what freedom means and how it may change. As long as we refuse to do that, we will behave like hysterics, who (according to French psychoanalyst Jacques Lacan) seek a master to rule over them. Is that not the secret hope that recent technologies awaken within us?

The post-humanist Ray Kurzweil predicts that the exponential growth of the capabilities of digital machines will soon mean that we will be faced with machines that not only show all the signs of consciousness but also far surpass human intelligence.

We should not confuse this “post-human” view with the modern belief in the possibility of having total technological control over nature. What we are experiencing today is a dialectical reversal: the rallying cry of today’s “post-human” science is no longer mastery, but surprising (contingent, unplanned) emergence.

More here.

Finally, the artist Hito Steyerl evoked a tantalising image of a better AI in this month’s NLR. Her basic critique rests on the old myth of the two-faced Roman god Janus: one face looking back into history, the other facing forward into the future.

Generative AI raids the archive of humanity, looking backwards, and produces a “mean” (in both senses of the word) approximation of it, in text and image, going forward. Steyerl also goes into great and harrowing detail about the humans still monitoring these systems, and the economic and psychological distress this causes them.

So can we imagine a better future-oriented face of the AI Janus?

In the case of machine learning, the infrastructure consists of massive, energy-hungry, top-down cloud architectures, based on cheap click labour performed by people in conflict regions, or refugees and migrants in metropolitan centres. Users are being integrated into a gigantic system of extraction and exploitation, which creates a massive carbon footprint.

To take the Janus problem seriously therefore would mean to untrain oneself from a system of multiple extortion and extraction. A first step would be to activate the other aspect of the Janus head, the one which looks forward to transitions, endings as beginnings, rather than back onto a past made of stolen data.

Why not shift the perspective to another future—a period of resilient small tech using minimum viable configurations, powered by renewable energy, which does not require theft, exploitation and monopoly regimes over digital means of production?

This would mean untraining ourselves from an idea of the future dominated by some kind of digital-oligarch pyramid scheme, run on the labour of hidden microworkers, in which causal effect is replaced by rigged correlations. If one Janus head looks out towards the mean, the other cranes towards the commons.

More here.