The AI summer keeps going... Meredith Whittaker says worry about corporations, not machine consciousness. And David Wood calls for "singularity activism"

The AI revolution rumbles on. And we’re still interested in the extent to which our agenda of community power and self-determination can relate to these vast automations—so easily cast as being nearly out of our control, and even potentially dominating us.

The so-called “godfather of AI”, Geoffrey Hinton, made headlines and raised eyebrows this week. Hinton left his senior post at Google in order to take up the cause of warning humanity about the possibility that AI might develop in ways that we humans find difficult to control.

“In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can’t – and maybe these big models are actually much better than the brain,” he says.

A “biological intelligence” such as ours, he says, has advantages. It runs at low power, “just 30 watts, even when you’re thinking”, and “every brain is a bit different”. Because no two brains are alike, knowledge can’t be copied directly between them; instead we learn by mimicking others. But that approach is “very inefficient” in terms of information transfer.

Digital intelligences, by contrast, have an enormous advantage: it’s trivial to share information between multiple copies. “You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.”

Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. “I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things – at least, not since Biden got elected.

“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.”

Much debate about these comments. But we were particularly struck by this Fast Company interview with Meredith Whittaker, “a prominent AI researcher who was pushed out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones”. Meredith’s take on Hinton’s alarmism is usefully contrary:

FC: So, we shouldn’t be worried that AI will come to life and wipe out humanity?

MW: I don’t think there’s any evidence that large machine learning models—that rely on huge amounts of surveillance data and the concentrated computational infrastructure that only a handful of corporations control—have the spark of consciousness. 

We can still unplug the servers, the data centers can flood as the climate encroaches, we can run out of the water to cool the data centers, the surveillance pipelines can melt as the climate becomes more erratic and less hospitable. 

I think we need to dig into what is happening here, which is that, when faced with a system that presents itself as a listening, eager interlocutor that’s hearing us and responding to us, that we seem to fall into a kind of trance in relation to these systems, and almost counterfactually engage in some kind of wish fulfillment: thinking that they’re human, and there’s someone there listening to us.

It’s like when you’re a kid, and you’re telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.

FC: What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people.

MW: Yeah, I think it’s distracting us from what’s real on the ground and much harder to solve than war-game hypotheticals about a thing that is largely kind of made up. And particularly, it’s distracting us from the fact that these are technologies controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve.

And if we follow these corporations’ interests, we have a pretty good sense of who will use it, how it will be used, and where we can resist to prevent the actual harms that are occurring today and likely to occur. 

More here. Whittaker joins Timnit Gebru and Meg Mitchell - ex-Google employees, women and people of colour - who regard the ultimate “existential” threat of AI as minimal, and AI’s present threat - the racism and sexism it reproduces, its use as an instrument of extraction, exclusion and exploitation - as urgent, and indeed regulable.

We endorse this. But we still believe that a rich enough conception of humans-in-community, expressing their fullest nature, can find a way to amplify our powers, using the automation of AIs to liberate us from routine and machine-like labour. Can’t we ensure both outcomes?

We take as our guide in these matters London Futurists’ chair David Wood, whose latest presentation is embedded below.

His blurb for the video reads:

New AI systems released in 2023 demonstrate remarkable properties that have taken most observers by surprise. The potential both for positive AI outcomes and negative AI outcomes has been accelerated. This leads to five responses:

1.) "Yawn" - AI has been overhyped before, and is being overhyped again now. Let's keep our attention on more tangible issues

2.) "Full speed ahead with more capabilities" - Let's get to the wonderful positive outcomes of AI as soon as possible, sidestepping those small-minded would-be regulators who would stifle all the innovation out of the industry

3.) "Accelerate sensible safety research" - The risks from AI are real, and more resources need to be channelled in that direction as soon as possible

4.) "Pause huge AI experiments" - As urged by an open letter from the Future of Life Institute, work on training larger language systems needs to pause for at least six months whilst better frameworks and guiderails for future research and development are explored

5.) "Enforce a halt on AI capabilities research" - a six-month pause in selected AI research is insufficient. Without realising it, we are rushing headlong with a high probability of catastrophe. More drastic enforcement measures are needed.

That was the context for a presentation given to London Futurists on 29th April 2023 by David Wood, Chair of London Futurists. The presentation advocated the fourth of the above responses, and covered:

*) Why such a serious measure (the "pause") is needed

*) Six levels of understanding AI

*) How AGI could be reached in just 3-5 years

*) Four potential catastrophic error modes involving advanced AI

*) Response to six common criticisms to the "pause" proposal

*) Practical steps that can be taken

*) Looking forward to a flourishing future for humanity in the envisioned "AI summer".

As the video makes clear, the presenter's goal is to ensure that humanity will have a wonderful, positive Singularity, rather than any of the negative catastrophes that are all too possible.

More here.