Here are some "key open questions" for artificial general intelligence. And try to dream of AI as a prosthesis for our creativity - not a nightmare that just replaces us


Keeping up with the AI juggernaut as best we can, trying to both steer and slow down with as much rich context as we can find…

First, a typically worthwhile crowd-sourcing initiative from our friend the London Futurist, David Wood, who is asking: “what are the key questions in the transition to AGI?”

The distinction between Artificial General Intelligence and AI more broadly is worth holding in mind. AGI is the point at which these thinking machines can replace humans in cognitive tasks, at the same level as us or a more advanced one. This is well below the “superintelligent” AI that has been summoned up by recent calls for a moratorium on development.

As David writes here, the “general” in AGI means:

  • With its abilities applying in every domain in which humans are interested (including the arts, the sciences, the environment, political interactions, military interactions, and knowledge and predictions about other humans)

  • With its abilities also meaning that the AGI, unaided, would be at least as good as humans in working out ways in which to improve its own capabilities further.

Note that this definition makes no assumptions about whether the AGI is conscious or sentient, or whether it “actually understands” what it is observing. (A chess-playing AI can out-manoeuvre humans at the game even though it likely lacks the same kind of understanding of chess that humans possess.) Nor is there any assumption that the AGI will, inevitably, choose its own goals.

David’s suggestions are above (and they are glossed here), but he’s inviting you to respond and add your own formulations at his 8-page Google form.

Our second item comes from the German-Hong Kong philosopher of technology (featured here regularly), Yuk Hui. In this e-Flux essay, Hui passionately wants us to stop thinking of AI as a replacer of jobs or as our interactive plaything, and start thinking of it as a prosthesis - a tool that extends our faculties and develops us:

If high school physics was more popular, we would have a more nuanced concept of acceleration, because acceleration doesn’t mean an increase in speed but rather an increase in velocity. Instead of elaborating a vision of the future in which artificial intelligence serves a prosthetic function, the dominant discourse treats it merely as challenging human intelligence and replacing intellectual labor.
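(A quick editorial gloss on the physics Hui is invoking; the equation below is our illustration, not his. In kinematics, acceleration is the rate of change of the velocity vector rather than of speed, so a body can accelerate while its speed stays constant, for instance by turning.)

```latex
% Editorial illustration (not from Hui's essay): acceleration as change of velocity, not speed.
\[
  \vec{a} = \frac{d\vec{v}}{dt}
  \qquad \text{e.g. uniform circular motion: } |\vec{v}| \text{ constant, yet } |\vec{a}| = \frac{|\vec{v}|^{2}}{r} \neq 0 .
\]
```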

Today’s humans fail to dream. If the dream of flight led to the invention of the airplane, now we have intensifying nightmares of machines. Ultimately, both techno-optimism (in the form of transhumanism) and cultural pessimism meet in their projection of an apocalyptic end.

Human creativity must take a radically different direction and elevate human-machine relations above the economic theory of replacement and the fantasies of interactivity. It must move towards an existential analysis.

The prosthetic nature of technology must be affirmed beyond its functionality, for since the beginning of humanity, access to truth has always depended on the invention and use of tools. This fact remains invisible to many, which makes the conflict between machine evolution and human existence seem to originate from an ideology deeply rooted in culture…

Can the human escape this positive feedback loop of self-fulfilling prophecy so deeply rooted in contemporary culture? In 1971 Gregory Bateson described a feedback loop that traps alcoholics: one glass of beer won’t kill me; okay, I’ve already started, a second one should be fine; well, two already, why not three?

An alcoholic, if they’re lucky, might get out of this positive feedback loop by “hitting bottom”—by surviving a fatal disease or a car accident, for example. Those lucky survivors then develop an intimacy with the divine.

Can humans, the modern alcoholics, with all their collective intelligence and creativity, escape this fate of hitting bottom? In other words, can the human take a radical turn and push creativity in a different direction?

Isn’t such an opportunity provided precisely by today’s intelligent machines? As prostheses instead of rote pattern-followers, machines can liberate the human from repetition and assist us in realizing human potential.

How to acquire this transformative capacity is essentially our concern today, not the debate over whether a machine can think, which is just an expression of existential crisis and transcendental illusion. Perhaps some new premises concerning human-machine relations can liberate our imagination. Here are three (though certainly more can be added):

1) Instead of suspending the development of AI, suspend the anthropomorphic stereotyping of machines and develop an adequate culture of prosthesis. Technology should be used to realize its user’s potential (here we will have to enter a dialogue with Amartya Sen’s capability theory) instead of being their competitor or reducing them to patterns of consumption.

2) Instead of mystifying machines and humanity, understand our current technical reality and its relation to diverse human realities, so that this technical reality can be integrated with them to maintain and reproduce biodiversity, noodiversity, and technodiversity.

3) Instead of repeating the apocalyptic view of history (a view expressed, in its most secular form, in Kojève and Fukuyama’s end of history), liberate reason from its fateful path towards an apocalyptic end. This liberation will open a field that allows us to experiment with ethical ways of living with machines and other nonhumans.

No invention arrives without constraints and problematics. Though these constraints are more conceptual than technical, ignoring the conceptual is precisely what allows evil to grow, as an outcome of a perversion in which form overtakes ground.

More here.