What can Islam and Buddhism teach us about the ethics of regulating artificial intelligence? Answer: a lot

We always keep an eye on the transformative implications of artificial intelligence - either as an aid to humans in addressing our climate and social crises, or as an advancing replacement for the more routine and administrative aspects of human life.

Yet much of the ethics around how it operates is still to do with how humans behave around it - the “Garbage In, Garbage Out” law still stands. (See this alarming story from MIT Technology Review about an AI that saw a cropped picture of the American democratic socialist representative Alexandria Ocasio-Cortez - and autocompleted it with a body wearing a bikini…) If we’re looking for ethical fibre to draw on in the design of these computations, before they’re let loose in the world, maybe the great religions could be a help?

So we were fascinated to find these two articles, which come at AI ethics from a Buddhist and an Islamic perspective.

The first is also from MIT Technology Review, written by Soraj Hongladarom, a professor of philosophy at the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok, Thailand. Hongladarom sets the context clearly:

Many groups have discussed and proposed ethical guidelines for how AI should be developed or deployed: IEEE, a global professional organization for engineers, has issued a 280-page document on the subject (to which I contributed), and the European Union has published its own framework. The AI Ethics Guidelines Global Inventory has compiled more than 160 such guidelines from around the world.

Unfortunately, most of these guidelines are developed by groups or organizations concentrated in North America and Europe: a survey published by social scientist Anna Jobin and her colleagues found 21 in the US, 19 in the EU, 13 in the UK, four in Japan, and one each from the United Arab Emirates, India, Singapore, and South Korea.

Guidelines reflect the values of the people who issue them. That most AI ethics guidelines are being written in Western countries means that the field is dominated by Western values such as respect for autonomy and the rights of individuals, especially since the few guidelines issued in other countries mostly reflect those in the West.

So what could Buddhism bring to the ethical design of AI?

Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering.

The implication of this teaching for artificial intelligence is that any ethical use of AI must strive to decrease pain and suffering. In other words, for example, facial recognition technology should be used only if it can be shown to reduce suffering or promote well-being. Moreover, the goal should be to reduce suffering for everyone—not just those who directly interact with AI.

We can of course interpret this goal broadly to include fixing a system or process that’s unsatisfactory, or changing any situation for the better. Using technology to discriminate against people, or to surveil and repress them, would clearly be unethical. When there are gray areas or the nature of the impact is unclear, the burden of proof would be with those seeking to show that a particular application of AI does not cause harm.

A Buddhist-inspired AI ethics would also understand that living by these principles requires self-cultivation. This means that those who are involved with AI should continuously train themselves to get closer to the goal of totally eliminating suffering. Attaining the goal is not so important; what is important is that they undertake the practice to attain it. It’s the practice that counts.

Designers and programmers should practice by recognizing this goal and laying out specific steps their work would take in order for their product to embody the ideal. That is, the AI they produce must be aimed at helping the public to eliminate suffering and promote well-being.  

Another recent article, from Wired UK magazine, “Muslim scholars are working to reconcile Islam and AI”, also emphasises the elimination of suffering as a consequence of AI. Its interviewees include Amana Raquib, a professor of philosophy and ethics at Karachi’s Institute of Business Administration, and Junaid Qadir - here quizzed on the question of self-driving, AI-controlled robot cars:

When I ask Qadir how an Islamically minded automobile might react if forced to choose between running over a pedestrian or crashing and killing the driver – a popular thought experiment in AI circles – he says that is the wrong question to ask.

He believes AI should be designed to be non-maleficent – it should, first of all, do no harm. This yardstick in Islam is called falah, or spiritual success according to Quranic injunction, which the professors say is sharply distinct from the west’s profit-driven approach.

Any new technology should be judged on this basis – even if, Qadir argues, this entails delays in developing ideas, or even shelving them. “Is it really useful to have a device that presents you only harmful options to choose from?” he asks. 

Not everyone agrees with this stance. Dr Muhammad Aurangzeb Ahmad, an associate professor at the University of Washington researching the applicability of AI in healthcare, thinks that implementing any algorithm implies trade-offs.

“Both AI and a human doctor are working with finite resources: a hospital will have a budget, which sadly means not everyone’s life can be saved,” he says. “The only difference is that a doctor will work with localised, maybe imperfect information, and AI [works] across a huge data-set – ideally allowing for more optimal risk-assessment.” 

Falah, for him, is less a test of which AI to permit than a question of what outcomes it should deliver. “I’d agree a falah-optimising algorithm would maximise saving lives over profit, but that doesn’t mean resource constraints disappear. The real ethical question then becomes, who does it save?” says Ahmad.

…The same principle applies to driverless cars – balancing harms, Ahmad believes, is a process intrinsic to Islamic thought. Indeed, the Islamic jurist Al-Ghazali posed a predecessor to the trolley problem as early as the 11th century, when he asked his students whether one might be justified in throwing some passengers off a sinking ship, if it meant saving the lives of the majority.

Trolley problems, Ahmad notes, might even understate the case for autonomous cars, because they fail to take into account the status quo – human drivers. “Say we have 100,000 road fatalities a year due to bad drivers – we might be able to halve that with AI vehicles. Perhaps not having driverless cars is deeply immoral.”
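To make Ahmad’s trade-off concrete, here’s a minimal, purely illustrative sketch in Python - not anything the researchers propose - of a “lives-over-profit” allocator. The patients, treatment costs, survival probabilities and budget are all invented for the example; the point is only that even an algorithm that maximises expected lives saved must still, under a fixed budget, decide who goes untreated.

```python
# Purely illustrative: a "lives-first" allocator that maximises expected
# lives saved under a fixed budget, ignoring profit entirely.
# All names and numbers below are invented for the example.

from itertools import combinations

# (patient id, cost of treatment, probability the treatment saves them)
patients = [
    ("A", 40_000, 0.90),
    ("B", 25_000, 0.60),
    ("C", 25_000, 0.55),
    ("D", 70_000, 0.95),
]

BUDGET = 90_000  # hypothetical hospital budget

def expected_lives(subset):
    """Expected number of lives saved by treating this subset."""
    return sum(p_save for _, _, p_save in subset)

def cost(subset):
    """Total cost of treating this subset."""
    return sum(c for _, c, _ in subset)

# Brute force over all subsets (fine for a toy example) to find the
# allocation that maximises expected lives saved within the budget.
best = max(
    (subset
     for r in range(len(patients) + 1)
     for subset in combinations(patients, r)
     if cost(subset) <= BUDGET),
    key=expected_lives,
)

print("Treat:", [pid for pid, _, _ in best])          # -> ['A', 'B', 'C']
print("Expected lives saved:", expected_lives(best))  # -> 2.05
```

In this toy run the budget covers patients A, B and C but not D - even though D has the best individual odds of survival - because treating D would leave too little for anyone else. The objective is lives, not profit, yet Ahmad’s question survives intact: the algorithm still has to decide who it saves.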

More here. This discussion is based on the work of winners of Facebook’s Ethics in AI Research Initiative for the Asia Pacific.