Stanford has conducted the world's first "global deliberative poll" - on how to regulate bullying and harassment in VR and the metaverse

We are always alive to moments of democratic innovation (see archive), and this is certainly one - Stanford’s Center on Democracy, Development and the Rule of Law is announcing the world’s first “global deliberative poll”.

Commissioned by Meta (or Facebook, as was) to explore issues of bullying and harassment in virtual member-spaces, the poll fascinates us most for the meticulous methods of deliberation involved. There’s a brace of institutions participating - including the original nudge unit - and it’s very clear that these are, as Stanford suggests below, “the beginnings of a social contract for how these new spaces in virtual reality should be governed”.

But it’s a very managed process - 56 policy proposals were laid before demographically representative participants from around the world (China not included), and the results shown below demonstrate how people shifted their positions on these before and after the deliberation process.

It’s one way to do it… and you’d have to dig into the analysis of the overall deliberation to observe any weighting. (Is it a surprise, for example, that Meta would highlight participants’ shifts towards granting the company more authority in dealing with bullying or harassment issues?)

Extracts from the Stanford blog below:

In November 2022, Meta announced that it would launch a series of Community Forums — a new tool to help the company make decisions that govern its technologies.

The process, they explained, would “bring together diverse groups of people from all over the world to discuss tough issues, consider hard choices, and share their perspectives on a set of recommendations that could improve the experiences people have across our apps and technologies every day.”

The Metaverse Community Forum, conducted in collaboration with Stanford’s Deliberative Democracy Lab on the topic of bullying and harassment, was a first-of-its-kind experiment in global deliberation.

It sets an example for thoughtful and scientifically based public consultation on an emerging set of issues posed by new technology. In doing so, it shows that global deliberation is entirely feasible and could be applied to other public policy issues of global interest.

There are innumerable emergent “worlds” being set up in virtual reality. What are the ground rules of behavior in those worlds going to be? What protections need to be offered to those who participate? By whom? Who should be responsible? This project offers answers through thoughtful and representative public consultation.

In this project, scientific samples of the world’s social media users were recruited from 32 countries in nine regions around the world, speaking 23 languages, for a weekend-long deliberation. A matching control group of comparable size did not deliberate but took the same questionnaires in the same time period in early December 2022.
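The matched control group is what lets a shift in opinion be attributed to the deliberation itself, rather than to outside events that might have moved everyone's views in the meantime. Here is a minimal sketch of that comparison, with entirely hypothetical support figures (none of these numbers come from the study):

```python
# Hypothetical illustration of the deliberator-vs-control comparison.
# The figures below are invented; they only show the logic of the design.

# Share supporting some proposal, measured at the same two time points.
deliberators = {"before": 0.57, "after": 0.67}  # took part in the weekend deliberation
control = {"before": 0.57, "after": 0.58}       # same questionnaires, no deliberation

# The raw shift among deliberators confounds deliberation with outside events.
raw_shift = deliberators["after"] - deliberators["before"]

# Subtracting the control group's shift isolates the effect of deliberating.
deliberation_effect = raw_shift - (control["after"] - control["before"])

print(f"raw shift: {raw_shift:+.2f}, deliberation effect: {deliberation_effect:+.2f}")
```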

The issue is a novel and important one: how to regulate bullying and harassment in virtual reality, particularly in the new private or “members only” virtual spaces that are being created in the Metaverse.

More than 6,300 deliberators, representative of global social media users (with the principal exception of China), were recruited by 14 survey research partners and deliberated in 23 languages. For more on the design, and for the weighting of the sample that supports inferences to the global population of social media users, see the Methodology Report.

The design for the deliberations followed the Deliberative Polling® model under the direction of the Stanford Deliberative Democracy Lab (housed at the Center on Democracy, Development and the Rule of Law, part of the Freeman Spogli Institute for International Studies at Stanford University). The project was a collaboration with Meta and the Behavioral Insights Team (BIT).

A distinguished Advisory Committee vetted the briefing materials for the deliberations and provided many of the experts for the plenary sessions. The process alternated small group discussions and plenary sessions where competing experts would answer questions agreed on in the small groups.

The agenda was a series of 56 policy proposals that could be implemented by Meta or other platform owners. The proposals came not only with background materials but also with pros and cons posing trade-offs that the participants might want to consider. Video versions of the briefing materials were also provided.

The small group discussions were conducted on the Stanford Online Deliberation Platform, which moderated the video-based discussions, controlled the queue for talking, nudged those who had not volunteered to talk, intervened when there was incivility, and moved the group through the agenda of policy proposals and their pros and cons.

Near the end of each discussion, it also guided the groups in formulating key questions that they wished to pose to the panels of competing experts in the plenary sessions. The Stanford Online Deliberation Platform is a collaboration between the Crowdsourced Democracy Team, led by Ashish Goel, and the Deliberative Democracy Lab, both at Stanford University.
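The platform's source is not public, but the behaviour described - a talking queue, nudges for those who have not spoken, stepping the group through the agenda - is straightforward to picture. Below is a minimal sketch in Python of how such queue-and-nudge logic might work; all class and method names are our own invention, not the platform's:

```python
# Hypothetical sketch of an automated moderator's queue-and-nudge logic,
# loosely modelled on the behaviour described above. All names are invented.
from collections import deque

class AutoModerator:
    def __init__(self, participants, agenda):
        self.queue = deque()                       # hands raised, first come first served
        self.turns = {p: 0 for p in participants}  # speaking turns taken so far
        self.agenda = deque(agenda)                # policy proposals still to discuss

    def raise_hand(self, participant):
        if participant not in self.queue:
            self.queue.append(participant)

    def next_speaker(self):
        """Give the floor to the next person in the queue."""
        speaker = self.queue.popleft()
        self.turns[speaker] += 1
        return speaker

    def nudge_quiet(self):
        """Invite the participants who have spoken least and are not queued."""
        fewest = min(self.turns.values())
        return [p for p, n in self.turns.items() if n == fewest and p not in self.queue]

    def advance_agenda(self):
        """Move the group on to the next proposal and its pros and cons."""
        return self.agenda.popleft() if self.agenda else None

mod = AutoModerator(["Ana", "Bo", "Chen"], ["video capture access", "creator training course"])
mod.raise_hand("Ana")
print(mod.advance_agenda())  # video capture access
print(mod.next_speaker())    # Ana
print(mod.nudge_quiet())     # ['Bo', 'Chen'] - neither has spoken yet
```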

The core issue posed for deliberation was the responsibility of platform owners such as Meta for regulating behavior in the multitude of private or “members only” worlds being formed in the Metaverse. To what extent should the platform owners stay out, since these “members-only” spaces are joined by mutual consent, and the participants may want privacy?

Or to what extent do the platform owners, such as Meta, have a responsibility to act to protect against bullying and harassment, particularly since the metaverse is an immersive reality in which bullying and harassment may have severe consequences? If the platforms have a responsibility, what should they do? These are novel issues, and they amount to the beginnings of a social contract for how these new spaces in virtual reality should be governed.

What Should Be Done?

The before-and-after questionnaire results provide guidance. They tell us that “platform owners should have access to video capture in members-only spaces.” Support for this proposal rose significantly from 59% to 71%, an increase of 12 points (means on the 0 to 10 scale rose from 6.814 to 7.253).

Further, there was a significant increase in support for the proposal that, in spaces where there is repeated bullying and/or harassment, “platforms should take action against creators.” This rose about 10 points, from 57.3% to 66.9% (means from 6.39 to 6.901 on the 0 to 10 scale).

What actions should they take? First, those spaces should be made less visible to users, according to 63% of the deliberators (up from 53%, an increase of about 9.5 points). Second, such spaces should no longer be publicly discoverable, providing a real disincentive to creators who want to grow their membership but who have permitted repeated bullying and harassment.

Third, for spaces where there is repeated bullying and/or harassment, “creators should be required to take a course on how to moderate the spaces they create.” Support for this proposal rose from 67% to 78%, a rise of 11 points.

Again, for members-only spaces where there has been repeated bullying and harassment, users should receive notification of such cases when entering a space. Support for receiving notification on entry rose more than nine points, from 76% to 85%.

But the recommendations from the global sample are not punitive. The sample declined to endorse more severe punishments. For example, support for removing members-only spaces where there is repeated bullying only reached 43% (up from 39%).

Support for banning creators from making additional members-only spaces if there was repeated bullying and harassment only reached 45% (up from 38%). Support for banning creators of such spaces from inviting additional people to join only reached 49% (up from 43%). Lastly, support for preventing creators of such spaces from making money off their spaces only reached 54% (up from 49%).
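Pulling the quoted figures together makes the pattern easy to check: the remedial proposals all cleared solid majorities after deliberation, while the more punitive ones stayed at or below the mid-fifties. A quick tabulation of the percentages reported above (the short labels are our own shorthand, not the questionnaire wording):

```python
# Before/after support for each proposal, in percent, as quoted above.
# The short labels are our own shorthand, not the questionnaire wording.
proposals = {
    "video capture access":       (59.0, 71.0),
    "action against creators":    (57.3, 66.9),
    "make spaces less visible":   (53.0, 63.0),
    "required moderation course": (67.0, 78.0),
    "notification on entry":      (76.0, 85.0),
    "remove the space":           (39.0, 43.0),
    "ban creating new spaces":    (38.0, 45.0),
    "ban inviting new members":   (43.0, 49.0),
    "prevent monetisation":       (49.0, 54.0),
}

for name, (before, after) in proposals.items():
    print(f"{name:28s} {before:5.1f}% -> {after:5.1f}%  ({after - before:+.1f} points)")
```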

A full range of questions was also posed for public spaces. There was less concern for privacy in public spaces, so the platform’s role in regulating and protecting against bullying and harassment was clear from the outset, and support for it rose significantly further with deliberation.

More here.