The Opening World: An Open Anniversary Review, Part 1

Walls holding back information sharing and participatory decision-making have been breaking down over the past few decades. Many readers will question this claim, pointing to recent developments in politics, disinformation, and social disintegration. But I hold on to my conviction that our world is getting more open, and I’ll examine where it’s going in this two-part article. The article is the culmination of a year-long “Open Anniversary” series on the Linux Professional Institute blog. Previous installments in the series are:

This first part of the series defends the cause of openness against critics who blame it for current social and political problems. I try to locate more appropriate targets for this criticism.

Is Openness Dangerous?

There’s plenty to lament in what we see online, apparently spiraling out of control: rampant conspiracy theories, the plethora of criminal activity on the “dark internet,” and more. Some people stretch their criticisms too far, though. I’ve heard uninformed and defamatory statements like, “The internet is causing polarization” and “The internet helps lies to travel quickly.” When we evaluate technologies, we have to think carefully. Precisely which technologies are we talking about? Who is using them, and how are they being used?

Such questions become even more complex because the combination of personal digital devices and near-universal networking also hands tools to spies and to governments trying to curtail their population’s behavior.

The internet is actually still doing what it has done all along, starting in its supposed golden age when it brought people together around the world and provided safe spaces to discuss stigmatized issues such as gay and lesbian behavior, recovery from child abuse or drug use, non-neurotypical experiences, and so forth. So many topics that were first aired in internet communities are now part of the public discourse; just look at recent commitments to address sexual harassment in the workplace, for instance.

One goes to meetings today where people say, “I’m on the autistic spectrum” or “I’m a victim of child abuse” or “I spent five years in prison” or “My pronouns are they and them” without shame or stigma. There has to be a connection here; we forget how much more open a society we have become since the internet arrived.

As data grows in importance, the internet is keeping up as a resource for the marginalized. One recent example, NativeDATA, tailors health information to Native North American peoples, who suffer many health problems related to their environments and social status.

From the beginning, too, there was plenty of evidence that the internet had some pretty nasty corners. Illegal trade, hate speech, and wanton lies were known problems. Attempts to separate the good from the bad started quite some time ago—remember the Communications Decency Act of 1996—but always foundered on the dilemma that different people had different ideas about what was good and bad, and ultimately people realized that they didn’t want to hand the decision over to any authority.

It is a tribute to the spirit of the early internet that major social media companies—while investing millions of dollars to take down harmful content—show reluctance to crack down further, and democratic governments are moving cautiously in defining standards (notably the Digital Services Act package in the European Union). For instance, although the EU wants social media sites to label and remove content that is manifestly dangerous, the regulators want transparency in such removals and clear explanations of why content is taken down. The regulators are also sensitive to excessive demands on social media sites.

Things have taken a turn for the worse during the past decade, so far as I can see, but the problem is not the internet itself: it is the services built on top of it by companies, notably search engines and social media. A recent working paper by Suran et al. on “collective intelligence” points to the problem. Successful collective intelligence (related to the ideas of crowdsourcing and the wisdom of crowds) requires two traits: diversity and transparency. The internet is quite capable of fostering these values, but social media works against them.

Regarding diversity, search engines and social media prefer to display items similar to what one has previously “liked” or clicked on, creating the bubbles so often criticized by observers. And the algorithms, of course, are quite opaque. The companies can’t afford to be transparent about what they do because revealing the algorithms would make it easier to game their systems. But the problem demonstrates that we need something different from social media for serious discussions and “news.”
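To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is not the ranking code of any real search engine or social network (those systems are proprietary and far more elaborate); it simply shows that when candidates are scored only by similarity to a user’s past clicks, familiar material crowds out everything else.

```python
# Toy sketch of similarity-only ranking (illustrative; not any platform's real algorithm).
from collections import Counter

def rank_by_similarity(liked_topics, candidates):
    """Order candidate items by how much they overlap with topics the user already liked."""
    liked = Counter(liked_topics)
    return sorted(
        candidates,
        key=lambda item: sum(liked[topic] for topic in item["topics"]),
        reverse=True,
    )

# A user whose history contains only one viewpoint...
history = ["local_politics", "local_politics", "local_politics"]
feed = [
    {"title": "More of the same viewpoint", "topics": ["local_politics"]},
    {"title": "A differing perspective", "topics": ["national_politics"]},
]

# ...sees the familiar item ranked first, and each click reinforces the next ranking.
for item in rank_by_similarity(history, feed):
    print(item["title"])
```

Nothing in a similarity-only objective rewards diversity, which is exactly the trait collective intelligence needs.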

Some people also claim that social media tends to inflame the discourse, arousing fear and hate. I’m not convinced this is true. People on social media joyfully pile on to express their approval for positive things such as births, marriages, degrees earned, awards, and promotions. Let’s just say that social media is designed to evoke emotions instead of cautious consideration, and leave it at that.

I love social media. Like billions of people, I use it to keep up with old college friends, share my pleasures and pains with them, and connect with colleagues over common interests. Social media was designed for that and does it superbly.

Social media introduces risks when people use it to exchange “news,” organize political engagement, or function as a public space in other ways. Those tasks are better served by completely different tools—offline and online—that foster thoughtful debate and intensive research. There are models for such spaces. They use some of the same superficial mechanisms as social media does, such as groups and ratings. But the public spaces deliberately engage interested people in working together to solve their problems. Positive results with broad consensus are their goal.

These platforms can be run by governments, companies, or non-profits. One example is a partner of the Linux Professional Institute, SmartCT in the Philippines. To learn more about such solutions, I recommend the work of law professor Beth Noveck, for whom I have written and edited several projects. Her latest book is Solving Public Problems: A Practical Guide to Fix Our Government and Change Our World.

The second part of the article will offer a case study in openness, along with several more examples.


About Andrew Oram:

Andy is a writer and editor in the computer field. His editorial projects at O'Reilly Media ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. Andy also writes often on health IT, on policy issues related to the Internet, and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, USTPC.