It’s common to complain about seeing family with extreme political views during the holidays. While some advocate for cutting ties with people who think differently than they do, I actually enjoy arguing and talking about political philosophy. It’s become a load-bearing way for me to bond with some of the in-laws.
I’m lucky enough that our arguments are usually minor. Often they’re about factual questions that everyone is mildly unsure of. How effective is this one vaccine? Does learning math help with other life skills directly or indirectly? How nervous should we be about bird flu?
But sometimes, the arguments are much more fundamental. This Christmas I found out one of my younger relatives has become an avowed Marxist.
I would be interested in arguing about the pros and cons of different economic systems. I think there’s a lot of benefit in the tension between free market capitalism and redistributive socialism. When it comes to straight-up Marxism, though, I think the last 150 years are fairly conclusive. Between the famines, genocides, and gulags, Marxism has been used to justify the deaths of tens of millions of people, and its failed economic policies starved tens of millions more.
My relative didn’t believe any of that actually happened. Some events she didn’t know about, others she thought were western imperialist propaganda. When I listed statistics, she mostly said she didn’t believe me.
I encouraged her to do research and learn about it from resources that she thinks she can trust. What makes me think that she’ll come to the same conclusion as me though, or trust sources that I think are actually trustworthy? I think we need better tools to have these kinds of disagreements; tools for adversarial collaborations.
How do you know what you know?
I (for obvious reasons) think that what I know is right. I think I’ve learned real things about the impacts of Marxism over the past hundred years. Many of those things I learned from books written by Americans, some from books by non-Americans. I’ve also talked to people from Poland and Russia about what life was like for them before the Soviet Union fell. Those conversations were much more about bread lines and asking the government for permission to go to college in the ’80s, not so much about genocide or acute starvation.
At best I have secondhand accounts of things like China’s Great Leap Forward. For example, I listened to that Dwarkesh interview with Jung Chang, which was pretty damning. It’s not like I’ve read Chang’s book, though, or any other book by a Chinese person who lived through it. When I thought about sending that interview to my relative, I wondered if she’d just dismiss it as propaganda promulgated by the US.
So I looked it up, and the Wikipedia article on Jung Chang says that she overplayed how bad Mao was. I don’t doubt that Chang lived through what she claims, and I bet it was as bad for her as she says it was. The statistics on deaths from the Great Leap Forward don’t come from her. But this is an unexpected crack in the edifice of my knowledge. How much did Chang mislead me? Is the Wikipedia article overplaying Chang’s inaccuracies due to some bias of its own? How far will I follow the trail to see who can be trusted to report on the object-level questions?
Almost all of what I think about Marxism (the famines, the gulags, the vast murder campaigns) still stands. But how do I know for sure? And how would I convince my relative if she thinks my information sources are adversarial (or even just corrupted)?
On the one hand, we’re starting from a good position: I think we’re both interested in the truth of what happened. On the other hand, we have very different priors, based on our family histories and economic experiences up to now. Still, I think we can come to a consensus about what data is available and how much to trust it if we put in the effort.
Adversarial Collaboration
Sometimes it’s a lot of effort to vet sources, collate their information, and come to a synthesis that can be relied on. Other times, effort isn’t what prevents it. The only reason I even bothered to look up Jung Chang’s reliability is that I was thinking about sending her interview out as an example. Even that one minute on Wikipedia was too much for me to put in before I had a challenger to my worldview. Whether it’s about effort or priorities, people often won’t check up on things they think they know. How can we make it worth people’s while to actually look things up, and to actually get things right?
Years ago, Scott Alexander hosted an Adversarial Collaboration contest. He recruited people who disagreed on some point and had them write an article about it together. The idea was that, since the multiple authors disagreed with each other, their biases would cancel out. I really liked this idea in principle, and I think something like it would be great at getting people to come together and take their beliefs seriously.
I think the small scale of Scott Alexander’s project is part of what made it work. You might argue that social media, or the internet in general, is a giant adversarial collaboration. The problem with the internet is that it’s too big. People are social, tribal. If the argument is with a specific person who you think will actually listen to you, you’ll put in a lot of work. If the argument is with “the internet” and you’re unlikely to convince anyone, then why put in the effort to think about your own beliefs or understand those of others?
We need more small scale adversarial collaborations. We need more people working together, disagreeing amicably with each other, trying to figure out what’s right.
I would love to see a tool that makes adversarial collaborations easy to run. These collaborations should be between just a couple people, about specific questions, and support both public and private communications.
Why not prediction markets?
Lots of people love prediction markets these days. Even Scott Alexander seems to have abandoned Adversarial Collaborations in favor of prediction markets. I think this is unfortunate, because to me these serve very different use cases.
There are a lot of questions about the world for which answers are basically known. Humanity, as a whole, knows how electricity and magnetism interact. We know how big the average great white shark is. We know the earth is round, the sun runs on fusion, and why the sky is blue.
We know these things, but any individual maybe doesn’t. A prediction market isn’t useful for these things, because the problem isn’t a distributed uncertainty in the answer but rather the individual effort of finding it and caring about it.
People sometimes try to force prediction markets to work for individual issues. I’ve bet on markets for whether someone will read a certain book, go to the gym a certain number of times, or believe a certain thing on a certain date. They kind of work, but the markets are always small and don’t seem very emotionally activating for anyone involved. Questions like “will I read this book on time?” are a better fit for Beeminder than prediction markets. Questions like “will I believe Columbus was the first European in the New World?” are a better fit for Adversarial Collaboration.
When the knowledge is already out there, we just need the motivation and the tools to use that knowledge. It’s not about wisdom of the crowds, it’s about improving the wisdom of the individuals.
Adversarial Collaboration as a Service
I want a tool that makes it easy for people to have adversarial collaborations about any given topic.
This tool needs to help people identify their disagreement. Once identified, they need to operationalize it. It’s no good for me and my relative to take “capitalism good”/”Marxism good” positions. We need to actually get concrete about what that means to us. Maybe frequency of famines in countries with either system. Or maybe number of political prisoners, or how easy it is to buy a car or go to school or get healthcare. The actual operationalization doesn’t matter as far as the tool goes, but the tool needs to make it easy to come up with something both people can agree on.
Once the crux of the disagreement is formalized, the tool should help guide participants to answering it for themselves. This could take the form of a pinboard of links to informational sources, maybe with some commentary from each arguer.
I don’t think the tool should be prescriptive about sources. Podcasts, books, journal papers, some guy on Twitter: they’re all fair game. People should just use whatever they like to research their question. But any claim one arguer makes should be easy for the other to refer to, question, and refute. If someone brings up unconvincing sources, their collaborator can tear them down, which will push them to look for something better.
I’m imagining something like a graph of questions. The root node is the operationalization of the original argument. Then child nodes could be supporting evidence, either sources or reasoning. Each node would support commenting on it directly, as well as adding child nodes supporting or questioning it. Any node where both arguers agree could be colored green, while nodes that are still not agreed on would be colored yellow.
Arguers would make comments, post sources, and argue in all of the same ways they normally would online. The tool would structure those arguments so it’s easy to refer back to what’s still uncertain. There wouldn’t be social media’s infinite scroll of temporary factoids. Instead, there would be a slowly converging set of information that either arguer could add to at any point.
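To make that concrete, here’s a minimal sketch of what a node in that argument graph might look like, in Python. Everything in it (the names, the green/yellow statuses, the fields) is just my illustration of the idea, not a real design.

```python
# Minimal sketch of the argument graph: hypothetical names and fields.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    AGREED = "green"      # both arguers accept this node
    CONTESTED = "yellow"  # still under discussion


@dataclass
class Node:
    claim: str                        # an operationalized question, source, or piece of reasoning
    author: str                       # which arguer added it
    status: Status = Status.CONTESTED
    comments: list[str] = field(default_factory=list)
    children: list["Node"] = field(default_factory=list)

    def add_child(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

    def unresolved(self) -> list["Node"]:
        """Walk the tree and collect every node the arguers haven't agreed on yet."""
        found = [] if self.status is Status.AGREED else [self]
        for child in self.children:
            found.extend(child.unresolved())
        return found


# The root is the operationalized disagreement; children are evidence for or against it.
root = Node("Famine frequency under Marxist vs. market economies, 1900-2000", author="me")
root.add_child(Node("Estimated Great Leap Forward death toll, with sources", author="me"))
print(len(root.unresolved()))  # everything starts out yellow
```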
Automated supporting interactions
There are a lot of ways that I see modern LLMs being able to support these interactions. I’m imagining that a lot of the argumentation happens via text: anything from tweet-length quips to long-form essays arguing small points in the context of the larger argument. An LLM integrated into the Adversarial Collaboration tool could watch all of this and make recommendations.
It would be easy, after the arguers agree on an operationalization of their main disagreement, for an LLM to decompose it into smaller questions to look into. An LLM could also help propose cruxes when the arguers are first trying to operationalize things. When smaller questions are identified, the LLM could look things up and suggest a list of sources (both pro and con for any question), allowing the arguers to review them and say whether they agree.
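As a rough sketch of that decomposition step, something like the following could work. I’m using the OpenAI Python client as one stand-in provider; the model name, prompt wording, and output format are all placeholder choices.

```python
# Sketch of decomposing an agreed-on crux into smaller questions via an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def decompose(operationalization: str, max_questions: int = 5) -> list[str]:
    """Ask an LLM to split the operationalized disagreement into smaller, checkable questions."""
    prompt = (
        "Two people disagree about the following operationalized question:\n\n"
        f"{operationalization}\n\n"
        f"List up to {max_questions} smaller factual questions that, if answered, "
        "would help settle it. One question per line, no numbering."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]


# Each returned question could become a child node in the argument graph,
# with the LLM optionally suggesting pro and con sources for each one.
sub_questions = decompose(
    "How did famine frequency compare between Marxist and market economies, 1900-2000?"
)
```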
Obviously the arguers would be free to do whatever work they wanted independently of a supporting LLM. They could search out any sources and include them, write up arguments on their own, and decompose questions however they think best. The LLM would just be there to support any of that.
I think keeping the LLM’s input minimal and gentle will be very important to the success of the project. This isn’t meant as a research helper or an answer engine. It’s meant as a way for people to interact with other people: primarily a social tool, in service of an informational mindset it would help develop in its users. Too much LLM interaction would degrade that.
I do think it would be useful to gently nudge users to be nicer to, and more respectful of, each other. Doing this without seeming sanctimonious or censorious would be tricky, but hitting the right balance would do a lot to keep users engaged instead of enraged.
Why would people put in all this work?
One way I think LLMs could really contribute comes before the Adversarial Collaboration even starts: they could connect people who disagree.
There are millions of people posting about all kinds of topics on the various social networks every day. Imagine an LLM reading these, finding people who seem like they would trust each other, but who disagree on something they each find important. Imagine that LLM then setting up an Adversarial Collaboration for them and inviting them to join it.
The idea here is that this tool wouldn’t be something users go out and look for. This tool would go out and look for users.
“Check out this other person,” it would say. “They disagree with you about this thing. They might seem foolish, but look! This post of theirs shows you might be able to convince them of how right you are. Join this adversarial collaboration to build knowledge on the topic you love.”
It harnesses the “someone is wrong on the internet” energy that we all have. It also makes it more personal and actionable.
When the person on the other end of the social network is a rando who might hate everything you stand for, it’s easy to round their disagreements off to 0. When it’s someone who’s similar to you and who you might chat with at length, it becomes harder to ignore whatever they say.
Tuning the LLM to search for similar people who disagree on specific issues will be critical to getting this social balance right.
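One way to picture that tuning is as a scoring heuristic: similar enough overall to trust each other, far enough apart on the specific issue to have something to argue about. The sketch below assumes profile embeddings and per-issue stance scores have already been computed upstream by some model; all of those inputs, and the names, are hypothetical.

```python
# Toy matchmaking heuristic: similar people who disagree on one specific issue.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_score(profile_a: np.ndarray, profile_b: np.ndarray,
                stance_a: float, stance_b: float) -> float:
    """High when two people post similarly overall but take opposite stances
    (scored -1 to +1) on the specific issue."""
    similarity = cosine(profile_a, profile_b)      # how alike they are in general
    disagreement = abs(stance_a - stance_b) / 2.0  # 0 = same stance, 1 = opposite
    return similarity * disagreement


# Score every candidate pair on an issue and invite the highest-scoring ones.
alice, bob = np.random.rand(128), np.random.rand(128)  # placeholder embeddings
print(match_score(alice, bob, stance_a=+0.8, stance_b=-0.7))
```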
Why would people join in the first place?
The actual reason that someone might agree to this depends completely on the person and the question. By assumption, the tool is offering these collaborations to people who could be open to changing their mind. On some level, these people care about being right.
Maybe they’ll invest based on their new opinion and make money. Maybe they’ll change their diet and get healthier. Maybe they’ll change how they interact with their family, and have a happier life. There are a lot of questions where having a realistic model of the world does make your life directly better, and helping people to answer those questions well will have immediate impacts.
This works even for the people who are more right to begin with. Say you’re convinced that vitamin C keeps you from getting sick. To convince someone who disagrees, you may have to learn how much to take, when to take it, and the cases where it doesn’t have an impact. Even if you and your collaborator come away with largely your original opinions, you’ll have a much better and likely more actionable set of information after the fact.
Adversarial collaboration is valuable
While being right about things feels good, making good decisions is its own reward. From a societal perspective, increasing the number of people who can make good decisions is fantastic.
There’s another individual benefit that I think will accrue here. People like to talk a lot about “crises of meaning” and “the loneliness epidemic”. It’s hard to make friends. One of the common recommendations to make a really good friend is to do a project together, and this kind of collaboration might be a great fit.
I honestly think most Adversarial Collaborations wouldn’t lead directly to great friendships, but some would. The ones that don’t would hopefully help people engage with others in a more honest and accepting way (especially if the soft-touch tips from the LLM help the way I hope they will).
Adversarial Collaborations have all kinds of side effects that I like. They connect people to each other about things they care about. They provide motivation to learn, and to learn how to learn. They help address factual questions that are already kinda settled, just not for everyone.
They may also help fix the information environment we find ourselves in. There are a lot of experts around, but it’s hard to identify who to trust and who’s just faking it. Adversarial Collaborations can help surface the true experts and submerge the charlatans. If collaborators agree to share their output (which would be opt-in), those collaborations could assign “trustedness” scores to different sources.
There’s one important difference between these “trustedness” scores and the kind of fact checking that happens today. The Adversarial Collaboration “trustedness” score would just be a measure of how much people actually do trust a source after thinking about it, and arguing with someone about it, at length. That’s very different from a reporter or politician handing down someone’s trustworthiness from on high.
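As a back-of-the-envelope illustration, the score could be as simple as the smoothed fraction of opted-in collaborations whose participants, after arguing it out, both ended up trusting a source. The data model below (one vote per collaboration per source) is purely my assumption.

```python
# Sketch of aggregating opt-in collaboration outcomes into a "trustedness" score.
from collections import defaultdict

votes: dict[str, list[bool]] = defaultdict(list)


def record_outcome(source: str, both_trusted_it: bool) -> None:
    """Called when a collaboration opts in to sharing: did both arguers,
    after digging in, end up trusting this source?"""
    votes[source].append(both_trusted_it)


def trustedness(source: str, prior_trust: float = 0.5, prior_weight: int = 2) -> float:
    """Fraction of collaborations that trusted the source, smoothed toward a
    neutral prior so one or two votes don't swing the score too hard."""
    outcomes = votes[source]
    return (sum(outcomes) + prior_trust * prior_weight) / (len(outcomes) + prior_weight)


record_outcome("example-podcast-episode", True)
record_outcome("example-podcast-episode", False)
print(trustedness("example-podcast-episode"))  # 0.5 with these two votes
```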
But even if the wider social improvements don’t end up as large as I hope, winning friends and influencing people might just be enough.