DIY Asymmetric Weapons With Symmetric Weapons And Bayescraft

Epistemic status: Follow-up to this post. Fairly well considered; a few hours of total epistemic effort. Substantially more confident than before that this is correct, but I still feel very icky about it.

An asymmetric weapon is any strategy that has a higher probability of winning, p(Win), when it is aligned with one side of its axis of asymmetry than with the other. In other words, p(Win | X) > p(Win | ~X). This is exactly the condition for a win to count as Bayesian evidence in favor of X. Asymmetric weapons are just tools that happen to serve both as Bayesian evidence for some particular X and as instruments for changing the world.
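To make that concrete, here is a minimal sketch in Python of the update a single observed win gives you. The win rates (0.7 and 0.4) and the 50/50 prior are made-up numbers for illustration, not claims about any real conflict:

```python
# Bayes' rule applied to one observed win of an asymmetric weapon.
# All probabilities here are made-up illustrative numbers.

def posterior_after_win(prior_x, p_win_given_x, p_win_given_not_x):
    """P(X | Win) from a prior on X and the two conditional win rates."""
    p_win = p_win_given_x * prior_x + p_win_given_not_x * (1 - prior_x)
    return p_win_given_x * prior_x / p_win

# A weapon that wins 70% of the time when aligned with X
# and 40% of the time otherwise:
print(posterior_after_win(0.5, 0.7, 0.4))  # ~0.636: the win is evidence for X
```

Since p(Win | X) > p(Win | ~X), each observed win pushes the posterior on X upward by the likelihood ratio, 0.7/0.4 in odds form.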

But if that’s all an asymmetric weapon is, that means we can make our own! Just staple some reasoning that provides evidence onto whatever symmetric weapon you want, and it becomes asymmetric. The policy “use propaganda to advocate for climate action only if the evidence supports climate change” is asymmetric. It will win more often if climate change is real than if it is not. The condition on evidence provides the asymmetry, and the propaganda provides the weapon. Easy as pie.
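A quick Monte Carlo sketch of that stapled-together policy shows the asymmetry appearing; every probability below is a hypothetical parameter I picked for illustration:

```python
import random

# Toy simulation of "use the symmetric weapon only if the evidence check passes".
# Every probability below is a hypothetical parameter chosen for illustration.

def run_policy(claim_is_true, trials=100_000,
               p_check_correct=0.8,  # how often the evidence check gets it right
               p_win_if_fight=0.6,   # the symmetric weapon's win rate
               p_win_if_idle=0.2):   # baseline win rate if we sit it out
    wins = 0
    for _ in range(trials):
        # The check passes more often when the claim is actually true.
        p_pass = p_check_correct if claim_is_true else 1 - p_check_correct
        fighting = random.random() < p_pass
        wins += random.random() < (p_win_if_fight if fighting else p_win_if_idle)
    return wins / trials

print(run_policy(claim_is_true=True))   # ~0.52
print(run_policy(claim_is_true=False))  # ~0.28, so p(Win|X) > p(Win|~X)
```

The weapon itself is perfectly symmetric; the evidence gate alone is what makes the combined policy win more often in worlds where the claim is true.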

In my previous post on symmetric and asymmetric weapons, I left off by concluding that using symmetric weapons may inevitably be the optimal strategy. I think I failed to think in sufficient detail there, which means I missed an important point, and now there are two disconnected blog posts on the topic instead of just one 🙁

The core question this whole thing has been trying to answer is roughly “is it a more winning strategy to use asymmetric weapons that will course-correct me if I’m wrong, or to use symmetric weapons and just trust my reasoning that I’m right?” What this suggests is that the latter policy is actually asymmetric! The “trust my reasoning” part introduces an asymmetry toward truth: an agent following this policy will be significantly better at fighting for truth on average, because if they don’t believe they are aligned with the truth they won’t fight, and (hopefully) their reasoning abilities will let them detect this correctly more often than chance. The two strategies are practically identical; the asymmetric one just gives up some world-affecting power in exchange for more evidence of correctness, while the symmetric one does the opposite.
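Here is one way to see that trade-off numerically; a sketch under assumed parameters (the weapon strengths and reasoning reliabilities are invented for illustration, not measurements of anything):

```python
# Hedged sketch of the power-versus-evidence trade-off. Weapon strengths
# and reasoning reliabilities are made-up parameters, not empirical claims.

def policy_stats(weapon_strength, reasoning_reliability, p_win_idle=0.2):
    """Return (P(Win|X), P(Win|~X)) for 'fight only if I conclude X is true'."""
    def win_rate(x_is_true):
        p_fight = reasoning_reliability if x_is_true else 1 - reasoning_reliability
        return p_fight * weapon_strength + (1 - p_fight) * p_win_idle
    return win_rate(True), win_rate(False)

# Asymmetric weapon: weaker effect, strong built-in truth filter.
# Symmetric weapon + trusted reasoning: stronger effect, weaker filter.
for label, strength, reliability in [("asymmetric", 0.5, 0.95),
                                     ("symmetric + trust", 0.7, 0.75)]:
    p_t, p_f = policy_stats(strength, reliability)
    print(f"{label}: P(Win|X)={p_t:.3f}, P(Win|~X)={p_f:.3f}, "
          f"likelihood ratio={p_t / p_f:.2f}")
# asymmetric:        P(Win|X)=0.485, P(Win|~X)=0.215, likelihood ratio=2.26
# symmetric + trust: P(Win|X)=0.575, P(Win|~X)=0.325, likelihood ratio=1.77
```

Under these assumed numbers, the asymmetric configuration produces stronger evidence per win, while the symmetric-plus-trust configuration wins more often when it happens to be right, which is exactly the exchange described above.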

Now let’s consider some implications. First, the scary one. Consider one of the important quotes from Scott Alexander’s original post on symmetric weapons:

Unless you use asymmetric weapons, the best you can hope for is to win by coincidence.

https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/

An important point of my last post was that setting up norms that encourage asymmetric weapons is still a symmetric strategy. In other words, the only way to get to the point where asymmetry will let one side win more often than mere coincidence still involves using symmetric weapons. And that means it’s still totally a matter of coincidence which side starts off with the power or insight or luck to set up those norms first. Really, this all boils down to a very simple anticlimax. Either we live in a world where the good guys end up winning, or we don’t. Asymmetric weapons can’t change that. Either way, the best you can hope for is coincidence.

So where does this leave us? Our hope that asymmetry would always help us win more often than mere coincidence appears to be a bust. Setting up norms that systematically favor asymmetric weapons requires the use of symmetric weapons. But using symmetric weapons in combination with Bayesian reasoning replicates the effect of an asymmetric weapon. The choice between the two boils down to a trade-off between marginal bits of knowledge and marginal amounts of power. And we can’t really use the traditional “good guys don’t fight dirty” heuristic anymore.

There is no known procedure you can follow that makes your reasoning defensible.

There is no known set of injunctions which you can satisfy, and know that you will not have been a fool.

There is no known morality-of-reasoning that you can do your best to obey, and know that you are thereby shielded from criticism.

https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science

This leaves us exactly where we started. Do your best to figure out the truth, the right thing to do, the most effective way to make the world better. Don’t pick your allies based on the weapons they use, but based on the reasoning they use to guide them. If their justifications for where they point their weapons seem asymmetric toward truth, beauty, and butterflies, great. If the justification is indistinguishable from rhetoric and seems utterly symmetric, or even asymmetric in a bad direction, not-as-great.

To torture a metaphor: the lightsabers are all the same color, but even a dark-hooded figure who can make a truth-seeking case for their position should be more convincing than a white-robed counterpart spewing nonsense and malice.