# ‘Weighting’ opinions — how to get close(ish) to perfect information as a lazy person

Note: None of the below content originates from me. This is just a compilation of things I’ve learnt from more educated thinkers than myself, most prominently Robert Wiblin, who introduced me to the concept, Simon Grant, my behavioural economics professor who formalised a lot of Rob’s concepts in class, as well as Flint O’Neil and Louis Becker, who routinely point out when I’m radically misapplying rational choice/probability theory.

I face a constant dilemma as someone who is interested in policy but is also super lazy: I’d much prefer to spend an evening blowing things up in Just Cause 3 than reading Productivity Commission reports. I’d like some way of synthesising a lot of passively absorbed information (newspaper reports, opinions of PhD or otherwise educated friends, et cetera), but obviously none of these sources is super credible, so it’s problematic to base my opinions comprehensively thereon. My proposed solution: quick heuristic judgements of credibility, so that you can ‘update’ your opinion in light of evidence of variegated quality.

A quick digression into explaining Bayesian updating (if you’re familiar with probability theory, skip to ‘All well and good’). The basic idea is figuring out how likely a given statement is to be true when you see some evidence for or against that statement. For instance, I might be trying to figure out the likelihood that a person I meet is an irritating Marxist ideologue (IMI). At ANU, approximately 20 per cent (or 0.2) of the student body are IMIs, so there’s a 0.2 chance that a randomly selected student is an IMI. But then say I realise they’re carrying an undergraduate anthropology textbook. If it’s their textbook, then they’re studying undergraduate anthropology, so there’s a 90 per cent chance that they’re an IMI. But it might not be their textbook! What if they’re just carrying it for a friend? Or what if they’re taking it to the annual ‘holy shit Marxism is dumb’ book burning? In that case, they’re probably no more likely to be an IMI than a randomly selected ANU student! So I do the following calculation: given that they’re holding this textbook, what is the probability they’re an IMI? Doing a quick bit of probability theory, we let:

$\mathbb{P} [I] := \{$probability that this student is an IMI $\} = 0.2$
$\mathbb{P} [\neg I] := \{$probability that this student is not an IMI$\} = 0.8$
$A := \{$this student is carrying an anthropology textbook$\}$
$\mathbb{P} [I | A] := \{$probability of being an IMI given that they’re carrying the anthropology textbook$\}$

If that notation is annoying or confusing, just follow the general intuition: we figure out the probability of them being an IMI and also holding the textbook, and then divide it by the total probability of all the ways they could be holding the textbook. That way, we can figure out the probability of our thesis being true (this student is an IMI) given that we’ve seen some evidence for it (they are holding the textbook). Using the literal beast of probability theory, we apply Bayes’ Rule and get:

$\mathbb{P}[I|A] = \frac{\mathbb{P}[A\cap I]}{\mathbb{P}[A\cap I]+\mathbb{P}[A \cap\neg I]} = \frac{\mathbb{P}[A|I] \times \mathbb{P}[I]}{\mathbb{P}[A|I]\times \mathbb{P}[I] + \mathbb{P}[A| \neg I] \times \mathbb{P}[\neg I]} = \frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.1 \times 0.8} \approx 0.6923$

Why is this number important? Well, note that our other probabilities for this person being an IMI were 0.2 if we treat them as a randomly selected student, which is way too low, or 0.9 if we naively take the textbook at face value, which is way too high. If we were really not in the mood for a Marxist diatribe, it’s possible that 0.9 is sufficiently high that we’d want to just ignore them entirely, but ~0.7 is low enough to warrant being polite. So when we use Bayes’ Rule, we get a comparatively better estimate than either our probability from seeing the textbook or our probability from them being a randomly selected student.
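If you’d rather see the arithmetic than the notation, the calculation is easy to script. A minimal sketch in Python (the function name `bayes_posterior` is my own; the numbers are from the example above):

```python
def bayes_posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing evidence, via
    Bayes' Rule: P[I|A] = P[A|I]P[I] / (P[A|I]P[I] + P[A|~I]P[~I])."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# The IMI example: prior 0.2, P[textbook | IMI] = 0.9, P[textbook | not IMI] = 0.1
posterior = bayes_posterior(0.2, 0.9, 0.1)
print(round(posterior, 4))  # 0.6923
```

Note that the posterior lands strictly between the 0.2 base rate and the 0.9 naive conditional, exactly as the paragraph above describes.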

All well and good, I hear you say, but how does this apply to policy? Isn’t this calculation too annoying to compute every time someone tells me something? That’s the thing: it probably is. It’s also not immediately intuitive how someone handed a list of probabilities should interpret that information. This is where probabilistic ‘weighting’ comes in.

Notice in the Bayes’ Rule equation that we have some ‘prior’ belief about the state of the world, our $\mathbb{P}[I]$ in this case. We can treat that as more-or-less exogenous (determined outside the evidence given). Our ‘posterior’ belief, the probability $\mathbb{P}[I|A]$ in this case, is the probability that the statement $I$ is true given our new information $A$.

But we can simplify this whole equation down: we don’t need the full fractional representation. That’s good for specific probabilistic questions, but all we want to know when someone tells us something is ‘should I believe this statement more, less, or the same given what you’ve just told me?’. And we can get this through the simple difference $\mathbb{P}[A|X] - \mathbb{P}[A| \neg X]$. This gives us an approximate direction and magnitude for how we should move our opinion: in the anthropology student example, the difference would be $\mathbb{P}[A|I] - \mathbb{P}[A| \neg I] = 0.9 - 0.1 = 0.8$, so we should ‘up weight’ our belief that the student is an IMI after this information.
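In code, the whole heuristic really does reduce to one subtraction. A sketch (the name `evidence_weight` is mine, not standard terminology):

```python
def evidence_weight(p_evidence_if_true, p_evidence_if_false):
    """Direction and rough magnitude of a belief update:
    positive -> up-weight the belief, negative -> down-weight it,
    near zero -> the evidence is roughly uninformative."""
    return p_evidence_if_true - p_evidence_if_false

# Anthropology textbook example: P[A|I] = 0.9, P[A|~I] = 0.1
print(evidence_weight(0.9, 0.1))  # 0.8
```

A difference of zero means the source says the same thing whether or not the statement is true, so their opinion carries no information at all.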

Notice that it works in reverse. Andrew Bolt generally endorses shit policy and degrades good policy (*cough* a price on carbon *cough*), so for the statement $X := \{$ policy $X$ is good $\}, A := \{$ Andrew Bolt says that the policy is good $\}$ we would have something like:

$\mathbb{P}[A|X]=0.4, \mathbb{P}[A| \neg X]=0.6$
$\Rightarrow \mathbb{P}[A|X] - \mathbb{P}[A| \neg X] = -0.2$

So the number is negative when you should ‘down weight’ your beliefs about the world and positive when you should ‘up weight’ them, and how far the number is from zero tells you roughly how much you should move your beliefs.

Why is this a good system? Well, for a start, it doesn’t contradict the obvious best-practice method of forming policy opinions: reading reports, studies and papers on the given policy. If the policy is actually good, then the probability that the reports will find it to be good is high-ish, so $\mathbb{P}[A|X] - \mathbb{P}[A| \neg X]$ is probably close to 1 for a generically good piece of evidence. But the real beauty is that it lets you incorporate large amounts of decent-to-low quality information. If you consistently hear from multiple PhD candidates or multiple media sources* that a given policy is good, then you would slightly ‘up weight’ your opinion each time, as opposed to the alternative approach, where each of those sources would ‘encourage you to go read some reports’ (which, let’s be real, very few of us will do). Finally, it lets you actively incorporate anti-credible sources (ones that say the wrong thing on a routine basis), which simply ‘going and reading papers’ does not necessarily do.
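The ‘many weak sources’ point can be made concrete with repeated Bayesian updates. A minimal sketch, assuming each source is independent and only mildly credible (I’ve picked $\mathbb{P}[A|X] = 0.6$ and $\mathbb{P}[A|\neg X] = 0.4$ purely for illustration):

```python
def update(prior, p_endorse_if_good, p_endorse_if_bad):
    """One Bayesian update after a source endorses the policy."""
    numerator = p_endorse_if_good * prior
    return numerator / (numerator + p_endorse_if_bad * (1 - prior))

belief = 0.5  # start agnostic about the policy
for _ in range(5):  # five independent, mildly credible endorsements
    belief = update(belief, 0.6, 0.4)
print(round(belief, 3))  # climbs from 0.5 to roughly 0.88
```

No single source moves the needle much, but five weak independent endorsements add up to fairly strong confidence, which is the whole appeal of the heuristic for the lazy.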

So as good as it would be if we all had the time and energy to consistently go forth and conquer the academic literature surrounding policy, as an alternative for the lazy among us, consider using a ‘weighting’ heuristic instead. So long as you can figure out the reliability of a given authority, you don’t need any more computation than a simple subtraction. Start by believing things with some small weight, say 0.1, and then shift that weight up or down according to the concise $-1$ to $1$ difference $\mathbb{P}[A|X] - \mathbb{P}[A| \neg X]$.

*A technical note here: this only actually applies if the PhD candidates or media reports are ‘statistically independent’, so ideally check that the media reports draw on multiple, different primary sources, or that the PhD candidates at the very least come from different universities.
