Crap. Missed my deadline again on weekly updates. Apologies.
I'll review more of what we've been doing in sessions over the last week or two. It's been some more epistemic rationality tricks -- that is, tricks for knowing the right answer. (This is as opposed to "instrumental rationality" or tricks for achieving what you want.)
Accessing your anticipations: Sometimes our professed beliefs differ from our anticipations. For instance, some stuff (barbells and a bench) disappeared from our back yard last week. I professed a belief that they weren't stolen. I wanted to believe they weren't stolen. I said I thought it was 40% likely they weren't stolen. But when someone offered me a bet at even odds as to whether some other cause of their disappearance would arise in a week, my brain rejected the bet: Emotionally, it seemed that I would probably lose that bet. (Consistency effects made me accept the bet anyway, so now I'm probably out 5 dollars.)
When your professed beliefs differ from your anticipations, you want to access your anticipations, because those are what actually control your actions. We have learned some tricks for accessing your anticipations. One is imagining (or actually being offered) a bet: which side would you prefer to be on? Another is imagining a sealed envelope containing the answer, or a friend about to type the question into Google: what answer do you expect to see? One more is visualizing a concrete experiment that would test the belief: when the result comes in, are you surprised when it goes the way you "believe" it goes, or when it doesn't?
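To make the bet trick concrete, here's a minimal Python sketch with made-up numbers (a $5 stake and a hypothetical 20% "gut" probability): it just compares the expected value of the bet under the probability I professed versus the one my flinch suggested.

```python
# Expected value of an even-odds bet: win the stake with probability p,
# lose it otherwise. The stake and the "gut" probability below are
# illustrative assumptions, not numbers from the actual bet.

def expected_value(p_win, stake):
    return p_win * stake - (1 - p_win) * stake

professed = 0.40  # "40% likely they weren't stolen"
gut = 0.20        # what flinching away from an even-odds bet suggests

print(expected_value(professed, 5.0))  # -1.0: already a losing bet at 40%
print(expected_value(gut, 5.0))        # -3.0: much worse under the gut estimate
```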
The naïve belief-testing procedure: when you're wondering about something, ask yourself the following questions: "If X were true, what would I see? If X weren't true, what would I see?"
I'll use the weights-stolen example again. If the weights were stolen, I would expect to see the weights missing (yes); to see that *all* of them were taken (in fact, the kettlebell and a few of the weights were left behind); to find that the gate had been left open the previous night (undetermined); and to find that other things were stolen too (not the case). If the weights weren't stolen, I would expect them to come back, or to eventually hear why they were taken (I haven't heard anything); and to learn that the gate had been locked (I didn't learn this).
This can help you analyze the evidence for a belief. It highlights the evidence in favor of the belief, and does an especially good job of making you realize when certain bits of evidence aren't very strong, because they support both sides of the story. (Example: when testing the belief "my friend liked the birthday present I gave him," you might consider the evidence "he told me he liked it" to be weak, because you would expect your friend to tell you he liked it whether or not he actually liked it.)
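One way to make this less fuzzy (a sketch, with all the probabilities invented for illustration) is to guess, for each observation, how likely it would be if the belief were true and how likely if it were false; observations that are about equally likely either way are the weak ones.

```python
# For each observation, guess P(obs | belief true) and P(obs | belief false).
# A ratio near 1 means the observation barely distinguishes the two worlds.
# All numbers here are invented for illustration.

evidence = {
    # observation: (P(obs | he liked the gift), P(obs | he didn't))
    "he said he liked it":      (0.95, 0.80),  # he'd be polite either way -> weak
    "he used it the next week": (0.60, 0.10),  # much more telling
}

for obs, (p_if_true, p_if_false) in evidence.items():
    print(f"{obs}: likelihood ratio {p_if_true / p_if_false:.1f}")
```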
It is called "naïve" for a reason, though: it doesn't make you think about the prior probability of the belief being true. Consider the belief "the moon is made of Swiss cheese." I would expect to observe the moon's surface as bumpy and full of holes if it were made of Swiss cheese, and I would be less likely to observe that if it weren't. I do observe it, so it is evidence in favor of that belief, but the prior probability of the moon being made of cheese is quite low, so it still doesn't make me believe it.
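To see how much work the prior is doing, here's a tiny Bayes' theorem calculation with invented numbers: even if a bumpy, hole-filled surface is three times as likely under the cheese hypothesis as otherwise, a one-in-a-million prior keeps the posterior tiny.

```python
# Bayes' theorem with an invented prior and invented likelihoods: the bumpy
# surface is evidence for the cheese hypothesis, but the prior dominates.

p_cheese = 1e-6               # assumed prior: one in a million
p_bumpy_if_cheese = 0.9       # assumed
p_bumpy_if_not = 0.3          # assumed

p_bumpy = p_bumpy_if_cheese * p_cheese + p_bumpy_if_not * (1 - p_cheese)
posterior = p_bumpy_if_cheese * p_cheese / p_bumpy
print(f"{posterior:.7f}")     # ~0.000003: still not worth believing
```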
In order to better understand the effects of priors and bits of evidence, we've been studying the Bayesian model of belief propagation. You should read An Intuitive Explanation of Bayes' Theorem by Eliezer, if you're interested in this stuff. I'll just point out a few things here.
First, we model pieces of evidence (observables) as pertaining to beliefs (hidden variables). When determining your level of belief in an idea, you start with a prior probability of the belief being true. There are many ways to come up with a prior, but I use a fairly intuitive process based on how often similar beliefs have turned out to be true in my past experience. I could use my own experience as the prior -- "When people give me a small gift, I like it only about 30% of the time." So for someone else liking my small gift, I could start with a prior probability of 30%.
If I don't have past experience that would lead me to a prior, I could start with some sense of the complexity of the belief. I haven't been to the moon. "The moon is made of cheese" is a very complex belief, because it requires me to explain how the cheese got there. "The moon is made of rock" is much simpler, because I know of at least one other large body made of rock (the Earth). I might still have to explain how the rock got there, but at least there's another example of a similar phenomenon.
Once you have your prior, you consider the evidence. You can formulate evidence in terms of likelihood ratios, or in terms of the probabilities of observing the evidence given that the belief is true or false; you can mathematically transform one into the other, and then compute the new degree of belief (the posterior probability) from the prior and the likelihood ratio. The math is simple: if your prior is 1:3 odds (25%) and the evidence was 10 times more likely to be produced if the belief were true than if it were false -- 10:1 -- then you multiply the odds (1:3 * 10:1 = 10:3) and end up with posterior odds of 10:3, which is 10/13, or about 77%. (If this didn't make sense, go read Eliezer's intuitive explanation, linked above.)
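Here's the same arithmetic as a short Python sketch, with odds written as (for, against) pairs; updating is just multiplying the two pairs elementwise.

```python
from fractions import Fraction

def update(prior_odds, likelihood_ratio):
    """Multiply prior odds by a likelihood ratio; both are (for, against) pairs."""
    return (prior_odds[0] * likelihood_ratio[0], prior_odds[1] * likelihood_ratio[1])

def to_probability(odds):
    return Fraction(odds[0], odds[0] + odds[1])

posterior = update((1, 3), (10, 1))      # 1:3 prior, 10:1 evidence
print(posterior)                          # (10, 3)
print(float(to_probability(posterior)))   # 0.769..., i.e. about 77%
```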
We practiced this procedure of doing Bayesian updates on evidence. We also explored the ramifications of Bayesian evidence propagation, such as the (somewhat) odd effect of "screening off": if you know your grass could get wet from either a sprinkler or the rain, and you observe that the grass is wet, you assign some level of belief to each of the propositions "it rained recently" and "the sprinkler was running recently". If you then hear someone complaining about the rain, you should downgrade the probability that the sprinkler was also running, since rain alone was sufficient to cause the wet grass, and most of the situations that produce wet grass don't involve both rain and a running sprinkler.
The idea of screening off seems rather useful. Another example: suppose you think that either "being really smart" or "having lots of political skill" is sufficient to become a member of the faculty at Brown University. When you observe a faculty member, you guess they're probably quite smart; but if you later observe that they have lots of political skill, you should downgrade the probability that they're really smart.
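To check the wet-grass version numerically, here's a brute-force sketch over a toy joint distribution (all the numbers are invented): conditioning on wet grass raises belief in the sprinkler, but once you also know it rained, the sprinkler drops back to its base rate.

```python
# Toy joint distribution for rain, sprinkler, and wet grass (invented numbers).

from itertools import product

P_RAIN, P_SPRINKLER = 0.3, 0.2   # assumed base rates; the causes are independent

def p_wet(rain, sprinkler):
    return 0.9 if (rain or sprinkler) else 0.05   # assumed

def joint(rain, sprinkler, wet):
    p = (P_RAIN if rain else 1 - P_RAIN) * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    p_w = p_wet(rain, sprinkler)
    return p * (p_w if wet else 1 - p_w)

def p_sprinkler_given(condition):
    """P(sprinkler | condition), by brute-force enumeration of the joint."""
    num = den = 0.0
    for rain, sprinkler, wet in product([True, False], repeat=3):
        if condition(rain, sprinkler, wet):
            den += joint(rain, sprinkler, wet)
            if sprinkler:
                num += joint(rain, sprinkler, wet)
    return num / den

print(p_sprinkler_given(lambda r, s, w: w))         # P(sprinkler | wet)       ~0.42
print(p_sprinkler_given(lambda r, s, w: w and r))   # P(sprinkler | wet, rain)  0.20
```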