The third session of mind-charting: I'm starting to think I've gotten as much out of this exercise as I'm ever likely to. I really do feel that I'm making up explanations for some of my actions just to make my diagram fit the theory. Still, I'm quite glad I did the exercise, as I seem to have discovered several things about my motivational system that I wasn't previously fully aware of.

The afternoon session was Eliezer teaching more rationality and giving an impromptu talk on probability theory and, specifically, Bayes' Theorem. The latter, I think, could have been better explained - Eliezer did not have much time to prepare - but it did include one interesting idea I hadn't seen before; more on that later. The first section was entitled "How not to win an argument". We did the following exercises:

First, for avoiding motivated skepticism: practise setting a standard of proof for the position you want to maintain so high that you cannot be proved wrong:

e.g. "I don't need glasses unless... I can't distinguish red traffic lights from green... I notice a problem in my everyday life... I am unable to read...".

After you've practised this skill, DON'T DO IT.

We also played an updating game: we started by estimating our probability for the proposition "Sweden is the best country in the world to live in", then updated that estimate as we slowly read through the Wikipedia article.

Then there was the probability talk, which included one useful trick, illustrated with the following example: consider three barrels with 10 balls in each. Two contain 9 green balls and 1 red; one contains 5 of each. Before picking a ball, you are equally likely to be in front of any barrel.

One way to think about this problem: imagine units of credibility assigned to each hypothesis. We currently place 2 units of credibility on the "I'm in front of a mostly green barrel" side of the scale and 1 on the other side. If we pick a red ball, this dilutes our support for "mostly green" by a factor of 1/10 and our support for "mostly red" by only 1/2, so our new odds are 2×1/10 : 1×1/2 = 1/5 : 1/2 = 2:5, i.e. the odds now favour the mostly-red barrel.

This is not exactly the way Eliezer presented it, but it is a useful way of thinking about Bayesian updating that I hadn't come across before.
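The barrel example can be sketched in a few lines: start with the prior odds and multiply each side by the probability of the observed evidence under that hypothesis. (This is my own illustration of the odds-ratio trick described above, not Eliezer's presentation.)

```python
from fractions import Fraction

def update_odds(prior_odds, likelihoods):
    """Multiply each hypothesis's odds weight by the probability
    it assigns to the observed evidence."""
    return [p * l for p, l in zip(prior_odds, likelihoods)]

# Hypotheses: "mostly green barrel" vs "mostly red barrel".
# Two of the three barrels are mostly green, so prior odds are 2:1.
prior = [Fraction(2), Fraction(1)]

# P(draw a red ball | hypothesis): 1/10 for a mostly-green barrel,
# 5/10 for the mostly-red one.
likelihood_red = [Fraction(1, 10), Fraction(5, 10)]

posterior = update_odds(prior, likelihood_red)
# posterior is [1/5, 1/2], i.e. odds of 2:5 (green : red)
```

Normalising the final odds (1/5 + 1/2 = 7/10) gives a 2/7 probability of standing in front of a mostly green barrel.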

Of course, if you're doing these problems in real life, you're probably still much better off with natural frequencies. I think this was clearly established by Eliezer's second example, a standard disease-testing problem, which I personally found quite a lot harder using odds ratios than natural frequencies - though that might just be because I'm used to the latter.
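The two routes do agree; they just feel different. Here is a disease-testing sketch with numbers I've chosen for illustration (not the ones from the talk): 1% prevalence, 90% sensitivity, 10% false-positive rate.

```python
from fractions import Fraction

# Natural-frequency route: imagine 1000 people.
sick, healthy = 10, 990
true_pos = 9           # 90% of the 10 sick people test positive
false_pos = 99         # 10% of the 990 healthy people test positive
p_sick_given_pos = Fraction(true_pos, true_pos + false_pos)  # 9/108 = 1/12

# Odds-ratio route: prior odds 10:990, multiplied by the
# likelihood ratio P(+|sick) : P(+|healthy) = 90% : 10% = 9.
prior_odds = Fraction(10, 990)
likelihood_ratio = Fraction(90, 10)
posterior_odds = prior_odds * likelihood_ratio               # 1/11

# Converting odds back to a probability recovers the same 1/12.
assert posterior_odds / (1 + posterior_odds) == p_sick_given_pos
```

The natural-frequency version lets you count people directly, which is probably why it feels easier; the odds version shines when the hypothesis isn't a neat sampling problem.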

There were poker lessons in the evening, which mostly consisted of a brief presentation on how humans have a tendency to see patterns in noise, followed by playing poker for the next four hours... I came out $2 up.

There is now a fairly serious conversation going on in the kitchen over the question of whether, in the Marvel Universe, you need a PhD in order to be allowed to battle good or evil: this conversation includes several RBCers and the president of SingInst. It does seem like the sort of thing you could just look up (in fact, I just did... Magneto doesn't have a PhD), but it's pretty indicative of the sort of thing we tend to be talking about.

## Wednesday, 15 June 2011


I started using odds ratios regularly a couple of months ago, and I have a better understanding of Bayesian updating as a result. I agree that if a situation is easily framed as a sampling problem, as in a diagnostic context, natural frequencies are better. For a complex proposition that wouldn't appear in a probability textbook, like "Minimum wages increase unemployment", odds ratios give a better feel for how credence shifts.
