Thursday, 14 July 2011

Basic Rationality Training

I guess it's well past time to report on Anna's sessions. I would summarize them as "basic rationality training," and they encompass a wide variety of skills, which together tend to produce more accurate thought and more productive conversation.

Asking for examples was the first one. Humans seem to think much more accurately with examples than with abstract ideas. Example: your friend goes around saying "Harry Potter is stupid". You could interpret this in many ways: the book itself is stupid, or he doesn't like the book, or the character is stupid, or the character does stupid actions sometimes, etc. What a lot of people will do here is argue with whatever interpretation jumps to their mind first. Instead, what you should do is ask for an example. Maybe your friend will say "well, the other day I was walking down the hallway and someone jumped out from a door and shouted 'avada kedavra' at me, and I was like 'really?'." Now you probably understand what interpretation he means, and you won't go arguing about all the smart things that Harry does in the books (or in the Methods of Rationality).

Noticing rationalization and asking for the true causes of your beliefs: I've talked about these in "How to Enjoy Being Wrong." Briefly, when someone says something you disagree with, ask yourself why you disagree -- what experience you had that led to your belief.

Fungibility: when you notice you're doing an action -- perhaps because someone asks you "why do you do X?" -- ask yourself what goals the action is aiming for, then notice whether there are other ways of achieving those goals. If you feel resistance to a new strategy, you probably did the action for some other reason -- go back, find that goal, and try to optimize it separately.

Example: I read science fiction novels. What goals does this serve? Imagining the future is attractive; spending time with a book in my hands is pleasant; I want to follow the plot.

To achieve the "imagining the future" and "follow the plot" goals, I could go read spoilers of the novel on the Internet. But I feel resistance to that strategy. Hmm. I might miss something? That's probably not it... I guess I just get some sort of ego-status by knowing that I read the whole book without skipping anything. And this makes me realize that I read books because I want to seem well-read.

Anyway, if you do this procedure on a regular basis, sometimes you'll notice actions which don't make much sense given the goals you're optimizing for. When you do, you can make a change and optimize your life.

Value of Information and Fermi Calculations: Do math fast and loose to determine whether you're wasting your time. One of the most useful pieces of math is how much a piece of information is worth. For instance, if I'm trying to start a company selling pharmaceuticals on the internet, I want to know what the regulations will be like, and I don't see an easy way to estimate this from what I already know -- I'd have to do research. But I can estimate the size of the market: maybe 100M people in the US take drugs regularly, and lots of drugs cost well over $1 a day, so over $100M/day -- call it $50bn/year. My business sense tells me that regulations are likely to be the main barrier to entry for competitors (there's so much incentive for the existing players to put up barriers that they've probably done it).
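A quick sketch of that Fermi estimate in code. The $1.40/day figure is my own illustrative pick for "well over $1 a day"; it's just one way to land near the post's ~$50bn/year round number.

```python
# Fermi estimate of the US drug-spending market, using the post's rough inputs.
people = 100e6          # US residents taking drugs regularly (post's estimate)
spend_per_day = 1.40    # dollars/day; "well over $1" -- 1.40 is an illustrative pick

market_per_day = people * spend_per_day        # ~$140M/day
market_per_year = market_per_day * 365         # ~$51bn/year

print(f"${market_per_day / 1e6:.0f}M/day, ${market_per_year / 1e9:.0f}bn/year")
```

The point of a Fermi calculation is that single-digit precision is enough: whether the market is $35bn or $51bn doesn't change the conclusion that it's large.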

Let's write out the probabilities:

  • Target: 10% chance of regulatory burden being surmountable by a startup

  • 25% chance of me actually deciding to try and execute on the idea, given that the regulatory burden seems surmountable

  • 5% chance of me succeeding at getting 1% of the estimated market ($500M/yr), given that I decide to execute on the idea

  • 25% is my estimate of the share of the company I will own if I succeed

So the expected value of getting this information appears to be at least $100,000, if it can actually establish that 10% probability. I can probably obtain this information for a lot cheaper than that, so I should go look up regulatory burdens of starting an online pharmaceutical company.
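The chain of estimates above multiplies out as follows (all numbers are the post's; this just checks the arithmetic behind the "at least $100,000" claim):

```python
# Back-of-the-envelope value-of-information calculation,
# chaining the probabilities listed above.
p_surmountable = 0.10   # regulatory burden surmountable by a startup
p_execute = 0.25        # I actually try the idea, given surmountable
p_succeed = 0.05        # I capture 1% of the market, given I execute
revenue = 500e6         # 1% of the ~$50bn/yr market estimate
ownership = 0.25        # my stake in the company if it succeeds

expected_value = p_surmountable * p_execute * p_succeed * revenue * ownership
print(f"${expected_value:,.0f}/yr")  # ~$156,250/yr, comfortably above $100k
```

Since an afternoon of reading up on pharmacy regulations costs far less than $100k, the research is clearly worth doing -- which is the whole point of the exercise.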

Obviously this analysis has a lot of flaws, but it still seems very useful.

Sunk cost fallacy and self-consistency: Noticing when you are making decisions based on sunk costs, or based on a previous decision you want to stay consistent with. Poker example: I was bluffing on a hand, someone raised me, and I was tempted to reraise just to stay consistent with the bluff. Non-poker example: driving toward the grocery store, I realize I should go to the bank first, because otherwise I'll have ice cream melting in the trunk on the way home. Even though going to the bank first would be faster and wouldn't risk my ice cream, I'm already headed toward the grocery store, and I don't want to turn around now.

That's most of what we've covered. It's really useful and applicable material. Even if you already understand the theory from reading, I've found I learn the material better by being forced to come up with examples on the fly.
