Saturday 13 August 2011

RBC review exercises

Well, I'm at the end of Rationality Mega-Camp. They seem to have saved a lot of the good stuff for last -- the sessions this week were astoundingly helpful, and I expect the exercises from this week to be worth repeating at least every few months.

On Monday we did a great exercise called "Mapping your rationality strengths and weaknesses", and on Wednesday we did one called "Using RBC to create a good life."

Mapping your rationality strengths and weaknesses:

For each of the below subskills, analyze it with respect to:
  • When do you use it in your personal relationships?
  • When do you use it in your career? (work, schooling, startups, long-term planning)
  • When do you use it to understand yourself?
  • When do you use it to model abstract issues or the outside world? (politics, the economy, existential risk, etc.)
  • What's the biggest obstacle to using it more?
  • Brainstorm at least 5 simple tasks to help you do this subskill more.
Subskill 1: Actually wanting an accurate map. Want an accurate map more than you want to believe your map is accurate, your past actions are justified, or your opinions are respectable.

Subskill 2: Use fungibility. (The procedure where you ask what goals an action serves; ask what other ways you can think of to achieve those goals; check for resistance if you find what looks like a better way.) Do this procedure often, and find that it leads you to better plans and better paths to executing those plans.

Subskill 3: Bother to form models of the world. Be curious. Be specific and ask for examples. Have anticipations ("What would I see, if X were true? What would I see, if X were not true? Which do I see?"). Write down your predictions, and update your calibration.

Subskill 4: Know your own motives. Have a moment-to-moment awareness of your own emotions and the motivations guiding your thoughts. Notice rationalization. Notice fear. Notice lack of curiosity.

Subskill 5: Keep your eyes on the prize. Focus your effort on the issues most relevant to your goals, notice when you're going off on a tangent, and ask "is this the right path?"

Subskill 6: Take ideas and reasoning seriously. Expect the world to make sense, realize that nothing works or fails by magic, trust in explicit reasoning, and use arithmetic in your daily life.

Subskill 7: Act conscientiously. (I added this one myself.) When you notice a failing, make a plan to correct it immediately; when you make a plan, execute on it; walk towards the darkness (the areas you feel weakest); constantly search for hidden aversions and ugh fields.

My answers to the rationality map

Some of this stuff is quite private, but I'll tell you the stuff that's not too private. My ratings are out of 10, where zero means I don't use this subskill at all and ten means I use it perfectly.

Actually wanting an accurate map -- relationships 4; career 7; self 9; world 8. After several attempts to understand and model my own behavior, I've gotten great positive feedback whenever I modeled myself more accurately, and so I really want to understand myself well. Lying to yourself isn't very productive. So I rate myself an 8 at this. The weakest area is obviously my personal relationships, especially with my friends; I enjoy the company of my friends, but I seem to not want to discover flaws in them, or reasons that I shouldn't be friends with them. Biggest obstacle to wanting it more is not wanting to imagine the consequences of having an accurate map, because of ugh fields. Task to improve: add regularly imagining my ugh fields to my weekly routine.

Using fungibility -- relationships 2; career 6; self 4; world 4. Applying this procedure is far from a regular occurrence in any area of my life; in the exercise below I applied it to my career and it worked well. Biggest obstacle to using this more is laziness. Next task: add it to my weekly routine.

Bother to form models of the world -- relationships 3; career 4; self 5; world 7. Weak on relationships because of not wanting the accurate map. Strong on world because that is where predictions come up regularly, so lots of opportunities to test. I do sometimes adjust my own behavior because of my self model and it does sometimes work. Biggest obstacle is actually making and writing predictions. I'm joining the Good Judgment Project which should help with world, and I'm attempting difficult motivational challenges in my career path which should help with self also.

Know your own motives -- relationships 4; career 7; self 5; world 6. Weak on relationships for above reasons. Strong on career: I can easily enumerate the tradeoffs among career choices for me. Low on self because reflection is hard. Biggest obstacle is reflection. Next task is following the self-reflective steps, especially the data-gathering ones, in the excellent luminosity sequence.

Keep your eyes on the prize -- relationships 3; career 5; self 4; world 6. Weak in relationships because I easily get distracted by positive or negative emotions. Strongest on world due to the opposite of that. Biggest obstacle is non-equanimity with regard to outcomes of these decisions. Next task might actually be meditation; I don't have a better idea there, but meditation is supposed to train equanimity.

Take ideas and reasoning seriously -- relationships 6; career 6; self 8; world 8. I have the sense that all of these are modelable and that good models would produce decent predictions; my belief is substantially less strong about my personal relationships and my own career path for some reason. Biggest obstacle is thinking that my career is subject to luck, even though I know that's not really true. Next task is to convince my elephant of this; I guess I'll try IFS on myself.

Act conscientiously -- relationships 3; career 5; self 6. Weak on relationships because I'm afraid to offend people. Strong on self; I do actually do some of the reflective things I say I will do. Medium on career because I do take some actions, but probably later than I should. Biggest obstacle is overcoming social fear ugh field. Next task is to stare at and write about my social fear ugh field (it's on my monthly task list but I will do it this weekend).


Using RBC to create a good life:

1. List the major components of your upcoming life. (Example: do reading; write papers; play video games.) Run the "use fungibility" procedure on each.

2. List the major unknowns in your upcoming life. (Example: which career to go into; which grocery store to shop at; how much time to spend talking to professors.) Run a quick value-of-information calculation on each one.

3. Write your hypothetical character assassination, and then make a plan to counter it:

a. Suppose you were looking at another person, Bob, who is exactly identical to you. Suppose you found out that in the year after leaving Rationality Boot Camp, Bob didn't do anything remarkable at all. Explain why Bob's failure was totally predictable, and what aspects of Bob's skillset (and skill gaps) should have made everyone predict that failure.

b. Write each missing skill (that you invoked to explain Bob's failure) at the top of a blank sheet of paper. Under that heading, list the components of that skill, and then the sub-components, and perhaps the sub-sub-components... continuing as far as you need to continue until your list is filled with mundane, visibly accomplishable tasks.

c. If you're feeling any despair, talk to someone about it. Brainstorm, plan, visualize, anticipate, and problem-solve until you both (1) understand the faults you listed in part a, and (2) actually expect that you'll be able to succeed well beyond the level of success your pre-RBC self would have had (e.g., you'll be able to make $1M over the next five years if you aim for money).


I'm not going to give you my results here, because they're very private. I'll tell you about them if you're interested, but I'd rather do it in person. If you want to have an in-person meeting, email me and we'll set it up.

I think this will be my last post here. I will keep my personal blog alive, and probably put a decent amount of rationality content up there. In fact, I've added "write a blog post" to my weekly tasks.

Sunday 7 August 2011

Conscientiousness

I'm trying to figure out how to train conscientiousness.

When I imagine how a formidable rationalist would act, I imagine a particular superpower: achieving the things he or she sets out to do.

Part of this superpower is the skill of planning: choosing what goals to achieve, breaking them down into subtasks which are achievable, and so on. I'm not going to talk about planning now, though that seems important.

Instead, I'm going to talk about another required skill: conscientiousness -- following through on your plans. This means doing all the steps, not just the ones you feel like doing; doing them in a timely fashion, before your plans go stale; cutting down on procrastination and other forms of akrasia; and keeping your effectiveness high even when your motivation flags.

Since these appear to have synergistic effects, I categorize them all under "conscientiousness." Some synergistic effects: if you're working efficiently, you will quickly finish things you don't want to do, so you can get onto more fun tasks. Checking things off of a list produces positive feedback and improves your motivation level.

My own recent successes:
  • I am most of the way through signing up for cryonics, which is a stupidly long and painful process involving talking to lots of people and waiting for bits of paper to go through the mail system. (Lots of people decide that they want cryonics in general, but that they don't want it now, or something like that, when in reality, if they're young, they can get it now for less than $400/year.)
  • I adopted about six new small daily habits, including: planning every day; brushing my teeth consistently in the morning; doing Anki and N-back; and taking a multivitamin.
  • I use to-do lists regularly and they make me get all the little things done that I used to waste time on.
  • I created a better model of my own procrastination / time-wasting habits, and apply it regularly to reduce these behaviors: I browse Reddit or Hacker News when I'm "bored" of whatever I'm doing, but I've changed my default response to boredom from "do something fun and idle" to "go figure out why you're bored".
I still have a long way to go, but there were a few fundamental insights which caused me to put everything together and start developing this model.

The first useful model I built was when I read about ego depletion: the idea that self-control is a resource which can be used up. The main thing I learned from this was just that self-control isn't all-powerful in my own head. When I was younger, I believed that I was very much in control of my own actions, and that if I decided I wanted to do something difficult, I just had to try hard enough. I was disabused of this when I tried to stop biting my fingernails through sheer willpower (I still have never broken this habit), or tried to work on a boring project for a long period of time. I was amazed at how thoroughly I was able to rationalize, every day, why I couldn't work on it "today". I could summon the willpower to start working on it, but I couldn't maintain it for a long period of time without a different motivational structure.

The second useful piece of the model was my brain as a set of interconnected agents, each with its own needs and goals. This came from Rationality Boot Camp, through the sessions about The Elephant and the Rider -- IFS and mind-charting (a topic which I intended to write about, but never did -- whoops). Basically, my brain is not unified in its goals; instead it has lots of agents. I should optimize those agents individually so they don't block me from my higher-level goals.

The third useful piece of the model comes from positive affect and conditioning. I can make myself want to do something by getting positive feedback when I do that thing, so if I am trying to figure out how to convince myself to do something, I'd better figure out why I would enjoy it, or how I could make myself enjoy doing that thing.

Specific techniques I've used:

"I'll just do it for 3 minutes" -- I think I read this one on Hacker News. It only takes a bit of self-control to convince yourself to start doing a task if you know it'll only be for a short time. Once you're doing it, maybe you'll actually enjoy it and want to keep doing it. This works for me when I have a complicated programming or administration task I don't want to do, because those sorts of things are hard to motivate myself to start doing, but once I start doing it, I don't usually want to stop.

"Imagine the goals it serves" -- as Anna calls it, the "use fungibility procedure". I wrote about this before, but I'll say it again, because it comes up a lot: notice you're doing some action, figure out what goals it serves, figure out if there are other ways to achieve those goals, and then check for resistance along the new plan you've conceived. This is useful even if you're pretty sure you're achieving your goals optimally -- it is quick and it allows you to be more likely to notice options that might have just opened up, and it also makes sure you know where you're going with anything you do.

When I notice someone (including myself) saying "you should probably X" or "I should really Y", I now have a really strong affordance where I ask "when will you do that?" Tons of people seem to decide they SHOULD do something that they never really do. Automatically asking "when?" has two purposes: it asks you to commit to doing a thing which is beneficial; and it helps you notice your own bullshit, when you don't actually intend to do the thing you say you "should".

Related: asking "what's the next action?" for all your goals, all the time. (Once you've figured out the next action, figure out when you'll take that step.)

"Planning is good": You can actually achieve a lot by thinking, being strategic, analyzing your goals, figuring out other routes to achieving them, and so on. An hour a day of planning is a fair bit more than most people do, and depending on their goals, lots of people would benefit from well over that. Basically, it seems like opportunities appear all the time, but you have to regularly write down your goals to notice them. I don't think I'm averaging anywhere near an hour a day yet, but it seems like a goal to achieve. (What's the next action? Spend an hour today meta-planning!)

Keeping a notebook: I write every day in my notebook. I take five minutes every morning to plan -- write down my short, medium, and long-term goals. The notebook serves as my general scratch pad for planning too. I rarely reference anything in my notebook, but having something on paper while I'm planning serves to greatly clarify my thoughts. (Don't know if it's better than random sheets of paper -- it's more viscerally satisfying I guess, and I don't have to go hunting for pen and paper.)

Making a to-do list. When I notice I have more than a couple things to do in the day, I write them all down in one place in my notebook and check them off. It seems much easier and more satisfying than doing it in my head. (Current goal: when I start school, start a proper Getting Things Done inbox, which collects everything I need to do in one place. Next action: buy a damn box.)

If you have any ideas about how you've learned the skill of conscientiousness, please share them in the comments.

Monday 1 August 2011

Pancakes

Last Sunday at 10pm, I made some pancakes. The experience was fascinating because, having been thinking about this sort of thing quite a lot for the past few weeks, I think I can describe in some detail what led to me making pancakes, why the pancakes didn't turn out as well as they could have, and why this didn't bother me in the slightest.

Why did I make pancakes at 10 o'clock at night? Well, earlier that day, I had been out with a couple of my housemates, and we had decided to buy chocolate peanut butter (incidentally, it turns out this is not a good substitute for Nutella on pancakes) and lemon juice. Come 10 o'clock, I didn't have any particular desire to eat pancakes, and the other people I'd been out with earlier that day also did not seem interested. However, not only had we decided that we were going to make pancakes, we had also gone to the effort of stopping at a supermarket and picking up relevant condiments. Clearly, I was the sort of person who was going to make pancakes on Sunday. Equally clearly, it would have been a waste of time to buy chocolate peanut butter and lemon juice if I didn't subsequently make pancakes.

So, I started the pancake-making process. I used a recipe from the internet, because I didn't trust myself to remember exactly the proportions in which the relevant ingredients should be added. The recipe was the top-rated recipe on a fairly large internet site, and presumably large numbers of people had successfully used it before. However, there was an individual comment in the comments section underneath the recipe which claimed that the instructions were in the wrong order. Failing to properly weight statistical evidence, and giving in to my instinct to weight personal anecdotes too heavily, I decided to follow the instructions posted in the comment.

About 5 minutes into the process of making pancake batter, I realised that everything was coming out way more lumpy than I would have liked, and that I didn't have an electric whisk, or even a normal whisk. At this point, I realised that I had probably made a mistake by following the non-standard recipe, and considered the option of throwing away the batter I had made so far and starting again. However, that would have meant that I had wasted all the time I had put into making pancakes so far. It would also have meant that I had been wrong to trust the person who made the comment on the original recipe. I persevered.

About 20 minutes into the process of making pancake batter (due to my sub-optimal recipe choice, and my unwillingness to write off my sunk costs, the process of making the batter ended up taking about half an hour) I was washing up the sieve that I had used to get rid of some of the lumps in the batter (the recipe choice was seriously sub-optimal) when another of my housemates came into the kitchen and thanked me for doing the washing up. I'm not quite sure whether it was because of a consistency effect, or some form of reciprocity, but 10 minutes later, I had washed all of the dishes in the kitchen, and was ready to continue with the pancake batter...

I actually noticed that my actions were probably being guided by consistency effects a few minutes into starting the process of making batter. However, my brain was very able to come up with a variety of reasons why continuing to make the pancake batter was a good idea (I had by now promised pancakes to another housemate, I was not doing anything else anyway...). Similarly, I managed to justify not throwing the batter away once I knew it was ruined, even after noticing that I was almost certainly counting the time and eggs I'd already put in as a cost.

As I said, I have spent the last two months thinking in a fair amount of detail about this sort of thing, and it seems to have gotten me as far as being able to catalogue the motivations for my actions after the fact. It is perhaps worth noting that I had no introspective access to any of these motivations. Rather than introspection, I used the technique of looking at what I was doing, and considering the question "what might have caused a person who had been through the same experiences as me recently to be doing this thing?". I'm not sure how far I'm ever likely to get in reducing the impact these sorts of biases and bad heuristics have on my cognition. I'm not sure if I should be trying to train myself out of them, or just learning to notice them so that I can harness them for achieving more strategic goals. However, I'm fairly sure that being able to spot them is a good first step, so I am getting somewhere...

Sunday 31 July 2011

Fashion trips

Social effectiveness seems like a skill with incredibly high returns on investment. I recognize that many of my problems achieving my goals stem from fear of social situations: I'm afraid to talk to people and afraid to ask them for things. Partially I worry about making a bad impression, but mostly I think I just have some intrinsic fear of people.

Social effectiveness is not just about talking to people: it's about reliably being able to convince people of things during conversation, such as your level of intelligence, attractiveness, or social competence. If you can effectively charm rich people, you can probably get much farther in achieving your goals; if you can effectively charm attractive people, you can have a more successful sex life.

We're trying lots of ways to improve social effectiveness -- fashion being the one which I will focus on today. The way you dress provides lots of bits of data about your personality, and so you should carefully select your clothing to provide the bits you want to provide. Seeming more attractive is always better (not just in dating, but in business too!). Fashion can also make you seem conscientious (if you put yourself together carefully) or higher status.

We had our second fashion field trip this weekend, and so it seemed like time to write a post.

A few weeks ago we had sessions on what makes good fashion. Wendy already wrote about the content, so I won't go too much into depth there.

Luke and Hugh are the fashion instructors. To give you a sense of their fashion style, Luke usually wears designer jeans, a designer T-shirt or monocolored button-down shirt, belt, and shiny shoes. He also spikes his hair. Luke's look is designed to be imposing and impressive -- he's six foot four, and his fashion seems designed to accentuate this. It is considered fairly mainstream, though the spiked hair is nontraditional; he also sometimes wears a shiny belt buckle and leather wristband, which push him a bit towards the "rocker" category.

Hugh doesn't have nearly as consistent of a style. The first day he showed up, he was wearing basically all white -- white button-down shirt, white khaki pants, white belt, with black military boots. His hair is long and dyed black with red highlights, and he wears it either draped to his shoulders or tied into a ponytail. Besides the all-white outfit, he's also worn an all-black suit (black jacket, trousers, shirt and tie), and other interesting outfits that I don't remember very well. He is considered "very goth".

The rest of this post is me describing things I've bought. I am not really willing to put lots of silly photos of myself online, so you'll have to settle for descriptions. Do Google Image Search if you're unsure of a term.

On the first trip, we went to Haight St. in San Francisco, which is quite famous for being a fashionable place to shop. There are a lot of interesting clothes shops there. Much of what you can find on Haight is alternative fashion rather than mainstream fashion, but there's stuff for everyone. There's a steampunk store (Distractions), a military surplus store (Cal Surplus), an indie store (Ceiba), and a goth store (New York Apparel); there are also several thrift stores, shoe shops, and random other fashion stores that I didn't go into.

I went to Distractions first and tried on a full outfit there -- a stretchy black pinstripe shirt with leather accents, black pinstripe pants, black top hat, studded belt and leather wristband. It was pretty awesome, but I would never wear it. Someone said it could be a good clubbing outfit, but I don't really go clubbing.

However, I liked the pinstripe pants and belt enough to buy them. My belt is actually super awesome; it's made of four separate leather components connected by rings. I've gotten lots of compliments on it. The pants are fairly muted by themselves, and go quite well with colorful shirts and shoes. They are a little too warm to wear in the summer though.

The military surplus store was another place where I got some good stuff. I tried on military boots and work shirts. The military boots are basically thick black boots with lots of lacing, and sometimes a zipper down the side. I liked the style, but didn't find boots that fit. I did find a boring black work shirt. Some people got these awesome black commando turtleneck sweaters -- acrylic sweaters with interesting features like epaulets on the shoulders.

I picked up some tight-fitting shirts (Henley and v-neck tees) at American Apparel and thrift stores; I wanted casual shirts that look good, and I found some at these places.

Separately from the fashion trips, I picked up some Converse high-top fashion sneakers, which have served me rather well. They're cranberry colored and they go super well with most of my outfits. I think that was one of the better fashion choices I've made.

Anyway, last weekend was another fashion trip, this time to the Union Square Mall in downtown San Francisco. This was a much more mainstream-oriented trip; we went to big chain stores like Express, American Eagle, and Guess. Express seems to be a great place for mainstream fashion; I liked a broad variety of their styles, and it was stuff that fit me. I ended up getting a cranberry colored button-down shirt to go with my Converse. I also got a new pair of dark jeans -- most of my jeans were medium dark blue, but these are darker, and more on the gray side.

At Guess, Luke handed me a shirt and told me to try it on. I did, and it turned out to be a really awesome shirt: it was a medium light blue button-down shirt which fit me very well, with epaulets on the shoulders. I wasn't planning to buy anything else but this shirt fit me well enough, and I really liked the style, so I bought it.

I tried a shirt at Hugo Boss. The style and fit were fabulous, but it was a $175 shirt, so I didn't buy that one. Someday, maybe.

Saturday 23 July 2011

Anticipations and Bayes

Crap. Missed my deadline again on weekly updates. Apologies.

I'll review more of what we've been doing in sessions over the last week or two. It's been some more epistemic rationality tricks -- that is, tricks for knowing the right answer. (This is as opposed to "instrumental rationality" or tricks for achieving what you want.)

Accessing your anticipations: Sometimes our professed beliefs differ from our anticipations. For instance, some stuff (barbells and a bench) disappeared from our back yard last week. I professed a belief that they weren't stolen. I wanted to believe they weren't stolen. I said I thought it was 40% likely they weren't stolen. But when someone offered me a bet at even odds as to whether some other cause of their disappearance would arise in a week, my brain rejected the bet: Emotionally, it seemed that I would probably lose that bet. (Consistency effects made me accept the bet anyway, so now I'm probably out 5 dollars.)

When your professed beliefs differ from your anticipations, you want to access your anticipations, because those are what control your actions. We have learned some tricks for accessing your anticipations -- imagining (or being actually offered) a bet: which side would you prefer to be on? Another trick is imagining a sealed envelope with the answer, or your friend about to type the question into Google. Do your anticipations change what answer you expect to see? One more trick is visualizing a concrete experiment to test this belief. When the experiment comes up with a result, are you surprised when it goes the way you "believe" it should, or when it doesn't?

The naïve belief-testing procedure: when you're wondering about something, ask yourself the following questions: "If X were true, what would I see? If X weren't true, what would I see?"

I'll use the weights-stolen example again. If the weights were stolen, I would expect to see the weights missing (yes); that *all* the weights were stolen (in fact, they left the kettlebell and a few of the weights); that the gate had been left open the previous night (undetermined); that other things were stolen too (also not the case). If the weights weren't stolen, I would expect them to come back, or to eventually hear why they were taken (I didn't hear anything); to learn that the gate had been locked (I didn't learn this).

This can help you analyze the evidence for a belief. It highlights the evidence in favor of the belief, and does an especially good job of making you realize when certain bits of evidence aren't very strong, because they support both sides of the story. (Example: when testing the belief "my friend liked the birthday present I gave him," you might consider the evidence "he told me he liked it" to be weak, because you would expect your friend to tell you he liked it whether or not he actually liked it.)

It is called "naïve" for a reason, though: it doesn't make you think about the prior probability of the belief being true. Consider the belief "the moon is made of Swiss cheese." I would expect to observe the moon's surface as bumpy and full of holes if it were made of Swiss cheese, and I would be less likely to observe that if it weren't. I do observe it, so it is evidence in favor of that belief, but the prior probability of the moon being made of cheese is quite low, so it still doesn't make me believe it.

In order to better understand the effects of priors and bits of evidence, we've been studying the Bayesian model of belief propagation. You should read An Intuitive Explanation of Bayes' Theorem by Eliezer, if you're interested in this stuff. I'll just point out a few things here.

First, we model pieces of evidence (observables) as pertaining to beliefs (hidden variables). When determining your level of belief in an idea, you start with a prior probability of the belief being true. There are many ways to come up with a prior, but I use a fairly intuitive process which corresponds to how often I've observed the belief to hold in the past. I could use my own experience as the prior -- "When people give me a small gift, I like it only about 30% of the time." So for someone else liking my small gift, I could start with a prior probability of 30%.

If I don't have past experience which would lead me to a prior, I could start with some sense of the complexity of the belief. I haven't been to the moon. "The moon is made of cheese" is a very complex belief, because it requires me to explain how the cheese got there. "The moon is made of rock" is much simpler, because I know at least one other large body which is made of rock (the Earth). I might still have to explain how the rock got there, but at least there's another example of a similar phenomenon.

Once you know your priors, you consider the evidence. You can formulate evidence in terms of likelihood ratios, or in terms of probabilities of observing the evidence given the belief. In either case, you can mathematically transform one to the other, and then mathematically compute the new degree of belief (the new probability) given the prior and the likelihood ratio. The math is simple: if your prior is 1:3 (25%) and your evidence was 10 times more likely to be produced if the belief is true than if it were false -- 10:1 -- then you multiply the odds (1:3 * 10:1 = 10:3) and end up with posterior odds of 10:3, or 10/13, or about 77%. (If this didn't make sense, go read Eliezer's intuitive explanation, linked above.)
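
For concreteness, here's a minimal sketch of that odds-form update in Python (the code is my illustration, not something we used in session); the numbers are just the 1:3 prior and 10:1 likelihood ratio from the example above.

    # A minimal sketch of the odds-form Bayes update described above.
    # Numbers are from the example: prior odds 1:3 (a 25% prior), likelihood ratio 10:1.

    def posterior_probability(prior_odds, likelihood_ratio):
        # Multiply prior odds by the likelihood ratio, then convert the odds back to a probability.
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    prior_odds = 1 / 3         # 1:3 odds, i.e. a 25% prior probability
    likelihood_ratio = 10 / 1  # evidence is 10x more likely if the belief is true

    print(posterior_probability(prior_odds, likelihood_ratio))  # 0.769..., about 77%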

We practiced this procedure of doing Bayesian updates on evidence. We explored the ramifications of Bayesian evidence propagation, such as the (somewhat) odd effect of "screening off": if you know your grass could get wet from a sprinkler or the rain, and you observe the grass is wet, you assign some level of belief to the propositions "it rained recently" or "the sprinkler was running recently". If you later heard someone complaining about the rain, since that was sufficient to cause the wet grass, you should downgrade the probability that the sprinkler was also running, simply because most of the situations which would cause wet grass do not contain both rain and sprinklers.

The idea of screening off seems rather useful. Another example: if you think that either "being really smart" or "having lots of political skill" is sufficient to become a member of the faculty at Brown University, then when you observe a faculty member, you guess they're probably quite smart, but if you later observe that they have lots of political skill, then you downgrade their probability of being smart.
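
To make the explaining-away effect concrete, here's a toy sketch in Python of the rain/sprinkler example. The numbers (20% chance of rain, 30% chance of the sprinkler, grass guaranteed wet if either happens) are made up purely for illustration.

    from itertools import product

    P_RAIN = 0.2       # assumed prior probability of rain (illustrative)
    P_SPRINKLER = 0.3  # assumed prior probability of the sprinkler running; independent of rain a priori

    def joint(rain, sprinkler):
        # Prior joint probability of a (rain, sprinkler) combination.
        return (P_RAIN if rain else 1 - P_RAIN) * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)

    def p_wet(rain, sprinkler):
        # For simplicity, the grass is certainly wet if it rained or the sprinkler ran.
        return 1.0 if (rain or sprinkler) else 0.0

    # P(sprinkler | grass wet)
    num = sum(joint(r, True) * p_wet(r, True) for r in (True, False))
    den = sum(joint(r, s) * p_wet(r, s) for r, s in product((True, False), repeat=2))
    print("P(sprinkler | wet):", num / den)          # ~0.68, up from the 0.30 prior

    # P(sprinkler | grass wet, it rained) -- hearing about the rain screens off the sprinkler
    num2 = joint(True, True) * p_wet(True, True)
    den2 = sum(joint(True, s) * p_wet(True, s) for s in (True, False))
    print("P(sprinkler | wet, rain):", num2 / den2)  # back down to 0.30

Learning that it rained drops the sprinkler all the way back to its prior here, because in this toy model rain alone fully explains the wet grass; with noisier assumptions it would only drop partway, which is the "downgrade" described above.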

Thursday 14 July 2011

Basic Rationality Training

I guess it's well past time to report on Anna's sessions. I would summarize them as "basic rationality training," and they encompass a wide variety of skills, which together tend to produce more accurate thought and more productive conversation.

Asking for examples was the first one. Humans seem to think much more accurately with examples than with abstract ideas. Example: your friend goes around saying "Harry Potter is stupid". You could interpret this in many ways: the book itself is stupid, or he doesn't like the book, or the character is stupid, or the character does stupid actions sometimes, etc. What a lot of people will do here is argue with whatever interpretation jumps to their mind first. Instead, what you should do is ask for an example. Maybe your friend will say "well, the other day I was walking down the hallway and someone jumped out from a door and shouted 'avada kedavra' at me, and I was like 'really?'." Now you probably understand what interpretation he means, and you won't go arguing about all the smart things that Harry does in the books (or in the Methods of Rationality).

Noticing rationalization and asking for true causes of your beliefs: I've talked about these, in "How to Enjoy Being Wrong." Briefly, when someone says something which you disagree with, ask yourself why you disagree -- what experience you had which led to your belief.

Fungibility: when you notice you're doing an action -- perhaps someone asks you "why do you do X?", ask yourself what goals the action is aiming for, then notice if there are other ways of achieving those goals. If you feel resistance to a new strategy, it is likely that you actually did the action for some other reason -- go back and try to optimize that goal separately.

Example: I read science fiction novels. What goals does this serve? Imagining the future is attractive; spending time with a book in my hands is pleasant; I want to follow the plot.

To achieve the "imagining the future" and "follow the plot" goals, I could go read spoilers of the novel on the Internet. But I feel resistance to that strategy. Hmm. I might miss something? That's probably not it... I guess I just get some sort of ego-status by knowing that I read the whole book without skipping anything. And this makes me realize that I read books because I want to seem well-read.

Anyway, if you do this procedure on a regular basis, sometimes you'll notice actions which don't make much sense given the goals you're optimizing for. When you do, you can make a change and optimize your life.

Value of Information and Fermi Calculations: Do math fast and loose to determine whether you're wasting your time. One of the most useful pieces of math is how much a piece of information is worth. For instance, if I'm trying to start a company selling pharmaceuticals on the internet, I want to know what the regulations will be like, and I don't see an easy way to estimate this just from what I know. I would have to do research to figure this out. But I can estimate the size of the market -- maybe 100M people in the US who take drugs regularly, and lots of drugs cost well over $1 a day, so $100M/day, or $50bn/year. My business sense tells me that regulations are likely to be the main barrier to entry for competitors (there's so much incentive for the existing players to put up barriers that they've probably done it).

Let's do out the probabilities:

  • Target: 10% chance of regulatory burden being surmountable by a startup
  • 25% chance of me actually deciding to try and execute on the idea, given that the regulatory burden seems surmountable
  • 5% chance of me succeeding at getting 1% of the estimated market ($500M/yr), given that I decide to execute on the idea
  • 25% of the company is an estimate of what I will own if I succeed

So the expected value of getting this information appears to be at least $100,000, if it can actually establish that 10% probability. I can probably obtain this information for a lot cheaper than that, so I should go look up regulatory burdens of starting an online pharmaceutical company.
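
Here's a minimal sketch of that multiplication in Python, using the same rough figures as above (these are my Fermi estimates, not real market data).

    # Reproducing the Fermi arithmetic above; all figures are rough estimates from the text.
    market_per_year = 50e9   # ~$50bn/yr estimated US market
    captured_share = 0.01    # getting 1% of that market: $500M/yr
    p_regulation_ok = 0.10   # chance the regulatory burden is surmountable by a startup
    p_decide_to_try = 0.25   # chance I actually decide to execute, given it's surmountable
    p_success = 0.05         # chance of capturing that 1%, given that I execute
    my_ownership = 0.25      # estimated fraction of the company I'd own

    expected_value = (market_per_year * captured_share * p_regulation_ok
                      * p_decide_to_try * p_success * my_ownership)
    print(f"${expected_value:,.0f}")  # about $156,000 -- hence "at least $100,000"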

Obviously this analysis has a lot of flaws, but it still seems very useful.

Sunk cost fallacy and self-consistency: Noticing when you are making decisions based on sunk costs or a previous decision you want to be consistent with. Poker example: I was bluffing on a hand, someone raised me, and I was tempted to reraise in order to be consistent with that bluff. Non-poker example: Driving down the street to the grocery store, I realize I want to go to the bank first, because I will have ice cream in the trunk on the way home. Even though it would be faster to go to the bank first, and I wouldn't risk my ice cream melting, I am already going in the direction of the grocery store, and I don't want to turn around now.

That's most of what we've covered. It's really useful and applicable material. Even if you understand the theory from reading, it seems like you'll learn the material better by being forced to come up with examples on the fly -- at least, I did.

Tuesday 12 July 2011

Metacamp Metasession, Meta Meta Meta



It was the weekend, a week and a half ago. Michael Curzi (from the minicamp) came by. He played a few hands of poker with us, which involved him never folding anything, and after a while I tired of it and took him up on a bet that was higher than I felt comfortable with. As we played, he asked us what we thought of the camp, how things had been, and how we felt about things.

It turned out we had a number of complaints, ranging from impatience with PowerPoints to insufficient agency. Those of us who were awake talked late into the night, devising more efficient ways to present material, more engaging forms of discussion, and more effective bonding exercises. Weeks of productivity training had thoroughly indoctrinated us with the value of small, immediately-available actions. "Next actions," we called them. We resolved to take actions to resolve our complaints, in the form of calling a meta-meeting to voice our thoughts and propose our ideas.

We thought hard about when to call the meeting. Jeremy checked and found the schedule empty for Tuesday afternoon's session (this would have been for the afternoon of July 5th), so I wrote a draft of an email inviting everyone to join us for a meta-meeting during, as opposed to after, Tuesday's afternoon session. Lincoln proposed scheduling for Tuesday evening, a less imposing time, but John favored occupying the session so that we could set a precedent of scheduling ourselves when there is nothing scheduled. I myself preferred the afternoon, to take a position of authority in scheduling, and so that if the organizers already had something planned, they would be forced to interact with us as though we were organizers too. We took a vote and decided on Tuesday afternoon.

Immediately, there was trouble. There was already something scheduled for Tuesday afternoon. Even worse, Jasen did not get home until Tuesday night, and that was no good at all, because it was really important that such a relevant discussion have Jasen in it. Anna wrote back saying she was doing sessions on Tuesday, Thursday, and Friday. We rescheduled for Wednesday afternoon. By that time, Andrew would be the only person still out of town. Unbeknownst to us, we displaced Anna's Wednesday session, but I guess it was all the better that we thought there wasn't one, or I might have been too worried about inconveniencing Anna to displace her session. On Tuesday, we worked out an outline of the general areas we intended to cover, and who would lead each part of the discussion.

Preparing for a meeting of any significance is quite terrifying. I was to open the meeting and communicate a brief summary of why it was called and why it was important. I rehearsed several times what I was to say, but each time, what I'd rehearsed the previous time went into a black hole of never-was-and-never-had-been. 15:00 came and went, and we had, in addition to Jasen and Anna, a number of other instructors with us as well. Blake opened a shared Google document where we could take notes during the meeting, and several of us opened the document on our laptops. At ten minutes past the hour, even though we were still missing two people, we started the meeting.

After a brief opening introduction, John spoke about our current roles as approximately students taking classes. He recalled conversations with Jasen where Jasen had been eager to hear and receptive of his ideas. He then envisioned a camp where we, the participants, were more like coordinators ourselves, agents who altered the camp instead of passively receiving it.

Then, Blake led a discussion about different ways to discuss topics and possibly learn more about them. Many of the proposed changes involved more engaging activities, smaller group discussions, and fewer lecture-style presentations. It seemed that some presentations had been dull or basic, assuming less intelligence of us and thereby producing uninteresting or repetitive material. Some people felt that we should try to get what we can out of each session, but I felt that our time is sufficiently valuable that it's not worthwhile to spend three hours in an uninteresting session extracting what we can. It was said that simply talking with each other is an invaluable resource made available through us being thrown together. Lincoln proposed hand signs to signal impressions without interrupting the speaker. Someone had the idea that we keep a Google document open during sessions so we could put in our thoughts if we didn't get a chance to say them out loud. There was the concern that having laptops would distract people by providing internet, but others said it helped to be able to look things up on Wikipedia during classes. We took a vote, and it turns out that we find the internet an average of 4.36/10 useful to have during class. We also voted on the usefulness of learning business, and that turned out to be an average of 6.00/10. We resolved to fill out surveys after sessions as well in order to give more feedback about sessions, and Anna gave us a list of useful things to ask on surveys: the main idea of the class, the most surprising thing, and the most confusing thing.

Next, we voted on how comfortable each person felt with sharing critiques right then during the meta-meeting, and it averaged to 5.47/10. This was worrisome, because it showed that a significant portion of people were significantly uncomfortable with sharing their thoughts, or at least not entirely comfortable. We went around the circle and asked each person what was the largest obstacle to speaking freely, and the responses fell into several clusters: unwillingness to say negative (and possibly offensive) things, the pressure of speaking when put "on the spot," feeling like an outsider not directly involved, and lacking in confidence.

At this point, we took a few minutes to go into the Google document and write down things we wished to discuss. At the end of five minutes, we went through everyone's notes, one person's at a time. We discussed the idea of scheduling regular one-on-one conversations with each other, and that was 7.42/10 popular. Julian wrote that things seemed to be more organized when everyone woke up and meditated together, so we resolved to enforce getting up and starting on time. Some people wanted to exercise in the mornings instead. We decided to try 10 minutes of exercise followed by 20 minutes of meditation, instead of the usual half hour of meditation. It was also proposed that people might choose not to attend sessions if they felt a session was not relevant to them, and if there was something else they wanted to do. It was decided that everyone was to attend session on time, but that once there, individuals might present to Jasen for approval an argument as to why they are better off doing something else. Thomas wanted notes on each day's activities, so we decided on having two scribes take notes each day. The topic of cooking and dishes came up, and it seemed some people were less happy than others about how much dish-washing and cleaning they were doing, so we put up a dish-washing sign-up schedule. A recurrent theme was the preference for smaller group discussions where each person is more involved, so we divided people into four groups, to be reshuffled each week. In the afternoon, two of the groups have a two-hour "Anna session" while the other two groups take turns having a one-hour "Zak session," and in the evening, they switch. Of course, the "Anna session" is not always together, nor is the "Zak session" always led by Zak, but it was a structure that produced different sizes of groups with different levels of involvement. We committed to holding more meta-feedback meetings.

We took notes of Next Actions, since it is always easy to talk about things and not do anything. In order to avoid the Bystander Effect, we assigned specific tasks to individual people: I was to schedule people to talk to each other, John was to draft checklists, Blake and Jasen to schedule more meta-meetings, Jasen, Peter, and Jeremy to wake people up in the mornings, and Thomas to assign people to scribe each day.

At this point, most of our decisions have been implemented. We get up together, exercise, meditate, and have our small-group sessions. We have scribing and conversation schedules and have been making much more use of Google documents. We fill out a short survey at the end of every session. We do more personal project and personal study than before. Things are a bit different. Are they better? Time will tell.

But I do feel that several significant things went definitively right in all of this.

The first is that we took something we thought and used it to alter the state of the world. To me, simply the act of calling a meeting and trying to change things identifies my fellow Megacamp participants as a group who, by some combination of ourselves and our influences on each other, take real actions. I feel it is an important step from merely talking about things to enacting them. In that step, we transcended the role we had been cast in as students, becoming neither passive commentators nor theoreticians, but causal agents.

The second is that things were conducted with a great deal of dignity and respect. Rather than feeling like us against organizers, it seemed that we were all pursuing a common goal, which was to make the camp maximally effective. Therefore, it was easy to listen to everyone's ideas, examine them, and decide on next actions, instead of the all-too-common status-fight of trying to seem intelligent and shooting others down. I felt that we took each other seriously, and every instructor took us seriously, so that in general we were good about not getting offended and looking for the most effective solutions.

A final thing was that we were able to commit to specific actions and then abide by them; it is all too easy to make a resolution and then break it. If that had happened, nothing would have changed at all. But we have held to our decisions, and that makes progress possible.

I am very proud to be among this group of peers and instructors. I feel that this issue was handled admirably, and that we worked reasonably and constructively to resolve our areas of discontent. Here looking back on it all, I respect everyone a great deal, both the instructors and the participants for their readiness to act and their resistance to becoming entrenched in well-defined roles (as in the Zimbardo experiment). I am eager to see where the next weeks take us, and I am confident that even should things go amiss, I am not trapped. In a Nomic sense, I feel that things are (and always will be) mutable, so long as there exists the initiative to change rules.

Instead of a line of lyric, I will conclude with this colorful, irritated passage from a colorful, irritated essay:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the Universe at last, and in every century they were proven to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. "If I am the wisest man," said Socrates, "it is because I alone know that I know nothing." The implication was that I was very foolish because I was under the impression I knew a great deal.

Alas, none of this was new to me. (There is very little that is new to me; I wish my correspondents would realize this.) This particular thesis was addressed to me a quarter of a century ago by John Campbell, who specialized in irritating me. He also told me that all theories are proven wrong in time.

My answer to him was, "John, when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."


--- Isaac Asimov, http://hermiene.net/essays-trans/relativity_of_wrong.html

Peace and happiness,
wobster109

Saturday 9 July 2011

How to enjoy being wrong

Note: This is a draft of a post which, if it turns out to be useful, I intend to post to Less Wrong directly. For now, please correct the post and add your own personal experiences and thoughts.

Related to: Reasoning Isn't About Logic, It's About Arguing; It is OK to Publicly Make a Mistake and Change Your Mind.

Examples of being wrong

A year ago, in arguments or in thought, I would often:

  • avoid criticizing my own thought processes or decisions when discussing why my startup failed
  • overstate my expertise on a topic (how to design a program written in assembly language), then have to quickly justify a position and defend it based on limited knowledge and cached thoughts, rather than admitting "I don't know"
  • defend a position (whether doing an MBA is worthwhile) based on the "common wisdom" of a group I identify with, without any actual knowledge, or having thought through it at all
  • defend a position (whether a piece of artwork was good or bad) because of a desire for internal consistency (I argued it was good once, so felt I had to justify that position)
  • defend a political or philosophical position (libertarianism) which seemed attractive, based on cached thoughts rather than actual reasoning
  • defend a position ("cashiers like it when I fish for coins to make a round amount of change"), hear a very convincing argument for its opposite ("it takes up their time, other customers are waiting, and they're better at making change than you"), but continue arguing for the original position. In this scenario, I actually updated -- thereafter, I didn't fish for coins in my wallet anymore -- but still didn't admit it in the original argument.
  • provide evidence for a proposition ("I am getting better at poker") where I actually thought it was just luck, but wanted to believe the proposition
  • when someone asked "why did you [do a weird action]?", I would regularly attempt to justify the action in terms of reasons that "made logical sense", rather than admitting that I didn't know why I made a choice, or examining myself to find out why.
Now, I very rarely get into these sorts of situations. If I do, I state out loud: "Oh, I'm rationalizing," or perhaps "You're right," abort that line of thinking, and retreat to analyzing reasons why I emitted such a wrong statement.

We rationalize because we don't like admitting we're wrong. (Is this obvious? Do I need to cite it?)

Over the last year, I've self-modified to mostly not mind being wrong, and in some cases even enjoy being wrong. I still often start to rationalize, and in some cases get partway through the thought, before noticing the opportunity to correct the error. But when I notice that opportunity, I take it, and get a flood of positive feedback and self-satisfaction as I update my models.

How I learned how to do this

The fishing-for-coins example above was one which stood out to me retrospectively. Before I read any Less Wrong, I recognized it as an instance where I had updated my policy. But even after I updated, I had a negative affect about the argument because I remembered being wrong, and I wasn't introspective enough to notice and examine the negative affect.

I still believed that you should try to "win" an argument.

Eventually I came across these Sequences posts: The Bottom Line and Rationalization. I recognized them as making an important point; they intuitively seemed like they would explain very much of my own past behavior in arguments. Cognitively, I began to understand that the purpose of an argument was to learn, not to win. But I continued to rationalize in most of the actual arguments I was having, because I didn't know how to recognize rationalization "live".

When I applied to the Rationality Mega-Camp (Boot Camp), one of the questions on the application asked for an instance where you changed a policy. I came up with the fishing-for-coins example, and this time, I had positive feelings when remembering the instance, because of that cognitive update since reading the Sequences. I think this positive affect was me recognizing the pattern of rationalization, and understanding that it was good that I recognized it.

Due to the positive affect, I thought about the fishing-for-coins example some more, and imagined myself into that situation, specifically imagining the desire to rationalize even after my friend gave me that really compelling argument.

Now, I knew what rationalization felt like.

At the Rationality Mega-Camp, one of the sessions was about noticing rationalization in an argument. We practiced actually rationalizing a few positions, then admitting we were rationalizing and actually coming to the right answer. This exercise felt somewhat artificial, but at the very least, it set up a social environment where people will applaud you for recognizing that you were rationalizing, and will sometimes call you out on it. Now, about once a day, I notice that I avoid getting into an argument where I don't have much information, and I notice active rationalization about once every two days.

The other thing we practiced is naming causes, not justifications. We attempt to distinguish between the causes of an action -- why you *really* do something -- and myriad justifications / rationalizations of the action, which are reasons you come up with after the fact for why it made logical sense to do a thing.

How you can learn to recognize rationalization, and love to be wrong

These steps are based mostly on my personal experience. I don't know for sure that they'll work, but I suspect they will.

You'll do this with a close friend or significant other. Ideally they're someone with whom you have had lots of frustrating arguments. It would be even better if they want to learn this stuff too.

First, read these Sequences: The Bottom Line and Rationalization. Be convinced that being right is desirable, and that coming up with post hoc reasons for something to be true is the opposite of being right: it's seeming right while being wrong, it's lying to yourself and deceiving others. It is very bad. (If you're not convinced of these points, I don't think I can help you any further.)

Next, take 10 minutes to write down memories of arguments you had with people where you didn't come to an agreement by the end. If possible, think of at least one argument with this friend, and at least one argument with someone else.

Next, take 10 minutes to write down instances from your personal life where you think you were probably rationalizing. (You can use the above arguments as examples of this, or come up with new examples.) Imagine these instances in as much explicit detail as possible.

Next, tell your friend about one of these instances. Describe how you were rationalizing, specifically what arguments you were using and why they were post-hoc justifications. Have your friend give you a hug, or high-five or something, to give a positive affect to the situation.

This step is optional, but it seems like it will often help: actually work out the true causes of your behavior, and admit them to your friend. It's OK to admit to status-seeking behavior, or self-serving behavior. Remember, this is your close friend and they've agreed to do the exercise with you. They will think more of you after you admit your true causes, because it will benefit them for you to be more introspective. Again with the hug or high-five.

Next, rehearse these statements, and apply them to your daily life:
  • "When I notice I'm about to get into an argument, remind myself about rationalizing."
  • "When I notice illogical behavior in myself, figure out its true causes."
  • "When someone else states a position, ask myself if they might be rationalizing."
  • "When someone else seems upset in an argument, ask myself if they might be rationalizing."
  • "When I notice rationalization in myself, say 'I was rationalizing' out loud."
  • "When I notice I've updated, say 'I was wrong' out loud."
  • "When I say 'I was rationalizing', ask for a high five or give myself a high five."
  • "When I say 'I was wrong', ask for a high five or give myself a high five."
At the very least, read these out loud to your partner. If you want to go further, you could try using Anki to learn these statements by heart.

Regarding the high five: that's to give positive affect, for conditioning purposes. I am not sure whether this step will work. I didn't do it and I still learned the skill, but I had a very strong inherent desire to be right rather than to seem right and be wrong. If you don't have that desire, my hypothesis is that the high five / social conditioning will help to instill it.

And let me know in the comments how it goes.

Monday 4 July 2011

Internal Family Systems

Note: I (Lincoln) am posting here now instead of at my personal blog so that there can be comments attached to the post.

Another self-improvement technique we've been learning is IFS, which is another bullshit acronym. It stands for Internal Family Systems, but this has nothing to do with families. I guess it is internal to one's brain. Systems is a fluff word.

The elephant/rider metaphor is extended to indicate the presence of multiple elephants, all pulling in different directions. IFS as a procedure has you pinpoint the behavior of a particular elephant, in order to better model it, understand its behavior and drives, and possibly change it.

IFS terminology for the elephants is simply "a part [of you]". I will continue to call them elephants.

The procedure works like this: you identify an elephant that is causing you to behave in a certain way. Maybe it's annoying, maybe it makes you upset, or maybe it is helpful. Then you imagine it in your brain, personify it, make friends with it and negotiate with it. All the elephants are considered to have good intentions -- they're providing you with useful data, at the very least, or steering you away from pain, or whatever. The IFS process is supposed to help you align the elephants better towards achieving your goals.

If you're going to do IFS on yourself or on your friend, here are some questions you can ask:

  • Come up with an elephant to talk to. Think of something you keep doing and regretting, or a habit you'd like to change.

  • Think of a specific instance where you engaged in this behavior or thought or emotional pattern.

  • Imagine it in as much concrete detail as possible.

  • How would you feel about someone else engaging in this behavior or thought or emotional pattern at this time? Your goal is to feel curious. If you're not feeling curious, ask that feeling or concern to step aside so that you can be curious about it.

  • Now, turning to the elephant directly:


    • Personify it. What does it look like? Does it have a name? How do you experience it physically? (e.g., a prickling in the back of the neck)

    • How much does it trust you?

    • What is it saying?

    • What feelings, thoughts, and behaviors does it produce?

    • When is it active?

    • What is it trying to accomplish?

    • How long has it been around?

    • What data is it giving you?

    • Why are you grateful for this data?

    • What is it afraid of?



At this point, if you've been thoughtfully and honestly answering the questions, you should have a better model of the elephant and why it's behaving in that way. If you see a solution, try to make an agreement with the elephant, but if not, you can still gain well-being by having modeled it in this way, and maybe it won't bother you as much in the future. Or perhaps you will gain its trust.

Anyway, the procedure is pretty cute. I've used it on some friends, who reported some success after the fact, so I am inclined to believe it has some merit. At the very least, it seems generally healthy to treat your different emotions and drives as autonomous entities, as opposed to trying to suppress them. I haven't had much luck applying it to myself, but then I haven't hit on anything of mine that seems well suited to it either -- I mainly tried procrastination, and the technique seems better suited to emotional issues and anxieties.

Friday 1 July 2011

Wednesday 29 June 2011

Every Other Sentence Is a Lie

. . . but I won't tell whether the first statement is a truth or a falsehood.

On Tuesday of the second week, Louie and Kevin came by to teach us poker. They began by showing us a video of common errors in probability calculations that people might make during poker. Then, we split into three tables to learn the rules and play a few rounds. We each bought in for five dollars, because several people said that it is hard to take poker seriously without real stakes.

Poker is a very silly game. It is based on the probabilities and likelihoods of certain hands coming up, with the strength of the hand correlating inversely with its likelihood. Except when the creators of the game didn't actually calculate the probabilities, resulting in some oddities. The "flush" (five cards of the same suit) is worth more than the "straight" (five consecutive numbers), even though the straight is less common. I threw a fit when this was first explained to me, demanding that we (rationalists and all) play them in the correct probability order. Everyone else promptly refused, explaining (very reasonably) that such a system would be useless almost everywhere. (Irrational people, they're ALMOST EVERYWHERE!)

Things got strange right away. Kevin went upstairs and returned with a great deal of alcohol. Arrogant as humans will be, I was all, oh, I'm just going to drink a little bit. It won't affect anything! So we drank, and played, and drank, and played, and very soon things weren't really making much sense at all, and everything was swimming about, and even though I was looking at my cards, I wasn't actually seeing them, so I'd have to look at them again a moment later, and things were generally frustrating. People kept saying things like "you should bet between half and twice the pot," or else things like "I'm sorry you lost that hand, but if it makes you feel any better, that probably was the correct way to play it =D," and everything was all very very confusing. I was being a n00b. I later learned not to spend forever and ever thinking over whether to call each time, because every bet seemed just high enough that I didn't want to take it, but just low enough that it might be worth my hand, and I would sit there wondering and wondering. As time wore on, I cached more of my previous decisions and didn't spend as much time deliberating anymore, or else I just got too drunk to do it. And then things were better.

I often hear from people, "it's like paying five dollars to become better calibrated" or something to that effect. Maybe it's true. But it also can't be that each time someone loses, they become better calibrated. I'm certain that people who have played a great deal are actually updating very little, and that their positive or negative experience of the night actually comes from the randomness in the cards.

The statements are no longer either true or false. Nearly everything at this point is a half-truth, including this statement itself.

In my very very limited experience, poker is a very unpleasant game to play. At a table with n people, one loses an average of (n-1)/n of one's hands, which is at least half of them at any table of two or more. It is like getting little pings of defeat over and over again. Each time one is dealt a hand, one is hopeful, and most of the time, one is disappointed in one's hand. And therefore, even if one wins in the end, one is likely to spend a majority of the game being disappointed or unhappy. Meanwhile, every amount anyone wins is some amount someone else has lost. It is zero-sum. It is so zero-sum. Money is made and lost, yet no value is generated. The first night, I played for an hour and won a dollar. Yeesh, that's an hourly rate that's really not worthwhile at all. But that's all silly, because the expected hourly profit summed over the whole table is zero, so your own expectation is zero unless you're a significantly better-than-average player. With rationalists, I generally assume that the people I play against are at least as skilled as myself.
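To make the arithmetic concrete, here is a minimal Python sketch of the losing-most-hands point (my own illustration, assuming every player is equally skilled and ignoring split pots):

    # Assumes equal skill and no split pots -- my simplification, not anything from the camp.
    def loss_fraction(n_players):
        """Fraction of hands you expect to lose at an n-player table."""
        return (n_players - 1) / n_players

    for n in (2, 6, 9):
        print(f"{n} players: win {1 / n:.0%} of hands, lose {loss_fraction(n):.0%}")

    # Zero-sum check: whatever one player wins, the rest of the table loses,
    # so expected profit summed over every player is exactly zero.

With six equally skilled players, you expect to lose about 83% of your hands even before anyone plays badly.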

(I'll take on any one of you on any instrument in Rock Band.)

So the value of the game, what is that? Some answers I've heard:
- thinking when there is money at stake
- estimating probabilities
- calibration of value estimates
- making difficult decisions
- the social aspect of manipulating other players

All good things to learn; all with a nontrivial likelihood of being useful or necessary at some point in time. But then again, we would hardly be here if we couldn't do necessary but unpleasant things now, would we?

Since then, I've played somewhere between one and three other times, and some people have played many many more. It's always strange, because I'm always trying to guess at what they're thinking, and they're always trying to guess what I'm thinking, and then we base our actions on what our model of the other person is doing, including basing their action on their model of us acting, and it all spirals into a huge tangled DeathNote sort of mess. They don't trust me, nor I them. I'll admit here to having never yet bluffed a hand, but I'm fully open to bluffing hands in the future.

The element of randomness is scary. Knowing that there is a chance of things working out really well, I'm ever tempted to just play this hand and see what happens, and then I have to be all NO that's not a decision with positive expected value! And it's scary, and it's tiring, but I trust that I'm learning really useful things about handling scary situations in the meanwhile, but it's not entertainment. It's like that giant game of Diplomacy where we played for many hours and ended in a six-way tie, except that itself was pretty fun. And whenever I think "poker," I get a vague impression of that study done with pigeons, where rewards were given randomly and intermittently. My psychology textbook cited that even when rewards were removed, pigeons would continue to peck at a button hundreds of thousands of times, because "hope springs eternal."

Hope springs eternal. What a lovely thought, and what terrible things it makes people do! People lose vast amounts of time and money for their misplaced hopes. I think I'm just ranting on and on at this point, so it must be time to close off. I still owe you guys a completely positive post, so that will be the next thing after this, I promise. Therefore, another clip of song to sleep upon:
Please come with me,
See what I see.
Touch the stars for time will not flee.


Peace and happiness,
wobster109

Tuesday 28 June 2011

Liveblogging Men's Fashion (with a Touch of Sarcasm)

Luke: Men's fashion I'm really excited about because there's only so much I can say about fashion in general, but the particulars. . . we get to be specific [about things that look good or bad on men].

Fashion signals high status, that you get the social world, high confidence, and being sexy. . . (this part is review) Clothes need to fit and accentuate the V-shape. . . (image of a fellow who looks like a DBZ character)

Clarification: consistency is mostly important at any one given time, but consistency across days is not so important.

\begin{new material}
You can jump to the 70th percentile of men's fashion just by avoiding things that you should not do. The same holds for dating as well.

Ten Things to Avoid
1. Pleats: pleats are where the fabric folds over itself, but Luke wouldn't recommend them because any extra fabric makes you look heavier, and you want a more streamlined look. The pleats look baggier and heavier, like there are folds of skin pushing them out.
2. Hawaiian floral prints: not anywhere
3. Socks with sandals
4. Sandals: Hugh and Luke don't like men's sandals because they are strictly Pareto suboptimal, unless you're actually walking on sand.
5. Athletic shoes (except when exercising): they are made to be good for exercising, not for looking good.
6. Mismatched belts and shoes (comment: does anyone really look that closely at a man's belt and shoes?)
7. Too-short pants: operate on the heuristic that ankles are ugly, with lots of bulges and shapes and weird stuff
8. Dirty shoes and clothes: Luke says that women pay a lot of attention to shoes, but I honestly have never noticed dirt on anyone's shoes, or even the difference between athletic shoes and other types of shoes. . . .
9. Mismatched shoes and socks (what's with all this emphasis on shoes?)
10. Polo shirts and khaki pants: Luke says they attach you to a geek schema. He says that there is no use for khakis that is not either covered by jeans or black slacks. I'd veto this if possible. Khaki pants are very convenient, with huge pockets, and I frankly find them very relaxed. They say, I value utility more than appearance. Actually, I <3 geeks. Geeks are hot.

Blogger aside: I'm feeling pretty hostile towards this. It seems more and more about looking acceptable to the standards of the general masses. Presumably, people take you seriously when you dress well. Well that's obvious. If I'm giving a presentation, of course I will look very professional. But for general everyday outings, I'd much rather have baggy, large-pocketed clothes.

I think I'm objecting to this because it's telling me not to wear things that I do like, instead wearing things that I don't like, and distinguishing shoes that I didn't even know were different before right now.

Advice for heavy men:
1. vertical patterns, not horizontal
2. avoid pleats and bulky things
3. no large prints
4. no loud things that break the vertical line
5. lose weight

Advice for tall men (6'2" or taller):
horizontal patterns (maybe avoid vertical patterns to avoid looking too tall)

Advice for tall and skinny men:
1. horizontal patterns
2. layering to avoid looking like a starving anorexic
3. fitted shirts

Advice for short men (shorter than 6'2") (includes most men):
1. avoid baggy clothes, pleats, cuffs
2. wear low-rise pants (to make legs look longer)
3. avoid large prints or things that break the vertical line

Hair (can of worms: open)
- if you are going bald, shave your head because that is the best look. And then do something cool with your beard so there's a neat trim on it. Luke says that male hair loss is a solved problem. There's a drug called something-or-other (finasteride?) that can be bought generically from online pharmacies under brand names. Over a 5-year study, it caused 2/3 of men to regrow hair, 48% to have visible hair growth, and caused 92% to stop losing hair (as compared to 100% of men on placebo who continued to lose hair).

Grooming
pluck eyebrows to avoid unibrow
trim nose and ear hair
clean-shaven or neatly trimmed
clean teeth, clean skin, no body odor
subtle cologne (not too strong, because too strong scents drive some people away)

Wardrobe Essentials
- a pair of nice jeans, dark wash and minimal distressing (tears and stuff) so that they can be used kind-of formally (Aside: Luke is talking about costlier jeans generally being better than cheaper ones, as opposed to $100 t-shirts not necessarily being better than $40 ones or $30 ones. . . that sounds pretty helluva expensive. . . it's still more expensive than any shirt I've ever had, t-shirt or no, except for really really formal professional-presentation shirts.)
- a pair of dress shoes (Ok, apparently Oxfords and loafers are different styles of shoes. Apparently, Oxfords are laced a certain way. Maybe a majority of girls really do worry about stuff like the differences between Oxfords and loafers. Do those words mean different things to you? I'm really curious how a center-of-the-fashion-bellcurve girl sees the world. But I'm also sufficiently scathing of the itteh-bitteh differences that if they really do look different, I'm inclined to think one is looking too closely at something that doesn't matter too much. Or maybe I'm wrong completely, and it actually matters a great deal. I don't know.) Consensus seems to be that the sizes of things on websites are pretty accurate, and that shoes can be ordered online.
- a pair of casual loafers (So much emphasis on shoes! I'm so bored.)
- one white button-up shirt, fitted, long sleeves, not too many pockets
- one charcoal gray suit (or black or navy) for formal wear, two buttons in front, two vents in back so it doesn't bunch up when one sits down, tailored
- one colored button-up shirt, fitted, long sleeves (burgundy is nice, but not office blue)
- one black silk tie (why silk?), thick ties for formal settings, skinnier ties for artsiness (cyberoptix.com?)
- a sweater (no sweater vests, plaid, argyle)
- some black v-neck t-shirts, fitted, for casual or layering
- underwear (not white!), no bikini briefs, no boxers, suggested boxer briefs (What do these words mean? I looked up a Google images search, still can't tell them apart D:)
- socks, matched to shoes or to pants
- one or more designer t-shirt(s), fitted, for casual or layering
- a belt that matches each shoe (wider belts are more casual)
- interesting belt buckle (some belts can detach their buckles)
- necklace, bracelets, and rings (Peter says accessories are important because you can dress up for more things with fewer articles of clothing by changing accessories)
- maybe sunglasses
- one swim suit (not speedos, not elastic), board shorts

Layering
- not heavy
- avoid similar patterns close together
- solid colors are safer because some patterns don't match very well
- can have one patterned thing with a bunch of solid-colored things

--- Intermission ---

Hugh: different strategies
red (dress normally): mainstream, higher mean, more medium responses
blue (dress distinctively): alternative, lower mean, more low responses but also more high responses

red: good for looking not offensively bad, good for making generally good impressions on groups
blue: good for dating, good for getting a few people to be really attracted to you

tradeoff: broadness of appeal versus strength of appeal. Hard to appeal to everyone, because then everything is watered down. Optimum strategy depends on context.

Heuristics to find one's own style
- figure out goals, then work backwards
- find a subculture, adopt all or parts of its style (ex: hipster, punk, goth, cyber goth, steam punk, metal, industrial, rivetheads, visual kei (japanese goth), emo)
- decide what to signal, and then dress accordingly
- inspiration from the media, celebrities, bands, characters (ex: vampires)

Aside: I'm out of computer batteries now, so that's it for now. To be continued later, maybe.

Drawing

We've done three days of drawing so far, and it is all very very strange. The very first thing we drew was Jasen's face, which came out for the most part fairly well. The second thing we drew was a (real life) person from memory. That came out fairly terribly. I was trying to remember a clear image of my intended person, and it was recognizable, but whenever I tried to see a particular feature, then the whole thing just slid around and didn't make sense at all. Then we drew our hand, which was a pretty standard exercise, I guess.

We were shown the profile-vase illusion and asked to copy it, drawing one side just looking at the lines, and drawing the other side while thinking of facial features. We copied a drawing of Stravinsky by Picasso twice: once right-side-up and once upside-down. I guess the intention is to draw lines instead of saying, oh, that's a head, Imma ignore Jasen now and just look at my paper and draw what I think a head looks like! But I don't think any of us were doing that anyways. I certainly wasn't doing that anyways.

The hardest thing for me is getting anywhere near finished, because each drawing was to be done in fifteen minutes. It's fairly difficult to draw things with any detail in fifteen minutes, unless it's done without studying a real world object very carefully.

I believe the mouth is the hardest part of the face to draw.

On Tuesday, we drew the lines of our left hands, while twisting our body and looking at our left hands so that we couldn't see the paper, and then we didn't look at the paper at all. Things came out pretty unrecognizable. After that, we looked at our hands through a viewfinder window thing, tracing the lines with markers. We shaded a page of sketchpad, copied the outline of a hand from the viewfinder, and then filled it in very very carefully. And shaded it. Those came out very very cool.

But that was before, and now it is today. Today was strange. We drew chairs. And then we drew the negative spaces around and through the chairs. I guess the idea was to get an outline in proportion, and it was supposed to help to look at shapes (of spaces) instead of features of chairs. Finally, we drew a corner of the classroom.

Drawing is still a lot of fun, three days in. I feel like some of the things are coming out very prettily, and some other things are coming out very funnily.

However, I've heard it said (I think it was Rahul who said so? But I forget.) that the drawing is a very good metaphor for seeing the real world, and exactly that, and no more than that. We're supposed to see evidence objectively, without our cognitive biases, and that's like drawing without some preconceived notion of how a chair ought to be put together, and it only looks anywhere close to accurate if I draw exactly what I see, and not what I expect to see. That's a near-perfect metaphor for deciding what is most likely to be true.

But I'm skeptical that it actually helps to decide what is most likely to be true. It seems that seeing lines does not map directly onto seeing evidence, or seeing arguments, or seeing information. I'm very fond of it because it is a lot of fun, but I do doubt how useful it is to thinking rationally.

I feel like a big bad naysayer. Skeptical about everything! But I promise I am trying everything from as neutral a starting position as I can manage.

A side note: last night, Lincoln very nearly convinced me that we are a cult! Which is to say, it's not immediately obvious to me that we are not a cult. Interesting hypothetical apostasy here.

Monday 27 June 2011

Exercise at the Bootcamp

When I came to the Rationality Bootcamp I had resolved to make a few changes. I knew that I would actually meditate as often as I would like. I knew that I would spend time with friends who had moved out here. I knew that I would meet interesting, cool people. And I knew that I would start doing the Four Hour Body program to build muscle mass. I thought this would be hard. I would have to keep up with the routine while everyone else was playing games/getting other things done. I thought peer pressure would make this harder. I was wrong.

When I told people I was interested in doing the Four Hour Body (1 boot camper) muscle-building techniques, there were a few people who expressed similar interest. Later on the first day, when I went to get a local gym membership, Cameron decided to go with me (2); he had already decided to work out anyway and was eager to use the rigor of Tim Ferriss' method. We got a decent workout in.

The Tim Ferriss method from the Four Hour Body is based around two alternating workouts. The first is Yates rows and overhead lifts. The second workout is bench press and squats. The first time you do each workout, you start with some weight you can lift easily and do five reps. Every time you succeed, you add 10 lbs or 10% (whichever is larger) and do another five-rep set. Once you fail at that, you go down to 70% of the last set you completed and lift that weight to failure. That's your starting weight. Every subsequent time you do the workout, you add 10 lbs or 10% and do reps to failure.
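For concreteness, here is a minimal Python sketch of how I read that progression rule; the weights are made-up examples and this is my paraphrase, not anything official from the book:

    # My reading of the rule above, with hypothetical numbers.
    def next_weight(weight):
        """After a successful five-rep set, add 10 lbs or 10%, whichever is larger."""
        return weight + max(10, 0.10 * weight)

    def ramp_up(easy_weight, fail_above):
        """Keep adding weight until the first failed set; return the last completed weight."""
        w = easy_weight
        last_completed = None
        while w <= fail_above:  # pretend any set at or below fail_above succeeds
            last_completed = w
            w = next_weight(w)
        return last_completed

    last = ramp_up(50, fail_above=120)  # hypothetical: easy at 50 lbs, first failure above 120 lbs
    start = 0.70 * last                 # drop to 70% of the last completed set
    print("starting weight:", round(start, 1))
    # On every later session, add 10 lbs or 10% to the previous weight and do reps to failure.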

A couple days later we got a set of free weights. Rahul, John (our host), and Julian (5) started working out on a conventional regimen. They're working out three days a week. I assume John will post something here about their workouts if he finds it interesting.

When the free weights came, Thomas, Blake and Jeremy (8) started doing the Tim Ferriss program. I walked them through the program, and we looked up videos of the exercises to get the form right. Once those guys had been doing the workouts for a week, Jasen (the leader of the bootcamp) and Peter decided it was time to start (10). Everyone's been doing a lot of work, and keeping to the schedules has been a bit of a bookkeeping problem, but we powered through.

Last Friday we had a lecture from a practitioner of Z-health, who showed us some things about joints and stretching, and KETTLEBELLS!! I had read about kettlebells in FHB, and Tim Ferriss loves them. I never thought I would get a chance to try them, since I never saw them at the gym. I played with them, and WOW. These things are fun. I'm not sure why, but they hold great entertainment value for me. And not only me: after playing with the kettlebell, Lincoln and Wendy decided they wanted to work out (12, all but one). The ones we saw were 16kg, and I was able to do a full press with one. After some deliberation, Jasen ordered a 16kg kettlebell and a 24kg kettlebell. Over the weekend I tried a 24kg kettlebell at the rock-climbing gym and couldn't get it up to my shoulder. I want to play with these things more!

So of the 12 people at the rationality bootcamp, a training center for our minds, 11 of us are doing serious physical exercise (I'm looking at you, Sam). We're able to pick up the low-hanging fruit that most out-of-shape programmers and geeks don't bother with. It's awesome.

Saturday 25 June 2011

Tortuga (and Other Rants)

Hello, world! I wanted to begin by saying thank you! to John for creating this blog and inviting me to update here about the camp from time to time.

I'm wobster109, and I'm another hapless aspiring rationalist, out for a summer of terrifying adventure, profound reflection, and anime-sappy friendship. This sounds like the premise for some strange reality show. I like to make music and write code. Some nights, I stay up until the next morning just to watch the sun rise.

Last night (Thursday night) was the rationalists' meet-up in Tortuga. I hear that there are two really famous meet-ups, and they are in New York City and Tortuga. So after afternoon session, some nine of us crowded into two cars and made our way down to Mountain View.

The trouble started right away. The van's air conditioning did not turn on. Neither did the radio, nor would the windows open. We pulled over into a parking lot and checked the headlights. Those were fine, but the turn signals were out. Undaunted, we pressed on, seven of us in that one car with the air conditioner out and the windows all shut. There we were, trundling merrily down the highway with the sliding backseat doors held open. To Andrew's credit, he managed the hour-long drive without getting pulled over. We landed in Tortuga just a little bit late, settling into the room as Mr. Eliezer spoke about artificial intelligences.

He asked us, suppose that there was an artificial intelligence that was shown to be very well calibrated. Suppose it gave many 90% confidence intervals, and it turned out that it was correct nine times out of ten. You observe this many times. Suppose now that it tells you with 99.9% confidence that [something very surprising]. For instance, it might tell you that you have a tail you're programmed to not be able to see. Would you believe it?

What's the craziest thing it could get you to believe? What's the least crazy thing that it could not get you to believe?
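A rough way to run the numbers on that question (my framing, not Eliezer's): if you treat the AI's 99.9%-confident report as evidence with a likelihood ratio of about 999:1 in favor of the claim, then whether you end up believing it depends almost entirely on your prior. A minimal Python sketch, assuming that likelihood ratio:

    # Odds form of Bayes' rule; the 999:1 likelihood ratio is my assumption,
    # meant only to show how the prior dominates for crazy-sounding claims.
    def posterior(prior, likelihood_ratio=999):
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    for prior in (0.01, 1e-4, 1e-6, 1e-9):
        print(f"prior {prior:g} -> posterior {posterior(prior):.6f}")

A claim you would have given 1-in-100 odds jumps to roughly 91%; a one-in-a-billion claim (the invisible tail, say) only climbs to about one in a million after a single such report.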

So we counted off into eight disjoint groups, talking about crazy AIs, and then talking about each other. I met some very cool people who liked programming (why are we so dense in programmers?) and liked baking and worked to be more effective. As the night wore on, people began disappearing. I thought they were going home, until I realized that many of us, the Rationality MegaCamp people, were disappearing as well.

Suddenly, Sam bursts in through the door in a state of utter undress, saying, "I invite you all to join us in the hot tub!" Apparently, people were sitting naked in a hot tub. Apparently, that sort of thing happens here as well, but that would be a story better told by someone else.

I didn't feel quite ready to sit around naked with everyone just yet. I was eating a jar of raspberry jam and trying hard to convince the rationalists that I'm actually an artificial intelligence. At one point, one guy said he gave me a 99.9% chance of being human, and I felt all pleased with myself that he actually gave me a 1/1000 chance of being an AI. That's quite high, given the world we live in.

It was late. We piled back into the van, prepared for a long and hot ride home. This time, everything worked perfectly, which goes to show that the van just needs a reboot from time to time. We get home around 2:30. The other car gets home around 4:30.

Morning meditation was a rampaging beast today. Everything was a swirly, sleepy haze. We're told that we should experience "pleasurable sensations," but how vague is that, and how necessary that it be vague lest we get the mis-impression that it works the same way for all of us! So I'm always bothered by my own suggestibility. Each morning, I question everything I perceive, saying, is this actually being perceived, or is it a figment of my mind? (Had I been a scientist in 1904 at an N-rays demonstration, would I too have perceived the screen brighten?)

And then, the asymmetry floats into my thoughts. "Pleasurable sensations," presumably, refers to tactile sensations. Why? Why not sounds, or visions? Why are human senses not symmetric? And suddenly, I've lost count of my breaths, and I need to start over, and then an annoying song starts looping in my head, and I begin to see storylines, and suddenly I'm asleep.

Well. Subjectivity and all, I'm fairly confident that was a touch amiss. I'm 80% confident, in fact. (That's a very high degree of confidence.)

After meditation was the last session of. . . was it cognitive behavioral therapy? That's what I think it was. It was where a psychology fellow came and talked to us about something, but I honestly can't quite remember what. I really was listening, I really was! It all kind of blurred together in a haze of. . . something along the lines of thinking positively. Is that what it was? That's the entirety of my impression of it. He was very nice, and he tried very hard to engage us. Many of the speakers we've had are very nice, and they all try very hard to engage us. But then we give them our rationalists' flak, with our cries of that's not rigorously demonstrable! Your methodology is flawed! Citation needed! Yet that's exactly how I feel. People come telling us things, and it sounds familiar and unspecific, and I think to myself, this isn't making any specific predictions. Or else, they come telling us things, and it's very specific and surprising, and I think to myself, one could easily imagine seeing such an effect. I have no doubt (very little doubt) that the speakers are genuine in what they say, that they truly believe what they are teaching us, and that they have a nontrivial likelihood of being right, but I'm annoyed nonetheless. Because we as people cannot distinguish the vague from the useful, the imagined from the real, the conjecture from the applause light, real people living real lives suffer. People mistake stuff that sounds halfway plausible for real science, and in most cases, it's harmless, but they end up with a flawed algorithm for determining what to believe. That makes them vulnerable.

I've become spoiled by all this interacting with rationalists. Why is this so? Although we have interesting and intellectual discussions, that can't be the entire reason, because there are others with whom I have very nice conversations as well, and there are times where we simply play games or talk about nonsense. I'd guess that it has something to do with how very even-tempered everyone is, how my dear fellow MegaCampers are so unfazed about things. This sort of group is often called "non-judgemental," but Mr. Eliezer writes (to the best of my understanding) that we don't spring into indignation, and we don't launch emotional attacks on others for expressing beliefs. That's definitely true of these guys, for there is little, if anything, that they would refuse to think about. But I also feel that we tend towards being exceptionally emotionally stable. I have yet to see anyone get unreasonably upset or have unexplained moments of angst or sit around brooding over the state of things in general. They're just so reasonable about everything; I very nearly forget that there is a world out there to be dealt with.

So when Will and Divia explained "empathizing," which they use to mean understanding the other person's emotions, I was the tiniest touch skeptical. Use specific observations instead of sweeping generalizations. Ok. Cite my own emotions instead of laying blame. Fair enough. You want me to guess at their emotions? Didn't they just express their emotions? How is that going to help towards a solution? "Well," Divia patiently explained, "if you ask someone if they're worried because they aren't prepared for a presentation, they will be focused on the not-prepared instead of the freaking-out."

"But," I persisted, "if it were happening in real life, I'd say, here's piece of paper, quick write an outline." And then I realized that these would be real life people. Oh. Thomas had earlier expressed that he'd be very annoyed if someone spoke to him this way, and I imagined that I would be terribly impatient as well, but then there are lots of very commonplace things that frustrate me a great deal. Meanwhile, Thomas was looking at his handout, laughing and generating sentences such as "are you NERVOUS about the PREDATORY ANIMALS? Are you OVERWHELMED by the SEXUAL EXPRESSION?" And what's-his-face (sorry, dear visitor from Thursday! I've forgotten your name. But I do remember which high school you went to, and how you got into math in middle school!), in response to a hypothetical scenario, he generated the sentence "are you lonely because you are unloved?" We all burst into cackles. That sounded like an easy way to get a fist in the stomach.

I might be curious to try this naked hot tub truth-or-dare thing at some point in time. Lincoln said that he wouldn't ask anything or dare anything that would make one sad, but that he would also try to push at one's boundaries, so that even if they were ever so slightly sad in the meantime, it would be worthwhile in retrospect. It was quite a friendly sentiment.

Phew, in retrospect, I do quite a bit of complaining. In that case, I'll go the whole of next post without saying a single unpleasant thing about anything or anyone. But only the next one though, or else it would be selectively filtered, and we can't have something as unscientific as that, can we? Anyways, to close off, here have a random snatch of a song that I'm quite fond of:

The good old days, the honest man;
The restless heart, the Promised Land
A subtle kiss that no one sees;
A broken wrist and a big trapeze. . . .

--- The Killers, "Read My Mind"


Peace and happiness,
wobster109

Thursday 23 June 2011

Week 3: IFS, NVC and CBT + 2 rationality sessions

This week was mostly self-therapy week. We have been learning about IFS and NVC. I wasn't overly interested in learning about either of these topics, and didn't pay much attention in the classes. If you want to know more, I've linked to the wikipedia pages above.

We have also been learning CBT. CBT is, insofar as I'm aware, the only talking therapy which actually has an evidence base. The basic principle is that thoughts are part of the chain that causes emotions, and that we are capable of controlling our thoughts, and thereby controlling emotions. CBT is well-tested as a treatment for depression, and it kind of feels like it should work for behaviour modification in non-depressed people - I have absolutely no idea if there's any evidence of this, and am slightly reluctant to check, for fear of destroying any useful placebo effect, although I'm sure my curiosity will get the better of me at some point.

The key tool is the Triple Column Technique, a fairly well-established method that is explained better on websites dedicated to that sort of thing than I could ever manage in a blog post. Basic idea: identify your common cognitive distortions, and write out rational responses to them.

There were also two "rationality" themed sessions. One on Wednesday, which was essentially a structured version of Nick Bostrom's "Write Your Hypothetical Apostasy" post from Overcoming Bias. We did a follow-up exercise, in which we tried to write our life-stories from as unflattering a point of view as possible. I did not find this exercise particularly challenging, as I don't have any particular story I tell myself of where my life is going, or how it got here. Apparently the old SingInst Summer Fellows found this exercise much more enlightening than we did. I'm tempted to say that that's because they had decided to spend their summers trying to save the world, whereas we have decided to spend our summers generally having a good time with the possibility of becoming more awesome in the process... I'm sure there are other interpretations.

Finally, this afternoon there was a session with Eliezer, in which he tried to convince us that The World is Mad. Lincoln has already written a detailed summary of that exercise over at his blog, so I won't reproduce his work. However, I will emphasise one thing that stood out for me, and just about everyone else, as the biggest convincer that the world is mad: checklists. I have a half-written Less Wrong post on the sheer awesomeness of checklists, which is full of speculation as to why they have not been more widely adopted. Hopefully I will get around to posting it before the end of the summer. Summary: medical checklists could be saving thousands of lives a year, and aren't, and I, and a lot of other people who should know more about this sort of thing, have basically no idea why. It seems likely that there are a lot of other areas in which checklists could be implemented to great effect (we actually have a few useful ones around the RBC house). More in the LW post, if and when I write it.

We also had a visit on Thursday afternoon from a local fitness trainer from Z-health. That was an interesting experience, but I will write more about it over the weekend, when I plan a post about the amount of exercise that's going on in the house... probably more than you would think.