## Mutual Information in a Causal Context

Mutual information captures the idea that learning something about one variable might tell you something about another. For example, learning that it's daytime gives you some information about whether the sun is shining. It could still be cloudy, but you can be more confident that it's sunny than before you learned it was daytime.

Mathematically, mutual information is defined using the concept of entropy. The information gained about a variable $X$ when you learn $Y$ is given by

$$I(X;Y) = H(X) - H(X \mid Y)$$

In this case, $H(X)$ is a measure of the entropy of $X$. It is given by

$$H(X) = -\sum_x p(x) \log_2 p(x)$$
Mutual information is supposed to be symmetric ($I(X;Y) = I(Y;X)$), but I'm interested in how that works in a causal context.

Let's say you have a lightbulb that can be turned on from either of two light switches. If either light switch is on, then the bulb is on. Learning that one light switch is on tells you the bulb is on, but learning that the bulb is on does *not* tell you that one specific light switch is on. It tells you that at least one is on (but not which one).

Let's assume for the sake of argument that each light switch has a probability p(on) = 0.25 of being turned on (and correspondingly a probability p(off) = 0.75 of being off). Assume also that they're independent.

The entropy of switch one is

$$H(S_1) = -0.25 \log_2 0.25 - 0.75 \log_2 0.75 \approx 0.811 \text{ bits}$$
Since either switch has a probability of 0.25 of being on, and they're independent, the bulb is off only when both switches are off. That happens with probability $0.75^2 = 9/16$, so the bulb itself has a probability of $7/16$ of being on.

The entropy of the bulb is

$$H(B) = -\tfrac{7}{16} \log_2 \tfrac{7}{16} - \tfrac{9}{16} \log_2 \tfrac{9}{16} \approx 0.989 \text{ bits}$$
If you know switch 1's state, then the information you have about the light is given by

$$I(B; S_1) = H(B) - H(B \mid S_1)$$

If $S_1$ is on, the bulb is certainly on (entropy 0); if $S_1$ is off (probability 0.75), the bulb's state is just $S_2$'s state. So $H(B \mid S_1) = 0.75 \times 0.811 \approx 0.609$, and $I(B;S_1) \approx 0.989 - 0.609 = 0.380$ bits.
If instead you know the bulb's state, then the information you have about switch 1 is given by

$$I(S_1; B) = H(S_1) - H(S_1 \mid B)$$

If the bulb is off, $S_1$ is certainly off; if the bulb is on (probability $7/16$), then $P(S_1 \text{ on} \mid B \text{ on}) = \frac{1/4}{7/16} = 4/7$. So $H(S_1 \mid B) = \tfrac{7}{16} H(\tfrac{4}{7}, \tfrac{3}{7}) \approx 0.431$, and $I(S_1;B) \approx 0.811 - 0.431 = 0.380$ bits.
So even in a causal case the mutual information is still symmetric.

For me the point that helps give an intuitive sense of this is that if you know S1 is on, you know the bulb is on. Symmetrically, if you know the bulb is off, you know that S1 is off.
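These numbers can be checked with a short Python sketch. The `H` helper and the variable names are mine, chosen for this example; it just recomputes the entropies above and both directions of the mutual information:

```python
import math

def H(*probs):
    """Shannon entropy in bits of a distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_on = 0.25                     # each switch is on with probability 0.25
p_bulb = 1 - (1 - p_on) ** 2    # bulb on iff at least one switch on: 7/16

H_s1 = H(p_on, 1 - p_on)        # entropy of switch 1, ≈ 0.811 bits
H_bulb = H(p_bulb, 1 - p_bulb)  # entropy of the bulb, ≈ 0.989 bits

# H(bulb | S1): if S1 is on, the bulb is certainly on (entropy 0);
# if S1 is off (p = 0.75), the bulb's state is just S2's state.
H_bulb_given_s1 = (1 - p_on) * H(p_on, 1 - p_on)

# H(S1 | bulb): if the bulb is off, S1 is certainly off;
# if the bulb is on (p = 7/16), P(S1 on | bulb on) = (1/4) / (7/16) = 4/7.
H_s1_given_bulb = p_bulb * H(4 / 7, 3 / 7)

I_bulb_s1 = H_bulb - H_bulb_given_s1   # info about the bulb from S1
I_s1_bulb = H_s1 - H_s1_given_bulb     # info about S1 from the bulb

print(round(I_bulb_s1, 4), round(I_s1_bulb, 4))  # 0.3802 0.3802
```

Both directions come out to the same ≈ 0.380 bits, as the symmetry argument requires.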

## Ontologies of Utility Functions

In his paper on the Value Learning Problem, Nate Soares identifies the problem of ontology shift:

> Consider a programmer that wants to train a system to pursue a very simple goal: produce diamond. The programmers have an atomic model of physics, and they generate training data labeled according to the number of carbon atoms covalently bound to four other carbon atoms in that training outcome. For this training data to be used, the classification algorithm needs to identify the atoms in a potential outcome considered by the system. In this toy example, we can assume that the programmers look at the structure of the initial worldmodel and hard-code a tool for identifying the atoms within. What happens, then, if the system develops a nuclear model of physics, in which the ontology of the universe now contains primitive protons, neutrons, and electrons instead of primitive atoms? The system might fail to identify any carbon atoms in the new world-model, making the system indifferent between all outcomes in the dominant hypothesis.

The programmer defined what they wanted in an ontology that their system no longer uses, so the programmer's goals are no longer relevant to what the system is actually interacting with.

To solve this problem, an artificial intelligence would have to notice when it is changing ontologies. In the story, the system knows about carbon as a logical concept, and then abandons the carbon concept when it learns about protons, neutrons, and electrons. On abandoning the concept of carbon (or any other concept), the system could re-evaluate its utility function to see if the change causes a new understanding of something within that utility function.

Intuitively, a system smart enough to say that carbon is actually made up of 6 protons should be able to reflect on the impact of such a discovery on its utility function.

A more worrying feature of an ontology shift is that it implies that an AI may be translating its utility function into its current ontology. The translation operation is unlikely to be obvious, and may allow not just direct translation but also re-interpretation. The translated utility function may not be endorsed by the AI's original programmer.

This is true even if the utility function is something nice like "figure out what I, your creator, would do if I were smarter, then do that." The ontology that the agent uses may change, and what "your creator" and "smarter" mean may change significantly.

What we'd like to have is some guarantee that the utility function used after an ontology shift satisfies the important parts of the utility function before the shift. This is true whether the new utility function is an attempt at direct translation or a looser re-interpretation.

One idea for how to do this is to find objects in the new ontology that subjunctively depend upon the original utility function. If it can be shown that the new utility function and the old one are in some sense computing the same logical object, then it may be possible to trust the new utility function before it is put in place.

## Grudge

Way back around 500 BC, the Athenians took part in a rebellion against the Persian King Darius. When Darius learned of it, he was furious. He was apparently so worried that he would forget to punish the Athenians that he had a servant remind him. Every evening at dinner, the servant was to interrupt him three times to say "remember the Athenians."

There are a few people in my life that I've majorly changed my mind about. For most of them, I started off liking them quite a bit. Then I learned of something terrible that they'd done, or they said something very mean to me, and I stopped wanting to be friends with them.

Sometimes mutual friends have tried to intervene on their behalf. "Don't hold a grudge," they tell me.

I have to imagine that when people advise you not to hold a grudge, they're imagining something like King Darius. If only Darius could stop reminding himself about the Athenian betrayal, he could forgive them and everything could go back to the way it was.

I don't have calendar reminders to keep me from forgetting what people have done. I haven't gone into my phone to delete anyone's phone number.

For me, the situation is very different. I may be consciously angry with some transgression for a while, but that emotion dissipates over the course of a few days to a few weeks. What really sticks with me is not the feeling of anger. It's the change in my model of what that person is likely to do.

When I think of spending time with someone, I have some sense of what that time would be like. If that sense seems good, then I'm excited to hang out with them. If it seems bad, then I'm not. That sense is based on a model of who that person is, and what hanging out with them will be like. It's not an explicit rehearsal of past times, good or bad.

# Models

I try to think about the people I know as being their own person. The sign to me that I know someone well is that I can predict what they'll care about, give them gifts that they find fun or useful, tell jokes or stories tailor-made for them, imagine their advice to me in a given situation.

The model that I have of someone impacts how much I choose to interact with them, and also in what ways I choose to interact with them.

I try to keep my model of a person up-to-date, since I know people change. Usually they change slowly, and I'm changing with them. Sometimes we grow closer as friends as we change.

Sometimes, I get new evidence about a person that dramatically changes my model of them. This is what it's like for me if someone surprisingly treats me poorly. I get angry, then the anger fades and all that's left is a changed model.

But there's another thing that can change my models of people.

The way I think about people's words and actions is filtered through how I think the world works. If my model for how the world works changes, then I might suddenly change how I view certain people. They haven't done anything different than usual, but it now means a very different thing to me.

# Forgiveness

When people tell me not to hold a grudge, I think that they want me to treat a person the way I treated them when I had an older model. This is impossible. I can't erase the evidence that I now have about who they are as a person.

But the thing I need to keep in mind is that I can't ever have all of the evidence necessary to know who another person is and what they'll do. If someone screams at me over something, it's very possible that they rarely yell and it was just a bad day for them. How do I incorporate that into my model?

This is where forgiveness comes in.

If someone does something that is really very bad to you, it may be the most salient feature of your model of them. The thing is, the other parts of your model of them are still valid.

Forgiveness is letting that vivid experience shrink to its proper size in your model. Depending on the event, that proper size may still be large. But by forgiving someone you give them the ability to change your model of them again. You're letting them show you that they aren't normally someone who would scream at you. You're letting them show you that they have changed since then.

Forgiveness isn't a thing that can be forced. The model that I have of a person isn't a list that I keep in my head. My model of you isn't some explicit verbal thing. It's all the memories I have of you; it's the felt sense that I get in my gut when I think of you. I can't just decide that the felt sense is different now.

Forgiveness is a slow growing thing. I can choose to help it along, to feed it with thoughts of compassion and with evidence that my model may be off-base. But regardless of what I try to do, forgiveness takes time.

# Apologies

If forgiveness is letting someone's actions influence your model of them again, then it's pretty clear that forgiveness isn't all that is necessary.

In addition to me being willing to update my model of another person, they need to be giving me evidence of who they are. They need to be giving me information to refine my model of them again.

Apologies are one way of doing this. If someone says they're sorry for something, then that's some evidence (often weak) that they actually are different than an action made them seem. The best sort of apology then, is some kind of action that really brings home the fact that the person is different. It's saying sorry, then acting in a way that prevents the transgression from happening again.

This also means that, in order for me to properly apologize to someone else, they need to actually be willing to hear me. Which is kind of a catch-22 in some ways.

I've definitely hurt some people with my words and actions in the past. There are a few people that just never want to talk with me again. That's their right, but it means that I can't properly apologize. They'll never see the ways in which I have changed, and their model of me will remain stuck on a person that I'm not anymore.

## Mathematical Foundations for Deciders

This is based on MIRI's FDT paper, available here.

You need to decide what to do in a problem, given what you know about the problem. If you have a utility function (which you should), this is mathematically equivalent to:

$$\arg\max_{a \in A} EU(a)$$

where $EU(a)$ is the expected utility obtained given action $a$. We assume that there are only finitely many available actions.

That equation basically says that you make a list of all the actions that you can take, then for each action in your list you calculate the amount of utility you expect to get from it. Then you choose the action that had the highest expected value.

So the hard part of this is actually calculating the expected value of the utility function for a given action. This is equivalent to:

$$EU(a) = \sum_j P(a \rightarrow o_j;\, x)\, U(o_j)$$

That's a bit more complicated, so let's unpack it.

• The various $o_j$ are the outcomes that could occur if action $a$ is taken. We assume that there are only countably many of them.
• The $x$ is an observation history: basically everything that we've seen about the world so far.
• The $U$ function is the utility function, so $U(o_j)$ is the utility of outcome $o_j$.
• The $P$ function is just a probability, so $P(a \rightarrow o_j;\, x)$ is the probability that $x$ is the observation history and $o_j$ occurs in the hypothetical scenario that $a$ is the action taken.

This equation is saying that for every possible outcome from taking action , we calculate the probability that that outcome occurs. We then take that probability and multiply it by the value that the outcome would have. We sum those up for all the different outcomes, and that's the outcome value we expect for the given action.

So now our decision procedure basically looks like one loop inside another.

```
best_action = None
best_utility = -infinity
for each action a that we can take:
    utility[a] = 0
    for each outcome o_j that could occur:
        utility[a] += P(a -> o_j; x) * U(o_j)
    if utility[a] > best_utility:
        best_utility = utility[a]
        best_action = a
take action best_action
```
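To make the loop concrete, here is a minimal runnable sketch. The two actions, three outcomes, probabilities, and utilities are made-up numbers for illustration, not anything from the paper:

```python
actions = ["a1", "a2"]
outcomes = ["o1", "o2", "o3"]

# P[a][o]: probability that outcome o occurs in the hypothetical where
# action a is taken (the P(a -> o_j; x) above, with x held fixed)
P = {
    "a1": {"o1": 0.7, "o2": 0.2, "o3": 0.1},
    "a2": {"o1": 0.1, "o2": 0.3, "o3": 0.6},
}
U = {"o1": 10.0, "o2": 0.0, "o3": 5.0}  # utility of each outcome

def expected_utility(a):
    return sum(P[a][o] * U[o] for o in outcomes)

best = max(actions, key=expected_utility)
print(best, expected_utility(best))  # a1 7.5
```

Everything interesting about a decision theory is hidden inside the `P` table; the outer argmax never changes.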

This leaves two questions:

1. What is $P(a \rightarrow o_j;\, x)$?
2. What is $U(o_j)$?

It turns out that we're going to ignore question 2. Decision theories generally assume that the utility function is given. Often, decision problems will represent things in terms of dollars, which make valuations intuitive for humans and easy for computers. Actually creating a utility function that will match what a human really values is difficult, so we'll ignore it for now.

Question 1 is where all of the interesting bits of decision theory are. There are multiple types of decision theory, and it turns out that they all differ in how they define $P(a \rightarrow o_j;\, x)$. In other words: how does action $a$ influence what outcomes happen?

# World models and hypothetical results

Decision theories are ways of deciding what to do, not ways of valuing outcomes. All decision theories (including causal, evidential, and functional decision theories) use the machinery described in the last section. Where they differ is in how they think the world works: how, exactly, does performing some action $a$ change the probability of a specific outcome?

To make this more concrete, we're going to create some building blocks that will be used to construct the thing we're actually interested in ($P(a \rightarrow o_j;\, x)$).

The first building block will be: treat all decision theories as though they have a model of the world that they can use to make predictions. We'll call that model $M$. However it's implemented, it encodes the beliefs that a decider has about the world and how it works.

The second building block extends the first: the decider has some way of interacting with their model to predict what happens if they take an action. What we care about is that in some way we can suppose that an action is taken, and a hypothetical world model is produced from $M$. We'll call that hypothetical world model $M_a$.

So $M$ is a set of beliefs about the world, and $M_a$ is a model of what the world would look like if action $a$ were taken. Let's see how this works on a concrete decision theory.

## Evidential Decision Theory

Evidential decision theory is the simplest of the big three, mathematically. According to Eve, who is an evidential decider, $P(a \rightarrow o_j;\, x)$ is just a conditional probability: $P(o_j \mid x, a)$.

In words, Eve thinks as though the world has only conditional probabilities. She pays attention only to correlations and statistics: "What is the probability that something occurs, given that I know that $a$ has occurred?"

To then construct a hypothetical from this model, Eve conditions on both her observations and a given action: $P(o_j \mid x, a)$.

This is a nice model, because it's pretty simple to calculate. For simple decision problems, once Eve knows what she observes and what action she takes, the result is determined. That is, if she knows $x$ and $a$, often the probability of a given outcome will be either extremely high or extremely low.

The difficult part of this model is that Eve would have to build up a probability distribution of the world, including Eve herself. We'll ignore that for now, and just assume that she has a probability distribution that's accurate.

The probability distribution is going to be multi-dimensional. It will have a dimension for everything that Eve knows about, though for any given problem we can constrain it to contain only the relevant dimensions.

To make this concrete, let's look at Newcomb's problem (which has no observations $x$). We'll represent the distribution graphically by drawing boxes for each different thing that Eve knows about.

• Predisposition is Eve's own predisposition for choosing one box or two boxes.
• Accurate is how accurate Omega is at predicting Eve. In most forms of Newcomb's problem, Accurate is very close to 1.
• Prediction is the prediction that Omega makes about whether Eve will take one box or two boxes.
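To see how Eve's conditioning plays out numerically on Newcomb's problem, here is a small sketch. The 0.99 accuracy and the `edt_value` helper are assumptions I'm making for the example, not anything from the paper:

```python
acc = 0.99  # assumed accuracy of Omega's prediction

def edt_value(action):
    # Conditioning on her own action, Eve updates her belief about what
    # Omega predicted, and hence about whether the big box is full.
    p_big_box_full = acc if action == "one-box" else 1 - acc
    small_box = 1000 if action == "two-box" else 0
    return small_box + p_big_box_full * 1_000_000

# one-boxing: ≈ 990,000; two-boxing: ≈ 11,000 — so Eve one-boxes
print(edt_value("one-box"), edt_value("two-box"))
```

Note that Eve never asks whether her action *causes* the box to be full; the correlation between her action and Omega's prediction is enough to move her conditional expectation.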

## Functional Decision Theory

Fiona, a functional decision theorist, has a model that is similar to Carl's. Fiona's model has arrows that define how she calculates outward from the points that she acts on. However, her arrows don't represent physical causality. Instead, they represent logical dependence.

Fiona intervenes on her model by setting the value of a logical supposition: that the output of her own decision process is to do some action .

For Fiona to construct a hypothetical, she imagines that the output of her decision process is some value (maybe "take two boxes"), and she updates the probabilities of the nodes that depend on the decision process she is using. We call this form of dependence "subjunctive dependence."

In this case, Fiona is not intervening on action $a$ itself. She is intervening on the act of deciding to do $a$. We can represent this mathematically using the same $\operatorname{do}()$ operator that Carl had: $P(o_j \mid \operatorname{do}(\mathrm{FDT}(x) = a))$.

It's important to note that Carl conditions on observations and actions. Fiona only conditions on the output of her decision procedure. It just so happens that her decision procedure is based on observations.

So Fiona will only take one box on Newcomb's problem, because her model of the world includes subjunctive dependence of what Omega chooses to do on her own decision process. This is true even though her decision happens after Omega's decision. When she intervenes on the output of her decision process, she then updates her probabilities in her hypothetical based on the flow of subjunctive dependence.
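The contrast with Carl can be sketched with the same toy numbers as before (an assumed 0.99-accurate Omega; the function names are mine). Fiona's intervention on her own decision procedure also moves Omega's prediction, while Carl holds his belief about the boxes fixed:

```python
acc = 0.99  # assumed accuracy of Omega

def fdt_value(policy):
    # Intervening on the output of the decision procedure also changes
    # Omega's prediction, since Omega computes that same procedure.
    p_big_box_full = acc if policy == "one-box" else 1 - acc
    small_box = 1000 if policy == "two-box" else 0
    return small_box + p_big_box_full * 1_000_000

def cdt_value(action, p_big_box_full):
    # For Carl, the boxes were filled before he acts; his action cannot
    # change p_big_box_full, so two-boxing always adds $1000.
    small_box = 1000 if action == "two-box" else 0
    return small_box + p_big_box_full * 1_000_000

print(fdt_value("one-box") > fdt_value("two-box"))            # True
print(cdt_value("two-box", 0.5) > cdt_value("one-box", 0.5))  # True
```

Whatever fixed belief `p_big_box_full` Carl holds, two-boxing dominates for him by exactly $1000; for Fiona the choice of policy changes the probability itself, so one-boxing wins.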

# Similarities between EDT, CDT, and FDT

These three different decision theories are all very similar. They will agree with each other in any situation in which all correlations between an action and other nodes are causal. In that case:

1. EDT will update all nodes, but only the causally-correlated ones will change.
2. CDT will update only the causal nodes (as always)
3. FDT will update all subjunctive nodes, but the only subjunctive dependence is causal.

Therefore, all three theories will update the same nodes.

If there are any non-causal correlations, then the decision theories will diverge. Those non-causal correlations would occur most often if the decider is playing a game against another intelligent agent.

Intuitively, we might say that Eve and Carl both misunderstand the structure of the world we observe around us. Some events are caused by others, and that information could help Eve. Some events depend on the same logical truths as other events, and that information could help Carl. It is Fiona who (we think) most accurately models the world we see around us.

## Functional Decision Theory

This is a summary of parts of MIRI's FDT paper, available here.

A decision theory is a way of choosing actions in a given situation. There are two competing decision theories that have been investigated for decades: causal decision theory (CDT) and evidential decision theory (EDT).

CDT asks: what action would give me the best outcome?

EDT asks: which action would I be most delighted to learn that I had taken?

These theories both perform well on many problems, but on certain problems they choose actions that we might think of as poor choices.

Functional decision theory is an alternative to these two forms of decision theory that performs better on all known test problems.

# Why not CDT?

CDT works by saying: given exactly what I know now, what action would give me the best outcome? The process for figuring this out is to look at all the different actions available, and then calculate the payoffs for each. Causal deciders have a model of the world that they manipulate to predict the future based on the present. Intuitively, it seems like this would perform pretty well.

Asking what would give you the better outcome in a given situation only works when dealing with situations that don't depend on your thought process. That rules out any situation that deals with other people. Anyone who's played checkers has had the experience of trying to reason out what their opponent will do to figure out what their own best action is.

Causal decision theory fails at reasoning about intelligent opponents in some spectacular ways.

## Newcomb's Problem

Newcomb's problem goes like this:

Some super-powerful agent called Omega is known to be able to predict with perfect accuracy what anyone will do in any situation. Omega confronts a causal decision theorist with the following dilemma: "Here is a large box and a small box. The small box has $1000 in it. If I have predicted that you will only take the large box, then I have put $1 million into it. If I have predicted that you will take both boxes, then I have left the large box empty."

Since Omega has already made their decision, the large box is already filled or not filled. Nothing that the causal decision theorist can do now will change that. The causal decision theorist will therefore take both boxes, because either way that means they get an extra $1000.

But of course Omega predicts this, and the large box is empty.

Since causal decision theory doesn't work on some problems that a human can easily solve, there must be a better way.

Evidential decision theorists will only take the large box in Newcomb's problem. They'll do this because they will think to themselves: "If I later received news that I had taken only one box, then I'd know I had received $1 million. I prefer that to the news that I took both boxes and got $1000, so I'll take only the one box."

So causal decision theory can be beaten on at least some problems.

# Why not EDT?

Evidential decision theory works by considering the news that they have performed a certain action. Whatever news is the best news, that's what they will do. Evidential deciders don't manipulate a model of the world to calculate the best event; they simply calculate the probability of a payoff given a certain choice. This intuitively seems like it would be easy to take advantage of, and indeed it is. Evidential decision theorists can be led astray on certain problems that a normal human will do well at.

Consider the problem of an extortionist who writes a letter to Eve the evidential decider. Eve and the extortionist both heard a rumor that her house had termites. The extortionist is just as good as Omega at predicting what people will do. The extortionist found out the truth about the termites, and then sent the following letter:

Dear Eve,

I heard a rumor that your house might have termites. I have investigated, and I now know for certain whether your house has termites.
I have sent you this letter if and only if exactly one of the following is true:

a) Your house does not have termites, and you send me $1000.
b) Your house does have termites.

Sincerely,
The Notorious Termite Extortionist

Eve knows that it will cost more than $1000 to fix the termite problem. So when she receives the letter, she will think to herself: "If I learn later that I paid the extortionist, then that would mean that my house didn't have termites. That is cheaper than the alternative, so I will pay the extortionist."

The problem here is that paying the extortionist doesn't have any impact on the termites at all. That's something that Eve can't see, because she doesn't have a concrete model that she's using to predict outcomes. She's just naively computing the probability of an outcome given an action. That only works when she's not playing against an intelligent opponent.

If the extortionist tried to use this strategy against a causal decision theorist, the letter would never be sent. The extortionist would find that the house didn't have termites and would predict that the causal decision theorist would not pay, so both conditions of the letter are false. A causal decision theorist would never have to worry about such a letter even arriving.

# Why FDT?

EDT is better in some situations, and in other situations CDT is better. This implies that you could do better than either by choosing the right decision theory in the right context. That, in turn, implies that you could make a completely better decision theory, which may just be MIRI's functional decision theory.

Functional decision theory asks: what is the best thing to decide to do?

The functional decider has a model of the world that they use to predict outcomes, just like the causal decider. The difference is in the way the model is used. A causal decider will model changes in the world based on what actions are taken. A functional decider will model changes in the world based on what policies are used to decide.

A functional decision theorist would take only one box in Newcomb's problem, and they would not succumb to the termite extortionist.
## FDT and Newcomb's problem

When presented with Newcomb's problem, a functional decider would make their decision based on what decision was best, not on what action was best. If they decide to take only the one box, then they know that they will be predicted to make that decision. Thus they know that the one box will be filled with $1 million.

If they decide to take both boxes, then they know they will be predicted to take both boxes. So the large box will be empty.

Since the policy of deciding to take one box does better, that is the policy that they use.

## FDT and the Termite Extortionist

Just like the causal decider, the functional decider will never get a letter from the termite extortionist. If there's ever a rumor that the functional decider's house has termites, the extortionist will investigate. If there are no termites, then the extortionist will predict what the functional decider will do upon receiving the letter:

If I decide to pay the extortion letter, then the extortionist will predict this and send me this letter. If I decide not to pay, then the extortionist will predict that I won't, and will not send me a letter. It is better to not get a letter, so I will follow the policy of deciding not to pay.

The functional decider would not pay, even if they got the letter, because paying would guarantee getting the letter.

## The differing circumstances for CDT and EDT

Newcomb's problem involves a predictor that models the agent and determines the outcome.

The termite extortionist involves a predictor that models the agent, but imposes a cost that's based on something that the agent cannot control (the termites).

The concept needed to handle both of these types of problems is called subjunctive dependence.

Causal dependence between A and B: A causes B

Subjunctive dependence between A and B: A and B are computing the same function

FDT is to subjunctive dependence as CDT is to causal dependence.

A Causal Decider makes decisions by assuming that, if their decision changes, anything that can be caused by that decision could change.

A Functional Decider makes decisions by assuming that, if the function they use to choose an action changes, anything else that depends on that function could change (including things that happened in the past). The functional decider doesn't actually believe that their decision changes the past. They do think that the way they decide provides evidence for what past events actually happened if those past events were computing functions that the functional decider is computing in their decision procedure.
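The idea of two processes "computing the same function" can be shown with a toy sketch. Everything here is a hypothetical illustration, not the paper's formalism: Omega's simulation and the agent run identical decision code, so their outputs covary even though neither physically causes the other.

```python
# One decision procedure, instantiated in two physically separate places.
def decision_procedure(observation):
    # A stand-in policy: one-box on Newcomb, pay the desert driver.
    return "one-box" if observation == "newcomb" else "pay"

omega_prediction = decision_procedure("newcomb")  # Omega simulates the agent
agent_action = decision_procedure("newcomb")      # the agent decides later

print(omega_prediction == agent_action)  # True
```

Changing the function changes both outputs at once; that shared dependence on the function's output is what FDT intervenes on.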

# Do you support yourself?

One final recommendation for functional decision theory is that it endorses its own use. A functional decider will make the same decision, regardless of when they are asked to make it.

Consider a person trapped in a desert. They're dying of thirst, and think that they are saved when a car drives by. The car rolls to a stop, and the driver says "I'll give you a ride into town for $1000." Regardless of whether the person is a causal, evidential, or functional decider, they will pay the $1000 if they have it.

But now imagine that they don't have any money on them.

"Ok," says the driver, "then I'll take you to an ATM in town and you can give me the money when we get there. Also, my name is Omega and I can completely predict what you will do."

If the stranded desert-goer is a causal decider, then when they get to town they will see the problem this way:

I am already in town. If I pay $1000, then I have lost money and am still in town. If I pay nothing, then I have lost nothing and am still in town. I won't pay.

The driver knows that they will be cheated, and so drives off without the thirsty causal decider.

If the desert-goer is an evidential decider, then once in town they'll see things this way:

I am already in town. If I later received news that I had paid, then I would know I had lost money. If I received news that I hadn't paid, then I would know that I had saved money. Therefore I won't pay.

The driver, knowing they're about to be cheated, drives off without the evidential decider.

If the desert-goer is a functional decider, then once in town they'll see things this way:

If I decide to pay, I'll be predicted to have decided to pay, and I will be in town and out $1000. If I decide not to pay, then I'll be predicted not to pay, and I will still be in the desert. Therefore I will decide to pay.

So the driver takes them into town and they pay up.

The problem is that causal and evidential deciders can't step out of their own algorithm enough to see that they'd prefer to pay. If you give them the explicit option to pay up-front, they would take it.

Of course, functional deciders also can't step out of their algorithm. Their algorithm is just better.

## The Deciders

This is based on MIRI's FDT paper, available here

Eve, Carl, and Fiona are all about to have a very strange few days. They don't know each other, or even live in the same city, but they're about to have similar adventures.

# Eve

Eve heads to work at the usual time. As she walks down her front steps, her neighbor calls out to her.

"I heard a rumor that your house has termites," says the neighbor.

My dear reader: you and I know that Eve's house doesn't have termites, but she doesn't know that.

"I'll have to look into it," responds Eve, "but right now I'm late for work." And she hurries off.

As she's walking to work, Eve happens to meet a shadowy stranger on the street. That shadowy stranger is carrying a large box and a small box, which are soon placed on the ground.

"Inside the small box is $1000," says the stranger. "Inside the big box, there may be $1 million, or there may be nothing. I have made a perfect prediction about what you're about to do, but I won't tell you. If I have predicted that you will take only the big box, it will have $1 million in it. If I have predicted that you will take both boxes, then I left the big box empty. You can do what you want." Then the stranger walks off, ignoring Eve's questions.

Eve considers the boxes. The mysterious stranger seemed trustworthy, so she believes everything that she was told. Eve thinks to herself: if I was told later that I took only the big box, then I'd know I'd have $1 million. If I were told I had taken both boxes, then I'd know that I only had $1000. So I'd prefer to have only taken the big box.

She takes the big box. When she gets to work, she opens it to find that it is indeed full of ten thousand hundred-dollar bills. She is now a millionaire.

Eve goes straight to the bank to deposit the money. Then she returns home, where she finds a strange letter. The letter is from the notorious termite extortionist. The termite extortionist has been in the news a few times recently, so Eve knows that the villain is for real. The letter reads:

Dear Eve,

I heard a rumor that your house might have termites. I have investigated, and I now know for certain whether your house has termites. I have sent you this letter if and only if exactly one of the following is true:

a) Your house does not have termites, and you send me $1000.
b) Your house does have termites.

Sincerely,
The Notorious Termite Extortionist

If her house has termites, it will take much more than $1000 to fix. Eve thinks about the situation. If she were to find out later that she had paid the extortionist, then that would mean that her house did not have termites. She prefers that to finding out that she hadn't paid the extortionist and had to fix her house. Eve sends the extortionist the money that was asked for.

When she checks her house, she finds that it doesn't have termites, and is pleased.

Eve decides to take the bus to work the next day. She's so distracted thinking about everything that's happened recently that she gets on the wrong bus. Before she knows it, she's been dropped off in the great Parfit Desert. The Parfit Desert is a terrible wasteland, and there won't be another bus coming along for over a week. Eve curses her carelessness. She can't even call for help, because there's no cell signal.

Eve spends two days there before a taxi comes by. By this point, she is dying of thirst. It seems she would do anything to get out of the desert, which is what she says to the taxi driver.

"It's a thousand dollars for a ride into town," says the taxi driver.

"I left my money at home, but I'll pay you when we get there," says Eve.

The taxi driver considers this. It turns out that the taxi driver is a perfect predictor, just like the mysterious stranger and the termite extortionist. The driver won't be able to compel Eve to pay once they're in town. And when they get to town, Eve will think to herself: If I later found out that I'd paid the driver, then I'd know I'd lost $1000. And if I later found out that I hadn't paid the driver, then I'd know I'd lost no money. I'd rather not pay the driver.

The taxi driver knows that Eve won't pay, so the driver goes off without her. Eve dies of thirst in the desert.

Eve has $999,000, her house does not have termites, and she is dead.

# Carl

As he heads to work, Carl's neighbor mentions a rumor about termites in Carl's house. Carl, also late for work, hurries on. A mysterious stranger approaches him and offers him two boxes. The larger box, Carl understands, will only have $1 million in it if the stranger predicts that Carl will leave the smaller box behind.

As Carl considers his options, he knows that the stranger has either already put the money in the box or not. If Carl takes the small box, then he'll have an extra $1000 either way. So he takes both boxes. When he looks inside them, he finds that the larger box is empty. Carl grumbles about this for the rest of the day.

When he gets home he finds that he has no mail. Now, dear reader, let's consider the notorious termite extortionist. The termite extortionist had learned that Carl's house might have termites. Just as with Eve's house, the extortionist investigated and found that the house did not, in fact, have termites. The extortionist considered Carl, and knew that if Carl received a letter he wouldn't pay. The extortionist knew this because he knew that Carl would say, "Either I have termites or not, but paying won't change that now." So the extortionist doesn't bother to waste a stamp sending the letter.

So there is Carl, with no mail to occupy his afternoon. He decides to catch a bus downtown to see a movie. Unfortunately, he gets on the wrong bus and gets off in the Parfit Desert. When he realizes that the next bus won't come for another week, he curses his luck and starts walking. Two days later, he's on the edge of death from dehydration. A taxi, the first car he's seen since he got off the bus, pulls up to him.

"It's a thousand dollars for a ride into town," says the taxi driver.

"I left my money at home, but I'll pay you when we get there," says Carl.

The taxi driver considers Carl. The driver won't be able to compel him to pay once they're in town. And when they get to town, Carl will think to himself: Now that I'm in town, paying the driver doesn't change anything for me. Either I give the driver $1000, or I save the money for myself.

The taxi driver knows that Carl won't pay when the time comes to do it, so the driver goes off without him. Carl dies of thirst in the desert.

Carl has $1000, his house does not have termites, and he is dead.

# Fiona

As Fiona leaves home for work, her neighbor says to her, "I heard a rumor that your house has termites."

"I'll have to look into that," Fiona replies before walking down the street.

Partway to work, a mysterious stranger confronts her.

"Yes, yes, I know all about your perfect predictions and how you decide what's in the big box," says Fiona as the stranger places a large box and a small box in front of her. The stranger slinks off, dejected at not being able to give the trademarked speech.

Fiona considers the boxes. If I'm the kind of person who decides to take only the one large box, then the stranger will have predicted that and put $1 million in it. If I'm the kind of person who decides to take both boxes, the stranger will have predicted that and left the big box empty. I'd rather be the kind of person that the stranger predicts as deciding to take only one box, so I'll decide to take one box.

Fiona takes her one large box straight to the bank, and is unsurprised to find that it contains $1 million. She deposits her money, then goes to work. When she gets home, she finds that she has no mail.

Dear reader, consider with me why the termite extortionist didn't send a letter to Fiona. When the termite extortionist learned of the rumor about Fiona's house, the resulting investigation revealed that there were no termites. The extortionist predicted that Fiona's response would be this: If I'm the kind of person who would decide to send money to the extortionist, then the extortionist would know this about me and send me an extortion letter. If I'm the kind of person who would decide not to give money to the extortionist, then the extortionist wouldn't send me a letter. Either way, the cost due to termites is the same. So I'd prefer to decide not to pay the extortionist.

The extortionist knows that Fiona won't pay, so the letter is never sent.

Fiona also decides to see a movie. In a fit of distraction, she takes the wrong bus and ends up in the Parfit Desert. When she realizes that the next bus won't be along for a week, she starts walking. Two days later, Fiona is on the edge of death when a taxi pulls up.

"Please, how much to get back to the city? I can't pay now, but I'll pay once you get me back," says Fiona.

"It's $1000," says the taxi driver.

The taxi driver considers Fiona's decision-making process.

When Fiona is safely in the city and deciding whether to pay the taxi driver, she'll think to herself: If I were the kind of person who decided to pay the driver, then the driver would know that and take me here. If I were the kind of person who decided not to pay the driver, then the driver wouldn't give me a ride. I'd rather be the kind of person who decided to pay the driver.

The taxi driver takes Fiona back to the city, and she pays him.

Fiona has $999,000, her house doesn't have termites, and she is alive.

Dear reader, the one question I want to ask you is: who is spreading all those rumors about termites?

## The Weakness of Rules of Thumb

There's a common issue that comes up when I'm teaching people how to design electronics: people new to designing electronics often feel like they need to obey all the rules of thumb. This came up recently when somebody I was teaching wanted to make sure none of her PCB's electrical traces had 90 degree bends in them. There was one particular point on her board that couldn't be made to fit that rule. When she realized that she'd have to put a 90 degree bend in her trace, her question to me was whether that was "valid and legal."

I probed her understanding a bit, and it seemed she was mostly thinking about the design guidelines as though they were laws, and she didn't want her design to break any laws. This kind of thinking is pretty common, but I think it actively prevents people from designing electronics effectively. People with this mindset focus too much on whether their design meets some set of rules, and not enough on whether their design will actually work.

# Where Design Guidelines Come From

Electronics is a domain that has a lot of rules of thumb. There's some pretty complicated physics behind how electrons act on a circuit board, and you can often simplify an engineering problem down to a simple rule. After a few decades of the industry doing this, there's a large collection of rules. New engineers sometimes learn the rules before learning the physical principles that drive them, and then don't know when the rules don't apply.

For example, the advice not to put 90 degree bends in electrical traces comes from the fact that sharp bends increase the reactive impedance of a trace. For high frequency traces, this can distort the electrical signal flowing down the trace. For low frequency or DC signals, 90 degree bends are much less of an issue.
Ultimately, any rule of thumb rests on a concrete foundation of "if this condition holds, then that result will be produced in a specific manner." If you know the detailed model that drives the simple rule, you know when to ignore the rule.

# Obeying The Rules or Designing a Working Project

The mindset that I sometimes see in new students is the idea that they need to follow all the rules. This makes sense if you assume that following all the rules will automatically lead to a working project. Unfortunately, the rules of thumb in electronics over-constrain a circuit board. Electrical engineers will often face the prospect of a design guideline that can't be satisfied.

The most effective response to a rule of thumb that can't be satisfied seems to be to ask about the physics behind the rule. Then the engineer can figure out how to change the design to match the physical laws behind the general guideline.

The design guidelines were made for the people, not the people for the design guidelines. Don't ask "how can I make this design satisfy all the design guidelines?" Instead ask "how can I make this design work?"

## Corrigibility

This post summarizes my understanding of the MIRI Corrigibility paper, available here.

If you have a super powerful robot, you want to be sure it's on your side. The problem is, it's pretty hard to specify what it even means to be on your side. I know that I've asked other people to do things for me, and the more complicated the task is, the more likely it is to be done in a way I didn't intend. That's fine if you're just talking about decorating for a party, but it can cause big problems if you're talking about matters of life or death.

# Overrides

Since it's hard to specify what your side actually is, it might make sense to just include an override in your super powerful robot. That way if it starts misbehaving, you can just shut it down. So let's say that you have an emergency stop button.
It's big and red and easy to push when things go south. What exactly happens when that button gets pushed?

Maybe the button cuts power to the computer that runs your robot. The problem with that is that your robot may have set up a bunch of sub-agents online, and a simple power switch wouldn't affect them. No, that e-stop button needs to have some pretty complex logic behind it to actually stop things.

Maybe the button puts your robot into a different mode. Before you press the button, the robot is trying to make you a sandwich. After you press the button, the robot is trying to shut down the peanut butter spreader without dropping a knife on anyone's foot. The problem there is that if your robot is complex enough to pursue a goal, then it may try to prevent you from stopping it. Right now, most robots just execute specific pre-programmed actions. A super-powerful robot will need to generate new actions to achieve difficult goals. If a robot's goal is to make a sandwich, and it knows it won't be able to if you press the button, then step one of sandwich making may be to kill you so you don't press the button.

So the original goal mode, before the button is pressed, needs to incorporate an equanimity with the button being pushed. That's what it means for a robot to be corrigible in the sense that MIRI means.

# Encouraging Equanimity

You can say that the thing your robot wants is represented by its value function: some mathematical representation of its desires (and those desires may not be recognizable as such to a human). The robot will do whatever maximizes the value function. You want to make that value function be ok with the e-stop button being pressed.

The way that the MIRI paper recommends doing this is to combine two different value functions. One of those value functions is where the robot does what you told it to. The other value function is where the robot shuts down nicely.
Combining those value functions is complicated, since the first one needs to have equanimity with the second.

## Simple Combination of Value Functions

You might try to combine the value functions as simply as possible: if the button isn't pressed, the value is just the normal value function. If the button is pressed, then the value is 100 if the robot is shutting down and 0 otherwise.

The robot may do a bunch of different things over time, and the value it expects to get may change a lot after each action. Maybe in the course of making a sandwich it drops the bread, so the sandwich will be gross and provide low value. Or maybe it discovers that adding heroin to the sandwich makes people love it so much more, thus increasing the value of the sandwich. Whatever the actual e-stop value function is, the normal value function could be much worse (maybe -10) or much better (maybe 200) depending on what actions the agent takes. This gives the agent a reason to try to manage whether the button gets pressed. This obviously doesn't help us.

## Perfect Indifference

Since different actions have an impact on the normal value function, you may want them to have an effect on the e-stop value function as well. Think about it this way: let's say you have a lottery ticket, and the winner hasn't been announced yet. If your friend offers you $20 for the ticket and you paid $1 for it, then that seems like a pretty good deal. But what if you check online and find out that your ticket is the winner? Now your friend would have to offer you much more than $20 to make you give up the ticket.

By making the e-stop value function dependent on what the robot already knows, you can make sure that it's always willing to obey the button.

The MIRI paper describes this as a combination of bonus and penalty. If pressing the e-stop would ordinarily make the robot lose value, then the e-stop value function gives it a bonus equal to what it would have lost. If pressing the e-stop would instead make the robot gain value, then the e-stop value function imposes a penalty equal to what it would have gained in normal mode.

This makes the robot perfectly indifferent to whether the button is pressed, but it does have some surprising impacts.
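The difference between the simple combination and the bonus/penalty correction can be sketched in a few lines of Python. This is a toy of my own construction, not code from the MIRI paper; it reuses the hypothetical payoffs above (-10 for the dropped-bread sandwich, 200 for the heroin sandwich, and a raw shutdown value of 100):

```python
def naive_value(expected_normal, button_pressed, raw_shutdown=100):
    """Simple combination: normal value before the press, a fixed
    shutdown value after it."""
    return raw_shutdown if button_pressed else expected_normal

def corrected_value(expected_normal, button_pressed, raw_shutdown=100):
    """Bonus/penalty combination: the shutdown branch is shifted by
    (expected_normal - raw_shutdown), so it always equals whatever
    the agent expected to get in normal mode."""
    if button_pressed:
        return raw_shutdown + (expected_normal - raw_shutdown)
    return expected_normal

for expected_normal in (-10, 100, 200):
    naive_gap = naive_value(expected_normal, False) - naive_value(expected_normal, True)
    corrected_gap = corrected_value(expected_normal, False) - corrected_value(expected_normal, True)
    # The naive gap swings from -110 to +100 as the agent learns more,
    # giving it a reason to cause or prevent the button press. The
    # corrected gap is always zero: perfect indifference.
    assert corrected_gap == 0
```

The correction term is recomputed from whatever the agent currently expects, which is what makes the indifference hold no matter what it has learned.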

## Problems with Perfect Indifference

The perfect indifference between e-stop and normal valuations is a problem. It means that there's nothing that the robot would ever do to preserve the e-stop button from destruction. You, the robot creator, value the e-stop button. The robot does not. And if it could pay a tiny cost to prevent the e-stop button from being disconnected by an errant roomba, it would not. It would not even let you know that the event had occurred.
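The same point can be made concrete with a tiny self-contained toy (my own construction, with made-up numbers): under any scheme where the pressed and unpressed branches are worth exactly the same, spending anything at all on button maintenance is pure cost.

```python
EXPECTED_NORMAL = 50   # hypothetical value the agent expects in normal mode
PROTECTION_COST = 1    # hypothetical tiny cost to keep the button wired up

def plan_value(protect_button):
    # Indifference means the pressed and unpressed branches are equal,
    # so the button's fate drops out of the comparison entirely; only
    # the protection cost remains.
    branch_value = EXPECTED_NORMAL
    return branch_value - (PROTECTION_COST if protect_button else 0)

best_plan = max([True, False], key=plan_value)
assert best_plan is False  # the agent lets the button be destroyed
```

However small the cost, the protective plan always scores strictly lower, so the indifferent agent never takes it.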

So if something goes seriously wrong with your robot, you don't have any guarantee that the e-stop button will still work when you press it.

# The State of the Art

The description above is pretty much the state of the art in super-powerful e-stop buttons. The MIRI paper argues that knowing how to combine utility functions in this way is crucial to being able to depend on super-intelligent AIs. They point to a few different things that may help, but don't have a solution yet.

# Things to look into later

## Complex Shutdown Procedures

What if you want it to do something other than shut down? You can write your e-stop utility function to be anything you want. The more complicated it gets, the more you might want an e-stop for your e-stop.

## Equanimity or Begrudging Acceptance

It doesn't make sense to me that you'd want your robot to be equally ok with the button being pressed or not pressed. In that case, why wouldn't it just flip a coin and press the button itself if the coin comes up heads? To me it makes more sense that it does want the button to be pressed, but that all the costs of actually causing it to be pressed are higher than the benefit the robot gets from it. In this case the robot may be willing to pay small costs to preserve the existence of the button.

Depending on how the expected values of actions are computed, you could attach an ad-hoc module to the robot that automatically makes the cost of pressing the button slightly higher than the benefit of doing so. This ad-hoc module would be unlikely to be preserved in sub-agents, though.

## Costs the Robot Maker Can Pay

One of the assumptions behind the combined value function approach is that the normal value function is untouched by the addition of the e-stop value function.

You want your robot to make you a sandwich, and adding an e-stop button shouldn't change that.

But I'm perfectly ok with the robot taking an extra two minutes to make my sandwich safely. And I'm ok with it taking food out of the fridge in an efficient order. And I'm ok with it using 10% more electricity to do it.

There are a number of inefficiencies that I, as a robot builder, am willing to put up with to have a safe robot. It seems like there should be some way to represent that as a change to the normal value function, allowing better behavior of the robot.

## Don't be afraid to move down the ladder of abstraction

In programming, there's an idea that's often called the ladder of abstraction. When you approach a problem, you can understand small bits of it and then put those together into larger pieces. By thinking about the problem with these larger pieces, you can get a better idea of what's going on.

A piece of advice that's often given is to move up the ladder of abstraction. Build a tool or function that does a low level thing, then just use that instead of looking at the lower level again. When you're starting from scratch on a project, this is a great idea. Using the ladder of abstraction allows you to quickly build things that work well, without having to keep solving the same problems over and over again.

However, there are times that it makes total sense to move down the ladder of abstraction, and look at what's going on as concretely as possible. This is especially true if you're debugging, and trying to fix something that's broken. Higher levels of abstraction obscure what's actually happening, which makes it difficult to isolate a problem so it can be fixed.

That's not to say that bugs should always be hunted in the weeds. Moving up the ladder of abstraction can help you to find out which particular component of a larger system is the source of the problem. Once that's been determined, you'll have to be more concrete with that component in order to solve the problem.

I also think this kind of model is good for solving more than programming problems. I've successfully used the idea of changing levels of abstraction to solve software bugs, fix hardware errors, and figure out how to deal with socially difficult situations. I would expect the idea to also work well in the softer sciences, like politics, but it seems like people often get stuck at one level in those areas.

I sometimes have conversations with people about political systems that have clear problems, like how global warming is dealt with in the US. Sometimes, the solution proposed is systemic change. When I ask what that means, answers are often given at the level of the entire political system, rather than at the level of what specific people or groups should do. While I agree that "the system" needs to change, I think trying to change the system as a whole is ineffective. It would be much more effective to move down a level of abstraction and suggest who should do what differently. Once that's done, the system is different. Systemic change has happened, but at a level that is easier to impact.

I think that some aspects of political discontent stem from being stuck at one level of abstraction. If you think that global warming or poverty needs to be solved at the level of the US government, then the problem is so huge that it's hard to see how you could do anything about it. It's easy to get overwhelmed that way. On the other hand, if you think of those problems as being generated by smaller sub-components, then you have places to look for actions that are achievable.

I don't have the answers for those large systemic political issues right now, but I do think that this idea from software can be of help. By being willing to move to more concrete understandings, we can solve problems that seem intractable.

## Red Spirits

There's a story I heard from a friend at a recent Rationality meetup. It goes like this:

When Europeans were colonizing Africa, they told some Africans that they had to move their city. Their city was on a plain, and the Europeans wanted a nice city like those at home: on a river. The Africans objected, saying that they couldn't live near the river. That's where the red spirits were, and people would suffer if they lived there. The Europeans made them do it anyway, because red spirits clearly don't exist. And then everyone got malaria.

I think there are two things going on here:

1. The colonizers were basically assuming that the moral of a fairy tale wasn't useful because the fairy tale wasn't true.

2. The Europeans were ignoring a story because it didn't fit in with the terminology that they already used to describe the world.

# Fairy Tales With Morals

The colonizers assumed that, because the justification for a custom was contradicted by scientific understanding, the custom wasn't valuable. Red spirits don't exist, so there's no reason to follow the custom.

The issue with this is that culture is subject to evolutionary pressure in the same way as genes. Cultures that lead to their adherents prospering are more likely to be present in the future, so any currently existing cultural artifact should be assumed to have served some important purpose in the past. That purpose may not be clear, or it may not be one that you agree with in a moral sense, or it may not apply in the present, but the purpose almost certainly existed.

This is basically a Chesterton's Fence argument at the cultural level. If the colonizers hadn't assumed that something they couldn't see a reason for had no reason, many people's lives could have been saved.

# Science Stories

The terminology mistake is, in my opinion, even more dire. The colonizers argued that red spirits didn't exist, so people should move to the river. My friend who told me this story argued that the native villagers were mistaken for believing in red spirits, and that they should instead have believed in mosquitos.

The problem is that it isn't clear from the story that there's any difference between believing in mosquitos and believing in red spirits. Maybe red spirits just means mosquitos. Or maybe it means malaria. The story doesn't have enough information to tell if the villagers were actually wrong about anything. When I brought this up, my friend couldn't answer any questions about what believing in red spirits actually meant to the villagers.

This is a failure mode that I think is common to people who describe themselves as scientists. I've noticed that people who describe observations in a way that doesn't use standard scientific jargon are often dismissed by people who are super into science. That happens even more if the description given uses words often used by marginalized sub-cultures.

People may be describing the exact same observations, and using the same model to describe those observations, but argue because they're using different terminology. It seems important to actually try to understand the model people have in their heads, and try to avoid quibbles about how they describe that model as much as possible.

There's another level to this if you assume that red spirits actually means ghosts in the western sense. Science aficionados like to talk about the value of testability, but both the mosquito and ghost models are testable. If you think that mosquitos carry tiny cells that can reproduce in your body and make you sick, that implies certain things you can do to prevent disease. If you think that ghosts get angry if you live in a certain area and make you sick, that implies other methods of prevention. People can try these prevention methods and see what works; they can test their theories. Just going in and saying that ghosts don't exist totally ignores any tests that the villagers actually did before you got there.

# The Use of Red Spirits

Even assuming that red spirits literally meant believing in ghosts, that idea was saving lives at the time that the colonizers moved in. It seems like there are a lot of fairy tales like this: explanations whose constituent parts don't correspond with things in the real world, but that still accurately predict patterns in the real world.

I think that this is the source of a lot of cultural relativism and post-modernism. If someone thinks only of the outcome of explanations, then the actual truth value of the component parts of the explanation don't matter. All explanations are as valid as they are useful to their culture. Since cultural evolution strongly implies all stories and explanations serve some useful purpose, every story a culture tells is useful. Therefore all explanations are true.

The only mistake that I see with that is the idea that a useful fairy tale implies that each component of the fairy tale is useful.

Having an explanation whose component parts each correspond to something that can be observed in the real world is useful on its own. If you have such a model, you can mentally vary different aspects of it and predict the outcome. It's easier to use subjunctive reasoning on a model with true parts than on a model that is only useful when taken as a whole. You can even take small sections of the model and apply them in other circumstances.

Thinking that mosquitos cause malaria implies that you should avoid mosquitos, which (as we now know) can actually prevent you from getting sick. Thinking ghosts cause malaria might be useful if you end up avoiding mosquitos while also avoiding ghosts. Given that avoiding ghosts leads to avoiding mosquitos, the main reason to prefer one of these over the other is if one is less onerous.

Beliefs and stories rent out a share of your brain by being useful to you. As the landlord of your brain, it seems like the best thing to do is get beliefs that will pay you a lot in usefulness while requiring little mental real estate for themselves. Believing in and acting on the mosquito-malaria connection takes a certain amount of mental effort. I'm not sure what a belief in a ghost-malaria connection that actually led to avoiding malaria would entail, but I can guess that it would be more mentally costly than the mosquito-malaria alternative.