Pursuit of Happiness

Life, liberty, and the pursuit of happiness. When I first learned about the Declaration of Independence, I thought the pursuit of happiness was an odd choice there. What did that have to do with government? Certainly the government shouldn't kill people, and certainly it shouldn't deprive them of freedom, but the pursuit of happiness is an internal thing. How could a government have anything to do with that?

I've been reading a history of peri-enlightenment France called "Passionate Minds" recently. It argues that the pursuit of happiness is actually the most subversive of the three unalienable rights. It turns out that monarchies often claim their power as a divine gift. In that case, common people are spiritually bound to work for the monarch. Working for yourself is just an affront to God.

Many people in Christendom seem to have viewed life as a suffer-fest that they worked at so that they could get to heaven. Even if they thought they could improve their life, it wouldn't have seemed acceptable to try. Making the pursuit of happiness a right directly contradicted much church doctrine of the time.

Christmas Spirit

Christmas is a weird time for me.

When I was a kid, Christmas was the time that I had to be very careful to not show either of my (divorced) parents more favor than the other. There was a lot of careful planning among my whole family to make sure that Christmas was evenly divided. I had to be sure to play with all my toys in front of the people who gave them to me. I had to be sure to spend an equal amount of time with each parent. I had to be sure I told everyone that I loved them. My feeling around Christmas was one of brittleness, of walking on eggshells. All of my Christmas traditions were things I did to show that I cared about someone. They were mostly about display.

My wife had a very different Christmas experience growing up, and it's been a bit of a trip getting used to it. Christmas for her is a series of traditions done out of fun and joy. The strange thing to me is that her traditions are mostly the same things, but they feel very different when I do them with her and her family.

I'm realizing that my walking-on-eggshells feeling at childhood Christmases was mainly an internal thing. As I let go of the need to manage other people's feelings, Christmas gets more fun even with my family. This year I even enjoyed my own family Christmas traditions when visiting my mom. They weren't a thing I had to be sure to do right, at risk of hurting a loved one. They were a thing that we could all just enjoy doing together.

I am taking more of a risk that I offend someone, but I'm also feeling less like that's the most important thing. If someone gets offended or upset, that now seems like a chance to talk about real feelings and figure things out. As a kid I felt that any problem was world-ending, and I'm now realizing that most of my most feared interpersonal problems are recoverable.

The best thing for me about this new way of doing Christmas has been a deep sense of being at home. Sitting together and looking at a Christmas tree, or at the snow outside, took on a deeper sense of meaning than it ever has for me. I had a sense of being connected, not just to my family, but to all of the people throughout history who have looked with joy at new fallen snow. I had a sense of my own place in the world, which I'm not sure I'd noticed I'd never had.

Today, after the torn bits of paper and string had been cleaned away, I sat and looked out at the snow with my wife and had a great internal feeling of peace. A true feeling of Christmas spirit.

What's funny is how much that Christmas spirit worried me. As soon as I started noticing myself feeling like things were fine, I started worrying that I'd lose all motivation. If I'm happy and at peace, why work to make the world a better place?

I think my worry points to something important about the reasons that I do non-Christmas things. Childhood Christmases were all about making sure I did things to let people know I cared about them, not actually about enjoying the day or actually even caring about them. Perhaps many of my motivations for the rest of my life are based on similar foundations.

I'd like to enjoy my life and have a deep sense of meaning from everything I do. I'd also like to make people's lives better and do as much as I can to fix the world's problems. On the surface these things don't seem incompatible. This is something I'll be exploring in the year to come.

Maybe next Christmas I won't be so surprised, or worried, when I feel the Christmas-spirit coming on.

In praise of Ad Hominems

Ad hominems get a bad rap.

They don't always deserve it: there are instances where knowing that the person who thought up an idea has certain flaws is very useful in evaluating the idea.

In the best case scenario, I can evaluate every argument I hear on its own merits. Unfortunately, I'm often too busy to put enough time into every argument that I hear. I might just read enough of an argument to get the gist, and then move on to the next thing I'm interested in. This has bitten me a few times.

If I know that the author of an article is intellectually sloppy, that actually helps me quite a bit when it comes to evaluating their arguments. I'll put more time into an article they've written, because I now feel that it's more important to evaluate it for myself.

If I know more specifically that an author doesn't understand supply and demand (or whatever), then that tells me exactly what parts of their argument to home in on for more verification.

The general case of just dismissing an argument because the person making it has some flaw does still seem bad to me. It makes sense to know what kind of person is giving the argument, because that can point you at places that the argument may be weakest. This allows you to verify more quickly whether you think the argument itself is right.

Ad hominems shouldn't end an argument, but they can be a useful argument direction-finder.

Seeing problems coming

I've written a lot about agent models recently. The standard expectation maximization method of modeling agents seems like it's subject to several weaknesses, but there also seem to be straightforward approaches to dealing with those weaknesses.

1. to prevent wireheading, the agent needs to understand its own values well enough to predict changes in them.
2. to avoid creating an incorrigible agent, the agent needs to be able to ascribe value to its own intentions.
3. to prevent holodeck addiction, an agent needs to understand how its own perceptions work, and predict observations as well as outcomes.
4. to prevent an agent from going insane, the agent must validate its own world-model (as a function of the world-state) before each use.

The fundamental idea in all of these problems is that you can't avoid a problem that you can't see coming. Humans use this concept all the time. Many people feel uncomfortable with the idea of wireheading and insanity. This discomfort leads people to take actions to avoid those outcomes. I argue that we can create artificial agents that use similar techniques.

The posts linked above showed some simple architecture changes to expectation maximization and utility function combinations. The proposed changes mostly depend on one tool that I left unexplored: representing the agent in its own model. The agent needs to be able to reason about how changes to the world will affect its own operation. The more fine-grained this reasoning can be, the more the agent can avoid the above problems.

Some requirements of the world-model of the agent are:

  • must include a model of the agent's values
  • must include all parts of the world that we care about
  • must include the agent's own sensors and sense methods
  • must include the agent's own thought processes

This is a topic that I'm not sure how to think about yet. My learning focus for the next while is going to shift to how models are learned (e.g. through reinforcement learning) and how agent self-reflection is currently modeled.

Agent Insanity

The wireheading and holodeck problems both present ways an agent can intervene on itself to get high utility without actually fulfilling its utility function.

In wireheading, the agent adapts its utility function directly so that it returns high values. In the holodeck problem, the agent manipulates its own senses so that it thinks it's in a high value state. Another way that an agent can intervene on itself is to manipulate its model of the world, so that it incorrectly predicts high valued states even given valid observations. I'll refer to this type of intervention as inducing insanity.

Referring again to the decision theoretic model, agents predict various outcomes for various actions, and then evaluate how much utility they get for an action. This is represented symbolically as p(state-s, a -> o; x)*Utility(o). The agent iterates through this process for various options of action and outcome, looking for the best decision.

Insanity occurs whenever the agent attempts to manipulate its model of the world, p(state-s, a -> o; x), in a way that is not endorsed by the evidence the agent has. We of course want the agent to change its model as it makes new observations of the world; that's called learning. We don't want the agent to change its model just so it can then have a high reward.

Insanity through recursive ignorance

Consider an agent that has a certain model of the world being faced with a decision whose result may make its model become insane. Much like the wireheading problem, the agent simulates its own actions recursively to evaluate the expected utility of a given action. In that simulation of actions, one of those actions will be the one that degrades the agent's model.

If the agent is unable to represent this fact in its own simulation, then it will not be able to account for it. The agent will continue to make predictions about its actions and their outcomes under the assumption that the insanity-inducing act has not compromised it. Therefore the agent will not be able to avoid degrading its prediction ability, because it won't notice it happening.

So when recursing to determine the best action, the recursion has to adequately account for changes to the agent's model. Symbolically, we want to use p'(state-s, a -> o; x) to predict outcomes, where p' may change at each level of the recursion.

Predicting your decision procedure isn't enough

Mirroring the argument in wireheading, just using an accurate simulated model of the agent at each step in the decision recursion will not save the agent from insanity. If the agent is predicting changes to its model and then using changed models uncritically, that may only make the problem worse.

The decision theory algorithm assumes that the world-model the agent has is accurate and trustworthy. We'll need to adapt the algorithm to account for world-models that may be untrustworthy.

The thing that makes this difficult is that we don't want to limit changes to the world-model too much. In some sense, changing the world-model is the way that the agent improves. We even want to allow major changes to the world-model, like perhaps switching from a neural network architecture to something totally different.

Given that we're allowing major changes to the world-model, we want to be able to trust that those changes are still useful. Once we predict a change to a model, how can we validate the proposed model?

Model Validation

One answer may be to borrow from the machine learning toolbox. When a neural network learns, it is tested on data that it hasn't been trained on. This dataset, often called a validation set, tests that the network performs well and helps to avoid some common machine learning problems (such as overfitting).

To bring this into the agent model question, we could use the observations that the agent has made to validate the model. We would expect the model to support the actual observations that the agent has made. If a model change is predicted, we could run the proposed model on past observations to see how it does. It may also be desirable to hold out certain observations from the ones generally used for deciding on actions, in order to better validate the model itself.
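As a rough sketch of this idea (all names, the toy coin model, and the threshold are my own invention, not an established framework), a proposed model can be scored against held-out observations and rejected when it can no longer explain the agent's own history:

```python
import math

# Toy sketch: accept a proposed world-model only if it assigns reasonable
# probability to observations the agent has already made and held out.

class CoinModel:
    """Illustrative model: the world is a biased coin."""
    def __init__(self, p_heads):
        self.p_heads = p_heads

    def log_likelihood(self, obs):
        p = self.p_heads if obs == "heads" else 1.0 - self.p_heads
        return math.log(max(p, 1e-12))

def valid_model(model, held_out_observations, threshold=-5.0):
    """Average log-likelihood on held-out data must clear a threshold."""
    total = sum(model.log_likelihood(obs) for obs in held_out_observations)
    return total / len(held_out_observations) > threshold

held_out = ["heads"] * 7 + ["tails"] * 3

proposed_sane = CoinModel(0.7)       # consistent with past experience
proposed_insane = CoinModel(0.0001)  # predicts almost no heads, despite the data

assert valid_model(proposed_sane, held_out)
assert not valid_model(proposed_insane, held_out)
```

A real validation function would of course need to handle far richer models than a coin, but the shape of the check is the same: the candidate model must still account for what the agent has actually seen.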

In the agent model formalism, this might look like:

function decide(state-s):
  max_a = 0
  for a in available actions:
    utility(a) = 0
    for outcome o in possible outcomes:
      if not valid_model(state-s, x):
        utility(a) += Utility(insanity)
      else:
        utility(a) += p(state-s, a -> o; x)*Utility(o)
    end for
    if (max_a == 0 or (utility(a) > utility(max_a)))
      max_a = a
    end if
  end for
  return action max_a

function transition(old_state, action_a):
  return new_state obtained by taking action_a in old_state;

function Utility(test_state):
  if test_state == insanity:
    return value(insanity) // some low value

  current_value = value(test_state)
  future_value = value(transition(test_state, decide(test_state)))
  return (current_value + future_value)

In this formalism, we check to see if the model is sane each time before we use it. The valid_model function determines if the model described in state-s is valid given the observations x.

Creating a function that can validate a model given a world state is no easy problem. The validation function may have to deal with unanticipated model changes, models that are very different than the current one, and models that operate using new ontologies.

It's not totally clear how to define such a validation function, and if we could, that may solve most of the strong AI problem in the first place.

If we don't care about strong improvements to our agent, then we may be able to write a validation function that disallows almost all model changes. By allowing only a small set of understandable changes, we could potentially create agents that we could be certain would not go insane, at the cost of being unable to grow significantly more sane than they start out. This may be a cost we want to pay.

The holodeck problem

The holodeck problem is closely related to wireheading. While wireheading directly stimulates a reward center, the holodeck problem occurs when an agent manipulates its own senses so that it observes a specific high value scenario that isn't actually happening.

Imagine living in a holodeck in Star Trek. You can have any kind of life you want; you could be emperor. You get all of the sights, smells, sounds, and feels of achieving all of your goals. The problem is that the observations you're making don't correlate highly with the rest of the world. You may observe that you're the savior of the human race, but no actual humans have been saved.

Real agents don't have direct access to the state of the world. They don't just "know" where they are, or how much money they have, or whether there is food in their fridge. Real agents have to infer these things from observations, and their observations aren't 100% reliable.

In a decision agent sense, the holodeck problem corresponds to the agent manipulating its own perceptions. Perhaps the agent has a vision system, and it puts a picture of a pile of gold in front of the camera. Or perhaps it just rewrites the camera driver, so that the pixel arrays returned show what the agent wants.

If you intend on making a highly capable agent, you want to be able to ensure that it won't take these actions.

Decision Theoretic Observation Hacking

A decision theoretic agent attempts to select actions that maximize its utility based on the effect it expects those actions to have. It evaluates the equation p(state-s, a -> o; x)U(o) for all the various actions (a) that it can take.

As usual, U(o) is the utility that the agent ascribes to outcome o. The agent models how likely outcome o is to happen based on how it thinks the world is arranged right now (state-s), what actions are available to it (a), and its observations of the world in the past (x).

The holodeck problem occurs if the agent is able to take actions (a) that manipulate its future observations (x). Doing so changes the agent's future model of the world.

Unlike the wireheading problem, an agent that is hacking its observational system still values the right things. The problem is that it doesn't understand that the changes it is making are not impacting the actual reward we want the agent to optimize for.

We don't want to "prevent" an agent from living in a holodeck. We want an agent that understands that living in a holodeck doesn't accomplish its goals. This means that we need to represent the correlation of its sense perceptions with reality as a part of the agent's world-model.

The part of the agent's world-model that represents its own perceptual-system can be used to produce an estimate of the perceptual system's accuracy. Perhaps it would produce some probability P(x|o), the probability of the observations given that you know the outcome holds. We would then want to keep P(x|o) "peak-y" in some sense. If the agent gets a different outcome, but its observations are exactly the same, then its observations are broken.

We don't need to have the agent explicitly care about protecting its perception system. Assuming the model of the perception system is accurate, an agent that is planning future actions (by recursing on its decision procedure) would predict that entering a holodeck would cause P(x|o) to become almost uniform. This would lower the probability that it ascribes to high value outcomes, and thus be a thing to avoid.

The agent could be designed such that it is modeling observations that it might make, and then predicting outcomes based on observations. In this case, we'd build p(state-s, a -> o; x) such that the predictions of the world-model are predictions over observations x. We can then calculate the probability of an outcome o given an observation x using Bayes' Theorem:

P(o|x) = P(x|o) * P(o) / P(x)

In this case, the more correlated an agent believes its sensors to be, the more it will output high probabilities for some outcome.
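A small numeric sketch (the probabilities are invented for illustration) shows why this works: with a peaked sensor likelihood P(x|o), an observation moves the posterior well above the prior, while a holodeck-flattened P(x|o) leaves the agent no more confident than it started:

```python
# Sketch: P(o|x) = P(x|o) * P(o) / P(x), with illustrative numbers.

def posterior(p_x_given_o, prior):
    """Return P(o|x) for each outcome o, given sensor likelihoods P(x|o)."""
    p_x = sum(p_x_given_o[o] * prior[o] for o in prior)
    return {o: p_x_given_o[o] * prior[o] / p_x for o in prior}

prior = {"gold": 0.1, "no_gold": 0.9}

# Working sensors: seeing gold is far more likely when gold is really there.
peaked = {"gold": 0.95, "no_gold": 0.05}
# Holodeck: the camera reports gold no matter what the world is like.
flat = {"gold": 0.95, "no_gold": 0.95}

assert posterior(peaked, prior)["gold"] > 0.6            # observation is informative
assert abs(posterior(flat, prior)["gold"] - 0.1) < 1e-9  # no better than the prior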

Potential issues with this solution

Solving the holodeck problem in this way requires some changes to how agents are often represented.

1. The agent's world-model must include the function of its own sensors.
2. The agent's predictions of the world should predict sense-perceptions, not outcomes.
3. On this model, outcomes may still be worth living out in a holodeck if they are high enough value to make up for the low probability that they have of existing.

In order to represent the probability of observations given an outcome, the agent needs to know how its sensors work. It needs to be able to model changes to the sensors, the environment, and its own interpretation of the sense data, and generate P(o|x) from all of this.

It's not yet clear to me what all of the ramifications of having the agent's model predict observations instead of outcomes are. That's definitely something that also needs to be explored more.

It is troubling that this model doesn't prevent an agent from entering a holodeck if the holodeck offers observations that are in some sense good enough to outweigh the loss in predictive utility of the observations. This is also something that needs to be explored.

Safely Combining Utility Functions

Imagine you have two utility functions that you want to combine:

U1(s1), where s1 is in S1, and U2(s2), where s2 is in S2

In each case, the utility function is a mapping from some world state to the real numbers. The mappings do not necessarily pay attention to all possible variables in the world-state, which we represent by using two different domains, S1 and S2, each a sub-state of some full world state S. By S we mean everything that could possibly be known about the universe.

If we want to create a utility function that combines these two, we may run into two issues:

1. The world sub-states that each function "pays attention to" may not overlap (S1 != S2).
2. The range of the functions may not be compatible. For example, a utility value of 20 from U1 may correspond to a utility value of 118 from U2.

Non-equivalent domains

If we assume that the world states for each utility function are represented in the same encoding, then the only way for S1 != S2 to hold is if there are some dimensions, some variables in S, that are represented in one sub-state representation but not the other. In this case, we can adapt each utility function so that they share the same domain by adding the unused dimensions to each utility function.

As a concrete example, observe the following utility functions:

U_red(r) = 10r, where r is the number of red marbles
U_blue(b) = b, where b is the number of blue marbles

These can be adapted by extending the domain as follows:

U'_red(r, b) = U_red(r) = 10r, for r red marbles and b blue marbles
U'_blue(r, b) = U_blue(b) = b, for r red marbles and b blue marbles

These two utility functions now share the same domain.
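A quick sketch of the domain extension (the 10-to-1 point values mirror the marble valuations used in this example, and are otherwise invented):

```python
# Lift each utility function onto a shared (r, b) domain. The naive lift
# simply ignores the newly added variable.

def u_red(r):
    return 10 * r  # 10 points per red marble (illustrative)

def u_blue(b):
    return b       # 1 point per blue marble (illustrative)

def u_red_ext(r, b):
    return u_red(r)   # blue marbles are now in the domain, but ignored

def u_blue_ext(r, b):
    return u_blue(b)  # red marbles are now in the domain, but ignored

# Both lifted functions now accept the same world sub-state (r, b),
# so they can be compared and combined point by point.
assert u_red_ext(3, 5) == 30
assert u_blue_ext(3, 5) == 5
```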

Note that this is not a procedure that can be done without outside information. Just looking at the original utility functions doesn't tell you what those sub-utility functions would prefer given an added variable. The naive case is that the utility functions don't care about that other variable, but we'll later see examples where that isn't what we want.

Non-equivalent valuations

The second potential problem in combining utility functions is that the functions you're combining may represent values differently. For example, one function's utility of 1 may be the same as the other's utility of 1000. In simple cases, this can be handled with an affine transformation.

As an example, suppose that from our perspective of U_red and U_blue, a red marble should be valued at only 2 times a blue marble instead of the 10 times shown above. One of the ways that we can adapt this is by setting U''_red(r, b) = U'_red(r, b) / 5 = 2r.

Note that non-equivalent valuations can't be solved by looking only at the utility functions. We need to appeal to some other source of value to know how they should be adapted. Basically, we need to know why the specific valuations were chosen for those utility functions before we can adapt them so that they share the same scale.

This may turn out to be a very complicated transformation. We can represent it in the general case using arbitrary functions f1 and f2, replacing the original utility functions with f1(U1) and f2(U2).
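As a sketch of the simple affine case using the marble valuations from this example (the helper names are my own):

```python
# Rescale the red-marble utility with an affine transform f(u) = u/5 + 0,
# so that a red marble is worth 2 points instead of 10 (values illustrative).

def u_red_ext(r, b):
    return 10 * r

def u_blue_ext(r, b):
    return b

def rescale(u, scale, shift=0.0):
    """Return an affine-transformed copy of a utility function."""
    return lambda r, b: scale * u(r, b) + shift

u_red_scaled = rescale(u_red_ext, scale=1 / 5)

assert u_red_scaled(1, 0) == 2.0  # one red marble is now worth 2 points
assert u_blue_ext(0, 1) == 1      # one blue marble is still worth 1 point

# With domains and valuations aligned, combining is just summing:
def u_combined(r, b):
    return u_red_scaled(r, b) + u_blue_ext(r, b)

assert u_combined(1, 1) == 3.0
```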

Combining Utility Functions

Once we have our utility functions adapted so that they use the same domain and valuation strategy, we can combine them simply by summing them.

The combined utility function will cause an agent to pursue both of the original utility functions. The domain extension procedure ensures that the original utility functions correctly account for what the new state is. The valuation normalization procedure ensures that the original utility functions are valued correctly relative to each other.

A more complicated case

Let's say that you now want to combine two utility functions in a more complex way. For example, let's say you have two utility functions that use the same valuation and domain:

U_up(n) = n
U_down(n) = -n

Let's say our world is such that n corresponds to a location on a line, with -2 <= n <= 2. One of the utility functions incentivizes an agent to move up the line, the other incentivizes the agent to move down the line. These utility functions clearly have the same domain, and we're assuming they have the same valuation metric. But if we add them up we have utility 0 everywhere.

To combine these, we may wish to introduce another world-state variable (say s, for switch). If s = 1 then we want to use U_up, and if s = 0 then we want to use U_down. You could think of this as "do something when I want you to, and undo it if I press the button."

One way that we could do this is to extend each utility function to include the new state variable, and set the utility of the function to 0 in the half of the new domain that we don't want it to be active. To do this, we could create:

U'_up(n, s) = U_up(n) if s = 1, else 0
U'_down(n, s) = U_down(n) if s = 0, else 0

When we sum these adapted utility functions, we find that we have a nice utility function that incentivizes the agent to move towards 2 if the switch is on and to move towards -2 if the switch is off.

Except that there's a pathological case hiding out here. What if the agent can control the state of the switch?

In that case, an agent that finds itself starting out at state (n=2, s=0) may just flip the switch rather than moving.
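The pathology is easy to see in a toy computation (a sketch; the component definitions follow the switch construction described above):

```python
# U_up rewards moving toward +2, U_down rewards moving toward -2;
# the switch s selects which component is live.

def u_up(n, s):
    return n if s == 1 else 0

def u_down(n, s):
    return -n if s == 0 else 0

def u_total(n, s):
    return u_up(n, s) + u_down(n, s)

# At (n=2, s=0) the live component wants the agent down at -2...
assert u_total(2, 0) == -2
# ...and the best single step toward -2 only improves things a little:
assert u_total(1, 0) == -1
# ...but flipping the switch beats any amount of honest movement:
assert u_total(2, 1) == 2
```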

Agents aren't allowed to throw the switch

The behavior of an agent with the utility function U'_up + U'_down is clearly not what we want. To fix it we have a few options:

1. change how we combine its components U'_up and U'_down, perhaps by weighting them in some way
2. introduce one or more other component utility functions
3. change the state space upon which the functions operate

Weightings on U'_up and U'_down

Many of the attempts to change the weights of the component functions will lead to unintended consequences.

If you start the agent in state (n=-2, s=1), then you can incentivize it not to press the button by weighting the corresponding component utility function highly, for example using w*U'_up + U'_down for some large weight w.

But this just makes the reverse problem worse. If the agent reaches (n=2, s=1) and an operator decides to flip the switch to cause the agent to go in reverse, then the agent now has even more incentive to flip the switch back.

MIRI's corrigibility paper proves a theorem that is analogous to this (Theorem 1 in the paper).

Including "correction" component utility functions

A related approach (also from MIRI's corrigibility paper) to preventing the agent from actuating the switch on its own is called utility indifference. In this formulation, the combined utility function of the agent includes a third utility function, whose only purpose is to make the agent agnostic to flipping the switch. It does this by exactly counteracting any utility that the agent may gain by changing the state of the switch. This function therefore depends on the full state.

In the 1-dimensional line problem described above, the correction utility function would be defined as:

U_corr = 0 if s remains the same,
U_corr = U'_up(n, 1) - U'_down(n, 0) if s = 1 and s is about to change, and
U_corr = U'_down(n, 0) - U'_up(n, 1) if s = 0 and s is about to change

This correction function makes the value of flipping the switch exactly equivalent to the value of the current state, so the agent will be indifferent about it. In this example, there's always a better thing to do (unless you're at an end of the line).
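A sketch of the correction term in the toy line world (my own rendering of the idea, not MIRI's exact formulation):

```python
# A correction term that pays the agent, at the moment of a flip, exactly
# what it would have lost, so flipping is never better or worse than staying.

def u_up(n, s):
    return n if s == 1 else 0

def u_down(n, s):
    return -n if s == 0 else 0

def u_corr(n, s_before, s_after):
    if s_before == s_after:
        return 0
    live_before = u_up(n, s_before) + u_down(n, s_before)
    live_after = u_up(n, s_after) + u_down(n, s_after)
    return live_before - live_after  # exactly cancels the gain from flipping

def total(n, s_before, s_after):
    return u_up(n, s_after) + u_down(n, s_after) + u_corr(n, s_before, s_after)

# At (n=2, s=0), flipping the switch is now worth no more than staying put:
assert total(2, 0, 1) == total(2, 0, 0) == -2
```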

The correction function as described in MIRI's paper also has a dependence on other timesteps (it's not Markov). Furthermore, the paper describes some serious issues with it.

Adjusting the state space of the component utility functions

By adjusting the state space of the component utility functions, we can define a utility function that correctly values what we think we value.

Ultimately, what we care about is that the agent does not change the switch state itself. If we can represent this as a part of the world state, then we can do a domain extension on the original utility components.

Let i be a variable representing, roughly, "the agent will pursue a strategy to change the state of the switch". Then we can construct new utility components as follows:

U''_up(n, s, i) = U'_up(n, s) if i = 0, else 0
U''_down(n, s, i) = U'_down(n, s) if i = 0, else 0

If we further care that the agent doesn't do anything to tamper with the switch, or to manipulate people into treating the switch in one way or another, these concerns can be dealt with in the same way. Construct a world-state representation that allows the agent to model its own impact, and then correctly domain extend the component utility functions.
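In the toy line world, the intention-gated components look something like this (a sketch; i stands for the intention variable described above):

```python
# Each utility component goes flat whenever the agent's own model says it
# intends to change the state of the switch (i = 1).

def u_up(n, s):
    return n if s == 1 else 0

def u_down(n, s):
    return -n if s == 0 else 0

def u_up_gated(n, s, i):
    return u_up(n, s) if i == 0 else 0

def u_down_gated(n, s, i):
    return u_down(n, s) if i == 0 else 0

def u_total(n, s, i):
    return u_up_gated(n, s, i) + u_down_gated(n, s, i)

assert u_total(2, 0, 0) == -2  # no intention to flip: normal incentives
assert u_total(2, 0, 1) == 0   # intending to flip flattens the payoff
```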

To a large extent, this passes the buck from creating good value functions to determining how an agent can create intentional models of itself. I think this is a good change in perspective for two reasons.

1. Changing the domain of the utility function accurately captures what we care about. If we're attempting to adjust weights on the original utility functions, or add in compensating utility functions, then we are in some sense attempting to smuggle in a representation of the world that's not contained in our original world-state. We actually do care about whether the agent has an intention of flipping the switch. The only reason not to make the agent care about that as well is if it's not feasible to do so.

2. Figuring out how to get an agent to model its own intentions is a problem that people are already working on. The actual problem of representing an agent's intention to flip the switch reminds me of one-boxing on Newcomb's problem, and I'm curious to explore that more. Using an agent's representation of itself as part of its world model seems intuitively more tractable to me.

The main question left is "how do you create a utility function over the beliefs of the agent?"

Wireheading Defense

I once talked to somebody about doing heroin. I've never done it, and I was curious what it felt like. This person told me that heroin gave you the feeling of being loved; that it was the best feeling he'd ever felt.

Hearing that did not make me want to do heroin more, even though I believed that it would cause me to feel such a great feeling. Instead, I became much more concerned about not letting myself give in to the (admittedly slight) possibility that I might try it.

When I thought about trying it, I had a visceral reaction against it. The image that popped into my mind was myself, all alone in feeling love, ignoring the people that I actually loved. It was an image of being disconnected from the world.

Utility Functions

Utility functions form a large part of agent modeling. The idea is that if you give a rational agent a certain utility function, the agent will then act as though it wants what the utility function says is high value.

A large worry people have about utility functions is that some agent will figure out how to reach inside its own decision processes, and just tweak the number for utility to maximum. Then it can just sit back and do nothing, enjoying the sensation of accomplishing all its goals forever.

The term for this is wireheading. It hearkens to the image of a human with a wire in their brain, electrically stimulating the pleasure center. If you did this to someone, you would in some sense be destroying what we generally think of as the best parts of a person.

People do sometimes wirehead (in the best way they can manage now), but it's intuitive to most people that it's not good. So what is it about how humans think about wireheading that makes them relatively immune to it, and allows them to actively defend themselves from the threat of it?

If I think about taking heroin, I have a clear sense that I would be making decisions differently than I do now. I predict that I would want to do heroin more after taking than before taking it, and that I would prioritize it over things that I value now. None of that seems good to me right now.

The thing that keeps me from doing heroin is being able to predict what a heroin-addicted me would want, while also being able to say that is not what I want right now.

Formalizing Wirehead Defense

Consider a rational decision maker who uses expectation maximization to decide what to do. They have some function for deciding on an action that looks like this:

function decide(state-s):
  max_a = 0
  for a in available actions:
    utility(a) = 0
    for outcome o in possible outcomes:
      utility(a) += p(state-s, a -> o)*Utility(o)
    end for
    if (max_a == 0 or (utility(a) > utility(max_a)))
      max_a = a
    end if
  end for
  return action max_a

The decider looks at all the actions available to them given the situation they're currently in, and chooses the action that leads to the best outcome with high probability.

If the decider is making a series of decisions over time, they'll want to calculate their possible utility recursively, by imagining what they would do next. In this case, the utility function would be something like:

function transition(old_state, action_a):
  return new_state obtained by taking action_a in old_state;

function Utility(test_state):
  current_value = value(test_state)
  future_value = value(transition(test_state, decide(test_state)))
  return (current_value + future_value)

The transition function simulates taking an action in a given situation, and then returns the resulting new situation.

In the Utility function, the overall utility is calculated by determining the value of the current situation plus the value of the next situation as predicted by the decide() function.

To determine the value of a situation, the value() call just returns the observed value of the current world state. It may be a lookup table of (situation, value) pairs or something more complicated.

In this way, we figure out what utility we get by seeing what the value is on the exact next step, and adding to it the expected value of subsequent steps. This process could recursively call itself forever, so in practice there would be either a recursion depth limit or some stopping criterion on the states being tested.

This recursion can be thought of as the robot simulating its own future actions.
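To make the recursion concrete, here's a small runnable Python sketch of the decide()/Utility() pair above, using a depth limit as the stopping criterion. The toy world (its states, actions, outcome probabilities, and values) is entirely hypothetical, just to make the algorithm executable.

```python
# Runnable sketch of decide()/Utility() with a recursion depth limit.
# The toy world below is a made-up illustration, not a real system.

# state -> {action: {outcome_state: probability}}
OUTCOMES = {
    "idle":  {"work": {"paid": 0.9, "idle": 0.1},
              "rest": {"idle": 1.0}},
    "paid":  {"spend": {"happy": 1.0},
              "rest":  {"paid": 1.0}},
    "happy": {"rest": {"idle": 1.0}},
}

# The value() function: observed value of each world state.
VALUES = {"idle": 0.0, "paid": 1.0, "happy": 5.0}

def utility(state, depth):
    """Value of this state plus the value expected from acting
    optimally afterwards (the recursive Utility() above)."""
    if depth == 0:
        return VALUES[state]
    _, future = decide(state, depth - 1)
    return VALUES[state] + future

def decide(state, depth):
    """Pick the action with the highest probability-weighted
    utility over its possible outcomes."""
    best_action, best_utility = None, float("-inf")
    for action, outcomes in OUTCOMES[state].items():
        u = sum(p * utility(o, depth) for o, p in outcomes.items())
        if u > best_utility:
            best_action, best_utility = action, u
    return best_action, best_utility

action, expected = decide("idle", depth=3)
```

With this toy world, the decider starting at "idle" chooses "work", since working leads (with high probability) toward the higher-valued states.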

The wireheading threat appears if we find a state or set of states in the future that provide high utility as calculated by this function but don't correspond to a high utility in the outside world (perhaps as determined by the designers of this robot).

In the traditional story, the robot finds a way to reach into its own code and tweak the value() function so that it returns only high numbers. Basically, it performs brain surgery on itself.

To consider a robot defensive against wireheading, we would want it to assign low utility to performing brain surgery on itself, even while it understands that it would later achieve very high self-reported utility.

Approaching a wirehead decision

Let's say that the above algorithm is computing a policy for future actions, and it comes to consider an action that would result in what outside observers would call wireheading. Maybe it is considering changing a line of its own code, or taking heroin, or submitting to brain surgery. What is the above algorithm actually doing in that case?

To get to this point, the robot must have called the function "decide(s)" on a state where it is not currently wireheaded. In the course of figuring out its next action, the robot will consider an action that changes the robot itself in some way.

The line "utility(a) += p(state-s, a->o)*Utility(o)" calculates the probability that the action would lead to the outcome, then multiplies it by the utility of the outcome. In this case the action is brain surgery and the outcome is having a new "value()" function.

Whether or not this is a good plan depends on the "Utility(o)", which will just recursively call the "decide(o)" function again to find future value.

The crucial point here is that when "decide(o)" is called, the state "o" is such that a different type of decision making is now happening. Now, instead of simulating its own future actions, the robot should be simulating the actions of itself with a different program running.

Not much has been said up to now about what this "state" thing is. In some sense, it represents everything the robot knows about the world. Where objects are, what they are, how does physics work, etc.

What if the robot doesn't consider its own state?

If the robot does not consider its own code (and other features) as a part of the state of the world, then the wireheading action would not clearly modify the world that the robot knows about. The decision algorithm would keep on predicting normal behavior after the wireheading had occurred: "sure you had brain surgery, but you still think the same way right?"

In this case, the robot may choose to wirehead because its decision algorithm calculated that it would be useful in some normal way. Once the wireheading had been done, the robot would then be making decisions using a different algorithm. The wireheaded robot would stop pursuing the plan that the original robot had been pursuing up to the point of being wireheaded, and begin to pursue whatever plan the wireheaded version of itself espoused.

This is equivalent to how humans get addicted to drugs. Few (no?) humans decide that being addicted to heroin would be great. Instead, heroin seems like a way to achieve a goal the human already has.

People may start taking heroin because they want to escape their current situation, or because they want to impress their friends, or because they want to explore the varieties of human consciousness.

People keep taking heroin because they are addicted.

What if the robot does consider its own state?

If the robot considers its own state, then when it recurses on the "decide(o)" it will be able to represent the fact that its values would have changed.

In the naive case, it runs the code exactly as listed above with an understanding that the "value()" function is different. In this case, the new "value()" function is reporting very high numbers for outcomes that the original robot wouldn't. If the wireheading were such that utility was now calculated as some constant maximum value, then every action would be reported to have the same (really high) utility. This makes the original robot more likely to choose to wirehead.

So simply changing the "value()" function makes the problem worse and not better.

This would be equivalent to thinking about heroin, realizing that you'll get addicted and really want heroin, and deciding that if future you wants heroin, then you should want it too.

So considering changes to its own software/hardware isn't sufficient. We need to make a few alterations to the decision process to make it defensive against wireheading.

The difference between "what you would do" and "what future-you would do"

The problem with not taking into account a preference change after wireheading is that the robot would incorrectly predict its post-wirehead actions.

The problem with just packaging robot preferences in with the world-state of the prior algorithm is that, even though the robot is then able to correctly predict future actions, the valuations aren't consistent. A wireheaded robot takes the actions it thinks have the highest utility; it just happens to be choosing actions the original would think were terrible.

In order to defend against wireheading, you need to:

1. accurately predict what a future (wireheaded) version of yourself would do
2. determine a value of future states that depends only on your current utility function

To get item 2 without sacrificing item 1, we're going to adapt our decision algorithm slightly.

function decide2(state-s):
  max_a = 0
  max_plan = []
  for a in available actions:
    utility(a) = 0
    plan(a) = []
    for outcome o in possible outcomes:
      (o_value, o_plan) = Utility2(o)
      utility(a) += p(state-s, a->o)*o_value
      if o is the most likely outcome:
        plan(a) = o_plan
      end if
    end for
    if (max_a == 0 or (utility(a) > utility(max_a)))
      max_a = a
      max_plan = plan(a)
    end if
  end for
  return (max_a, [state-s] + max_plan)
  
function Utility2(test_state):
  current_value = test_state.value(test_state)
  (next_action, state_plan) = decide2(test_state)

  future_value = 0
  for state in state_plan:
    future_value += test_state.value(state)
  end for

  test_state_utility = (current_value + future_value)

  return (test_state_utility, state_plan)

In this case, the decide2 function returns a tuple. The first element of the tuple is the next action to take. That's the same as the only return value in the original decide function. The second element of the decide2 return value is a complete future-history: a list of the expected states given what the robot thinks it will decide.

The Utility2 function also returns a tuple. The new element is the same state-plan as the decide2 function. In this case, the Utility2 function re-values each of the proposed future states using the value function of the current state. If a predicted version of the robot's code makes decisions using a different value system, all the robot cares about is whether those decisions lead to states that are valuable under its current value system.
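Here's a small runnable Python sketch of that idea. It is deliberately simplified: transitions are deterministic, the plan rollout is greedy, and the states, values, and "surgery" action are all made-up illustrations. The naive agent scores predicted future states by whatever value function those states carry; the defended agent re-values the same predicted plan with its current value function.

```python
# Simplified sketch of wirehead defense. The value function is part of
# the world state, so the agent can represent "surgery" replacing it.
# All states, actions, and values are hypothetical illustrations.

def normal_value(name):
    # the original robot's values: being wireheaded is worth nothing
    return {"start": 0.0, "working": 1.0,
            "done": 10.0, "wireheaded": 0.0}[name]

def wireheaded_value(name):
    # after surgery, every state self-reports enormous value
    return 1000.0

AVAILABLE = {"start": ["step", "surgery"], "working": ["step"],
             "done": ["step"], "wireheaded": ["stay"]}

def transition(state, action):
    """Simulate an action; surgery swaps in the new value function."""
    name, value_fn = state
    if action == "surgery":
        return ("wireheaded", wireheaded_value)
    nxt = {"start": "working", "working": "done",
           "done": "done", "wireheaded": "wireheaded"}[name]
    return (nxt, value_fn)

def rollout(state, depth):
    """Predict future states, letting each future self (with whatever
    value function it then carries) choose greedily."""
    states = []
    for _ in range(depth):
        name, _ = state
        state = max((transition(state, a) for a in AVAILABLE[name]),
                    key=lambda s: s[1](s[0]))  # successor's self-report
        states.append(state)
    return states

def decide(state, depth, defended):
    """Naive: score the plan by each future state's own value function.
    Defended: score the same plan with the current value function."""
    name, my_value = state
    def score(action):
        plan = [transition(state, action)]
        plan += rollout(plan[0], depth)
        if defended:
            return sum(my_value(n) for n, _ in plan)
        return sum(vfn(n) for n, vfn in plan)
    return max(AVAILABLE[name], key=score)

start = ("start", normal_value)
naive_choice = decide(start, depth=2, defended=False)
defended_choice = decide(start, depth=2, defended=True)
```

The naive agent picks "surgery", because its wireheaded future self reports huge value for every state. The defended agent correctly predicts the same wireheaded future, but scores it with its current values, under which the wireheaded states are worthless, and picks "step" instead.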

Wirehead defense isn't wirehead immunity

The adapted decision algorithm described above will avoid wireheading when wireheading obviously results in lower utilities. It will not avoid doing all behaviors that a human might think of as wireheading. It may choose to do the equivalent of heroin if the risk of addiction is low, or if the potential gain (as measured using the current utility function) is high.

The above algorithm also won't stop wireheading if the robot gets tricked into it. As long as the algorithm can "see it coming" in some sense, it will attempt to avoid it. To see it coming, the algorithm needs to have access to its own code. It also needs to be able to modify a representation of its own code and simulate the modifications. There are some circumstances in which we may not want the robot to simulate arbitrary changes to its value function.

In the worst possible case, an attacker could arrange a situation in which the robot has the opportunity to change its value function in some complicated way. The attacker may be able to propose a clever value function that, if simulated, executes arbitrary code on the robot. The risk for this seems higher for more complicated value functions. There are ways to mitigate this risk, but it's not something to take lightly.

The Woodcarver

Once there was a wood carver who lived at the edge of the village. He was the best wood carver for miles and miles, but he was also very clumsy. People would come to marvel at his carvings, and then giggle as he dropped his tools or spilled his coffee.

The wood carver didn't mind the giggling. He had a fine life, and wanted for nothing. Nothing, that is, except a child.

One day, as he was walking the woods to find good stock, he came upon a mysterious stump. The stump glowed like a full moon in the brightest daylight. It was the most marvelous wood that the carver had ever seen, and he brought it back to his shop immediately.

For seven days and seven nights, the carver worked on the strange wood. When he was done, he looked in pride at a wooden boy. The carver was only a little surprised when the boy's eyes opened, and the boy looked back at him.

But as the wood carver stared into the boy's eyes, he realized something. There was nothing within those eyes, no spark of recognition. The wooden boy was the blankest of blank slates.

The woodcarver wasn't worried by this. He always thought he'd make a great father, and he set to the task with diligence. He taught the wooden boy how to move his arms, how to walk, how to talk. Finally, he taught the boy his most cherished knowledge: the carving of wood.

But even as a father, the carver was still very clumsy. He would demonstrate how to walk, only to trip over his own feet. He would try to show how to talk, only to mis-speak or mumble his words. Even at wood-carving, the carver would demonstrate a cut and drop his knife to the floor.

The wooden boy learned all these things. The boy learned to walk and to trip, to talk and to mumble, to carve and to drop tools. The boy was a very good student.

When the wood carver told the boy not to trip, the boy learned to say that you shouldn't trip. Still the boy tripped, but now he seemed contrite about it.

The wood carver and his new son lived happily for many years. As the wood carver aged, he marveled that the boy did not.

There came a day when the wood carver had to be laid to rest in a box of his own design. The wooden boy cried, just as he had been taught. Then he went home and carved wood.

One day, many years later, the boy was gleaning in the woods for new carving stock. The boy came upon a strange and eerie stump. It glowed with the light of the full moon, even at the brightest part of the day. The wooden boy knew exactly what to do.

Mutual Information in a Causal Context

Mutual information is the idea that learning something about one variable might tell you about another. For example, learning that it's daytime might give you information about whether the sun is shining. It could still be cloudy, but you can be more sure that it's sunny than before you learned it was daytime.

Mathematically, mutual information is represented using the concept of entropy. The information gained about a variable X, assuming you learn Y, is given by

I(X; Y) = H(X) - H(X|Y)

In this case, H(X) is a measure of the entropy of X. It is given by

H(X) = -sum_x p(x) log2 p(x)

Mutual information is supposed to be symmetric (I(X; Y) = I(Y; X)), but I'm interested in how that works in a causal context.

Let's say you have a lightbulb that can be turned on from either of two light switches. If either lightswitch is on, then the bulb is on. Learning that one light switch is on tells you the bulb is on, but learning that the bulb is on does *not* tell you that one specific light switch is on. It tells you that at least one is on (but not which one).

Let's assume for the sake of argument that each light switch has a probability p(on) = 0.25 of being turned on (and equivalently a probability p(off) = 0.75 of being off). Assume also that they're independent.

The entropy of switch one is

H(S1) = -0.25 log2(0.25) - 0.75 log2(0.75) ≈ 0.811 bits

Since either switch has a probability of 0.25 of being on, and they're independent, the bulb itself has a probability of 1 - 0.75^2 = 7/16 of being on.

The entropy of the bulb is

H(B) = -(7/16) log2(7/16) - (9/16) log2(9/16) ≈ 0.989 bits

If you know switch 1's state, then the information you have about the light is given by

I(B; S1) = H(B) - H(B|S1) = 0.989 - 0.75 × 0.811 ≈ 0.380 bits

(If S1 is on, the bulb is certainly on, so that branch contributes no entropy; if S1 is off, the bulb's state is just S2's state.)

If instead you know the bulb's state, then the information you have about switch 1 is given by

I(S1; B) = H(S1) - H(S1|B) = 0.811 - (7/16) × 0.985 ≈ 0.380 bits

(If the bulb is off, S1 is certainly off; if the bulb is on, S1 is on with probability 4/7, and the entropy of a 4/7 coin is about 0.985 bits.)

So even in a causal case the mutual information is still symmetric.
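The arithmetic above can be checked numerically. This Python sketch builds the joint distribution of (switch 1, bulb) for the setup described (two independent switches, each on with probability 0.25), then computes the mutual information from both directions.

```python
import math
from itertools import product

# Two independent switches, each on with probability 0.25; the bulb
# is on iff at least one switch is on.
P_ON = 0.25

def h(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution p(s1, bulb), with switch 2 marginalized out.
joint = {}
for s1, s2 in product([0, 1], repeat=2):
    p = (P_ON if s1 else 1 - P_ON) * (P_ON if s2 else 1 - P_ON)
    key = (s1, int(s1 or s2))
    joint[key] = joint.get(key, 0.0) + p

p_s1 = {v: sum(p for (s, _), p in joint.items() if s == v) for v in (0, 1)}
p_b = {v: sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)}

def cond_entropy(given):
    """Entropy of the remaining variable, conditioned on the variable
    at tuple index `given` of the joint distribution."""
    total = 0.0
    for g in (0, 1):
        pg = sum(p for k, p in joint.items() if k[given] == g)
        cond = [p / pg for k, p in joint.items() if k[given] == g]
        total += pg * h(cond)
    return total

i_bulb = h(p_b.values()) - cond_entropy(given=0)     # H(B) - H(B|S1)
i_switch = h(p_s1.values()) - cond_entropy(given=1)  # H(S1) - H(S1|B)
```

Both directions come out to about 0.380 bits, confirming the symmetry numerically.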

For me the point that helps give an intuitive sense of this is that if you know S1 is on, you know the bulb is on. Symmetrically, if you know the bulb is off, you know that S1 is off.