Walking Through Walls

I think my favorite superpower would be the ability to walk through walls. I’ve always wanted to be able to break into any building, escape any pursuer, or drop through the floor instead of taking the stairs. It’s one of my main regrets that I’ll never be able to do it.

The next best thing to being able to do something is knowing as much as you can about it. In elementary school I’d learned that atoms were mostly empty space. You have the electrons and the nucleus, but it seemed to me that if you managed to squish two atoms into the same space then they might be able to pass through each other.

I was pretty excited to learn, when I got to middle school, why exactly objects couldn’t do that. For one thing, if you got the atoms occupying roughly the same space, then the electromagnetic forces would interfere with each other and the electrons of both atoms would be disrupted. This would severely mess up any chemistry that was going on with those atoms, and probably do very bad things to a person walking through a wall (and the wall itself).

Luckily, my youthful attempts to pass through my bedroom wall were doomed to fail for a reason that didn’t involve all of my molecules coming apart. Originally, I thought this was due to electron repulsion. According to my high school physics teachers, atoms can’t get that close together because the electrons of the atoms repel each other. Just like magnets of the same pole, two electrons will stay as far apart as they can. You can’t make use of all that empty space within an atom because the electrons form a kind of force field to keep other atoms out of their own territory.

It turns out that electron repulsion isn’t actually what prevents objects from passing through each other. It’s a quantum effect called electron degeneracy pressure. Basically, two electrons can’t occupy the same quantum state (that’s the Pauli exclusion principle). When electrons get too close to each other, they must assume different energy levels. This means that to bring electrons close together, you need to add enough energy to put most of them into very high energy states. The closer objects come, the more energy you need. On the macro level, that manifests as degeneracy pressure. That’s why objects feel solid.
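
If you want to put a number on that solidity, the standard textbook result for a cold, non-relativistic electron gas (quoted from memory, so treat it as a sketch rather than gospel) says the degeneracy pressure grows with the electron density n as:

```latex
P = \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\,n^{5/3}
```

Because the exponent is 5/3 rather than 1, squeezing the same electrons into half the volume more than triples the pressure, which is why the resistance stiffens so quickly as you push two objects together.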

Understanding this almost makes up for not being able to walk through walls.

Copyright Unbalanced

I’ve been reading Copyright Unbalanced lately, and it gives a pretty good description of what copyright is for and why it’s important:

Like all other forms of property, copyright exists to address an externality problem. Because the author of a creative work, such as a song, cannot exclude others from the benefits her work creates, authors who publish works are creating a positive externality. The problem is that if authors can’t internalize at least some of the positive externality they produce, then they will have only a weak incentive to create and publish works. Put another way, if authors have no way to exclude others from enjoying their works, and therefore can’t charge users for access, then they won’t produce as many works as they otherwise would, making everyone worse off. Copyright addresses this externality problem by creating a legal right to exclude others from enjoying the work without the author’s permission. If authors can sell permission for money, they can capture a higher proportion of the benefits they create, and their incentive to produce creative works in the first place will increase.

The book also goes over many of the things that are wrong with copyright as it’s currently implemented in this country. I was especially interested in this tidbit:

Congress is supposed to represent the public’s interest, but it has abdicated that responsibility. As Jessica Litman has carefully documented, Congress has turned over the responsibility of crafting copyright law to the representatives of copyright-affected industries. That is, lobbyists write the copyright laws—not just figuratively, but literally.

For more than 100 years, copyright statutes have not been forged by members of Congress and their staff, but by industry, union, and library representatives who meet (often convened by the Copyright Office) to negotiate the language of new copyright legislation. As Litman explains, “When all the lobbyists have worked out their disagreements and arrived at language they can all live with … they give it to Congress and Congress passes the bill, often by unanimous consent.”

So to sum up, copyright was created to benefit the public by making artists more willing to create. It then got taken over by the artists (or more specifically, the marketers and labels), who pushed to have it extended beyond any reasonable benefit to society. The current state of affairs is pretty sad.

Relativity and your smartphone

Einstein’s theory of general relativity has dramatically changed life on our planet. It’s used in a lot of different technologies, but perhaps the most surprising place to find the theory of relativity is in your smartphone. Smartphones account for general relativity in two different ways.

The place where it’s most commonly pointed out is in GPS. Your phone figures out where it is by calculating the distance to a number of satellites. It does this by measuring the time of flight of a signal broadcast by each satellite. Once the phone knows how far away different satellites are, it can use the known positions of the satellites to solve for your position (technically trilateration rather than triangulation, since it uses distances instead of angles). This location measurement can be pretty precise (on the order of a meter).
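
To make the idea concrete, here’s a minimal trilateration sketch in Python. The satellite coordinates and receiver location are made up for the example, and a real GPS fix also solves for the receiver’s clock offset (which is part of why you need at least four satellites); this toy version assumes the phone’s clock is perfect.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions in an Earth-centered frame (meters).
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])

# Made-up receiver location, used here only to fabricate
# self-consistent time-of-flight measurements for the demo.
true_pos = np.array([1_115e3, -4_850e3, 3_980e3])
tof = np.linalg.norm(sats - true_pos, axis=1) / C  # seconds

def residuals(pos):
    # Predicted ranges minus measured ranges (time of flight times c).
    return np.linalg.norm(sats - pos, axis=1) - C * tof

fix = least_squares(residuals, x0=np.zeros(3)).x
print(fix)  # should recover true_pos to within solver tolerance
```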

The precision of GPS is possible because your phone takes into account special relativity in the form of time dilation. Satellites are travelling very fast with respect to a stationary smartphone. That high speed means that time goes slower for the satellite, and the clock it uses to calculate time of flight is off. Your phone takes that into account when calculating how long it took the signal broadcast by the satellite to get to wherever you are.

General relativity comes into play because satellites are so much higher than your phone, which means they sit farther out of Earth’s gravity well than you do (note that this is different from microgravity). Since satellites experience less gravity than you do, time passes faster for them. So there are really two relativistic effects that need to be taken into account to figure out how fast time is passing for the satellite, and thus how long it takes for a radio signal to travel from the satellite to your phone.
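
You can estimate both effects to first order in a few lines of Python. The constants below are rounded, so take the output as back-of-the-envelope numbers rather than what actual GPS engineers use:

```python
GM = 3.986004e14    # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0   # speed of light, m/s
R_EARTH = 6.371e6   # mean Earth radius, m
R_SAT = 2.656e7     # GPS orbital radius, m

v = (GM / R_SAT) ** 0.5  # circular orbital speed, about 3.9 km/s

# Special relativity: a moving clock runs slow by roughly v^2 / (2 c^2).
sr_rate = -v**2 / (2 * C**2)

# General relativity: a clock higher in the gravity well runs fast by
# the gravitational potential difference divided by c^2.
gr_rate = GM * (1 / R_EARTH - 1 / R_SAT) / C**2

day = 86_400  # seconds
print(f"SR:  {sr_rate * day * 1e6:+.1f} us/day")              # about -7
print(f"GR:  {gr_rate * day * 1e6:+.1f} us/day")              # about +46
print(f"net: {(sr_rate + gr_rate) * day * 1e6:+.1f} us/day")  # about +38
```

Roughly 38 microseconds a day doesn’t sound like much, but multiplied by the speed of light it’s on the order of 10 kilometers of position error per day, which is the often-quoted reason GPS simply wouldn’t work without these corrections.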

The second way that a smartphone takes general relativity into account is far simpler. Your phone has an accelerometer in it that measures acceleration on the phone. This is how your phone knows which way you’re holding it. It’s also how it makes those cool light-saber sounds when you swing your phone around.

When you’re having a light-saber duel, your phone is measuring the acceleration applied by your wild jabs and lunges. No relativity there. However, when the phone is stationary and it detects which way it’s oriented, it’s measuring gravity. Gravity isn’t acceleration, but the two are locally indistinguishable according to general relativity (this is the equivalence principle). It’s only through the effects described by general relativity that your phone works the way that it does.

Science! It’s closer than you think!

Men in Black ethics

I remember really enjoying the Men in Black movies when I was younger. They’ve got explosions, aliens, flying saucers, Will Smith. They’ve got everything that makes a movie great. However, the movies are missing one very important thing: good ethics.

It slipped by me when I was watching the movies as a kid, but humans in the Men in Black universe are kind of the galaxy’s village idiot. In the first movie it’s explained that television, computers, and basically every other technology were given to us by aliens. We’re apparently not capable of developing any of these things ourselves.

In a universe where it’s easy to travel from one planet to another, and there are all kinds of interesting planets with interesting life to visit, we humans are stuck on Earth. We get the alien technology that they don’t want: television. They keep their faster-than-light travel for themselves.

The culture of craft and making that’s seen a resurgence in the past decade or so has me very excited. It shows that people are creative, interested in learning, and willing to build things that make the world a better and more exciting place to live. It’s humans making these inventions, and we celebrate the humans who invented before us: people like Philo Farnsworth, the Wright Brothers, and Alan Turing.

When movies like MIB cast humans as incapable of inventing, they do a disservice to our culture and our history.

Why Transform?

I’ve just recently had an epiphany about signal processing. It’s kind of embarrassing that it’s taken me so long to realize this, but all the transforms I’ve been doing in classes exist just to make the signal separable from the noise in my data.

That seems pretty simple, so let me back up and explain why it took me so long to realize this. I’ve been taking signal processing classes off and on for about five years now. The classes have mostly focused on a few transforms (Fourier and wavelet, mostly) and how they can be used to filter an incoming signal. We’ve made low-pass filters, high-pass filters, and everything in between. It was never quite clear to me why you use the transform, though. You can just do everything in the time domain.

I didn’t put too much thought into that because computations can be easier to do in the frequency domain. Convolution in the time domain corresponds to multiplication in the frequency domain, and some calculations are faster because of that correspondence. I understood that, and thought that I was using transforms that brilliant people had invented just to speed up their computations. I had no intuition for how they could have developed the transform. How could they have known the transform would make calculations faster? I put it down to Laplace and Fourier just being more brilliant than me.
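
You can check that correspondence numerically in a couple of lines. Here’s a quick NumPy sketch; the signal and kernel are just random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # arbitrary "signal"
h = rng.standard_normal(16)   # arbitrary filter kernel

# Slow way: convolve directly in the time domain.
direct = np.convolve(x, h)

# Fast way: pad both to the full output length, multiply their
# FFTs, and transform back.
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(direct, via_fft))  # True
```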

What I’ve recently come to realize is that, while Laplace and Fourier were indeed brilliant, their transforms serve a different purpose altogether. The speedup in filter calculations is almost an afterthought to the real purpose of using a transform.

Filters only let through the frequencies that you want. This is obvious when you see plots of filters in the frequency (Fourier) domain. I was clear on this from the outset. You use the Fourier transform to select frequencies, gotcha.

For some reason, this knowledge didn’t generalize the way it should have. I went around telling myself that filters select different frequencies, and that convolution in time was multiplication in frequency, but I didn’t get that this was the whole point of the transform in the first place. Noise in the time domain is hard to separate from a signal, but in the frequency domain it can be very easy to separate.

And that is the key behind transforms. The real reason you do the transform isn’t so that you can do fast multiplication instead of slow convolution. The real reason to transform a signal to a new domain is because the new domain can make the parts of the signal you’re interested in easier to separate from everything else. That just happens to make the calculations faster too.

This separability comes up in all kinds of signal processing, pattern recognition, and machine learning. A transform may help anywhere you want to separate one type of thing from another. Making it easier to separate the wheat from the chaff is why you calculate features before feeding your data into machine learning algorithms.

My understanding of signal processing now revolves around three steps.

  1. Transform the incoming data so that the components you’re interested in are easy to separate from the components you’re not (separate the signal from the noise).
  2. Do whatever calculations you need to in order to get the output that you want.
  3. Transform the output to the domain you need it in; the new domain is usually, but not always, the same as the domain the data had in the first place.
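
Here’s a minimal sketch of those three steps, using a made-up measurement: a 5 Hz sine buried in noise. The signal and the noise are hopelessly tangled in the time domain, but in the Fourier domain the separation is a one-liner.

```python
import numpy as np

fs = 1000                       # sample rate, Hz
t = np.arange(fs) / fs          # one second of samples
rng = np.random.default_rng(1)
measured = np.sin(2 * np.pi * 5 * t) + rng.standard_normal(fs)

# Step 1: transform to a domain where signal and noise separate.
spectrum = np.fft.rfft(measured)
freqs = np.fft.rfftfreq(len(measured), 1 / fs)

# Step 2: the actual work is now trivial; this crude low-pass
# filter just zeroes everything above 10 Hz.
spectrum[freqs > 10] = 0

# Step 3: transform back to the domain we need the answer in.
recovered = np.fft.irfft(spectrum, len(measured))
```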

Probability and Logic

When I first started learning math I focused a lot on formal logic and proofs. I had a lot of fun deriving things using induction, proof by contradiction, and simple direct proofs. It’s been a long time since I’ve done much of that, but I find myself thinking a lot about methods of proving things as I learn more about signal processing and statistics.

I spent some time recently studying hypothesis testing and signal detection theory for a classification problem I’m working on at school. What really surprised me about both was how similar they are to proof by contradiction. The main ideas in hypothesis testing are

  1. figuring out what you want to show (called H1, the alternative hypothesis), and
  2. showing that the opposite of that (called H0) is unlikely

This is where the infamous p-value comes from. If you want to show that eating spinach gives people Popeye arms, you start by assuming that it doesn’t. This is called the null hypothesis and is denoted by H0. After you do a lot of measurements on people who have eaten spinach, you figure out how likely those huge Popeye arms would be under the assumption that nothing at all has happened. That probability is your p-value, and if it’s very low then you’ve got your “proof by contradiction”: the data would be very surprising if the null hypothesis were true, which suggests that something interesting is going on with those cans of spinach.
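
In code, the whole ritual is only a few lines. Here’s a sketch using SciPy’s two-sample t-test on fabricated arm measurements (every number below is invented for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Fabricated bicep measurements (cm) for a control group and a
# spinach-eating group with a slightly shifted mean.
control = rng.normal(33.0, 2.0, size=30)
spinach = rng.normal(34.5, 2.0, size=30)

# H0: spinach has no effect, i.e. both groups share the same mean.
# The t-test asks how surprising data like this would be under H0.
result = stats.ttest_ind(spinach, control)
print(f"p = {result.pvalue:.4f}")  # a small p-value casts doubt on H0
```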

And because you’re doing statistics, it doesn’t actually prove anything. All it shows is that it’s more likely that spinach has an effect than that it doesn’t. It’s kind of a subtle point, and it has led to a lot of mistaken or misleading scientific papers over the past few decades. That’s one of the reasons that a lot of people are calling for different methods of testing hypotheses (such as Bayesian methods).

To my mind, Bayesian methods correspond more to a direct proof. That may make it easier to understand and get right, but it doesn’t mean that hypothesis testing’s p-values are useless. There’s room in science for all kinds of methods, just like so many proof methods can be useful. The key is to know your tools and understand their limitations.

And right off the bat we can see one of the main limitations of hypothesis testing using p-values. Since you’re doing something akin to “proof by contradiction”, you can’t compare different options very easily. You can say things like “Popeye arms are likely to be caused by eating spinach with p-value .02” or “Popeye arms are likely to be caused by excessive masturbation with p-value .03”, but you can’t compare those two hypotheses. One may be more likely to be true than the other, but you can’t easily tell just using p-values. Since you’re only comparing individual hypotheses to the null hypothesis, you don’t know how the hypotheses relate to each other.

That said, hypothesis testing and p-values can be a strong technique when used on the right problem, just like proof by contradiction.

Brass and Leather

I finally got around to doing something with the etchings that I did recently. I cut one of the etchings out and filed it into shape. I then drilled a couple of holes in it and shaped it to my wrist. Getting the right curve in the brass was pretty difficult, and I ended up spending a lot of time tapping the piece with a hammer trying to get things just right. Once I was satisfied, two rivets secured the piece to a bracelet that I’d made for it.

The finished Koi bracelet.

To keep a nice edge on the leather bracelet, I cut a strip that was twice as wide as I wanted it and folded the edges over. I used craft glue to keep them in place. I’m pretty satisfied with how it worked out.

The leather store had pretty nice magnetic snaps, which are much easier to use than buttons. The only problem is that they’re thick. The snap doesn’t stand out too much if you’re not looking for it, but I think if I make another bracelet I’ll try the standard buttons.

The snap on the bracelet works extremely well, but it's kind of bulky.

Science and the immorality of propaganda

I believe, hopelessly, that this morality should be extended much more widely; this idea, this kind of scientific morality, that such things as propaganda should be a dirty word.

– Richard Feynman

The philosophy of science is concerned mainly with a search for truth. Science provides a method to find the truth. At his core, a scientist is someone who seeks to bring his beliefs in line with reality by testing them with experiments.

The truth is a scientist’s most sacred value, and anything that hides the truth or confuses it is immoral. Looking around at the world I live in, I see a lot of instances of people hiding the truth, purposefully confusing an issue, or phrasing things in a misleading way. It’s clear why people seek to hide the truth: if you convince people to do something, you can make a lot of money or gain a lot of power. But while this may be good for you in the short term, it seems like it’s bad for society and the world in the long term. Decisions get made not on the basis of what’s best, but on the basis of what has the best propaganda.

Politics and advertising are the two main fields that focus on distorting truth or obscuring it with rhetoric. This means that tasks as simple as choosing toothpaste and as complex and important as choosing a national leader are far more difficult than they need to be. My solution has generally been to avoid all kinds of ads and propaganda, and look for actual data before making a decision.

It would be nice to live in a world where this kind of conscious avoidance of ads wasn’t necessary. We already have social and cultural moral systems that prohibit bad behavior. I propose that we work to incorporate prohibitions of advertising and propaganda in these already existing systems. It won’t even be very hard: just treat advertisers and marketers the same way you treat people who are rude or smell bad. The social stigma will eventually push people away from propagandizing.

Entropy and Externalities

There’s a concept in economics called the externality that my environmentalist friends like to talk about a lot. An externality is a cost created by some enterprise that falls on somebody other than the enterprise itself. The classic environmentalist example is that environmental damage is an externality for oil companies. Oil companies get a lot of money for extracting oil, and they sometimes don’t bother to take care of the environment as they do it. This is because environmental damage affects the local community, but not the oil company’s profits.

In many ways, it seems to me that an externality in economics is similar to entropy in physics. Entropy in an isolated system never decreases; it’s only by ignoring some part of the system that you can say you’re increasing order. So too with externalities. Those costs created by the enterprise still exist and still need to be paid for. The only reason a company (or person, or government) can console itself about not paying those costs is that the people bearing them aren’t part of the closed system formed by the company, its customers, and its suppliers.
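
The parallel is easiest to see side by side. The second law only holds once you count the surroundings, and the full cost of an enterprise only adds up once you count the bystanders (this is just my way of lining the two up, not anything out of a textbook):

```latex
\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0
\qquad\longleftrightarrow\qquad
\text{total cost} = \text{private cost} + \text{external cost}
```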

As the concept of externalities has come to be better understood by governments, there have been attempts to make destructive companies take responsibility for their actions. This seems like what I used to do in my physics classes by redrawing system boundaries to account for entropy. Redrawing system boundaries for economic externalities is usually done by creating laws that require companies to pay for any damage that they may create. One good example of this is Montana, where mining companies have been required to create trusts that are responsible for cleaning up after them.

What’s interesting to me is that the owners and CEOs of possibly damaging companies sometimes realize that they live inside the wider system that encompasses whatever damage their company causes. One example of that is Sunoco, which is the only oil company to have signed on to the Ceres Principles.

I wonder if a better understanding of physics would cause people to realize the impact of such externalities on other parts of their lives. Even companies that work to mitigate externalities don’t do all that they could. Perhaps CEOs of potentially harmful companies should be required to take a course in thermodynamics to get a good understanding of entropy and system boundaries.

Batteries and Fuel Cells

I’ve been getting interested in energy storage lately; specifically, how it can be used to make renewable energy sources more viable. In looking around the Internet, I found out about lithium-oxygen batteries. These are batteries with a porous carbon cathode that lets oxygen from the atmosphere diffuse into the battery and react with lithium ions to produce electricity. These batteries have the potential for a much higher energy density than Li-ion batteries because one of their reactants is gathered from the atmosphere.
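
As I understand the chemistry (take the details with a grain of salt), the usual non-aqueous cell discharges by oxidizing lithium at the metal electrode and forming lithium peroxide at the porous carbon one:

```latex
\begin{aligned}
\text{anode:}\quad   & \mathrm{Li} \rightarrow \mathrm{Li^+} + e^- \\
\text{cathode:}\quad & 2\,\mathrm{Li^+} + \mathrm{O_2} + 2e^- \rightarrow \mathrm{Li_2O_2} \\
\text{overall:}\quad & 2\,\mathrm{Li} + \mathrm{O_2} \rightarrow \mathrm{Li_2O_2} \qquad (E^\circ \approx 2.96\ \mathrm{V})
\end{aligned}
```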

This is kind of along the lines of how hydrogen fuel cells work. In fact, it turns out that both batteries and fuel cells work through the same chemical processes (called redox reactions). Chemically, there’s no difference between batteries and fuel cells. So if they’re chemically similar, what’s the difference between batteries and fuel cells?

It turns out that fuel cells are just batteries with externally stored reactants. In a battery, all of the chemicals are stored internally and never released. A fuel cell may undergo the same reactions but with reactants that are pumped in from external storage tanks. Fuel cells thus have the benefit of being easy to recharge just by replacing the reactants, but they are more of a hassle to use.

So those lithium-oxygen batteries I was so interested in aren’t technically batteries. The lithium for the reaction is stored internally, but the oxygen is gathered from an external source (usually the atmosphere). This means that half of the device is a battery and the other half is a fuel cell. It has the convenience of a battery with the energy density of a fuel cell: the best of both worlds.