The Weakness of Rules of Thumb

There's a common issue that comes up when I'm teaching people how to design electronics: newcomers often feel like they need to obey all the rules of thumb.

This came up recently when somebody I was teaching wanted to make sure none of her PCB's electrical traces had 90 degree bends in them. There was one particular point on her board that couldn't be made to fit that rule.

When she realized that she'd have to put a 90 degree bend in her trace, she asked me whether that was "valid and legal". I probed her understanding a bit, and it seemed she was mostly thinking about the design guidelines as though they were laws, and she didn't want her design to break any laws.

This kind of thinking is pretty common, but I think it actively prevents people from designing electronics effectively. People with this mindset focus too much on whether their design meets some set of rules, and not enough on whether their design will actually work.

Where Design Guidelines Come From

Electronics is a domain that has a lot of rules of thumb. The physics behind how electrons behave on a circuit board is pretty complicated, and you can often simplify an engineering problem down to a simple rule.

After a few decades of the industry doing this, there's a large collection of rules. New engineers sometimes learn the rules before learning the physical principles that drive them, and then don't know when the rules don't apply.

For example, the advice to not put 90 degree bends in electrical traces is due to the fact that sharp bends increase the reactive impedance of a trace. For high frequency traces, this can distort the electrical signal flowing down the trace.

For low frequency or DC signals, 90 degree bends are much less of an issue.

Ultimately, any rule of thumb rests on a concrete foundation of "if this condition holds, then that result will be produced in a specific manner". If you know the detailed model that drives the simple rule, you know when to ignore the rule.

Obeying The Rules or Designing a Working Project

The mindset that I sometimes see in new students is the idea that they need to follow all the rules. This makes sense if you assume that following all the rules will automatically lead to a working project. Unfortunately, the rules of thumb in electronics over-constrain a circuit board. Electrical engineers will often face the prospect of a design guideline that can't be satisfied.

The most effective response to a rule of thumb that can't be satisfied seems to be to ask about the physics behind the rule. Then the engineer can figure out how to change the design to match the physical laws behind the general guideline.

The design guidelines were made for the people, not the people for the design guidelines.

Don't ask "how can I make this design satisfy all the design guidelines?"

Instead ask "how can I make this design work?"

Corrigibility

This post summarizes my understanding of the MIRI Corrigibility paper, available here.

If you have a super powerful robot, you want to be sure it's on your side. The problem is, it's pretty hard to specify what it even means to be on your side. I know that I've asked other people to do things for me, and the more complicated the task is the more likely it is to be done in a way I didn't intend. That's fine if you're just talking about decorating for a party, but it can cause big problems if you're talking about matters of life or death.

Overrides

Since it's hard to specify what your side actually is, it might make sense to just include an override in your super powerful robot. That way if it starts misbehaving, you can just shut it down.

So let's say that you have an emergency stop button. It's big and red and easy to push when things go south. What exactly happens when that button gets pushed?

Maybe the button cuts power to the computer that runs your robot. The problem with that is that your robot may have set up a bunch of sub-agents online, and a simple power switch wouldn't affect them.

No, that e-stop button needs to have some pretty complex logic behind it to actually stop things.

Maybe the button puts your robot into a different mode. Before you press the button, the robot is trying to make you a sandwich. After you press the button, the robot is trying to shut down the peanut butter spreader without dropping a knife on anyone's foot.

The problem there is that if your robot is complex enough to pursue a goal, then it may try to prevent you from stopping it. Right now, most robots just execute specific pre-programmed actions. A super-powerful robot will need to generate new actions to achieve difficult goals.

If a robot's goal is to make a sandwich, and it knows it won't be able to if you press the button, then step one of sandwich making may be to kill you so you don't press the button.

So the original goal mode, before the button is pressed, needs to incorporate equanimity with the button being pushed. That's what it means for a robot to be corrigible in the sense that MIRI means.

Encouraging Equanimity

You can say that the thing your robot wants is represented by its value function: some mathematical representation of its desires (and those desires may not be recognizable as such to a human). The robot will do whatever maximizes the value function. You want to make that value function be ok with the e-stop button being pressed.

The way that the MIRI paper recommends doing this is to combine two different value functions. One of those value functions is where the robot does what you told it to. The other value function is where the robot shuts down nicely.

Combining those value functions is complicated, since the first one needs to have equanimity with the second.

Simple Combination of Value Functions

You might try to combine the value functions as simply as possible:
if the button isn't pressed, value is just the normal value function. If the button is pressed, then the value is 100 if the robot is shutting down and 0 otherwise.
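As a sketch, that naive combination might look like the following. This is my own illustration, not code from the paper, and all the names (and the 100/0 values, taken from the description above) are just placeholders:

```python
def combined_value(world_state, button_pressed, robot_is_shutting_down,
                   normal_value):
    """Naive switch between the normal value function and a shutdown value.

    normal_value is a function mapping a world state to a number.
    """
    if not button_pressed:
        # Button untouched: the robot just optimizes its normal values.
        return normal_value(world_state)
    # Button pressed: value depends only on whether the robot shuts down.
    return 100.0 if robot_is_shutting_down else 0.0
```

The problem described next falls straight out of this: before the button is pressed, the robot can compare `normal_value(world_state)` against the fixed 100/0 payoffs and act to control which branch it ends up in.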

The robot may do a bunch of different things over time, and the value it expects to get may change a lot after each action. Maybe in the course of making a sandwich it drops the bread, so the sandwich will be gross and provide low value. Or maybe it discovers that adding heroin to the sandwich makes people love it so much more, thus increasing the value of the sandwich.

Whatever the actual e-stop value function is, the normal value function could be much worse (maybe -10) or much better (maybe 200) depending on what actions the agent takes. This gives the agent a reason to try and manage whether the button gets pressed. This obviously doesn't help us.

Perfect Indifference

Since different actions have an impact on the normal value function, you may want them to have an effect on the e-stop value function as well.

Think about it this way:

Let's say you have a lottery ticket, and the winner hasn't been announced yet. If your friend offers you $20 for the ticket and you paid $1 for it, then that seems like a pretty good deal.

But what if you check online, and find out that your ticket is the winner. Now your friend would have to offer you much more than $20 to make you give up the ticket.

By making the e-stop value function dependent on what the robot already knows, you can make sure that it's always willing to obey the button.

The MIRI paper describes this as a combination of bonus and penalty. If the e-stop would ordinarily make the robot lose value, then the e-stop value function gives it a bonus equal to what it might have lost. If the e-stop would have made the robot win, then the e-stop value function gives it a penalty equal to what it may have lost in the normal mode.
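Numerically, the bonus/penalty amounts to shifting the e-stop value by whatever gap the robot currently expects between the two modes. This is my loose reading of the construction, not the paper's actual formalism:

```python
def indifference_corrected_shutdown_value(raw_shutdown_value,
                                          expected_normal_value,
                                          expected_shutdown_value):
    """Shift the e-stop value so the robot expects the same payoff
    whether or not the button is pressed.

    If shutting down would ordinarily lose value, the correction is a
    bonus; if shutting down would ordinarily gain value, it's a penalty.
    """
    correction = expected_normal_value - expected_shutdown_value
    return raw_shutdown_value + correction
```

With this shift, the robot's expected value after a button press always equals its expected value without one, so pressing the button is never worth influencing.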

This makes the robot perfectly indifferent to whether the button is pressed, but it does have some surprising impacts.

Problems with Perfect Indifference

The perfect indifference between e-stop and normal valuations is a problem. It means that there's nothing that the robot would ever do to preserve the e-stop button from destruction. You, the robot creator, value the e-stop button. The robot does not. And if it could pay a tiny cost to prevent the e-stop button from being disconnected by an errant roomba, it would not. It would not even let you know that the event had occurred.

So if something goes seriously wrong with your robot, you don't have any guarantee that the e-stop button will still work when you press it.

The State of the Art

The description above is pretty much the state of the art in super-powerful e-stop buttons. The MIRI paper argues that knowing how to combine utility functions in this way is crucial to being able to depend on super-intelligent AIs. They point to a few different things that may help, but don't have a solution yet.

Things to look into later

Complex Shutdown Procedures

What if you want it to do something other than shut down? You can write your e-stop utility function to be anything you want. The more complicated it gets, the more you might want an e-stop for your e-stop.

Equanimity or Begrudging Acceptance

It doesn't make sense to me that you'd want your robot to be equally ok with the button being pressed or not pressed. In that case, why would it not just flip a coin and press the button itself if the coin comes up heads? To me it makes more sense that it does want the button to be pressed, but all the costs of actually causing it to be pressed are higher than the benefit the robot gets from it. In this case the robot may be willing to pay small costs to preserve the existence of the button.

Depending on how the expected values of actions are computed, you could attach an ad-hoc module to the robot that automatically makes the cost of pressing the button slightly higher than the benefit of doing so. This ad-hoc module would be unlikely to be preserved in sub-agents, though.

Costs the Robot Maker Can Pay

One of the assumptions behind the combined value function approach is that the normal value function is untouched by the addition of the e-stop value function.

You want your robot to make you a sandwich, and adding an e-stop button shouldn't change that.

But I'm perfectly ok with the robot taking an extra two minutes to make my sandwich safely. And I'm ok with it taking food out of the fridge in an efficient order. And I'm ok with it using 10% more electricity to do it.

There are a number of inefficiencies that I, as a robot builder, am willing to put up with to have a safe robot. It seems like there should be some way to represent that as a change to the normal value function, allowing better behavior of the robot.

Don't be afraid to move down the ladder of abstraction

In programming, there's an idea that's often called the ladder of abstraction. When you approach a problem, you can understand small bits of it and then put those together into larger pieces. By thinking about the problem with these larger pieces, you can get a better idea of what's going on.

A piece of advice that's often given is to move up the ladder of abstraction. Build a tool or function that does a low level thing, then just use that instead of looking at the lower level again. When you're starting from scratch on a project, this is a great idea. Using the ladder of abstraction allows you to quickly build things that work well, without having to keep solving the same problems over and over again.

However, there are times that it makes total sense to move down the ladder of abstraction, and look at what's going on as concretely as possible. This is especially true if you're debugging, and trying to fix something that's broken. Higher levels of abstraction obscure what's actually happening, which makes it difficult to isolate a problem so it can be fixed.

That's not to say that bugs should always be hunted in the weeds. Moving up the ladder of abstraction can help you to find out which particular component of a larger system is the source of the problem. Once that's been determined, you'll have to be more concrete with that component in order to solve the problem.

I also think this kind of model is good for solving more than programming problems. I've successfully used the idea of changing levels of abstraction to solve software bugs, fix hardware errors, and figure out how to deal with socially difficult situations. I would expect the idea to also work well in the softer sciences, like politics, but it seems like people often get stuck at one level in those areas.

I sometimes have conversations with people about political systems that have clear problems, like how global warming is dealt with in the US. Sometimes, the solution proposed is systemic change. When I ask what that means, answers are often given at the level of the entire political system, rather than what specific people or groups should do. While I agree that "the system" needs to change, I think trying to change the system as a whole is ineffective. It would be much more effective to move down a level of abstraction and suggest who should do what differently. Once that's done, the system is different. Systemic change has happened, but at a level that is easier to impact.

I think that some aspects of political discontent stem from being stuck at one level of abstraction. If you think that global warming or poverty needs to be solved at the level of the US government, then the problem is so huge that it's hard to see how you could do anything. It's easy to get overwhelmed that way. On the other hand, if you think of those problems as being generated by smaller sub-components, then you have places to look for actions that are achievable.

I don't have the answers for those large systemic political issues right now, but I do think that this idea from software can be of help. By being willing to move to more concrete understandings, we can solve problems that seem intractable.

Red Spirits

There's a story I heard from a friend at a recent Rationality meetup. It goes like this:

When Europeans were colonizing Africa, they told some Africans that they had to move their city. Their city was on a plain, and the Europeans wanted a nice city like those at home: on a river. The Africans objected, saying that they couldn't live near the river. That's where the red spirits were, and people would suffer if they lived there. The Europeans made them do it anyway, because red spirits clearly don't exist. And then everyone got malaria.

I think there are two things going on here:

1. The colonizers were basically assuming that the moral of a fairy tale wasn't useful because the fairy tale wasn't true.

2. The Europeans were ignoring a story because it didn't fit in with the terminology that they already used to describe the world.

Fairy Tales With Morals

The colonizers assumed that, because the justification for a custom was contradicted by scientific understanding, the custom wasn't valuable. Red spirits don't exist, so there's no reason to follow the custom.

The issue with this is that culture is subject to evolutionary pressure in the same way as genes. Cultures that lead to their adherents prospering are more likely to be present in the future, so any currently existing cultural artifact should be assumed to have served some important purpose in the past. That purpose may not be clear, or it may not be one that you agree with in a moral sense, or it may not apply in the present, but the purpose almost certainly existed.

This is basically a Chesterton's Fence argument at the cultural level. If the colonizers hadn't assumed that something they couldn't see a reason for had no reason, many people's lives could have been saved.

Science Stories

The terminology mistake is, in my opinion, even more dire. The colonizers argued that red spirits didn't exist, so people should move to the river. My friend who told me this story argued that the native villagers were mistaken in believing in red spirits, and that they should instead have believed in mosquitos.

The problem is that it isn't clear from the story that there's any difference between believing in mosquitos and believing in red spirits. Maybe red-spirits just means mosquitos. Or maybe it means malaria. The story doesn't have enough information to tell if the villagers were actually wrong about anything. When I brought this up, my friend couldn't answer any questions about what believing in red spirits actually meant to the villagers.

This is a failure mode that I think is common to people who describe themselves as scientists. I've noticed that people who describe observations in a way that doesn't use standard scientific jargon are often dismissed by people who are super into science. That happens even more if the description given uses words often used by marginalized sub-cultures.

People may be describing the exact same observations, and using the same model to describe those observations, but argue because they're using different terminology. It seems important to actually try to understand the model people have in their heads, and try to avoid quibbles about how they describe that model as much as possible.

There's another level to this if you assume that red spirits actually means ghosts in the western sense. Science aficionados like to talk about the value of testability, but both mosquito and ghost models are testable. If you think that mosquitos carry tiny cells that can reproduce in your body and make you sick, that implies certain things you can do to prevent disease. If you think that ghosts get angry if you live in a certain area and make you sick, that implies other methods of prevention. People can try these prevention methods and see what works; they can test their theories. Just going in and saying that ghosts don't exist totally ignores any tests that the villagers actually did before you got there.

The Use of Red Spirits

Even assuming that red spirits literally meant believing in ghosts, that idea was saving lives at the time that the colonizers moved in. It seems like there are a lot of fairy tales like this: explanations whose constituent parts don't correspond with things in the real world, but that still accurately predict patterns in the real world.

I think that this is the source of a lot of cultural relativism and post-modernism. If someone thinks only of the outcome of explanations, then the actual truth value of the component parts of the explanation don't matter. All explanations are as valid as they are useful to their culture. Since cultural evolution strongly implies all stories and explanations serve some useful purpose, every story a culture tells is useful. Therefore all explanations are true.

The only mistake that I see with that is the idea that a useful fairy tale implies that each component of the fairy tale is useful.

Having an explanation whose component parts each correspond to something that can be observed in the real world is useful on its own. If you have such a model, you can mentally vary different aspects of it and predict the outcome. It's easier to use subjunctive reasoning on a model with true parts than on a model that is only useful when taken as a whole. You can even take small sections of the model and apply them in other circumstances.

Thinking that mosquitos cause malaria implies that you should avoid mosquitos, which (as we now know) can actually prevent you from getting sick. Thinking ghosts cause malaria might be useful if you end up avoiding mosquitos while also avoiding ghosts. Given that avoiding ghosts leads to avoiding mosquitos, the main reason to prefer one of these over the other is if one is less onerous.

Beliefs and stories rent out a share of your brain by being useful to you. As the landlord of your brain, it seems like the best thing to do is get beliefs that will pay you a lot in usefulness while requiring little mental real estate for themselves. Believing in and acting on the mosquito-malaria connection takes a certain amount of mental effort. I'm not sure what a belief in a ghost-malaria connection that actually led to avoiding malaria would entail, but I can guess that it would be more mentally costly than the mosquito-malaria alternative.

A non-mathematical introduction to the wave equation

Waves pop up everywhere in physics. They're most obvious at the beach, but waves are also used to describe light, pendulums, and all sorts of other things. Because waves can describe so much in physics, it's important to know what it actually means when you talk about waves.

The medium is not the wave

When physicists talk about waves, they mean something very specific. They mean that some material is moving in a specific way. Waves at the beach are the best way to visualize this for me. Water waves have a very obvious medium: water. The waves are not the water, they're the way the water moves up and down over time. And the water doesn't just move up and down randomly; a peak on the water seems to travel towards the beach.
This is a very key point. Waves are just the way that some type of medium is moving. Light waves, for example, are just motion in the electromagnetic field.
So what causes the wave to move like it does? The answer to that relies on two separate ideas: energy input and strain in the medium.

Energy input

For most mediums (like water), if you leave them alone the waves will all die out and the medium will be still. There needs to be some transfer of energy into the medium in order for a wave to start. At the beach, the energy to start a wave often comes from wind. For light, the energy to start a light wave (photon) usually comes from electrons bouncing around.
Not all mediums are like this. In outer space, the electromagnetic field will keep a light wave going forever. That only works once the light wave gets started, which still takes energy input.

Strain

Strain in a medium is the tendency for it to return to its original position. In water, strain is provided by gravity. Because gravity pulls on all the water in the ocean, the water tries to keep an even level. If wind pushes some water up higher, gravity will try to pull it down. The energy in the peak will get transferred to nearby water molecules, and the wave will move.

The wave equation

The way that strain in a medium causes a wave to travel is described by the wave equation
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}
This equation shows how the time rate of change of the wave (the left hand side) is related to the strain in the medium (the right hand side). This equation is what's called a second-order differential equation, which just means we're taking derivatives twice (the \partial stuff).
To understand this equation, it helps to take a look at each side separately. To do this, we're going to look at snapshots of the wave in a couple of different ways.

Localizing space

Let's think about the left side of this equation first. The entire left side basically means "the way the medium changes with time". To get a feel for this, I like to imagine I'm treading water at the beach. Maybe I've swum out a few leagues and I'm just bobbing up and down with the water. You could plot my height above sea level over time, and that would give you one view of the ocean waves. We've made a little movie of the wave for a single point in space, and ignored all the other points.
The left hand side of the wave equation represents how fast my height at that point is changing (which is equivalent to saying that it's the second derivative of my height with respect to time).

Freezing time

The right hand side of this equation is another second derivative. This time it's a derivative with respect to space, instead of time (x instead of t). To understand this, think of freezing time instead of localizing space. If we take a snapshot of an ocean wave at a given time, it would be like a bunch of troughs and valleys in the water that don't change.
The second derivative on the right hand side of this equation represents the curvature of the water at any given point. That curvature is a pretty good measure of the strain the water is under. The spikier the water, the more the curvature, the more the strain.

Time and Space together

The wave equation is an assertion that curvature of a wave in space (the strain) is related to the way that the wave travels as time moves forward. The relation is given by the factor c^2 in the wave equation. The acceleration of some part of the wave through time is equal to the curvature of the wave at that point multiplied by the square of the wave's speed in that medium. If we're talking about light waves, then c is the trusty old 3*10^8 m/s. If we're talking about sound waves or water waves, then c is going to be different. The exact value of c depends on what kind of medium we're talking about.
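You can make this concrete with a tiny numerical sketch: discretize the wave equation with finite differences, so each time step updates the medium based on its local curvature. The grid spacing, time step, and wrap-around boundary are arbitrary illustration choices, not anything from a real solver:

```python
import numpy as np

def step_wave(u_prev, u_curr, c, dx, dt):
    """Advance a 1D wave one time step with a leapfrog update.

    The change in the medium's motion at each point is driven by the
    local curvature (second spatial derivative), scaled by c^2, just
    as the wave equation says. np.roll gives periodic boundaries.
    """
    curvature = (np.roll(u_curr, -1) - 2 * u_curr + np.roll(u_curr, 1)) / dx**2
    return 2 * u_curr - u_prev + (c * dt) ** 2 * curvature
```

Starting from a single bump at rest, repeated calls to `step_wave` make the bump split and travel outward, which is the behavior the equation is describing.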

Conclusions

So that's the core of what a wave is. A wave is just a way that strains in a medium move around, and you can describe that motion using a specific equation. The wave equation says that the curvature of the medium at a point (the strain) determines how the medium's motion at that point changes over time.
If you know the properties of a medium and what the strain in the medium looks like now, you can calculate the curvature of the medium. That lets you figure out what the medium looks like later. You may also need some extra information, like the velocity of the wave now or the energy being put into the wave.

Feature Selection in SIFT

SIFT is Scale Invariant Feature Transform, which is a commonly used image recognition algorithm. Like most image recognition algorithms, it works by projecting a 2D image to a 1D feature-vector that represents what's in the image, then comparing that feature-vector to other feature-vectors produced for images with known contents (called training images). If vectors for two different images are close to each other, the images may be of the same thing.

I did a bunch of machine learning and pattern matching when I was in grad school, and the thing that was always most persnickety was choosing how to make the feature-vector. You've got a ton of data, and you want to choose only values for the feature-vector that are representative in some way of what you want to find. Ideally, the feature-vector is much smaller in size (in terms of total bytes) than the original image. Hopefully the decrease in size is achieved by throwing away inconsequential data, and not data that would actually be pretty helpful. If you're doing image recognition, it might make sense to use dominant colors of an image, edge relationships, or something like that. SIFT is much more complicated than that.

SIFT, which has been pretty successful, creates feature-vectors by choosing key points from band-pass filtered images (they use the Difference of Gaussians method). Since an image may be blurry or of a different size than the training images, SIFT generates a huge number of Difference of Gaussian variants of the image (DoG variants). By carefully choosing how blurred the images are before subtraction, DoG variants can be produced that band-pass filter successive sections of the frequency content.

The DoG variants are then compared to each other. Each pixel in each DoG variant is compared to nearby pixels in DoG variants of nearby frequency content. Pixels that are maximum or minimum compared to neighboring pixels in nearby frequency DoGs are chosen as the features for the feature-vector, and saved as both location and frequency. These feature-vector elements (called keypoints) then encapsulate both space information (the location of the pixel in the original image), and frequency information (it's a max or min compared to nearby frequency DoGs).
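That extrema check can be sketched with numpy, assuming the DoG variants are stacked into a 3D array. This is a simplification of the real SIFT test (which also interpolates the extremum location and applies thresholds), and the array layout is my own choice:

```python
import numpy as np

def is_scale_space_extremum(dog_stack, s, y, x):
    """True if pixel (y, x) in DoG layer s is strictly greater (or
    strictly smaller) than all 26 neighbors in the 3x3x3 block spanning
    the adjacent frequency (scale) layers.

    dog_stack has shape (scales, height, width); s, y, x must be
    interior indices so the 3x3x3 block exists.
    """
    block = dog_stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].flatten()
    center = dog_stack[s, y, x]
    neighbors = np.delete(block, 13)  # index 13 is the center itself
    return bool(center > neighbors.max() or center < neighbors.min())
```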

Pixels that are too similar to nearby pixels in the original image are thrown away. If the original pixel is low contrast, it's discarded as a feature. If it's too close to another keypoint that is more extreme in value, then it's discarded. Each of the remaining feature-vector elements is associated with a rotation by finding the gradient of the DoG that the element was taken from originally. Finally, all of these points are assembled into 128 element arrays describing the point (position, frequency, rotation, nearby pixel information).

This means that if there are a large number of keypoints in the image, the feature vector used for image recognition could be even larger than the size of the image itself. So they aren't doing this to speed up computation; it's solely for accuracy.

And SIFT does get pretty accurate. It can identify objects in images even if they're significantly rotated from the training image. Even a 50 degree rotation still leaves objects identifiable, which is pretty impressive.

Difference of Gaussians

While reading about image recognition algorithms, I learned about a method of band-pass filtering I hadn't seen before. The Difference of Gaussians method can be used to band-pass filter an image quickly and easily. Instead of convolving the image with a band-pass kernel, the Difference of Gaussians methods uses two low pass filters and subtracts the two.

You start by blurring the image using a Gaussian kernel, then subtract the blurred image from a second, less blurred version of the original. The result is an image with only features between the two blur levels. The two levels of blur used in the subtraction step can be varied to give different band pass limits.
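Here's a minimal sketch of the method on a 1D signal, with a hand-rolled Gaussian blur so it's self-contained. The kernel radius and the sigma values are arbitrary illustration choices:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_1d(signal, sigma):
    """Low-pass filter: convolve with a Gaussian kernel."""
    radius = int(3 * sigma) + 1
    return np.convolve(signal, gaussian_kernel(sigma, radius), mode="same")

def difference_of_gaussians(signal, sigma_narrow, sigma_wide):
    """Band-pass: subtract a more-blurred copy from a less-blurred copy.

    Features slower than sigma_narrow but faster than sigma_wide survive.
    """
    return blur_1d(signal, sigma_narrow) - blur_1d(signal, sigma_wide)
```

The same idea extends to 2D images by blurring along both axes; image libraries usually provide the Gaussian blur directly.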

This method can be effectively used for edge detection because it cuts down high frequency noise by subtracting a less blurred image. That means that noise in the image doesn't get treated as an edge. Apparently there are common blur levels that cause the Difference of Gaussians method to approximate the response of ganglion cells (light sensing nerve clusters in the eye) to light that falls on or near them.

Non-binary memory based on quantum effects

NAND Flash memory is made using floating gate transistors. This means that the gates to the transistors are electrically isolated from any other contact on the transistor. Quantum tunneling is used to deposit electrons on the gate, changing the transistor state. When many electrons are located on the gate, the transistor is on and stores a logical 1. When the gate has few electrons, the transistor is off and stores a logical 0. Changing the number of electrons on the gate is done by applying a voltage across the gate that is sufficient to induce electron tunneling between an electrode and the isolated gate.

Newer forms of NAND flash actually store multiple bits on each transistor. Different levels of electrons on the gate correspond to different values, and the number of bits stored corresponds to the number of discernible electron levels (n electron levels store log2(n) bits).
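In other words, the bit capacity per cell grows with the log of the number of distinguishable charge levels. As a tiny illustration (the function name is just for this example):

```python
import math

def bits_per_cell(levels):
    """Bits stored by one floating-gate cell that can hold `levels`
    distinguishable amounts of charge."""
    return math.log2(levels)
```

Two levels gives the classic 1-bit cell (SLC), four levels gives 2 bits (MLC), and eight levels gives 3 bits (TLC).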

A lot of thumb drives are made using NAND flash, which means people are just carrying around non-binary memory that makes use of quantum effects.

I think it's pretty clear that we already live in the future.

parts.io

parts.io is a new parametric search engine for parts. Supposedly its claim to fame is that it finds parts from all distributors (like octopart) and presents a more complete picture of the lifecycle status of a part (super important). It seems to have promise, but I don't think it's better than digikey yet.

I liked the way parts.io presented information. After I make a selection in digikey, it can be hard to undo it or find out what selection I even made. The UI for parts.io is definitely better.

My main problem with parts.io is that it doesn't seem to know very much about electronics. If I'm making an RF filter, I can't use general purpose capacitors for it. Digikey makes it easy to select only RF rated capacitors. Parts.io doesn't even seem to know the difference exists.

Parts.io also has a units problem. Large or small part values are displayed in scientific notation with no units. The standard prefixes (k for kilo, M for mega, etc.) sometimes aren't recognized by their search engine. It also turns out capacitances are displayed in uF only, which is somewhat non-standard.

parts.io seems to have the right idea, but I'll wait until they're out of alpha to actually start using them.

LG HBS 700 Wireless Headphones Teardown

A friend gave me his broken pair of LG HBS 700 wireless headphones a while ago, and I finally got around to tearing them down.

headphones before teardown

Housing

The housing for these headphones is made up of two cavities on either side of the head. One side holds the battery, and the other holds pretty much everything else. The semi-rigid band connecting the two cavities contains wires for audio, power, and the several buttons on the non-circuitry side.

The buttons are plastic parts with supports on two sides. The supports deform, allowing pegs on the bottom of the buttons to depress electrical switches mounted on the PCB.

buttons from left cavity of housing

The housing also has cups to hold the headphones when they're not being used. The cups have magnets in them, which is pretty cool. Headphones already have magnets in them for the speakers, so this is a nice use of an incidental property of the speaker.

Left Cavity

The left cavity contains the battery, three switches, and the left headphone. The audio for the left headphone comes directly from the right cavity, but the audio wires from the right cavity are soldered onto the left PCB. The left headphone is also soldered onto the left PCB, and the audio is routed through a couple of ~1cm traces. I'm assuming they did that for ease of assembly: that way they only need one type of headphone pigtail.

battery in left cavity

The battery doesn't have many markings, but based on its size I'd guess it's about 200mAh to 220mAh.

The PCB in the left cavity has three buttons on it. These buttons look to be PTS530 buttons or something similar.

buttons on left PCB

Right Cavity

The PCB in the right cavity has all the main circuitry in these headphones. The USB jack is mounted on the bottom, along with the vibrating motor, on/off switch, 26MHz oscillator for the BT IC, microphone, and a bunch of passives.

Bottom of main PCB

The top of the main PCB has three buttons, one of them canted slightly to make it fit. There's also an 8-pin SOIC that's probably an EEPROM. The other main component on this board is the SoC handling pretty much all of the features of the device.

top of main PCB

The SoC is a CSR57F68. I couldn't find this part on the CSR web site, but based on this circuit I think it's safe to say it has integrated BT radio, battery charging, audio DAC and ADC with amplifiers, as well as a microcontroller core.

RF

I have a bit of experience with BT antennas, but RF is definitely not my area of expertise. I was pretty interested to see the RF hardware in this device, and there were a few surprises.

The antenna in these headphones looks to be an inverted-L antenna. Interestingly, the antenna trace is routed on both sides of the board with vias connecting the sides. The edge of the PCB is also copper-clad. I'm assuming this was done to increase the radiation resistance of the antenna, though it may also have been done to push the impedance of the antenna closer to 50 Ohms.

The trace feeding the antenna is pretty long, and has a couple of 45deg angles in it. There's good stitching of the ground plane on one side of that trace, but not on the other (presumably because there are traces there on another layer). There are also a few fat traces (probably power) going directly underneath the antenna trace. From the looks of the board, there's a ground plane between the antenna trace and the power traces, so that's probably not causing any interference.
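Feedlines like this are usually sized for 50 Ohms from the board stackup. A sketch of the common IPC-2141 microstrip approximation; the stackup dimensions below are hypothetical, since I don't know this board's actual layer geometry:

```python
import math

def microstrip_z0(w_mm: float, h_mm: float, t_mm: float, er: float) -> float:
    # IPC-2141 microstrip approximation, reasonable for 0.1 < w/h < 2
    # w = trace width, h = height above ground plane, t = copper thickness
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# hypothetical FR-4 stackup: 0.3mm trace, 0.2mm to ground, 35um copper
print(f"{microstrip_z0(0.3, 0.2, 0.035, 4.5):.1f} Ohms")
```

The ground plane under the trace is doing double duty here: it sets the impedance and it shields the feedline from those power traces on the other layer.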

The feedline for the antenna is driven by the CSR chip through what looks to be a pretty standard balun.

One interesting thing about this device is that there's no metal can over the RF components. Instead, they used a piece of conductive tape. The tape, which is a square the size of the CSR chip with a little tail, is placed over the CSR chip. The tail trails off to a portion of the ground plane with the soldermask removed, thus providing a ground plane above the CSR chip as well.

This conductive tape tail passes over a couple of components and traces to get to the ground plane. To protect those components from being shorted to ground, there's a small piece of kapton tape between the conductive tape and the components.

This seems like a pretty finicky thing to do in manufacturing, and I can't help but think that they could have saved some money by re-designing the board to move those components.

My hypothesis is that they realized they had an EMI problem late in design after they'd already tooled up. I bet they just made a small change to the soldermask layer and left the programming for the Pick and Place machines the same. The conductive tape was probably just an emergency measure.

I'd be interested in checking out a later version of these headphones to see if they have the same tape solution.

Microphone

The microphone on these headphones is encapsulated in a rubber case. I'm guessing they cast the rubber case as a tube, put the microphone in the back of the tube, and then epoxied over it. That provides a waveguide for the audio from outside the housing directly to the microphone.

Microphone in housing, viewed head-on to the audio port.

The microphone itself looks like a standard electret microphone. The rubber case forms a kind of funnel. I cut the rubber case in half to see how it was formed internally.

Audio channel through the rubber microphone case.

The audio channel in the rubber microphone case is fairly narrow where it meets the external world. I'm assuming that was done to prevent water from getting in and ruining things. The aperture then widens to the total size of the microphone. Sometimes these chambers can act as Helmholtz resonators, but I think that in this case that won't happen. Because the microphone itself makes up the entirety of the back wall of the chamber, all of the sound energy is probably absorbed by the microphone instead of resonating.
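For reference, the resonant frequency of a neck-plus-chamber geometry falls out of the standard Helmholtz formula. The dimensions below are hypothetical, just to show the scale involved:

```python
import math

def helmholtz_freq(neck_area: float, neck_len: float, volume: float, c: float = 343.0) -> float:
    # f = (c / 2*pi) * sqrt(A / (V * L)), all dimensions in meters / m^2 / m^3
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (volume * neck_len))

# hypothetical geometry: 1mm diameter neck, 2mm long, ~30mm^3 chamber
area = math.pi * (0.5e-3) ** 2
print(f"{helmholtz_freq(area, 2e-3, 30e-9):.0f} Hz")
```

A chamber at that scale resonates in the low kHz, squarely in the voice band, which is why the geometry could matter for a microphone port. As noted above, though, with the microphone absorbing the back wall of the chamber, the resonance is probably too damped to show up.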