Last month, I started experimenting with AcesoUnderGlass's Knowledge Bootstrapping method. I started out with a small project learning some facts about radiation and electronics. That worked well, so I then went to learn about something a little less straightforward: GPT-3's likely impact on AI safety.
I have to be honest, selecting this topic may have been a bit of a mistake. I was seeing a lot of headlines and posts about GPT-3, and I had a pretty immediate emotional reaction of "GPT-3 isn't a big deal and people don't know what they're talking about."
I had a lot of fun writing this post, but I'm less happy with the final product than I expected to be. The thing is, I had that original emotional reaction to a bunch of headlines. I literally hadn't read the articles before I decided to try to rebut them. When I went to read the articles themselves, they were different from what the headlines and twitter hype had implied (shocking, I know). As I read more about GPT-3, I ended up changing my mind several times about my thesis. The post I wrote was much different from the one I had planned.
In a lot of ways, this is great. I learned a lot about the current state of AGI research, and about some of the current major players in AI safety. Deciding (before doing any research) to write a post about the topic is what gave me the motivation to actually read all those articles, and then read the background, and then read even more background. I hadn't really kept up with these things for the past three years, so a lot had changed since I last looked into it. This project gave me the push I needed to finally learn how the transformer architecture really works, and to uncover some of what DeepMind has been doing. I hadn't even known that MuZero existed before starting on this project.
All of this leaves me still excited about the knowledge bootstrap method, but I'm also noticing that keeping my motivation up for a research project is hard. When I have a blog post that I'm excited about writing, it's easy to put in effort to learn and write. When someone is wrong on the internet, of course I'll be burning to write about it. The more I wrote my post, though, the clearer it became that I was the one wrong on the internet.
That started sapping my motivation to write, even though the things that I was writing changed enough that I still stand by their accuracy.
As I closed in on answering most of the questions that I had come up with in my original question decomposition, I had such a different understanding of the topic that I realized I had an enormous number of new questions. I answered those questions, and then the questions that followed from those. Eventually, I came to the point where I thought I had a decent stance on the original safety question. At that point, I also realized how much detail goes into making a decent prediction about GPT-3's implications for future safe AI. And much of that detail was (and is) still unknown to me.
As I began to realize how much I'd have to research in order to do the topic justice, I could feel my excitement fade. Given that I've had a very stuttering relationship with this blog over the past decade or so, I could recognize that if I let my excitement about the topic drive me into perfectionism I wouldn't post anything. I also recognized that if I didn't post that blog entry, I'd feel like a failure and there would be a long drought in me posting anything at all.
I decided that I had enough for a high level post and wrote it, but I ended up writing a more milquetoast thesis than I had originally intended.
The most important thing for me in any kind of learning project is keeping up motivation. For work-related topics, there's enough external motivation that I can power my way to a solution one way or another. For personal projects, even personal projects that could help me out at work, I need to stay interested throughout the process to have any hope of success.
My first experience of Knowledge Bootstrapping showed me that an emphasis on questions could help me keep my motivation up. By keeping my thoughts close to my original questions, it was easy to remember why I was doing the thing. This second experience of the process showed me that the blog output itself is still a big part of my motivation, and I'll need to plan around that in future projects.
I still view question decomposition as one of the more important components of Knowledge Bootstrapping. My original project had a very straightforward set of questions, and after I decomposed them it was easy to pull answers out of the sources I found. The hardest part of my Radiation+Electronics mini-project was finding sources that went deep enough to truly answer my questions.
The GPT-3/AI-safety mini-project was much different. When I first started decomposing questions about it (before I had done much reading), I had a ton of trouble figuring out what my primary question even was. Then I had trouble breaking that down into questions that reading books and papers could answer. I did my best to decompose the questions, then went and tried to answer them. That helped me orient myself to the field again, and when I came back to try answering my original questions I could clearly see some better question decompositions.
I ended up iterating this process several times, and I think for difficult or new topics this is probably crucial.
Elizabeth says that if you're not sure what notes to take when you're reading a source, you should go look at your questions again. That isn't great advice if you're having trouble with the decomposition step itself. I tried to address this by emphasizing the difference between what I was reading and what I already thought, and writing that down. That also helped me to figure out what my questions were, as I would sometimes realize I disagreed with something but be uncertain why.
Elizabeth emphasizes doing a brain dump of what you think about any given source before you really start reading it. I didn't do this very much in my first mini-project, but I did it for every source in this project.
I now think that my radiation+electronics mini-project didn't need much of the brain-dump step because I'd been thinking about the topic on and off for several years. I pretty much knew what I already knew. My mindset there was focused on fact acquisition and model building; I didn't have to worry much about conflicting information or exaggeration.
With GPT-3 and AI safety, there's no settled science about the topic. Everything is new, so people are all very excited. That meant that I had to be more careful with what sources I was using. I also didn't have a good handle on what questions I was trying to answer at the beginning, which meant that it was harder for me to notice what was important about each source's content.
This is where the pre-read brain-dump really shines. Before I did an in-depth read of any source, I'd free-write for a while about what I expected the source to say. I'd also write about what I personally thought about the expected content of the source. Then when I went to read the source, it was easy for me to notice myself being surprised. That surprise (or disagreement, or whatever) was the trailhead for the questions that I should have been asking at the beginning.
Interestingly, this seems to be the exact opposite of the reason that Elizabeth does it. She talks about how, if she didn't get her brain dump on paper, those thoughts would be floating around her head interrupting her reading process.
When I don't do the brain dump, I don't have any of those thoughts floating around my head as I read. That makes it really hard for what I read to latch on to what I already know. I'll sometimes read something and feel like I understand it, but then be unable to recall it even ten minutes (or ten pages) later. By brain-dumping, I prime my mind with all those thoughts so that I'm actually engaging with and thinking about the content in the source.
(Though Elizabeth also talks about this a bit here, where she says breaking the flow of a book is a sign of engagement).
In the past I've tried to address this with Anki. When I was reading textbooks cover to cover, I'd create flash cards for the major things I learned. This was generally very effective, but I've ended up with a truly enormous number of cards. I haven't kept up on my Anki training for the past couple weeks, and I now have hundreds of cards in my backlog. Creating cards this way is also pretty slow, and really takes me out of the flow of reading.
A good future workflow might be something more like:
- question decomposition
- source selection
- read and take notes
- post-process notes and write blog post
- generate Anki cards that are more focused
One of the things that held me back during my first Knowledge Bootstrapping mini-project was being unfamiliar with some of the markdown features that Elizabeth makes common use of. Because of that, my writing project was slower and more awkward than I think is Elizabeth's experience.
I took some time (really just ten minutes or so) to look up some of the markdown features that I had wanted to use in my first project. Using those made this second project a lot easier. I was a lot more comfortable drafting the post and referring to each source. I'm beginning to see how the process itself could become more natural and get in the way less.
I still feel pretty curious about Elizabeth's actual workflow during note-taking and synthesis, though. She described it at a high level in her post, but I'm more interested in the nitty-gritty at this point. When does she make a tag, and why? How does she manage her tags? Does she really, actually use that many of them?