Fractal Learning

Posted on Sat 10 October 2020 in learning • 10 min read

[image: tree]

Part 1 - Complexity

This world is a complex, interconnected web of systems. We've tried to make sense of this by creating various (seemingly-unrelated) disciplines. With the huge number of disciplines, I think you'll agree with me when I say that it's impossible for any one person to be an expert in all of these fields.

It's turtles all the way down, however, and even as you go deeper into a field, there's still so much complexity that you would despair at the idea of ever really understanding anything even within the confines of one specific field.

Most fields seem to exhibit a fractal pattern. By this, I mean that the more you try to zoom in on a specific aspect of a system, the more detail is revealed. The same is true whether you're talking about economics or physics or biology. A whole new world of detail is revealed at all the different levels.

Of course, when confronted with complexity, our first instinct is to reduce: to break a system into parts and treat those parts in isolation. Within fields, we've tried to come to terms with the complexity by creating specializations, which are essentially sub-fields.

This is why a microbiologist seems to speak a completely different language from an ecologist. This might make you wonder whether the two fields are even related; one could argue that they both view the world through a radically different set of lenses.

As human knowledge progresses and we're able to understand systems at different levels, we're continuously spawning newer and newer specializations and hyperspecializations.

Nowhere is this concept clearer than in computer science where incredibly complex systems have been built to deliver cat videos to your screen. There's nothing special about my choice of computer science here, as I'm sure there are similar levels of complexity lurking beneath the surface of any domain.

You could spend years and years in deep study and you still wouldn't really fully get how a computer worked. There would always be gaps in your knowledge. Even behind something as seemingly simple as allowing you to read these words on your screen, there is so much hidden complexity. There are towers and towers of abstractions that enable this to happen:

  1. Your CPU (essentially a tiny piece of silicon that can't do much but add two numbers together) executed a few million instructions during the time you were reading this sentence. Welcome to Computer Architecture!
  2. These instructions are, in most cases, not understandable by a human. So we invented high-level languages and wrote other software to convert these high-level languages into machine instructions. Welcome to Compilers!
  3. There are multiple applications running on the system at the same time; somehow, a piece of code (the Operating System) manages to abstract all this away such that each application can pretend it's the only one running on the system. Similarly, there are other resources (RAM, files, I/O devices) that all need to be shared between the hundreds of programs running on your system. Welcome to Operating Systems!
  4. Your device doesn't exist in isolation. In fact, a lot of its capabilities arise in relation to other devices. Just to load and view this website, it had to send signals out into the ether which somehow (like a labelled envelope) found their way to the right servers, which then responded with the data you were requesting. Welcome to Networks!

In fact, all of the cases above are huge simplifications. I haven't even begun scratching the surface. If you're curious to learn more, look at how much detail is hidden behind a simple Google search.

So how do you even begin to understand and make sense of things which seem to have so many interconnected pieces?

A short digression here: there is a famous result in psychology about working memory (also called Miller's Law, after the paper "The Magical Number Seven, Plus or Minus Two"). It suggests that humans can, on average, hold about seven objects in their short-term memory at one time. Basically, if you're playing around with concepts in your head and thinking about how things relate to one another, there seems to be a cognitive limit of about seven items.

Of course, that result is quite fuzzy, and it's dangerous to generalize too much from any one study. However, I'd suggest we can take away this lesson from Miller's Law: humans can't hold too many things in their head at the same time. Software systems are very complex beasts, and it's beyond any person's capacity to hold in their head all the minute details of how something works.

This is where Abstraction comes in. This is one of the fundamental building blocks of Computer Science, Engineering, and Problem Solving in general. Abstraction is when you squint your eyes and treat something as a black box. You are temporarily choosing to not care about what's happening inside the black box because other details are more important.

For an example of abstraction, think about the interfaces to objects you commonly use. A car, for example, hides a lot of complicated circuitry and machinery, but at the end of the day, all you need to care about is the steering wheel and a couple of pedals. That's the abstract view of a car. I don't care how the car turns these inputs into the multiple complicated outputs of fuel intake, torque, etc. I completely ignore all that because it's not important to me. I just want to get from point A to point B, and I only need to know how to use the interface of the car to achieve that goal.

A well-designed interface (this is true of software interfaces too!) allows you to focus on fewer aspects of the car and expend less cognitive effort while driving; think of manual vs. automatic vs. self-driving cars.
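To make the interface idea concrete, here's a toy sketch in code (the class and method names are mine, purely for illustration): the driver only ever touches the public `accelerate` method, while the messy internals stay hidden behind it.

```python
class Car:
    """The public interface: all the driver sees is a pedal."""

    def accelerate(self, pedal_pressure):
        # The driver presses a pedal and the car speeds up.
        # Everything below this call is the "black box".
        fuel = self._adjust_fuel_intake(pedal_pressure)
        return self._apply_torque(fuel)

    # Internal details, hidden behind the interface (the leading
    # underscore marks these as private-by-convention in Python).
    def _adjust_fuel_intake(self, pressure):
        return pressure * 0.8  # made-up internal detail

    def _apply_torque(self, fuel):
        return f"accelerating with fuel flow {fuel:.1f}"

car = Car()
print(car.accelerate(10))  # accelerating with fuel flow 8.0
```

Swapping in a better engine would change the underscored internals without the driver's code changing at all; that's the whole point of a stable interface.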

Sure, that's all well and good, but this article is about Dealing with Complexity, not about Pretending it doesn't exist. After all, someone does ultimately have to design and work with the underlying complexity (Tesler's Law). You'd be a terrible automobile engineer if you were only able to see a car as its interface!

Things are even harder when you're a student and trying to learn more about a certain field you have no prior knowledge in; with so much to learn, how do you decide what is worth digging deeper into?

So hold on to this idea of abstraction, and let's talk about Fractal Learning, which is the focus of this blog post.

Part 2 - Fractal Learning

Fractal learning is essentially a strategy used to make sense of complex systems without getting too lost in the details. It's about being in that Goldilocks zone of not wasting your time learning too much (you have other things to do) while at the same time getting an overall understanding of how things fit together.

I first came across this concept in the excellent IntermezzOS Book. It beautifully encapsulates the challenges of students and researchers working to make sense of complex systems:

It's impossible to learn everything at once. If you keep digging, you'll find more questions, and digging into those questions leads to more questions... at some point, you have to say "okay, I know enough about this for now, it's time to move on."

Here's that picture of a tree again:

[image: tree]

Let's imagine that this tree represents your field of interest. Your job is to somehow navigate this tree (or a branch) and learn everything. Now, clearly, learning everything isn't going to happen in this lifetime, but you still want to know enough to get a good idea of how everything fits together.

How do you do it?

Most people seem to follow one of two strategies - and these strategies come under the umbrella of tree-traversal algorithms in computer science.

Let's dig a bit deeper into both of these as they are relevant to the discussion.

The first is depth-first search (I call it falling down the rabbit hole). It's represented by this animation [1]:

[animation: depth-first search on a tree]

Here, you start at the root of the tree (represented by 1) and you keep going deeper and deeper along any one path. You stop when you can go no further and then try one of the other paths.

To better visualize it, here is the same algorithm working to solve a maze [2]:

[animation: depth-first search solving a maze]

I hope you get why I called it falling down the rabbit hole; you're digging deeper and deeper into a specific topic without ever taking a step back to explore other related topics. This is a laser-focused strategy built on exclusion. It's the equivalent of the child who keeps asking "Why?" about a specific topic until she can no longer get a meaningful answer. Someone who followed only this strategy would have very detailed, specific knowledge about that one thing but absolutely no idea about even closely related things.
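The rabbit-hole strategy can be sketched in a few lines. Here's a minimal depth-first traversal of a small tree (the tree shape is my own example, with nodes numbered in the order DFS happens to visit them):

```python
# A tiny tree: each node maps to its list of children.
tree = {
    1: [2, 7, 8],
    2: [3, 6],
    3: [4, 5],
    4: [], 5: [], 6: [], 7: [],
    8: [9, 12],
    9: [10, 11],
    10: [], 11: [], 12: [],
}

def dfs(node, visited=None):
    """Follow one path all the way to the bottom before
    backtracking to try the next one."""
    if visited is None:
        visited = []
    visited.append(node)
    for child in tree[node]:
        dfs(child, visited)
    return visited

print(dfs(1))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```

Notice how the traversal exhausts the entire subtree under node 2 before it even glances at node 7.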

Let's talk about the other strategy - breadth-first search (I call it flooding):

[animation: breadth-first search on a tree]

This is the polar opposite of the previous approach. Here you are quite timid: you never go more than one level deeper at a time, and you make your way through the tree level by level.

Here it is again, within the maze context [3]:

[animation: breadth-first search solving a maze]

Again, I hope it makes sense why I called it flooding; it works similarly to how water slowly rises to flood an area. Here you dig very shallowly and take the time to understand the basics of everything before going deeper. This strategy is focused on being inclusive and getting a sense of how everything fits together, even the parts that aren't immediately relevant. Someone who followed only this strategy would be a jack of all trades: he would know the fundamentals of everything, but, as the saying goes, he would be a master of none.
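The flooding strategy looks like this in code. It's the same small example tree as before (my own illustration, not the one from the animations); the only real change from DFS is swapping the call stack for a queue:

```python
from collections import deque

# The same tiny tree: each node maps to its list of children.
tree = {
    1: [2, 7, 8],
    2: [3, 6],
    3: [4, 5],
    4: [], 5: [], 6: [], 7: [],
    8: [9, 12],
    9: [10, 11],
    10: [], 11: [], 12: [],
}

def bfs(root):
    """Visit nodes level by level, like water flooding outward."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()  # take the oldest waiting node
        order.append(node)
        queue.extend(tree[node])  # its children wait their turn
    return order

print(bfs(1))  # [1, 2, 7, 8, 3, 6, 9, 12, 4, 5, 10, 11]
```

Compare the output with the depth-first order: BFS finishes an entire level (2, 7, 8) before touching any grandchildren.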

So which should you follow: breadth-first or depth-first?

Sadly, as with a lot of things in life, the answer is: it depends. You will have to use your judgment and intuition to decide which approach to use depending on the situation. Both approaches have their trade-offs.

Of course, given enough time, both approaches would cover the entire tree. As we saw, both approaches managed to solve the maze.

But we don't really have much time, do we?

In the following examples (both of which are real-world scenarios), the above strategies are sub-optimal. You will have to use a combination of both, carefully using your judgment to pick which topics are worth exploring deeper, and which are only worth skimming through.
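Interestingly, computer science has a named algorithm for exactly this kind of blend: iterative deepening, which re-runs depth-first search with a gradually increasing depth limit, giving you BFS-like level coverage with DFS-like focus. Here's a minimal sketch (my own toy tree and function names):

```python
def depth_limited_dfs(tree, node, limit, order):
    """DFS, but refuse to go deeper than `limit` levels."""
    order.append(node)
    if limit > 0:
        for child in tree[node]:
            depth_limited_dfs(tree, child, limit - 1, order)

def iterative_deepening(tree, root, max_depth):
    """Re-run depth-limited DFS with a growing depth budget."""
    for limit in range(max_depth + 1):
        order = []
        depth_limited_dfs(tree, root, limit, order)
        yield limit, order

tree = {1: [2, 5], 2: [3, 4], 3: [], 4: [], 5: [6], 6: []}
for limit, order in iterative_deepening(tree, 1, 2):
    print(limit, order)
# 0 [1]
# 1 [1, 2, 5]
# 2 [1, 2, 3, 4, 5, 6]
```

As a learning metaphor: each pass deepens your understanding of every branch a little, rather than exhausting one branch while the others stay untouched.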

When you are new to a field and trying to get an overall sense of how the field works, it might be a waste of time to fall down any rabbit holes as you don't have the experience to know which lines of questioning are relevant. So a breadth-centric approach might serve you well. In this case, you should form appropriate abstractions for the concepts you are encountering so that you can dig deeper into them later if you need.

An example of this would be reading only the Wikipedia introductions for the topics you encounter, skimming the rest of each article, and clicking through the links in the introductions for further exploration. Here, it's important to still pay some attention to the details so that you can return to them later.

On the other hand, let's say you are trying to find the solution to a specific problem. Here it might be wiser to follow a more depth-centric approach, using your intuition to guide you down relevant lines of questioning. Note that it's much better to already have an overall sense of how things work; otherwise, you wouldn't know where to start, and the search would be undirected (and probably fruitless). Here, thinking too much about every side topic you encounter would just be a distraction. It may save you some time to skim through such material and get an overall sense of how it works, in case you need it later. However, it is essential to stay on track and not lose sight of the specific thing you are trying to solve at that moment.

An example of this would be trying to find a specific piece of information on Wikipedia. You would have a sort of tunnel vision: you know exactly what you want, and everything else is a mere distraction. Even so, it is important not to get too hyper-focused; pay some attention to the overall context as well. That way, your understanding of the entire field grows, not just the specific part you are dealing with.

Ultimately these two examples really serve to demonstrate what the topic of this blog post is about: Fractal learning.

Fractal learning is ultimately about balance and flexibility. It's about:

  • keeping the big picture in your mind even when digging into the details, whilst simultaneously

  • paying close attention to the details even when you are just skimming

  • choosing the appropriate level of abstraction for what you're trying to do, whilst simultaneously

  • zooming in and out of different levels of abstraction.

It means that sometimes you choose to accept things the way they are without questioning deeper; and maybe later, you choose to dig deeper into those same assumptions.

This might seem quite contradictory; there's almost a Zen aspect to it, as making these trade-offs is not an easy or simple thing to do. Failure is inevitable, and it is natural and expected that you will sometimes go too deep into irrelevant details and too shallow into the important ones. But this is definitely a skill that can be learned. In fact, it's not just a skill but a meta-skill that will greatly increase the rate at which you learn all other skills.

I have seen a general tendency in myself to dig too deep into a topic early on and get lost in the maze. If you're like me, I hope this article encourages you to pause, take a step back, and think about the overall system every once in a while.

It's okay to not "fully" understand something before moving on, whatever that means. You'll get back to it. Sometimes, learning something else is more important than diving into every last detail.

  [1] Source
  [2] Source
  [3] Source