How Might We Learn? by Andy Matuschak

Highlights

Prompts

Your ideal learning environment

What is the most important question to ask about how one learns?
What do I want learning to be like for myself?

What are the two characteristics of rewarding high-growth learning periods?

  1. Learning that felt like play.
  2. Learning that felt transformational.

What is implicit learning?
Learning driven by emotion and meaning, i.e. what's important and useful to the learner.

What is guided learning?
Learning that considers how to effectively manage cognitive load.

How can you integrate implicit learning and guided learning?
Choose learning projects that are meaningful to you, but find cognitive support to learn effectively.

How can one use AI to integrate implicit learning and guided learning?
The AI can use an individual's personal context as a layer to synthesize foundational knowledge of a given field.

Designing an ideal learning environment

Engage with the text by using questions.

Your ideal learning environment

What do you want learning to be like--for yourself? If you could snap your fingers and drop yourself into a perfect learning environment, what's your ideal?

What were the most rewarding high-growth periods of your life?

First, people will tell me about a time when they learned a lot, but learning wasn't the point.

Second: in these stories, learning really worked. People emerged feeling transformed, newly capable, filled with insight and understanding that has stayed with them years later.

There's an age-old conflict among educators and learning scientists--between implicit learning (also called discovery learning, inquiry learning, or situated learning) and guided learning (often represented by cognitive psychologists).

Advocates of implicit learning methods argue that we should prioritize discovery, motivation, authentic involvement, and being situated in a community of practice. In the opposing camp, cognitive psychologists argue that you really do need to pay attention to the architecture of cognition, long-term memory, and procedural fluency, and to scaffold appropriately for cognitive load.

Implicit learning aptly recognizes meaning and emotion, but ignores the often decisive constraints of cognition--what we usually need to make "learning actually work". Guided learning advocates are focused on making learning work, and they sometimes succeed, but usually by sacrificing the purposeful sense of immersion we love about those rewarding high-growth periods.

One obvious approach is to try to compromise. Project-based learning is a good example of that compromise.

We should take both views seriously, and find a way to synthesize the two. You really do want to make doing-the-thing the primary activity. But the realities of cognitive psychology mean that in many cases, you really do need explicit guidance, scaffolding, practice, and attention to memory support.

You want to just dive in, and you want learning to actually work. To make that happen, we need to infuse your authentic projects with guided support, where necessary, inspired by the best ideas from cognitive psychology. And if there's something that requires more focused, explicit learning experiences, you want those experiences to be utterly in service to your actual aims.

The aim: learners immerse themselves, as much as possible, in what they're actually trying to do, but get the support they need to understand what they're doing.

Demo, part 1: Tractable immersion

Our AI--and let's assume it's a local AI--can build up a huge amount of context about their background. From old documents on Sam's hard drive, our AI knows all about their university coursework. It can see their current skills through work projects. It knows something about Sam's interests through their browsing history.
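As a minimal sketch of what "building up context" could look like mechanically (the talk doesn't specify an implementation; `embed` below is a toy stand-in for a real on-device embedding model, and every name here is hypothetical):

```python
import hashlib
import math
from pathlib import Path

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size vector.
    A real local AI would use an actual embedding model instead."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def build_personal_index(root: Path) -> list[tuple[Path, list[float]]]:
    """Index text-like files (coursework, notes, code) under `root`."""
    return [(p, embed(p.read_text(errors="ignore")))
            for p in root.rglob("*") if p.suffix in {".md", ".txt", ".py"}]

def retrieve(index, query: str, k: int = 3) -> list[Path]:
    """Return the k files most relevant to the current task."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), p) for p, v in index]
    return [p for _, p in sorted(scored, reverse=True)[:k]]
```

Something along these lines, run entirely locally, is one way an assistant could surface Sam's old coursework when it's relevant to the current project.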

Demo, part 2: Guidance in action

Tools like Copilot help Sam get started, but to follow some of these signal processing steps, what Sam really needs is something like Copilot with awareness of the paper in addition to the code, and with context about what Sam's trying to do.

This AI system isn't trapped in its own chatbox, or in the sidebar of one application. It can see what's going on across multiple applications, and it can propose actions across multiple applications. Sam can click that button to view a changeset with the potential implementation. Then they can continue the conversation, smoothly switching into the context of the code editor.

The AI underlines assumptions made based on specific information, turning them into links.

All this is to support our central aim--that Sam can immerse themselves, as much as possible, in what they're actually trying to do, but get the support they need to understand what they're doing.

Demo, part 3: Synthesized dynamic media

Support doesn't have to just mean text.

Guidance includes synthesized dynamic media.

These dynamic media aren't trapped in the chatbox. They're using the same input data and libraries Sam's using in their notebook. At any time, Sam can just "view source" to tinker with this figure, or to use some of its code in their own notebook.

Demo, part 4: Contextualized study

The high-level explanations they can get from these short chat interactions really just don't feel like enough.

A chat interface is just not a great medium for long-form conceptual explanation. It's time for something deeper.

The AI knows Sam's background and aims here, so it suggests an undergraduate text with a practical focus. More importantly, the AI reassures Sam that they don't necessarily need to read this entire thousand-page book right now. It focuses on Sam's goal here and suggests a range of accessible paths that Sam can choose according to how deeply they'd like to understand these filters. It's made a personal map in the book's table of contents.

When Sam digs into the book, they'll find notes from the AI at the start of each section, and scattered throughout, which ground the material in Sam's context, Sam's project, Sam's purpose.

Sam's spending some time away from their project, in a more traditionally instructional setting, but that doesn't mean the experience has to lose its connection to their authentic practice.

I've heard some technologists suggest that we should use AI to synthesize the whole book, bespoke for each person. But I think there's a huge amount of value in shared canonical artifacts--in any given field, there are key texts that everyone can refer to, and they form common ground for the culture. I think we can preserve those by layering personalized context as a lens on top of texts like this.

In my ideal future, of course, our canonical shared artifacts are dynamic media, not digital representations of dead trees. But until all of our canonical works are rewritten, as a transitional measure, we can at least wave our hands and imagine that our AI could synthesize dynamic media versions of figures.

As Sam reads through the book, they can continue to engage with the text by asking questions, and our AI's responses will continue to be grounded in their project.

As Sam highlights the text or makes comments about details which seem particularly important or surprising, those annotations won't end up trapped in this PDF: they'll feed into future discussion and practice.

In addition to Sam asking questions of the AI, the AI can insert questions for Sam to consider--again, grounded in their project--to promote deeper processing of the material.

And just as our AI guided Sam to the right sections of this thousand-page book, it could point out which exercises might be most valuable, considering both Sam's background and their aims. And it can connect the exercises to Sam's aims so that, aspirationally, doing those problems feels continuous with Sam's authentic practice. Even if the exercises do still feel somewhat decontextualized, Sam can at least feel more confident that the work is going to help them do what they want to do.

Interlude: Practice and memory

Why do we sometimes remember conceptual material, and sometimes not?

If you're learning something new in a domain you know well, each new fact connects to lots of prior knowledge. That creates more cues for recall and more opportunities for reinforcement.

And if you're in some setting where you need that knowledge every single day, you'll find that your memory becomes reliable pretty quickly.

Conceptual material like what Sam's learning doesn't usually get reinforced every day like that. But sometimes the world conspires to give those memories the reinforcement they need.

Each time you reinforce the memory this way, you forget it more slowly. Now perhaps a week can go by, and you're still likely to remember. Then maybe a few weeks, and a few months, and so on. With a surprisingly small number of retrievals, if they're placed close enough to each other to avoid forgetting, you can retain that knowledge for months or years.
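As a concrete sketch of that expanding schedule (the five-day starting interval and the doubling rule below are assumptions for illustration, not numbers from the talk):

```python
from datetime import date, timedelta

def review_schedule(start: date, reviews: int, first_interval: float = 5,
                    multiplier: float = 2.0) -> list[date]:
    """Expanding intervals: each successful retrieval roughly doubles
    the gap you can afford before the next one."""
    schedule, interval, day = [], first_interval, start
    for _ in range(reviews):
        day += timedelta(days=round(interval))
        schedule.append(day)
        interval *= multiplier
    return schedule

# Eight retrievals at 5, 10, 20, ..., 640-day gaps span about 3.5 years.
print(review_schedule(date.today(), 8))
```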

The key insight here is that it's possible to arrange that kind of reinforcement timeline for yourself.

Immersive learning--and for that matter most learning--usually doesn't arrange this properly, so you usually forget a lot.

What if this kind of reinforcement were woven into the grain of the learning medium?

Students often use tools called spaced repetition memory systems to remember vocabulary and basic facts. But the same cognitive mechanisms should work for more complex conceptual knowledge as well.

A new medium--a mnemonic medium--integrates a spaced repetition memory system with an explanatory text, to make it easier for people to absorb complex material reliably.

A vessel of practice

In my personal practice, I've accumulated thousands and thousands of questions. I write questions about scientific papers, about conversations, about lectures, about memorable meals. All this makes my daily life more rewarding, because I know that if I invest my attention in something, I will internalize it indefinitely.

Central to this is the idea of a daily ritual, a vessel for practice. Like meditation and exercise, I spend about ten minutes a day using my memory system. Because these exponential schedules are very efficient, those ten minutes are enough to maintain my memory for thousands of questions, and to allow me to add up to about forty new questions each day.
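A quick simulation makes the arithmetic of that claim vivid. The parameters below (doubling intervals, about three seconds per review) are my assumptions, not Matuschak's; under them, adding forty new questions a day settles at roughly 240 reviews per day, i.e., on the order of ten minutes:

```python
from collections import defaultdict

def simulate(days: int, new_per_day: int = 40, first_interval: int = 5,
             multiplier: float = 2.0, secs_per_review: float = 3.0) -> None:
    """Track daily review load when every successful review doubles
    the interval until the card is due again."""
    due = defaultdict(list)  # day -> intervals of the cards due that day
    for day in range(days):
        for _ in range(new_per_day):        # today's new questions
            due[day + first_interval].append(first_interval)
        reviews = due.pop(day, [])          # cards due today
        for interval in reviews:            # push each card further out
            nxt = int(interval * multiplier)
            due[day + nxt].append(nxt)
        if day == days - 1:
            mins = len(reviews) * secs_per_review / 60
            print(f"day {day}: {len(reviews)} reviews, ~{mins:.0f} minutes")

simulate(365)  # -> day 364: 240 reviews, ~12 minutes
```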

Some problems

One is pattern matching: once a question comes up a few times, I may recognize the text of the question without really thinking about it. This creates the unpleasant feeling of parroting, but I suspect it often leaves my memory brittle: I'll remember the answer, but only when cued exactly as I've practiced. I wish the questions had more variability.

The questions are necessarily somewhat abstract. When I face a real problem in that domain, I won't always recognize what knowledge I should use, or how to adapt it to the situation. A cognitive scientist would say I need to acquire schemas.

Unless I intervene, questions stay the same over years. They're maintaining memory--but ideally, they would push for further processing--more depth over time.

Memory systems are often too disconnected from my authentic practice.

Unless I'm careful, those questions probably won't feel very connected to my project--they'll feel like generic textbook questions.

Demo, part 5: Dynamic practice

They install a home screen widget, which ambiently exposes them to practice prompts drawn from highlights, questions asked, and other activity the AI can access.

This isn't a generic signal processing question: it's grounded in the details of Sam's BCI project, so that, aspirationally, practice feels more continuous with authentic doing.

These synthesized prompts can vary each time they're asked, so that Sam gets practice accessing the same idea from different angles. The prompts get deeper and more complex over time, as Sam gets more confident with the material. Notice also that this question isn't so abstract: it's really about applying what Sam's learned, in a bite-sized form factor.
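A minimal sketch of how that variation might be structured (the question text and the deepen-every-three-successes rule are invented for illustration; a real system would synthesize variants with a model rather than store them):

```python
import random
from dataclasses import dataclass

@dataclass
class Concept:
    """One idea to practice, with several framings per depth level."""
    name: str
    variants_by_depth: list[list[str]]  # depth 0 = recall; deeper = apply
    successes: int = 0

    def next_prompt(self) -> str:
        # Step up a depth level every three successful retrievals.
        depth = min(self.successes // 3, len(self.variants_by_depth) - 1)
        # Vary the framing so recall isn't cued by one exact wording.
        return random.choice(self.variants_by_depth[depth])

fir = Concept(
    name="FIR filters",
    variants_by_depth=[
        ["What does an FIR filter's impulse response determine?",
         "Why is an FIR filter guaranteed to be stable?"],
        ["Your EEG band-pass filter smears the signal in time. Which "
         "design parameter would you adjust, and what's the trade-off?"],
    ],
)
print(fir.next_prompt())
```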

The widget can also include open-ended discussion questions.

When questions are synthesized like this, it's important that Sam can steer them with feedback. Future questions will be synthesized accordingly.

If they make time for a longer dedicated session, we can suggest meatier tasks, like this one. What's more, we can move that work out of fake-practice-land and into Sam's real context.

Demo, part 6: Social connection

Just as our AI can help Sam find a tractable way into this space, it can also facilitate connections to communities of practice.

Our AI can notice these moments and help Sam metabolize them. Here that insight turns into a reflective practice prompt.

Design principles

First, we bring guided learning to authentic contexts, rather than thinking of it as a separate activity.

We're able to make that happen by imagining an AI which can perceive and act across applications on Sam's computer. And as the audio transcript at the end demonstrated, that can extend to activities outside the computer too. This AI can give appropriate guidance in part because--with permission and executing locally--it can learn from every piece of text that's ever crossed Sam's screen, every action they've taken on the computer. It can synthesize scaffolded dynamic media, so that Sam can learn by doing, but with guidance.

Then, when explicit learning activities are necessary, we suffuse them with authentic context.

The AI grounds all the reading and practice Sam's doing in their actual aims. It helps Sam match the learning activities to their depth of interest. It draws on important moments that happen when Sam is doing the thing.

Besides connecting these two domains, we can also strengthen each of them.

Our AI suggested tractable ways for Sam to "just dive in" to a new interest, and helped Sam build connections with a community of practice.

Finally, when we're spending time in explicit learning activities, let's make sure that learning actually works. Our AI creates a dynamic vessel for ongoing reinforcement; it varies over time so that the knowledge transfers well to real situations. And it doesn't just maintain memory--it increases depth of understanding over time.

Two cheers for chatbot tutors

If the user's trying to perform a routine task, chatbot tutors can often diagnose problems and find ways to get the user unstuck.

In large part, I think that's because the authors of these visions are usually thinking about educating (something they want to do to others) rather than learning (something they want for themselves).

Chatbot tutors aren't interested in what I'm trying to do; there's a set of things they think I should know or should be able to do, and they view me as defective until I say the right things.

If I hire a real tutor, I might ask them to sit beside me as I try to actually do something involving the material. They can see everything I'm doing, see what I'm pointing at. If it's appropriate, I can scoot over, and they can drive for a minute. By comparison, the typical conception of a chatbot tutor lives in a windowless box, can only see whatever's provided on scraps of paper passed under the door, and can have no effect on the outside world.

My goal is to dive in, to immerse myself, to start doing the thing. But these chatbot tutors can't join me where the real action is. So interactions with them create distance, pull me away from immersion.

If I hire a real tutor, we'll build a relationship. With every session, they'll learn more about me--my interests, my strengths, my confusions.

That relationship is also important to my emotional engagement.

If I view conversation with my tutor as a kind of peripheral participation in the community I'm hoping to enter--an interaction between a novice in the discipline and a mentor in the discipline--then tutoring will become just part of doing the thing. But if my interaction with my tutor is transactional, that will tend to make my tutoring sessions feel like "learning time", separate from doing the thing.

He's modeling the practices and values of an earnest, intellectually engaged adult. He's demonstrating how and why he thinks about problems, and his taste in the discipline.

The high-growth periods we love transform the way we see the world. They reshape our identity.

But in my ideal world, I don't want a tutor; I want to legitimately participate in some new discipline, and to learn what I need as much as possible from interaction with real practitioners.

I view the role of the augmented learning system as helping me act on my creative interests, ideally by letting me just dive in and start doing, as much as possible. That will often mean scaffolding connections to and interactions with communities of practice.

A note on ethics

Within the narrower domain of learning, my main moral concern is that we'll end up trapped on a sad, narrow path.

A condescending, authoritarian frame dominates the narrative around the future of learning.

I'll caricature it to make the point: with AI, we can take all these defective kids that don't know the stuff they're supposed to know, and finally get them to know it! You know: personalized learning! The AI will let us precisely identify where the kids are wrong, or ignorant, and fix them. Then we can fill their heads to the brim with what's good for them.

The famous "bicycle for the mind" metaphor is better because it has no agenda other than the one you bring.

And it makes the journey fun too.

those most rewarding high-growth experiences are often centered on a creative project. You're trying to get somewhere no one's ever gone before--to reach the frontier, then start charting links into the unknown. Learning in service of creation. It's a dynamic, context-laden kind of learning. It's about more than just efficiency and correctness.

Questions

  1. How do you decide what questions you want to add? And how do you plan for your future self?

Ten minutes of daily practice can support adding up to about 40 new questions a day.

In general, you can't plan what will be important to you. So your system needs to be resilient to this fact.

Don't plan. Do stuff and stuff will get reinforced. Then stir (more of this, less of that).

  2. On schooling

We rely on school to tell us what we need to know and how we will know it. But school is not the only institution that could help us with these goals.

  3. Will there be a time when we won't be working anymore due to AI, robotics, computers, etc.?

You can't let an AI write for you because you don't know what you want. You learn what you want to write through the process of writing.

We learn what we want through the process of creating it.

  4. On schooling again

An enormous portion of the student body is fundamentally not engaged with the class. No amount of AI is going to change that. He doesn't want to put himself in that "problem-solving" situation. In some sense, that's why he left Khan Academy.

  5. Have you seen inklings of your ideas starting to play out, or is this purely speculative?

People are already using GPT to just "dive" into stuff. It's still missing a lot, but it is already delivering some value.

When people use GPT, they gain momentum, which is important when learning. But then they hit a wall.

  6. To what extent does the desire to learn come from innate pleasure versus some use of the material the person learns? Is making this distinction important in designing systems for learning?

Probably. Curiosity-based learning could still apply principles that are more noticeable in project-based learning. But authentic practice and legitimate participation just look different.

  7. Have you thought about adversarial tutoring? If the schooling-industrial complex produces AI tutoring, how do we counteract it?

I'm uncomfortable with an activist framing that says I need to get out there because I know what's best for the people. They're doing it wrong and I'll do something to make it right. I'm more comfortable with: I'll make a thing, and let's pursue the things we are interested in, because it feels less complicated.

  8. Is there a limit to the disciplines that the AI tutor could be applied to?

It's difficult to think about what an authentic practice of "streetsmarts" would look like.

References

Matuschak, Andy. How Might We Learn? May 2024. andymatuschak.org, https://andymatuschak.org/hmwl.