What new developments will we need for personalised, relevant one-to-one AI tutoring?
Introduction
In "The 2 Sigma Problem", Bloom suggested that one-to-one tutoring could provide a phenomenal boost to students' learning. Maybe it's not quite as big an effect as he suggested, but I'm still willing to bet that the combination of personalisation, relevance, engagement, adaptive difficulty, and other benefits of one-to-one tutoring do make a huge difference, and at least some of them can be captured by generative AI. But even if we thought we did have a better approach, how could we be sure?
I'll try and tackle both sets of questions.
Developing a teacher model that's personalised and relevant
Let's focus first on how we might build a teacher model that provides highly personalised, relevant tutoring to the student.
Relevance is rich and hierarchical: the environment; the task or context the student is working on right now; this particular course; their previous learning and progress; the history of interactions with the teacher; the language they speak; the country they live in; recent news; macro-level changes in the world. All of these contextualise learning.
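To make that hierarchy concrete, here's a minimal sketch of those layers as data, flattened into a prompt preamble. All of the field names are hypothetical, and this flattening is essentially what the prompt-engineering approach discussed next amounts to:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way to represent the layers of context that make
# tutoring relevant. Every name here is illustrative, not a real API.

@dataclass
class LearnerContext:
    country: str
    language: str
    course: str
    exam_board: str
    current_task: str
    prior_progress: list[str] = field(default_factory=list)      # e.g. topics mastered
    recent_interactions: list[str] = field(default_factory=list)  # transcript snippets

def to_prompt_preamble(ctx: LearnerContext) -> str:
    """Flatten the hierarchy into a prompt preamble, most global context first."""
    return "\n".join([
        f"The student lives in {ctx.country} and is learning in {ctx.language}.",
        f"They are studying {ctx.course} for the {ctx.exam_board} exam board.",
        f"Topics already covered: {', '.join(ctx.prior_progress) or 'none yet'}.",
        f"Recent conversation: {' | '.join(ctx.recent_interactions[-3:])}",
        f"Right now they are working on: {ctx.current_task}.",
    ])
```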
This kind of rich, hierarchical, many-faceted personalisation presents challenges for a single gigantic, fixed model, such as an LLM.
Maybe we can get away with prompt engineering alone: feeding in an enormous dump of data about the learner's goals, preferences, and previous interactions, and relying on ever-larger context windows and ever more instructable models.
But I'm pessimistic that prompt engineering alone can deliver fine-grained, subtle, creative, optimal relevance and personalisation, for two reasons.
Firstly, there's the challenge of fitting all the necessary background into the model's context window without overwhelming or confusing it.
Secondly, LLMs struggle with prompts that tug against their pretraining. For example, at Rehearsable.ai, it proved intractable to prompt-engineer GPT-4 to reliably and subtly follow a particular negotiation skills approach that flew in the face of the common advice found on the internet.
In the short run, perhaps we can imagine hierarchies of LoRA-style fine-tunes layered on top of one another: one for the student, one for the exam board, one for the culture, and so on (see the sketch below). But my bet is that eventually the best AI teachers will learn too, alongside and about their students.
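As a sketch of what that layering might mean mechanically (plain PyTorch rather than a real adapter library; the levels, rank, and scaling are illustrative assumptions), here's a frozen linear layer with one trainable low-rank delta per level of the hierarchy:

```python
import torch
import torch.nn as nn

# Minimal sketch of layering several LoRA-style adapters on one frozen
# weight matrix: one low-rank delta per level (culture, exam board, student).

class StackedLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, levels: list[str], rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pretrained weights stay frozen
        self.scale = alpha / rank
        # One (A, B) pair per level; only these small matrices are trained.
        self.A = nn.ParameterDict({
            lvl: nn.Parameter(torch.randn(rank, base.in_features) * 0.01) for lvl in levels
        })
        self.B = nn.ParameterDict({
            lvl: nn.Parameter(torch.zeros(base.out_features, rank)) for lvl in levels
        })

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        for lvl in self.A.keys():            # sum the low-rank deltas from every level
            out = out + self.scale * (x @ self.A[lvl].T) @ self.B[lvl].T
        return out

layer = StackedLoRALinear(nn.Linear(512, 512), levels=["culture", "exam_board", "student"])
y = layer(torch.randn(2, 512))
```

Because B is initialised to zero, each adapter starts as a no-op and only departs from the base model as its level is trained, so levels can be added or swapped independently.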
Developing an explicit model of the student
Perhaps we also need to go beyond a single, main teacher model. Could it help to represent the learner with an explicit (perhaps separate) model that assists the primary teacher model? We could train such a learner model to predict the learner's behaviour, e.g. the answers they give and the questions they ask. We might then probe it, asking "How would the learner respond if we asked them this question?", or track how it changes over time to measure "Has the learner's understanding of Topic X improved?".
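As a toy illustration of this kind of probing (the topics, features, and update rule are all simplifying assumptions, far cruder than a real learner model would be), consider an online logistic predictor of whether the learner answers a given question correctly:

```python
import numpy as np

# Toy sketch of an explicit learner model: predict whether this learner
# answers a question correctly, given its topic and difficulty.

TOPICS = ["fractions", "algebra", "geometry"]   # illustrative topic set

def features(topic: str, difficulty: float) -> np.ndarray:
    one_hot = np.array([t == topic for t in TOPICS], dtype=float)
    return np.concatenate([one_hot, [difficulty, 1.0]])  # difficulty + bias term

class LearnerModel:
    def __init__(self):
        self.w = np.zeros(len(TOPICS) + 2)

    def p_correct(self, topic: str, difficulty: float) -> float:
        """Probe: 'How would the learner respond if we asked them this question?'"""
        return 1.0 / (1.0 + np.exp(-self.w @ features(topic, difficulty)))

    def observe(self, topic: str, difficulty: float, correct: bool, lr: float = 0.5):
        """Online logistic-regression update from one observed answer."""
        x = features(topic, difficulty)
        self.w += lr * (float(correct) - self.p_correct(topic, difficulty)) * x

model = LearnerModel()
before = model.p_correct("algebra", difficulty=0.5)
for _ in range(5):
    model.observe("algebra", difficulty=0.5, correct=True)
# Probe over time: 'Has the learner's understanding of algebra improved?'
print(before, "->", model.p_correct("algebra", difficulty=0.5))
```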
With such a model of the learner's behaviour, we could run Monte-Carlo Tree Search or similar to simulate the effects of different teacher interventions, and pick the one we believe will best improve the learner's eventual performance. In this way, we can treat relevance as exactly the content this learner needs right now, whether to pass their particular exam or to unstick their current confusion. A rich model of the learner could help with choosing or generating the particular problems that will help them see how to apply a new concept, develop a skill they're missing, or correct a misconception, through judicious examples, analogies, or counter-examples.
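Here's a sketch of that planning loop, using flat Monte-Carlo rollouts rather than a full search tree (the intervention set and the toy learner simulator are illustrative assumptions; in practice the simulator would be the trained learner model above):

```python
import random

# Monte-Carlo planning over teacher interventions: try each candidate first
# move, roll out random futures, and pick the move with the best expected
# final mastery. A full MCTS would grow a tree instead of flat rollouts.

INTERVENTIONS = ["worked_example", "practice_problem", "analogy", "counter_example"]

def simulate_step(mastery: float, intervention: str) -> float:
    """Toy stochastic learner simulator: each intervention nudges mastery up."""
    gains = {"worked_example": 0.10, "practice_problem": 0.15,
             "analogy": 0.08, "counter_example": 0.12}
    noise = random.gauss(0, 0.05)
    return min(1.0, max(0.0, mastery + gains[intervention] * (1 - mastery) + noise))

def rollout(mastery: float, depth: int) -> float:
    """Play random interventions out to the horizon; return final mastery."""
    for _ in range(depth):
        mastery = simulate_step(mastery, random.choice(INTERVENTIONS))
    return mastery

def best_intervention(mastery: float, n_rollouts: int = 200, depth: int = 4) -> str:
    def value(first: str) -> float:
        return sum(rollout(simulate_step(mastery, first), depth - 1)
                   for _ in range(n_rollouts)) / n_rollouts
    return max(INTERVENTIONS, key=value)

print(best_intervention(mastery=0.3))
```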
So, future AI one-to-one tutoring might involve both custom teacher and custom student models.