Sramana Mitra: Give me a use case of what AI produces and what your human teachers do on top of it?
Raj Valli: I can take an example here. I'm going to take a much simpler case for this conversation. Let's say that someone comes in and takes their first diagnostic assessment. You don't know anything about that student, and the student comes in and takes a third-grade or fourth-grade diagnostic test.
Diagnostic tests have a huge science behind them. People are doing extraordinarily deep research on what a diagnostic test should be. A perfect diagnostic test is a very good dipstick experiment. All it means is that by giving you a set of questions that is a sampling of the data set I need to question you on, I can paint a picture of what you know and what you don't know.
A really good diagnostic test would probably have 300 questions. That would take you four hours to complete, which is unworkable in reality because nobody wants to spend four hours answering those questions.
The other extreme is to ask you just one question, which is not going to give me any indication. The sweet spot is asking you 15 or 20 questions that start to form the basis of what you know and what you don't know.
We have this notion called a mastery matrix or proficiency matrix. In a nutshell, think of a Scrabble board or a chessboard. Paint the squares yellow, red, or green based on how well you performed on each question in the diagnostic test.
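The coloring idea can be sketched in a few lines. This is a hypothetical illustration of the mastery-matrix concept described above, not Thinkster's actual implementation; the skill names and score thresholds are assumptions.

```python
def color_for_score(score: float) -> str:
    """Map a per-skill score (0.0 to 1.0) to a traffic-light color.
    Thresholds are illustrative assumptions, not Thinkster's real ones."""
    if score >= 0.8:
        return "green"   # mastered
    if score >= 0.5:
        return "yellow"  # shaky
    return "red"         # needs work

def mastery_matrix(scores: dict) -> dict:
    """Paint each skill square with a color, like the board described above."""
    return {skill: color_for_score(s) for skill, s in scores.items()}

# Hypothetical diagnostic results for one student.
diagnostic = {
    "multiplication_tables": 0.9,
    "fractions": 0.4,
    "long_division": 0.6,
}
print(mastery_matrix(diagnostic))
```

Running this paints the three squares green, red, and yellow respectively, giving the "fingerprint" the next paragraphs refer to.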
We can go multiple ways from there. It's like starting from a single point: I can then take that student in any direction. The AI interprets that matrix. It's almost like a brain fingerprint of your current state of knowledge at that particular point, which varies from student to student.
A bunch of squares can be lit in different colors, as you can imagine. The AI will come back and, based on the historical performance of other students who have had similar fingerprints, make a recommendation that this student should start with multiplication tables or fractions.
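One simple way to picture "students with similar fingerprints" is a nearest-neighbor match. This is a minimal sketch under assumed names and an assumed distance metric, not Thinkster's actual algorithm.

```python
# Numeric values for the traffic-light colors (an illustrative assumption).
COLOR_VALUE = {"green": 2, "yellow": 1, "red": 0}

def distance(fp_a: dict, fp_b: dict) -> int:
    """Sum of per-skill color differences between two fingerprints."""
    return sum(abs(COLOR_VALUE[fp_a[k]] - COLOR_VALUE[fp_b[k]]) for k in fp_a)

def recommend_start(new_fp: dict, history: list) -> str:
    """Pick the starting topic that worked for the most similar past student.
    `history` is a list of (fingerprint, topic_that_worked) pairs."""
    best = min(history, key=lambda h: distance(new_fp, h[0]))
    return best[1]

# Hypothetical historical students and what they started with.
history = [
    ({"tables": "green", "fractions": "red"}, "fractions"),
    ({"tables": "red", "fractions": "green"}, "multiplication tables"),
]
new_student = {"tables": "green", "fractions": "yellow"}
print(recommend_start(new_student, history))
```

Here the new student's fingerprint is closest to the first historical student, so the sketch recommends starting with fractions.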
Based on that interpretation, you can also determine how many assignments to give the student. We add a little bit of human involvement here. We don't just put students into a machine and blindly run with the recommendation.
As they onboard the student, the tutor will have spoken to the parents and asked what motivated them to join Thinkster in the first place. They'll come back and say, “Well, my kid struggles with fraction problems.” There are a myriad of decisions that need to be made.
The AI makes a recommendation saying that this should probably be the set of 10 assignments the student works on. That's one input. We don't just run with it; we take it as one vector that influences our output.
The second vector we include is the parent's input. For example, the parent wanted to make sure that the kid starts getting coached in the particular areas of weakness that the parent has already identified.
We want to give the parent an opportunity to understand how the program is going to work for the student to start showing improvement in that particular area. We add that as an influencing layer, so the tutor can reject recommendations from the machine and add a few more assignments customized to that family's requirements at that point in time.
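The layering described above can be sketched as a simple merge: parent-identified focus areas come first, AI recommendations fill out the list, and the tutor can veto individual machine picks. All names and the ordering policy are assumptions for illustration.

```python
def build_assignment_list(ai_recs, parent_focus, tutor_rejects, limit=10):
    """Combine the two 'vectors' described above into one assignment plan.
    Parent-focus assignments go first; AI picks fill the rest, minus
    anything the tutor has rejected. Duplicates are skipped."""
    plan = list(parent_focus)
    for rec in ai_recs:
        if rec not in tutor_rejects and rec not in plan:
            plan.append(rec)
    return plan[:limit]

# Hypothetical inputs: the AI's vector, the parent's vector, a tutor veto.
ai_recs = ["tables-1", "tables-2", "decimals-1", "geometry-1"]
parent_focus = ["fractions-1", "fractions-2"]
tutor_rejects = {"geometry-1"}
print(build_assignment_list(ai_recs, parent_focus, tutor_rejects))
```

The design point is simply that the machine's output is one input among several, which matches the "influencing layer" framing above.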
There are a couple more layers that we add. It's a two-step process. We make an assignment, offering the student a particular worksheet to work on. One can argue that the performance objective, the recommendation, or any set of assignments is only as good as how engaged the student feels.
If the student gets 100 on all 10 assignments, I would come back and tell you that's probably a very bad assignment sequence, because the student aced everything and there was nothing the student needed to learn. We didn't make the student work on things they needed to start learning.
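The feedback check described here amounts to flagging sequences that were too easy (or too hard) so the plan can be recalibrated. This is a hedged sketch; the thresholds and labels are assumptions, not Thinkster's actual rules.

```python
def assess_sequence(scores, too_easy=95, too_hard=50):
    """Classify an assignment sequence by the student's average score.
    Thresholds are illustrative assumptions."""
    avg = sum(scores) / len(scores)
    if avg >= too_easy:
        return "too easy: nothing new to learn"
    if avg <= too_hard:
        return "too hard: student may disengage"
    return "well calibrated"

# A student who aced all 10 assignments trips the "too easy" flag.
print(assess_sequence([100] * 10))
```

A sequence of ten perfect scores comes back flagged as too easy, which is exactly the signal the tutor uses to adjust the next set of assignments.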
I just wanted to give a very simple example. We could spend a lot of time peeling back the layers and going much deeper, but this is why I don't trust machines by themselves: machines need a lot of contextual logic.
It's no different from playing a chess game. There are so many variables that if anyone tells me they're relying 100% on AI, I wouldn't trust them in an educational context.