Yet more BS about KA….

Everybody loves a good bandwagon, and the “love” being showered on Khan Academy initially elicited a shrug from me. But then Bill Gates told us that Khan is the “best teacher he’s ever seen.” (If he’s so in love with him, why hasn’t he endowed a position for Khan at the exclusive private school his children attend? Do you think that for the $25,000 annual tuition, those teachers are showing Khan videos in their classrooms, or following the Common Core State Standards that Gates funded?)

Enough about that: let’s get to the meat of the matter. Last Friday, which we all know is where breaking news goes to die, a report was released by SRI International, whose board of directors looks like the membership committee at some exclusive country club. The report was sponsored by, wait for it… the Bill & Melinda Gates Foundation. (I can imagine what balancing a checkbook in the Gates household must be like: “Honey, did you write a check for $2,000,000 to SRI last month? Do you remember what it was for?”)

There’s nothing surprising in the report: they surveyed a small group of schools that volunteered to take part in the study. Okay, strike one: if every school volunteered to be part of the study, then the sample is self-selected, not random. Let’s move on…
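To see why that’s strike one, here is a minimal sketch (in Python, with invented numbers, nothing from the report) of what self-selection does to a survey: if the schools that already like a tool are the ones most likely to volunteer, the sample flatters the tool no matter how carefully you survey it.

```python
import random

random.seed(7)

# Toy illustration only -- these numbers are invented, not SRI's data.
# 1,000 hypothetical schools with a true mean "satisfaction" of 50.
schools = [random.gauss(50, 15) for _ in range(1000)]

def volunteer_sample(population):
    # Assumption: the more a school likes the product, the likelier
    # it is to volunteer for the study.
    return [s for s in population if random.random() < s / 100]

sample = volunteer_sample(schools)
print(f"True mean satisfaction:  {sum(schools) / len(schools):.1f}")
print(f"Volunteer-sample mean:   {sum(sample) / len(sample):.1f}")
# The volunteer sample reads several points higher than the truth.
# That gap is selection bias, and surveying harder won't remove it.
```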

Here’s how the report describes the scope of the study, which surveyed 2,000 students each year and encompassed a variety of charter, independent, and public schools (I wonder if Lakeside was included in this survey?). Overall, it covered 9 “sites” (which I assume to be school districts?), 20 schools, and 70 teachers:

One site, a public elementary school district, had the largest level of participation, involving 8 schools, more than 50 teachers, and over 1,000 students. In the other sites, participation ranged from a single school and teacher to two to three schools and five to six teachers.

Okay, so it’s statistically small and not truly randomized. What’s not to like?

Of course, the authors of the report were realistic about what could be accomplished during this study:

For these reasons, it was methodologically unsound to conduct a rigorous evaluation of Khan Academy’s impact on learning during the study period, including any use of randomized control trials, which would have required Khan Academy tools and resources to remain unchanged during the study and for teachers and students to use Khan Academy the same way. Moreover, at all but one of the sites, Khan Academy was principally used as a supplementary tool—not as the core primary curriculum—so the effects of Khan Academy cannot be separated from those contributed by other elements of the math curriculum.

I’m sorry, but why are we reading this report again? Let’s figure this out: the authors acknowledge that there is no way to isolate how using Khan Academy videos affected the development of mathematical competence.

All of which is odd, because later in the report there are some actual “findings” that contradict what the authors acknowledged would be impossible in the course of the “study”:

At Site 1, we found that fifth graders with better than predicted end-of-year achievement test scores had spent an extra 12 hours over the school year using Khan Academy and that sixth graders exceeding their predicted achievement level had spent an extra 3 hours on Khan Academy, compared to grade-level peers with lower-than-expected end-of-year test scores. Similarly, fifth graders with higher than expected achievement test scores had completed 26 additional problem sets (39% more), and sixth graders with higher than expected achievement had completed 20 additional problem sets (or 22% more).

Pardon my confusion, but didn’t the authors of the report just say that there was no way to factor out Khan Academy’s contribution to student achievement? Yet here we have a correlation between time spent on Khan Academy and “higher than expected achievement,” presented as if it meant something causal. And, once again: why are we reading this report?
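Since the report’s own caveat makes this point for us, here is a minimal sketch (Python again, entirely made-up numbers) of how a hidden variable like student motivation can produce exactly the “extra hours, higher scores” pattern even when the extra hours cause nothing at all:

```python
import random

random.seed(42)

# Hypothetical sketch -- invented numbers, not the report's data.
# "Motivation" drives BOTH Khan Academy hours AND test scores, while
# Khan hours have zero causal effect on the score.
students = []
for _ in range(1000):
    motivation = random.gauss(0, 1)
    khan_hours = max(0.0, 10 + 5 * motivation + random.gauss(0, 2))
    test_score = 70 + 8 * motivation + random.gauss(0, 4)  # no khan_hours term
    students.append((khan_hours, test_score))

# Mimic the report's comparison: Khan time among higher vs. lower scorers.
students.sort(key=lambda s: s[1])
half = len(students) // 2

def mean_hours(group):
    return sum(h for h, _ in group) / len(group)

print(f"Khan hours, lower-scoring half:  {mean_hours(students[:half]):.1f}")
print(f"Khan hours, higher-scoring half: {mean_hours(students[half:]):.1f}")
# Higher scorers log several more hours, yet by construction those hours
# did nothing. "An extra 12 hours" is a correlation, not evidence of impact.
```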

I enjoyed reading the other observations in the report. Among the findings was the level of “engagement” observed during something called “Khan Time.” I’m trying to imagine what “Khan Time” must look like in the classroom: the teacher stops lecturing for 5 minutes and shows the kids a Khan video. The observers reported that 62% of the students were “moderately engaged” and 25% were “highly engaged.” What could that mean? As I imagine it, the “moderately engaged” students stopped looking at their cellphones long enough to say, “Look, teacher is showing YouTube in class!” while the “highly engaged” ones actually stopped Instagramming a selfie.

Seriously, the methodology of this study is so ridiculous, it’s a wonder SRI didn’t release it on a Saturday night, when it would have been DOA. Here’s another neat finding from the study:

Overall 71% of students reported that they enjoyed using Khan Academy, and 32% agreed they liked math more since they started using Khan Academy.

Yes, they “enjoyed” using Khan Academy; versus what? Perhaps that 71% attends classes where the teacher lectures for 20 minutes and the kids do “seatwork” for the rest of the period. Then I could see how those students enjoyed using Khan Academy, if only to hear a different voice. And what about the 32% who said they like math “more” since they started with Khan? Sure, 32% sounds like a good stat, but compared to what? I know a way to have 100% of students enjoy math more: give them a cupcake at the conclusion of each math class.

What’s even lamer about this study is that it doesn’t compare Khan videos with other methods of inspiring and motivating students. Did they compare it to training teachers to use project-based learning in the classroom? How about watching math videos made by someone other than Khan Academy? It may be that any video content would work, and that the results might actually be better than Khan’s. Suppose we showed these students better-quality videos? Perhaps 100% would “enjoy math more”?

Finally, it should be noted that much of this was based on “self-reporting”: that is, the students and teachers filled out surveys describing their habits and attitudes. We all know how unreliable self-reporting can be, but why go on?

In the end, what do we have? A cruddy report that will become a promotional tool for a model of math instruction that was antiquated even before it was declared revolutionary. Seriously, can’t Bill & Melinda find something more useful to do with their fortune than paying a bunch of number crunchers to advocate for an outdated and regressive model of mathematical instruction, one that is “better” only when compared to nothing?
