Sal Khan, CEO of Khan Academy, gave an exciting TED talk last spring in which he predicted that AI chatbots would soon revolutionize education.
“We are on the cusp of using AI for probably the biggest positive transformation education has ever seen,” said Khan, whose nonprofit education group has provided online lessons for millions of students. “And the way we're going to do it is by giving every student on the planet an amazing, but artificially intelligent, personal tutor.”
Videos of Mr. Khan's talk about tutoring robots racked up millions of views. Soon, prominent tech executives, including Google CEO Sundar Pichai, began issuing similar education predictions.
Khan's vision for tutoring robots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly personalize lessons for each student. Proponents argue that the development of such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children more quickly and efficiently than human teachers could.
In pursuit of such ideals, tech companies and philanthropists have over the years urged schools to buy a laptop for every child, championed video tutorial platforms, and funded learning apps that personalize lessons for each student. Some online literacy and mathematics interventions have reported positive effects. But many educational technology efforts have not been shown to significantly close academic achievement gaps or improve student outcomes, such as high school graduation rates.
Now, the spread of generative AI tools like ChatGPT, which can answer biology questions and produce human-like book reports, is renewing enthusiasm for automated instruction, even as critics warn that there is still no evidence that tutoring robots will transform education for the better.
Online learning platforms like Khan Academy and Duolingo have introduced AI chatbot tutors based on GPT-4, a large language model developed by OpenAI that is trained on huge databases of text and can generate responses to user prompts.
And some technology executives predict that, over time, bot teachers will be able to respond to and inspire individual students just like beloved human teachers.
“Imagine if you could provide that kind of teacher to every student 24/7, whenever they want, for free,” Greg Brockman, president of OpenAI, said last summer on an episode of the podcast “Possible.” (The podcast is co-hosted by Reid Hoffman, an early investor in OpenAI.) “It's still a little bit sci-fi,” Brockman added, “but it's a lot less sci-fi than it used to be.”
The White House seems sold. In a recent executive order on artificial intelligence, President Biden directed the government to “shape AI's potential to transform education” by “creating resources to support educators implementing AI-based educational tools, such as personalized tutoring in schools,” according to a White House fact sheet.
Still, some education researchers say schools should be wary of the hype around AI-assisted instruction.
For one thing, they point out, AI chatbots freely make things up and could feed students false information. Making AI tools a pillar of education could turn unreliable sources into classroom authorities. Critics also say that AI systems can be biased and are often opaque, preventing teachers and students from understanding exactly how chatbots arrive at their responses.
In fact, generative AI tools can actually have harmful or “degenerative” effects on student learning, said Ben Williamson, a chancellor's fellow at the Centre for Research in Digital Education at the University of Edinburgh.
“There is a rush to proclaim the authority and usefulness of these types of chatbot interfaces and the underlying language models that power them,” Dr. Williamson said. “But there is still no evidence that AI chatbots can produce those effects.”
Another concern: The hype about unproven AI chatbot tutors could divert attention from more traditional, human-centered interventions (such as universal access to preschool) that have been shown to increase student graduation rates and college attendance.
There are also privacy and intellectual property issues. Many large language models are trained on vast databases of text scraped from the Internet, without compensating the creators. This could be a problem for unionized teachers concerned about fair labor compensation. (The New York Times recently sued OpenAI and Microsoft over this issue.)
There are also concerns that some AI companies could use the materials that educators provide or the comments that students make for their own commercial purposes, such as improving their chatbots.
Randi Weingarten, president of the American Federation of Teachers, which has more than 1.7 million members, said her union was working with Congress on regulation to help ensure AI tools were fair and safe.
“Educators use educational technology every day and want to have more of a say in how technology is implemented in classrooms,” Ms. Weingarten said. “The goal here is to promote the potential of AI and protect against serious risks.”
This is not the first time education reformers have advocated for automated teaching tools. In the 1960s, proponents predicted that mechanical and electronic devices called “teaching machines,” which were programmed to ask students questions about topics such as spelling or mathematics, would revolutionize education.
Popular Mechanics captured the zeitgeist in an October 1961 article titled “Will Robots Teach Your Children?” It described experimental teaching machines spreading to schools across the United States, where students worked independently, entering answers into devices at their own pace.
The article also warned that the new machines raised some “profound” questions for educators and children. Would the teacher, the article asked, become “just a glorified babysitter”? And: “What effect does machine teaching have on students' critical thinking?”
Cumbersome and didactic, teaching machines turned out to be a short-lived classroom sensation, both overhyped and feared. The launch of new AI teaching robots has followed a similar arc of promised transformation and feared harm to education.
However, unlike the old teaching machines of the 20th century, AI chatbots seem to improvise. They generate instant responses to individual students in conversational language. That means they can be fun and engaging.
Some enthusiasts imagine that AI tutoring robots will become study companions that students could consult calmly and without embarrassment. If schools were to widely adopt such tools, they could profoundly alter the way children learn.
That has inspired some former executives of big technology companies to turn their attention to education. Jerome Pesenti, the former vice president of AI at Meta, recently founded a tutoring service called Sizzle AI. The app's AI chatbot uses a multiple-choice format to help students solve math and science questions.
And Jared Grusd, former chief strategy officer at social media company Snap, co-founded a writing startup called Ethiqly. The app's AI chatbot can help students organize and structure essays, as well as provide feedback on their writing.
Khan is one of the most visible advocates of tutoring robots. Khan Academy last year introduced an AI chatbot called Khanmigo specifically for school use. It is designed to help students think about problems in math and other subjects, not to do their homework for them.
The system also stores the conversations that students have with Khanmigo so that teachers can review them. And the site clearly warns users: “Khanmigo sometimes makes mistakes.” Schools in Indiana, New Jersey and other states are piloting the tutor chatbot.
Khan's vision for tutoring robots can be traced back in part to popular science fiction books like “The Diamond Age,” a cyberpunk novel by Neal Stephenson. In that novel, an imaginary tablet-like device is able to teach a young orphan girl exactly what she needs to know at exactly the right moment, in part because it can instantly analyze her voice, her facial expression, and her surroundings.
Khan predicted that within five years or so, tutoring robots like Khanmigo could do something similar, with privacy and security safeguards in place.
“The AI will simply be able to look at the student's facial expression and say, 'Hey, I think you're a little distracted right now. Let's focus on this,'” Khan said.