Four years ago during the 2020 election, we warned in the Los Angeles Times that young people were struggling to spot disinformation because of outdated lessons on navigating the internet. Today, educators risk making the same mistakes with artificial intelligence. With the election at our doorstep, the stakes couldn’t be higher.
Previous work by our research team, the Digital Inquiry Group (formerly the Stanford History Education Group), showed that young people are easily deceived because they judge online content by how it looks and sounds. That’s an even bigger problem with AI, which makes information feel persuasive even when it fabricates content and ignores context. Educators must show students the limits of AI and teach them the basic internet search skills needed to fact-check what they see.
When it comes to AI, leaders preach “great excitement and appropriate caution,” as Washington state Superintendent Chris Reykdal put it in a recent teachers’ guide. He writes of a “full embrace of AI” that will put that state’s public education system “at the forefront of innovation.” David C. Banks, the former New York City schools chancellor who stepped down amid a federal investigation, said in September that AI can “dramatically affect how we do school” for the better. The “appropriate caution,” however, remains little more than a vague disclaimer.
Washington state’s guidelines, like California’s, Oregon’s, and North Carolina’s, rightly warn that AI may be biased and inaccurate. Washington state stresses that students shouldn’t automatically trust the responses of large language models and should “critically evaluate” them for bias. But this is like urging students in driver’s education to be cautious without teaching them that they need to signal and check blind spots before passing the car ahead of them.
This pattern repeats the mistakes we saw with instruction on spotting unreliable information online: educators wrongly assume that students can recognize danger and locate reliable content on their own.
Massachusetts Institute of Technology professor Hal Abelson tells students that if they come across “something that sounds fishy,” they should say, “Well, maybe it’s not true.” But students are in school precisely because they don’t know a lot. They are the ones least positioned to know whether something sounds fishy.
Imagine a history student consulting an AI chatbot to probe the Battle of Lexington, as one of us recently did. The large language model says the clash, which launched the American Revolution, was initiated “by an unknown British soldier.” In truth, no one knows who fired first. The chatbot also reports that “two or three” British soldiers were killed during the skirmish. Wrong again. None was. Unless you’re a history buff, this information doesn’t sound “fishy.”
A second danger is that AI mimics the tone and cadence of human speech, tapping into an aesthetic of authority. Information presented with confidence is a trap, and an effective one: our 2021 national study of 3,446 high school students revealed the extraordinary trust students place in information based on a website’s superficial features.
When students conflate style with substance and lack background knowledge, the last thing they should do is try to figure out whether something “sounds fishy.” Instead, detecting unreliable information and using AI responsibly rest on internet search skills that let students fact-check what they see.
Here’s the good news: Studies by our research group and others show that students can become more savvy at evaluating online information. Without delay, educators should focus on AI literacy that emphasizes why content can’t be judged just by looking at it, along with search literacy that gives students the tools to verify information.
On the AI literacy front, educators need to help students understand that large language models can generate misleading information that looks good and pull scientific references out of thin air. Next, they should explain how these chatbots work and how their training data are liable to perpetuate bias. When Purdue University researchers showed people how AI models struggled to recognize the faces of brown and Black people, participants not only grasped this point, they also became more skeptical of other AI responses.
On the search literacy front, teachers need to make sure their students possess basic online search skills. Expert fact-checkers don’t rely on how something “looks.” Students, likewise, need to leave an unfamiliar website and use the internet to fact-check the internet. The same advice applies to AI: students need to look past the seemingly credible tone of a chatbot and seek context by searching the broader web.
Once there, they should take advantage of, yes, Wikipedia, which has become a remarkably accurate resource with safeguards to weed out errors. Having students compare AI responses with Wikipedia entries highlights the difference between artificial and human intelligence. Whereas AI issues a murky smoothie of ambiguously sourced information, Wikipedia requires that claims be anchored to verifiable sources. Each article’s Talk page provides a record of debates among real people, not algorithms, over the evidence that supports a claim.
Our studies have shown the danger of taking information at face value. That threat only grows as AI churns out flawed content with encyclopedic authority. And yet some educators are telling students to vibe-check AI-produced information, or to evaluate it without first making sure they know how.
Let’s pair genuine caution about AI with proven search strategies so that students can avoid falling for misinformation and locate trustworthy sources online.