Note: This piece was written by a member of Illinois’ education community with editorial support from the LTC. The views expressed in this article do not reflect those of the Learning Technology Center, its team, or its partners.
***
As an educational technology director in Lake County, I observe both excitement and concern regarding AI’s increasing presence in schools. While AI offers efficiency and personalized learning with immediate feedback and seemingly frictionless experiences, we must question what is lost by heavily relying on AI tools to promote “frictionless” learning. Applying technoskepticism is crucial for educators and students to critically assess technology’s impact and promote ethical use.
This isn’t about rejecting AI; it’s about being thoughtful and intentional. My own work, including my book AI Goes to School, emphasizes harnessing AI for good. But my experience as an educator, department chair, adjunct professor, school instructional technology coach, school leader, and now Director of Innovation and Technology—along with raising four children with my wife, who is also an educator—has convinced me that true, deep learning often requires a bit of productive struggle, or “friction.”
This idea is anchored in more than a century of educational research and pedagogy. Kids need that magic space between too easy and too hard, where they are pushed just enough to struggle in a healthy way and maximize learning. Lev Vygotsky, the early 20th-century learning theorist, called this space the Zone of Proximal Development, or ZPD.
The Paradox of Easy Answers: What “Frictionless” AI Might Be Costing Us
Think about how we truly learn. It’s rarely about passively receiving information. It’s about grappling with ideas, collaborating with others, and working through challenges. Educational giants like John Dewey, Lev Vygotsky, Jean Piaget, and Paulo Freire all emphasized this idea of active engagement, critical reflection, and social interaction as central to genuine learning.
When AI tools provide instant summaries or perfectly curated answers, they risk bypassing this essential “intellectual labor.” This “frictionless” experience, while convenient, can inadvertently lead to several challenges in our classrooms:
- Amplified Misinformation: Generative AI can produce convincing but dubious or false information at unprecedented scale and speed. If students rely on instant answers without the “friction” of evaluating multiple sources, they risk absorbing unchecked misinformation. This directly undermines our goal of developing critically thinking, informed citizens.
- Algorithmic Bias and Inequity: AI systems reflect the biases in their training data, which can inadvertently disadvantage marginalized students. For example, an AI might “reduce reading complexity or offer truncated summaries” for certain demographics, limiting their opportunities for rigorous engagement. This creates a “technological redlining” that undermines the inclusivity social studies—and all education—strives for.
- Eroding Student Agency: When AI makes tasks too easy, students can become passive consumers of knowledge rather than active co-creators. This can lead to what Paulo Freire called the “banking model” of education, in which information is simply deposited into students rather than students developing their own critical consciousness and ability to engage with real-world problems. As tech critic Nicholas Carr warns, frictionless digital experiences can “fragment attention, reduce reflective processes, and accelerate shallow engagement.”
Reintroducing Productive Friction: C.O.R.E. and H.E.A.R.T. Frameworks
The good news is that we don’t have to choose between embracing AI and fostering deep learning. The research points to a balanced approach that intentionally reintroduces “productive struggle.” To help educators do this, two complementary frameworks, C.O.R.E. and H.E.A.R.T., have been developed to guide ethical and effective AI integration. These frameworks embed a “technoskeptical” lens—not rejecting technology, but questioning its implications and potential hidden costs.
C.O.R.E.: Cognitive and Civic Anchors
- Critical Thinking: Teach students to analyze, evaluate, and interrogate AI outputs, comparing them with other sources and engaging in reasoned debate.
- Openness: Encourage students to see AI information as a starting point, not the final word. Foster inquiry by layering AI responses with human perspectives and diverse viewpoints.
- Respect: Emphasize empathy, civility, and equitable participation in AI-mediated tasks to counteract potential isolation and ensure all voices are valued.
- Engagement: Design hands-on, problem-based activities that require genuine collaboration, guided inquiry, and real-world relevance, pushing students beyond passive consumption.
H.E.A.R.T.: Ethical and Emotional Dimensions
- Honesty: Teach students to transparently confront AI outputs, question authenticity, acknowledge uncertainties, and identify potential biases or errors.
- Empathy: Ensure AI tasks don’t replace genuine human interaction. Facilitate collaborative reflection where students share their intellectual and emotional responses to AI-supplied content.
- Accountability: Prompt students and educators to consider the implications of algorithmic decisions: who is harmed by biases? Which voices are excluded?
- Responsibility: Foster active resistance to frictionless consumption. Students should be responsible for validating claims, raising questions, and engaging in thoughtful dialogue, especially on controversial topics.
- Thoughtfulness: Encourage metacognitive reflection by asking, “How did we reach this conclusion?” or “What role did AI play in our reasoning process?”
Putting It Into Practice: Classroom Ideas for Reintroducing Friction
Classroom teachers can put the C.O.R.E. and H.E.A.R.T. frameworks into practice through strategies like algorithmic bias audits, where students research historical events with AI and then critically cross-reference the results against primary sources to uncover biases and discrepancies. In history units, contrasting frictionless AI summaries with primary sources—archival footage, oral histories, and original documents—helps students appreciate nuanced realities. Students can also critically evaluate AI-generated opposing arguments (from “Devil’s Advocate Chatbots” or from peers) to identify ethical issues and construct informed rebuttals. Structured debates on AI regulation in education, guided by ethical considerations, further develop critical thinking and real-world policy analysis skills.
Beyond the Classroom: Policy and Professional Development
For these strategies to truly take hold, we need broader support. State and national standards should explicitly include critical AI literacy as part of broader digital literacies, teaching students and educators to engage AI tools and other emerging technologies carefully, critically, and thoughtfully rather than defaulting to “frictionless” learning. Schools also need equity audits of AI platforms to identify biases and to ensure equitable technology access and digital literacy support for all students. Finally, professional development should equip teachers and students to interrogate AI tools, detect biases, and design friction-rich learning experiences, including algorithmic literacy training and misinformation simulations.
The Path Forward
AI is here to stay, emerging technologies will keep challenging us, and their potential is immense. But as educational and technology leaders, we must ensure that our adoption of AI “preserves education’s commitment to informed, engaged, and equitable citizenship.” Try some “friction-full” AI activities. Document what happens when students grapple with AI critically rather than consuming it passively. By intentionally reintroducing productive friction—those messy, effortful, and uncomfortable dimensions of learning—we can safeguard deep, meaningful learning.