President Donald Trump recently issued an executive order titled "Advancing Artificial Intelligence Education for American Youth." Like federal education policy generally, it is unlikely to have a significant impact on K-12 schools, because education has always been a state and local matter. In fact, in 2024, Gov. Glenn Youngkin issued Executive Order 30 and an associated set of education guidelines that have already generated important action on AI in education, including a summit this week that brings together educators from the K-12 and higher education systems to create alignment and educational pathways that best serve students in the commonwealth.
Trump’s EO is largely redundant with Youngkin’s, and both miss the forest for the trees.
K-12 schools serve multiple purposes; two of the most important are to prepare young people to be productive members of a deliberative democracy and to prepare them for the workforce. To properly serve either of those roles has always meant ensuring that students are technologically literate, meaning not necessarily that they are adept at using technology, but rather that they make good choices about what, if any, technologies might be used in personal or professional situations.
Today, artificial intelligence dominates the technology discourse, and the potential effects of AI on the workforce and civic life are significant. Concerning the workforce, a late-2024 analysis from the Brookings Institution concluded that “more than 30% of all workers could see at least 50% of their occupation’s tasks disrupted by generative AI.” On democracy, the Carnegie Endowment for International Peace writes that “AI models enable malicious actors to manipulate information and disrupt electoral processes, threatening democracies.” This research makes it clear: If public schools are to serve their highest purposes, it is incumbent upon educators to make sure students are AI literate.
Being AI literate, though, means much more than just learning to use AI; it also means being aware of the ethics and potential risks. We know, for example, that AI is hurting the environment and that AI companies are defending themselves against credible claims of intellectual property violations. Students need to know that, too. Additionally, integrating chatbots into the learning process potentially dehumanizes a necessarily human experience. Absent this knowledge and the right skills and dispositions around technology and AI, our young people are less likely to be able to successfully engage in an increasingly technological civic age and are less likely to be prepared to succeed in an increasingly technological workforce.
However, even that broader, ethics-inclusive approach to AI literacy is too narrow. AI is the latest form of technology to capture our attention, but our society is increasingly dominated by other technologies, including technologies of surveillance (some of which are powered by AI). Nearly all aspects of our society, from schooling to law enforcement, are changing because of technological developments. Long ago, institutions of higher education developed whole interdisciplinary programs of study that consider the historical, cultural and social impacts of science and technology on society. This is the broader, more interdisciplinary approach that needs to be integrated into our K-12 schools.
Interdisciplinary programming is challenging in our siloed K-12 system, but it is possible and worth trying. We could, for example, ensure that when students read novels in school, they read one or two books that force discussion about the role of technology in our society. There is no shortage of good books like that. When students learn about history, they should learn about the Luddites. They might learn, for instance, that to be a Luddite was not just to be anti-technology; rather, Luddites started a labor movement that pushed back against the sort of automation that AI could contribute to today. For math, students could study quantitative reasoning and learn about how much of AI and machine learning is based on inferential statistics.
I am the parent of a public school student and a critical friend of technology who wants my child to be able to explore the affordances of technology for learning. However, I am not interested in my child engaging in the sort of AI integration that Trump’s EO speaks to. If my child struggles to grasp a concept, I want them to struggle through it with their classmates and teachers, not some AI chatbot tutor.
And, as a taxpayer, I am not interested in supporting the integration of technology that comes from the “public-private partnerships” that are highlighted in the EO. That’s a not-so-subtle attempt to further enrich the technology leaders who sat on stage at Trump’s inauguration. Instead of teaching my child to use Adobe Firefly to make fancy reports, I want my child’s teachers to help them learn to navigate a world that is increasingly technological and morally complex.
Among some members of society, there is a near-religious devotion to AI. And, coincidentally, how religion is handled in public schools (at least for now, as the Supreme Court reconsiders its place there) may be instructive. The common saying has been that schools can teach about religion, but they cannot teach religion. I favor a similar approach to AI in education: We should teach about AI, not teach with or, especially, for AI. In fact, if our schools are to prepare students like my child for the world ahead, they must largely look away from the narrow and shiny new object of AI and meet the broader societal challenge this technology represents.