By Thomas Ultican 3/30/2025
For more than thirty years, technology companies have looked to score big in the education sector. Instead of providing useful tools, they have schemed to take control of public education. At the onset of the twenty-first century, technologists claimed that putting kids at computers was a game changer that would fix everything. They followed that up by promoting tablets with algorithmic lessons to replace teachers, claiming those lessons provided a better education. Today's hoax is that artificial intelligence (AI) will make all these failed efforts work. What it actually does is undermine authentic learning.
The release of ChatGPT in November 2022 prompted edtech sales forces to switch their pitches from personalized learning to AI. AI, however, has always been at the root of personalized learning; it simply lacked the large language models (LLMs) that now emulate human writing. AI expert Yann LeCun notes:
“LLMs are really good at retrieval. They’re not good at solving new problems, [or] finding new solutions to new problems.”
Artificial Intelligence is a misnomer. There is no intelligence, just algorithms. The term "artificial intelligence" was coined by Professor John McCarthy at a Dartmouth College conference in 1956, the same year AI became an academic discipline.
Machine learning has been part of AI since the field's beginnings in the 1950s. Algorithms are created that allow a machine to automatically improve its performance on a given task. For almost two decades, Netflix has used machine learning to create personalized recommendations based on a customer's previous viewing history. Deep learning is an advancement of machine learning; it layers algorithms into computing units ridiculously referred to as neurons. Google Translate uses it to translate from one language to another.
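To make the point concrete that there is no intelligence here, just algorithms: a viewing-history recommendation can be sketched in a few lines of code. This is a toy illustration, not Netflix's actual system; the customer names and titles are made up.

```python
# Toy sketch of history-based recommendation (not Netflix's real algorithm):
# suggest the unseen title favored by customers with the most overlapping taste.
viewing_history = {
    "alice": {"Stranger Things", "Dark", "Black Mirror"},
    "bob":   {"Dark", "Black Mirror", "The Crown"},
    "carol": {"The Crown", "Bridgerton"},
}

def recommend(customer: str) -> str:
    seen = viewing_history[customer]
    scores: dict[str, int] = {}
    for other, titles in viewing_history.items():
        if other == customer:
            continue
        overlap = len(seen & titles)       # shared titles = similar taste
        for title in titles - seen:        # only suggest unseen titles
            scores[title] = scores.get(title, 0) + overlap
    return max(scores, key=scores.get)

print(recommend("alice"))  # → The Crown (bob shares two titles with alice)
```

Everything the "learning" does is counting and ranking; more data sharpens the counts, but no understanding is involved.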
With the advent of LLMs, the energy use associated with AI has risen dramatically. Professor Jonathan Koomey, who founded the End-Use Forecasting group at Lawrence Berkeley National Laboratory, worked on a report funded by the US Department of Energy. He estimated that data centers currently use 176 TWh of electricity, about 4.4% of total US consumption. Koomey forecasts that this consumption might double or even triple by 2028, reaching between 7% and 12% of US electricity usage.
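A quick back-of-the-envelope check shows those figures hang together. Assuming, for simplicity, that total national consumption stays roughly constant:

```python
# Sanity-check the cited data-center energy figures.
data_center_twh = 176          # current estimated US data-center use
share = 0.044                  # 4.4% of national consumption
total_twh = data_center_twh / share
print(round(total_twh))        # implied US total: ~4000 TWh

# Implied data-center demand if the share hits 7% or 12% of a similar grid:
for s in (0.07, 0.12):
    print(round(total_twh * s))  # 280 TWh and 480 TWh respectively
```

That 280-480 TWh range is roughly two to three times today's 176 TWh, which is exactly the "double or even triple" trajectory the forecast describes.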
Selling to Schools
For serious educators, AI is a set of computer algorithms that makes cheating easy. It is another tool for creating an easier-to-control and more profitable education system. Billionaire Laurene Powell Jobs is a leader in the AI revolution in education. Her Amplify digital lessons liberally apply AI, and her XQ Institute is working to integrate AI into classrooms. Edward Montalvo, XQ Institute's senior education writer, wrote:
‘“The future of AI in education is not just about adopting new technologies; it’s about reshaping our approach to teaching and learning in a way that is as dynamic and diverse as the students we serve,’ XQ Institute Senior Advisor Laurence Holt said. … Through AI, we can also transcend the limitations of the Carnegie Unit — a century-old system in which a high school diploma is based on how much time students spend in specific subject classes.
“Changing that rigid system is our mission at XQ.”
The advocates of computer learning in K-12 classrooms need to get rid of the Carnegie Unit to maximize profits. The "unit" is a minimum requirement that creates a nationwide agreed-upon structure. It does not control pedagogy or assessments but ensures a minimum amount of time on task.
Education writer Derek Newton's article for Forbes opposed ending the Carnegie Unit for a host of reasons, but the major one is cheating:
“Cheating, academic misconduct as the insiders know it, is so pervasive and so easy that it makes a complete mockery of any effort to build an entire education system around testing.”
“But because of the credit hour system, which is designed to measure classroom instruction time, it’s still relatively hard to cheat your way to a full college degree.”
The system XQ is trumpeting has students doing online lessons and then testing to receive a credit. It eliminates class levels and also undermines student socialization.
In a recent interview, Kristen DiCerbo, Khan Academy's chief learning officer, mentioned that when OpenAI was seeking more funding, it needed the Academy's help. Bill Gates wanted improved performance as a condition for his support. Khan Academy helped train the startup's model to pass the AP Biology exam, which was a Gates requirement. This probably means Khan Academy gave OpenAI access to its database so the information could be fed into the LLM.
Earlier this year, an American Psychological Association (APA) magazine article claimed, “Much of the conversation so far about AI in education centers around how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.” The big downsides to AI include students not thinking through problems and rampant cheating. In my AP Physics classroom, I started seeing students turn in perfectly done assignments while being unable to solve the problems on an exam.
The APA article noted that AI has been powering learning management tools, such as Google Classroom, Canvas, and Turnitin, for several years. I experimented with Canvas for a few years and found two downsides and no upside: the front end was terrible, and the company claimed ownership of all the work I posted to it. APA sees it as a positive that “educators are increasingly relying on AI products such as Curipod, Gradescope, and Twee to automate certain tasks and lighten their workload.”
Curipod is an AI edtech product from Norway focused on test prep.
Gradescope is an AI grading tool from Turnitin LLC.
Twee is an English language arts AI application that aids with lesson development and assessment.
These products are sold on the fact that they use AI. However, they appear to be a waste of time that might marginally help a first- or second-year teacher.
Benjamin Riley is a uniquely free thinker. He spent five years as policy director at NewSchools Venture Fund and founded Deans for Impact. His new effort, Cognitive Resonance, recently published “Education Hazards of Generative AI.” Given his background, I was surprised to learn he does not parrot the party line. In an article this month, Riley states:
“Using AI chatbots to tutor children is a terrible idea—yet here’s NewSchools Venture Fund and the Gates Foundation choosing to light their money on fire. There are education hazards of AI anywhere and everywhere you might choose to look—yet organization after organization within the Philanthro-Edu Industrial Complex continue to ignore or diminish this very present reality in favor of AI’s alleged “transformative potential” in the future. The notion that AI “democratizes” expertise is laughable as a technological proposition and offensive as a political aspiration, given the current neo-fascist activities of the American tech oligarchs—yet here’s John Bailey and friends still fighting to personalize learning using AI as rocket fuel.”
John Bailey is the American Enterprise Institute's AI guy. He has worked under Virginia Governor Glenn Youngkin, done some White House stints, was vice president of policy at Jeb Bush's Foundation for Excellence in Education, and is a member of the Aspen Global Leadership Network. In other words, he and his friends disdain public education and are true believers in big tech.
Every time big tech claims its new technology will be a game changer for public education, it has either lied or been deluded by its own rhetoric. According to technology writer Audrey Watters, generative AI is built on plagiarism. Besides being unethical, AI is also unhealthy. A new joint study by OpenAI and MIT found that the more students ask questions of ChatGPT, the more likely they are to become emotionally dependent on it.
The way AI is presently being marketed to schools obscures the reality that it is just another big-tech product, one that is unhealthy and that undermines learning.