
AI in School is the Latest Edtech Scam


By Thomas Ultican 3/30/2025

For more than thirty years, technology companies have looked to score big in the education sector. Instead of providing useful tools, they have schemed to take control of public education. At the onset of the twenty-first century, technologists claimed putting kids at computers was a game changer that would fix everything. They followed that up by promoting tablets with algorithmic lessons to replace teachers, claiming they provided a better education. Today’s hoax is that artificial intelligence (AI) will make all these failed efforts work. What it actually does is undermine authentic learning.

The release of ChatGPT in November 2022 is responsible for the Edtech sales-forces switching their sales pitches from personalized learning to AI. However, AI has always been at the root of personalized learning. It just did not have the advantage of large language models (LLMs) that emulate writing in English. As AI expert Yann LeCun notes:

“LLMs are really good at retrieval. They’re not good at solving new problems, [or] finding new solutions to new problems.”

Artificial Intelligence is a misnomer. There is no intelligence, just algorithms. The term “Artificial Intelligence” was coined by Professor John McCarthy at a Dartmouth College conference in 1956. That same year, AI became an academic discipline.

Machine-learning has been part of AI since its 1950s beginning. Algorithms are created to allow a machine to automatically improve its performance on a given task. For almost two decades, Netflix has used machine-learning to create personalized recommendations based on a customer’s previous viewing history. Deep-learning is an advancement in machine-learning. It layers algorithms into computer units ridiculously referred to as neurons. Google Translate uses it for translation from one language to another.
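
To make “layering algorithms into units called neurons” concrete, here is a minimal sketch of a two-layer forward pass; the weights and input numbers are invented for illustration, and nothing here is trained:

```python
import numpy as np

def relu(x):
    # the simple "activation" each so-called neuron applies to its weighted sum
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer 1: 3 input features feeding 4 hidden units
W2 = rng.normal(size=(1, 4))   # layer 2: 4 hidden units feeding 1 output score

def forward(features):
    hidden = relu(W1 @ features)   # weighted sums, then activation
    return W2 @ hidden             # the stacked layers are all the "depth" there is

# e.g. three numbers summarizing a viewer's habits or a sentence's features
print(forward(np.array([0.2, -1.0, 0.5])))
```

Real deep-learning systems stack many such layers and tune the weights against data, but the point stands: it is arithmetic on numbers, not understanding.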

With the advent of LLMs, the energy use associated with AI has risen dramatically. Professor Jonathan Koomey, who founded the End-Use Forecasting group at Lawrence Berkeley National Laboratory, worked on a report funded by the US Department of Energy. He estimated that data centers currently use 176 TWh of our country’s electricity, which is about 4.4% of it. Koomey forecasts that this consumption might double or even triple by 2028, reaching between 7% and 12% of our electricity usage.
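
A back-of-the-envelope check of those figures, assuming total US consumption of roughly 4,000 TWh per year (the total implied by 176 TWh being 4.4% of it):

```python
data_centers_twh = 176                            # Koomey's current estimate
current_share = 0.044                             # about 4.4% of US electricity
total_us_twh = data_centers_twh / current_share   # roughly 4,000 TWh implied

for multiplier in (2, 3):                         # "double or even triple by 2028"
    projected_twh = data_centers_twh * multiplier
    share = projected_twh / total_us_twh
    print(f"{multiplier}x -> {projected_twh:.0f} TWh, about {share:.0%} of today's total")
```

Measured against today’s total, doubling and tripling come to roughly 9% and 13%; measured against a modestly larger 2028 total, that lands in the 7% to 12% range Koomey cites.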

Selling to Schools

For serious educators, AI is a set of computer algorithms that makes cheating easy. It is another tool for creating an easier-to-control and more profitable education system. Billionaire Laurene Powell Jobs is a leader in the AI revolution in education. Her Amplify digital lessons liberally apply AI, and her XQ Institute is working to integrate AI into classrooms. Edward Montalvo, XQ Institute’s senior education writer, has claimed:

‘“The future of AI in education is not just about adopting new technologies; it’s about reshaping our approach to teaching and learning in a way that is as dynamic and diverse as the students we serve,’ XQ Institute Senior Advisor Laurence Holt said. … Through AI, we can also transcend the limitations of the Carnegie Unit — a century-old system in which a high school diploma is based on how much time students spend in specific subject classes.

“Changing that rigid system is our mission at XQ.”

The advocates of computer learning in K-12 classrooms need to get rid of the Carnegie Unit to maximize profits. The “unit” is a minimum requirement creating a nationwide agreed-upon structure. It does not control pedagogy or assessments but ensures a minimum amount of time on task.

Education writer Derek Newton’s article for Forbes opposed ending the Carnegie Unit for a host of reasons, but the major one is cheating:

“Cheating, academic misconduct as the insiders know it, is so pervasive and so easy that it makes a complete mockery of any effort to build an entire education system around testing.” (See here)

“But because of the credit hour system, which is designed to measure classroom instruction time, it’s still relatively hard to cheat your way to a full college degree.”

The system XQ is trumpeting has students doing online lessons and then testing to receive a credit. It eliminates class levels and also undermines student socialization.

In a recent interview, Kristen DiCerbo, Khan Academy’s chief learning officer, mentioned that when ChatGPT was seeking more funding, the company needed the Academy’s help. Bill Gates wanted improved performance as a condition for his support. Khan Academy helped train the new startup to pass the AP Biology exam, which was a Gates requirement. This probably means that Khan gave ChatGPT access to his database so the information could be fed into its LLM.

Earlier this year, an American Psychological Association (APA) magazine article claimed, “Much of the conversation so far about AI in education centers around how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.” The big downsides to AI include students not thinking through problems and rampant cheating. In the AP physics classroom, I started seeing students turning in perfectly done assignments while being unable to solve the problems on an exam.

The APA article noted that for several years AI has been powering learning management tools such as Google Classroom, Canvas, and Turnitin. I experimented with Canvas for a few years and found two downsides and no upside. The front end of Canvas was terrible, and the company claimed ownership of all the work I posted to it. APA sees it as a positive that “educators are increasingly relying on AI products such as Curipod, Gradescope, and Twee to automate certain tasks and lighten their workload.”

Curipod is an AI edtech product from Norway focused on test prep.

Gradescope is an AI grading tool from Turnitin LLC.

Twee is an English language arts AI application that aids with lesson development and assessment.

These products sell themselves on the fact that they use AI. However, they appear to be a waste of time that may marginally help a first- or second-year teacher.

Benjamin Riley is a uniquely free thinker. He spent five years as policy director at NewSchools Venture Fund and founded Deans for Impact. His new effort is Cognitive Resonance, which recently published “Education Hazards of Generative AI.” With his background, I was surprised to learn he does not parrot the party line. In an article this month, Riley states:

“Using AI chatbots to tutor children is a terrible idea—yet here’s NewSchool Venture Fund and the Gates Foundation choosing to light their money on fire. There are education hazards of AI anywhere and everywhere you might choose to look—yet organization after organization within the Philanthro-Edu Industrial Complex continue to ignore or diminish this very present reality in favor of AI’s alleged “transformative potential” in the future. The notion that AI “democratizes” expertise is laughable as a technological proposition and offensive as a political aspiration, given the current neo-fascist activities of the American tech oligarchs—yet here’s John Bailey and friends still fighting to personalize learning using AI as rocket fuel.”

John Bailey is the American Enterprise Institute’s AI guy. He has worked under Virginia Governor Glenn Youngkin, done some White House stints, was vice president of policy at Jeb Bush’s Foundation for Excellence in Education, and is a member of the Aspen Global Leadership Network. In other words, he and his friends disdain public education and are true believers in big-tech.

Every time big-tech claims its new technology will be a game changer for public education, it has either lied or been deluded by its own rhetoric. According to technology writer Audrey Watters, generative AI is built on plagiarism. Besides being unethical, AI is also unhealthy. A new joint study by OpenAI and MIT found that the more students ask questions of ChatGPT, the more likely they are to become emotionally dependent on it.

The way AI is presently being marketed to schools obscures the reality that it is another big-tech product that is unhealthy and retards learning.

Hyped AI New Personalized Learning


By Thomas Ultican 4/25/2024

In education today, Artificial Intelligence (AI) and personalized learning are the same thing. AI has been around for 70 years and is the technology that drove personalized learning. The release of ChatGPT in November 2022 caused a buzz and may be responsible for Edtech sales-forces switching to AI from personalized learning. The personalized learning scam was exposed, and AI became the shiny new object.

ChatGPT is a new language model developed by OpenAI. It is built on a giant database used to retrieve answers to queries, and its breakthrough is delivering those responses as human-style essays. This makes cheating easier, giving teachers new issues to confront. To this point, blogger and educator Mercedes Schneider says “AI and I are not friends”, noting:

“As a teacher for many decades, I find increasingly more of my time consumed with devising means to ensure students complete my assignments without the easy-cheat, sustain-my-own-ignorance that AI enables in today’s students– and, it seems, an increasing number of (especially remote) professionals who may be using the corner-chopping ability AI offers to even hold multiple full-time positions.”

Schneider tested ChatGPT by typing in “Could you provide some background info on Mercedes Schneider? She resides in Louisiana.” The answer revealed a weakness: much of the information was correct, some was wrong, and the rest was old and irrelevant. She did not attend the University of Southern Louisiana, nor did she receive her PhD from LSU. Mercedes took her red pencil to this chatbot answer: “According to her website, she holds a Bachelor of Arts in secondary education ‘(TRUE)’, a Master of Education in gifted education ‘(FALSE)’, and a PhD in curriculum and instruction ‘(FALSE)’, all from the University of Southern Mississippi ‘(FALSE).’”

These errors may be unusual, but the chatbot is unreliable.

What is AI?

The term “Artificial Intelligence” was coined by Professor John McCarthy for a conference on the subject at Dartmouth College in 1956. AI was also founded as an academic discipline that year.

Mathematician, computer scientist, and cryptanalyst Alan Turing described a test for verifying computer intelligence in a 1950 paper. He proposed a three-player game in which a human “interrogator” is asked to communicate via text with another human and a machine. If the interrogator cannot reliably identify the human, then the machine is judged “intelligent.”
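
A sketch of that three-player structure, purely for illustration; the reply functions below are hypothetical stand-ins for a real person and a real machine:

```python
import random

def imitation_game(ask, human_reply, machine_reply, rounds=3):
    """Turing's game: an interrogator exchanges text with two hidden players;
    a judge reading the transcript must then say which player is the machine."""
    hidden = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(hidden)            # hide which label belongs to which player
    transcript, answer_key = {}, {}
    for label, (identity, reply) in zip(["Player 1", "Player 2"], hidden):
        answer_key[label] = identity
        transcript[label] = []
        for i in range(rounds):
            question = ask(i)
            transcript[label].append((question, reply(question)))
    return transcript, answer_key

transcript, answer_key = imitation_game(
    ask=lambda i: f"Question {i}: describe a summer morning.",
    human_reply=lambda q: "Light through the curtains and coffee going cold.",
    machine_reply=lambda q: "A summer morning is characterized by warm temperatures.",
)
print(transcript)
print(answer_key)   # if the judge cannot reliably tell, the machine is judged "intelligent"
```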

Coursera states, “AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).” However, none pass the Turing test for machines manifesting intelligence.

Machine-learning has been part of AI since its 1950s beginning. Algorithms are created to allow a machine to improve its performance on a given task automatically. Netflix uses it to create personalized recommendations based on previous viewing history.

Deep-learning is an advancement in machine-learning, layering algorithms into computer units referred to as neurons. Google Translate uses it for translation from one language to another.

[Image: Neural Network Cartoon]

Natural language processing (NLP) is used in many products and services. Most commonly, NLP is used for voice-activated digital assistants on smartphones, email spam-scanning programs, and translation apps that decipher foreign languages. ChatGPT uses large language models, an advancement in NLP that enables a dialog format, to generate text in response to questions or comments posed.
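
That dialog format is visible in how chat LLMs are called: the caller sends a list of role-tagged messages and gets generated text back. A minimal sketch, assuming the openai Python package (version 1 or later), an OPENAI_API_KEY set in the environment, and a model name chosen purely as an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever is available
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Could you provide some background info on Mercedes Schneider?"},
    ],
)
print(response.choices[0].message.content)  # fluent, retrieved-and-remixed text, not verified fact
```

The reply will read fluently whether or not it is accurate, which is exactly the unreliability Schneider documented above.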

Nick Bostrom, Director of the Future of Humanity Institute at the UK’s Oxford University, said, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”

No machine has passed the Turing test. To this point, there is no intelligence associated with AI, just algorithms. Another problem with powerful AI systems is that they use a lot of electricity: one researcher suggests that, by 2027, they could collectively consume as much electricity each year as a small country.

Should We Be Afraid?

In May 2023, Geoffrey Hinton, who won the Turing Award in 2018 for “deep learning,” a foundation of much of the AI in use today, spectacularly quit Google. He said companies like Google had stopped being proper stewards of AI in the face of competition to advance the technology.

That same month a Scientific American article stated:

“A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.”

However, other scientists in the field disagree.

The Guardian reported:

“Jürgen Schmidhuber, who has had a long-running dispute with Hinton and others in his industry over appropriate credit for AI research, says much of these fears are misplaced. He says the best counter to bad actors using AI will be developing good tools with AI.”

“And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now.”

Stanford professor Andrew Ng was part of the Google Brain project. He is not worried and stated in a recent interview:

“I can’t prove that AI won’t kill us all, which is akin to proving a negative, any more than I can prove that radio waves being emitted from Earth won’t allow aliens to find us and wipe us out. But I am not overly concerned about our radio waves leading to our extinction, and in a similar way I don’t see how AI could lead to human extinction.”

Meta’s chief AI scientist, Yann LeCun, scoffs at his peers’ dystopian attitudes, saying, “Some people are seeking attention, other people are naive about what’s really going on today.”

Hopefully, the dangers from AI will be mitigated by addressing the safety concerns, and development will not be harmed by a flat-earth mentality.

Selling to Schools

Fast Company is a modern business news organization that tracks edtech sales and issues. Its April 16, 2024 article opened with:

“Between the pandemic and the rise of generative AI, the education sector has been in a permanent state of flux over the past few years. For a time, online learning platforms were ascendant, meeting the moment when workplaces and schools alike went remote (and later, hybrid). With the public debut of ChatGPT in 2022, edtech companies—such as edX, which was one of the first online learning giants to launch a ChatGPT plugin—jumped at the opportunity to integrate generative AI into their platforms, while teachers and administrators tried to understand what it could mean in the classroom.”

Generative AI is a tool that generates text, images, videos and other products.

I understand how K-12 students might want to become familiar with new AI tools, but expecting them to be a boon to learning seems farfetched. Teachers need to find ways to stop students from misusing them. Clever as they are, most students will not make good choices once they realize a chatbot can do their homework.

Fast Company pointed out that schools are being inundated with new AI edtech products. George Veletsianos, a professor of learning technologies at the University of Minnesota, recently gave purchasing guidance to school leaders in The Conversation. Of his five points, point two seems especially relevant:

“Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.”

“Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation is to consider a pre-GenAI recommendation: Ask vendors to provide independent and third-party studies of their products, but use multiple means for assessing the effectiveness of a product. This includes reports from peers and primary evidence.”

“Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.”

Experience informs me that there will be many educational benefits from the overhyped AI, but money hunger will be lurking. I am guessing AI currently will be of little use for teaching literature, mathematics, or most sciences, but will be a focus for computer science students.

Do not rush to implement AI tools in the K-12 environment.

Schools are more likely to be fleeced than left behind when the salesman calls. It may be the same person who was selling “personalized learning” three or four years ago

…  but it is still BAD pedagogy.