Tag Archives: Audrey Watters

No Need to Rush AI in Education


By Thomas Ultican 4/20/2026

School districts throughout America are being pushed to spend huge dollars to implement artificial intelligence (AI). This Silicon Valley oligarch plan is much more about profits than good education. It is true that AI has great potential but for schools the known downsides are much larger than its present benefits. Education organizations – especially K-12 – should not spend money implementing AI at this time.

I am not the only one giving this counsel. The Brookings Institution released a report this January that concludes, “At this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits.” (Page 12)

The loudest and most positive commentators about AI in education all have links to the tech industry. While not blatantly shilling for edtech, they present AI as inevitable and advise us about what we need to know to get it right. For example, the University of San Diego’s Matt Evans wrote The Role of Artificial Intelligence in Education: How Educators Can Succeed for the school’s Professional and Continuing Education division.

The article is mostly an advertisement for the school’s AI in education master’s program. His article has several sub-headings like “Redefining Education in the Age of AI,” “The Evolving Role of Educators” and “How AI Is Reshaping Learning Institutions.” He does mention that there are some cons to AI like “Equity gaps” and “Overreliance.” In his article, Evans declares:

“We’re in a defining moment for education, where the systems we build today will influence generations of learners. To engage with AI constructively, educators and leaders must commit to continual learning and collaboration.”

He might not be right about that. The AI pitch looks suspiciously like the same education technology song and dance that has bombarded schools for more than a century. The claim is that education technology will personalize learning and engage kids like never before. Teaching machines are not a new idea, and their hundred-year-old promises are almost identical to the promises we hear about AI today. In the foreword to her book, Teaching Machines, Audrey Watters wrote:

“Education psychologists like Sidney Pressey, the person often credited with inventing the first ‘teaching machine,’ talked about using mechanical devices in the 1920s in ways almost identical to those who push for personalized learning today, all so that, as Pressey put it, a teacher could focus on her ‘real function’ in the classroom: ‘inspirational and thought-stimulating activities,’ including giving each student individualized attention.”

Algorithmic Decision-Making is Fueling Anxiety

The Dark Side of AI

Last October the Center for Democracy & Technology published “Hand in Hand: Schools’ Embrace of AI Connected to Increased Risks to Students.” It is a report based on survey data put together by Elizabeth Laird, Maddy Dwyer and Hannah Quay-de la Vallee. They opened the report:

“This report details the current status of AI use in schools along with four emerging risks associated with this technology, all of which increase the more that a school uses AI:

  • Data breaches or ransomware attacks;
  • Tech-enabled sexual harassment and bullying;
  • AI systems that do not work as intended; and
  • Troubling interactions between students and technology.”  (Page 5)

The report shares, among other outcomes, the following results.

  • 59% of parents believe AI is exposing children to inappropriate content. (Page 9)
  • 23% of teachers reported a large-scale data breach with AI. (Page 11)
  • 71% of teachers, 72% of parents and 64% of students believe AI is harming critical thinking skills and weakening key skillsets. (Page 22)
  • Deepfakes and non-consensual images, reported by 12% of students, are expanding sexual harassment and bullying. (Page 39)

The University of Illinois posted AI in Schools: Pros and Cons in October 2024. Two of the cons they cited are quite significant: “high implementation costs” and “unpredictability and inaccurate information.” The article states, “Simple generative AI systems that teachers can use in lesson planning can cost as little as $25 a month, but larger adaptive learning systems can run in the tens of thousands of dollars.” They also share, “If the data it draws from is inaccurate or biased, then the information it creates will be inaccurate or biased.”

Justin Reich wrote in The Conversation, “At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students.”

The Brookings report made the same point as Reich. They cite a paper from the United Nations Educational, Scientific and Cultural Organization (2023), one by the Organisation for Economic Co-operation and Development (2015) and a paper by Amy West and Hanna Ring published in The International Education Journal: Comparative Perspectives (2023). Based on these papers, they asserted that education systems investing heavily in technology did not improve teaching and reading. (Page 11)

Brookings researchers bolstered this argument, stating, “The mobile broadband example illustrates this pattern— while Internet expansion correlates with economic development, a study of 2.5 million 15-year-olds from 82 countries suggests that the rollout of 3G coverage from 2000-2018 produced statistically significant declines in math, reading, and science scores, as well as students’ social relationships and sense of belonging….” (Page 11)

These findings correspond exactly with what I observed in the classroom during that same period.

The Brookings paper notes that AI tools prompt “overreliance, emotional and cognitive dependence, and diminished critical thinking.” (Page 17)

The report also points out that “AI does not possess true intelligence; it operates through statistical pattern recognition rather than reasoning or comprehension.” In the consumer world, speed and engagement are valued over safety or learning. AI frequently hallucinates, presenting false or nonsensical information. The paper states, “Student-facing tools often provide an ‘illusion of impact’: assumed to be high-value but frequently modeling poor pedagogy, misunderstanding how children learn, and perpetuating rote approaches….” (Page 123)

Last year, an American Psychological Association magazine claimed, “Much of the conversation so far about AI in education centers around how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.” Two big downsides to AI include students not thinking through problems and rampant cheating.

Benjamin Riley is a uniquely free thinker. He spent five years as policy director at NewSchools Venture Fund and founded Deans for Impact. His new effort is Cognitive Resonance, which recently published “Education Hazards of Generative AI.” With his background, I was surprised to learn he does not parrot the billionaire line. A year ago, Riley wrote:

“Using AI chatbots to tutor children is a terrible idea—yet here’s NewSchool Venture Fund and the Gates Foundation choosing to light their money on fire. There are education hazards of AI anywhere and everywhere you might choose to look—yet organization after organization within the Philanthro-Edu Industrial Complex continue to ignore or diminish this very present reality in favor of AI’s alleged “transformative potential” in the future. The notion that AI “democratizes” expertise is laughable as a technological proposition and offensive as a political aspiration, given the current neo-fascist activities of the American tech oligarchs—yet here’s John Bailey and friends still fighting to personalize learning using AI as rocket fuel.”

Conclusion

For more than thirty years, technology companies have looked to score big in the education sector. Instead of providing useful tools, they have schemed to take control of public education. At the onset of the twenty-first century, technologists claimed that putting kids at computers was a game changer and would fix everything plaguing public schools. Then they began promoting tablets with algorithmic lessons as providing better education than a human teacher. Today’s hoax is that artificial intelligence (AI) will make all these failures work. It is not just an expensive scam; it harms both children and America’s democratic future.

The AI Education Grift


By Thomas Ultican 3/9/2026

Artificial Intelligence (AI) is a billionaire-driven con job. In the early 20th century, eugenicists claimed they could improve the human condition by measuring general intelligence and eliminating the bad genetics associated with dense people. Their unsavory ideology posited a racial hierarchy based on faulty intelligence testing. To this day, researchers have never found a reliable way to measure intelligence. The concept of artificial general intelligence (AGI) rests in part on a belief that intelligence can be measured. It is science fiction that is unlikely ever to exist, but there is money to be made. Unsurprisingly, tech billionaires are invading America’s schools to advance their latest scam while teachers are busy “AI-proofing” classrooms.

Google announced an AI training for “all six million K-12 teachers and higher education faculty.” They have signed a three-year agreement with ISTE+ASCD to carry out the training using Google’s Gemini and NotebookLM tools.

Benjamin Riley, who founded the think tank Cognitive Resonance, believes the Google partnership is part of an ongoing process making ISTE+ASCD a “shill” for Big Tech. He predicted that much of the training will end up “wasting teachers’ time, Google’s money and ISTE+ASCD’s relevance.”

The Association for Supervision and Curriculum Development (ASCD) began as a part of the National Education Association (NEA) in 1943. In 1972, they separated from the NEA. ISTE was founded in 1979. In the 1980s, ISTE worked on developing education technology standards. In 2015, the national education technology standards were renamed the “ISTE Standards.” In 2023, ISTE and ASCD merged forming ISTE+ASCD. It is the relevance of this organization that Riley claims will be destroyed by their signing the Google training contract.

This past July, the American Federation of Teachers (AFT) announced $23 million in funding from OpenAI, Anthropic and Microsoft for a National Academy of AI Instruction to train up to 400,000 educators. The new entity will train teachers “on how to use AI tools for tasks like generating lesson plans.” University of Mississippi researcher Marc Watkins described this announcement as “a gigantic public experiment that no one has asked for.”

Technology and education critic Audrey Watters says, “unions should be one of the ways in which workers resist, rather than acquiesce to … the tech industry’s vision of the future.” By joining forces with Big Tech, AFT is implicitly endorsing its products. Watters continued, “Teaching teachers how to use a suite of Microsoft tools is not so much an ‘academy’ as a storefront.”

Or as Ben Riley wryly put it, “Google and ISTE+ASCD announce new partnership to destroy US education.”

Middle School Students Not Learning Science

In every corner of the United States, people want their children to have a world-class education. As a result, spending on education throughout the country is huge. The lords of Silicon Valley are desperate to make AI profitable and see education technology as a possible source of big profits. They don’t give a damn about educating children but really want to sell AI, no matter how useless or even harmful it may be for education.

Bums Rush for AI

Ted Gioia writes a popular blog about culture. In a post last July, he noted:

“AI is now bundled into all of my Microsoft software.

“Even worse, Microsoft recently raised the price of its subscriptions by $3 per month to cover the additional AI benefits. I get to use my AI companion 60 times per month as part of the deal.”

“Most people won’t pay for AI voluntarily—just 8% according to a recent survey. So they need to bundle it with some other essential product.”

This is a big dilemma for the tech masters. A huge amount of Wall Street money is being poured into AI but profits are not there and maybe never will be. The investors want to see a return.

AI technology is very expensive and environmentally destructive. It is estimated that data centers will consume 1,580 terawatt-hours a year by 2034. One terawatt-hour is the equivalent of a billion kilowatt-hours. The associated data centers are also water hogs: ChatGPT consumes roughly two cups of water for every 5 to 50 responses. With daily customer usage in the millions, that is a lot of water. (The AI Con – Page 159)
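The scale of those water figures can be sanity-checked with simple arithmetic. The sketch below is purely illustrative: the two-cups-per-5-to-50-responses range comes from the passage above, while the daily response count is an assumed round number for the sake of the calculation, not a reported statistic.

```python
# Back-of-envelope check of the water figures quoted above (The AI Con):
# roughly two cups of water per 5 to 50 ChatGPT responses.

CUP_LITERS = 0.2366                 # one US cup in liters
responses_per_day = 100_000_000     # assumed figure ("usage in the millions")

def daily_water_liters(responses, responses_per_two_cups):
    """Liters of water per day at two cups per N responses."""
    return responses / responses_per_two_cups * 2 * CUP_LITERS

low = daily_water_liters(responses_per_day, 50)   # best case: 2 cups / 50 responses
high = daily_water_liters(responses_per_day, 5)   # worst case: 2 cups / 5 responses

print(f"{low:,.0f} to {high:,.0f} liters per day")
```

Even at the best-case rate, the assumed usage works out to roughly a million liters of water a day, which is the point the passage is making.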

Matt Miller, a well-known high school Spanish teacher and author from Indiana, says what teachers get from AI company training is over-the-top talk about “how much the world is going to change and how we’re revolutionizing education.” The trainers never address the fact that most students use generative AI for “cognitive offloading.”

I was in the classroom when internet use first exploded onto the scene. Within a couple of years, I was getting beautifully written physics-problem solutions from most of my students. I soon discovered that worked-out examples of almost all physics problems were available online. That particular group of students did not learn the material and did poorly on AP testing. I am quite certain that AI in school will be even more detrimental to learning.

AI, when demonstrated by technology salesmen, is akin to magic. There is a reason a magician never reveals how a trick works. In the same vein, AI sentience does not exist and is unlikely ever to exist, yet claims of AI sentience persist. These fictitious claims endorse thinking about the nature of intelligence that is rooted in eugenics and race science. (The AI Con – Pages 22-23)

Watters claims, “schools must do everything in their power to protect their faculty, staff, and students from the eugenicists and the fascists and the ‘anti-Woke’ mobs.”

Justin Reich wrote in The Conversation, “At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students.”

Watters added:

“So why exactly are we rushing into this whole ‘AI literacy’ thing? I mean, other than the obvious grift, of course.”

Closing Remarks

AI stands for artificial intelligence, which is a fraud. There is no intelligence, artificial or otherwise. A huge amount of data—most of it stolen—is run through computer-based algorithms. Much like a plastic-extruding machine creates products, these algorithms are word, number and image extruding machines. In their book “The AI Con,” professors Emily Bender and Alex Hanna labeled LLMs like ChatGPT “synthetic text extruding machines.” (The AI Con – Page 31) That is a much more accurate and descriptive name than AI.

Some community college districts in California have spent millions on AI-chatbots to help students navigate admissions, financial aid and campus services. Unfortunately, the chatbots do not provide clear and accurate answers.

This points to the same problem: I never trust AI for an internet search because AI makes mistakes. The spending on AI is gargantuan, and it is still not reliable. Some of these issues may be overcome, but it will never be good at education.

Do we really want to encourage children to use tools that generate AI child sexual abuse material? Last year, a Stanford Cyber Policy Center report stated:

“In this report we aim to understand how educators, platform staff, law enforcement officers, U.S. legislators, and victims are thinking about and responding to AI-generated child sexual abuse material (CSAM).

“Our main findings are that while the prevalence of student-on-student nudify app use in schools is unclear, schools are generally not addressing the risks of nudify apps with students, and some schools that have had a nudify incident have made missteps in their response.”

AI is dangerous, not accurate and I believe it will harm education. There is just no reason for America’s schools to rush into this technology. And when schools do adopt AI, let that adoption be guided by educators and not by technology salesmen.

Machine Teaching Requires Behaviorist Approach


By Thomas Ultican 10/8/2021

The controversial Harvard psychology professor B. F. Skinner (the B. F. stands for Burrhus Frederic) taught pigeons to play ping pong, created a box for tending babies more efficiently called an air crib and became the national mouthpiece for behaviorism. His predecessor in behaviorist theory was Columbia University Psychology Professor Edward Thorndike. In 1898, Thorndike published the law of effect which posited that responses which produce a satisfying effect are more likely to occur again and responses that produce a discomforting effect become less likely to repeat. Skinner developed an enhancement of this learning theory that he called operant conditioning.

Operant conditioning, sometimes referred to as instrumental conditioning, is a method of learning that occurs through rewards and punishments. Skinner believed he could create a machine that would reward students as soon as they got a correct answer and send them for more instruction if they missed an answer. By using “programmed instruction” which broke learning into small chunks, Skinner claimed students would be able to interface with his machines and at a “personalized” rate learn more deeply and efficiently.

The story of these machines and their promises of enhanced learning is chronicled by the amazing Audrey Watters. She has added significantly to the history of mechanized teaching, the philosophical basis supporting it and how it relates to the modern computerized version. Her new book, Teaching Machines: The History of Personalized Learning, published by MIT Press, is wonderfully sourced. In it, Watters states,

“What today’s technology-oriented education reformers claim is a new idea – ‘personalized learning’ – that was unattainable if not unimaginable until recent advances in computing and data analysis has actually been the goal of technology-oriented education reformers for almost a century. Education psychologists like Sidney Pressey, the person often credited with inventing the first ‘teaching machine,’ talked about using mechanical devices in the 1920s in ways almost identical to those who push for personalized learning today. All so that, as Pressey put it, a teacher could focus on her ‘real function’ in the classroom: ‘inspirational and thought-stimulating activities,’ including giving each student individualized attention.” (Teaching Machines page 9)

Audrey Watters has been writing about technology in education for most of the 21st century. She published The Curse of the Monsters of Education Technology in 2016 and based on the research for that book, she made these remarks to a class at MIT.

“I don’t believe we live in a world in which technology is changing faster than it’s ever changed before. I don’t believe we live in a world where people adopt new technologies more rapidly than they’ve done so in the past. … But I do believe we live in an age where technology companies are some of the most powerful corporations in the world, where they are a major influence – and not necessarily in a positive way – on democracy and democratic institutions. (School is one of those institutions. Ideally.) These companies, along with the PR that supports them, sell us products for the future and just as importantly weave stories about the future.”

Will Teaching Machines Replace Teachers?

In 1954, Skinner was provided space at Harvard where he assembled a team of “bright young behaviorists” including Susan Meyer (Markle). They “started work on designing their new teaching machines as well as ‘programs,’ the material that would accompany them.” (Teaching Machines page 135) This led to “programmed instruction.” Watters explained,

“Programmed instruction was individualized instruction. Meyer Markle likened it to the work of a tutor, ‘a master of intellectual teasing’ who adjusts the lesson to her student’s needs but also challenges the student to keep moving forward. … ‘Each student was now to have his own private tutor, encased in a small box.’” (Teaching Machines pages 138 and 139)

According to Watters, Meyer Markle was the most significant contributor to the development of programmed instruction but in the 1950s women like her were professionally undermined and men were normally credited instead. Watters identifies Norman Crowder as the most likely to be credited with innovations in programmed instruction instead of Meyer Markle.

In 1958, Doubleday started publishing Crowder’s series of self-instruction manuals – “TutorTexts.” An ad in Popular Science said TutorText was “a complete programmed teaching machine in book form.” (Teaching Machines pages 139 and 140)

In 1959, Crowder claimed, “Automatic tutoring by intrinsic programming is an individually used, instructorless model of teaching which represents an automation of the classical process of individual tutor.” Watters notes, “While Skinner and Pressey were quick to insist that their teaching machines would not replace teachers, Crowder clearly felt less obligated to do so.” (Teaching Machines page 142)

Many people were giddy about the possibility of replacing teachers with these marvelous new machines. The machines were believed to be creating new markets while improving education. Caution was thrown to the wind. Pressey wrote,

“I was shocked at what followed: the most extraordinary commercialization of a new idea in American education history…. Then millions of research dollars went into, first, the confident elaboration of these ideas and only slowly into any questioning of them.” (Teaching Machines page 148)

The same greed-blinded path has been followed by the promoters of digital education. They continue to over-promise while hyping untested, behaviorist-based learning at a screen. It is more mindless implementation without questioning.

“Banking model of education”

In Teaching Machines, Watters illuminates the history of promoters like Skinner completely forgetting scholarly caution and diving headlong into achieving lucrative manufacturing deals with corporations like IBM, Rheem and Harcourt Brace. At the same time, door to door encyclopedia salesmen began selling books by Crowder and others that implemented programmed instruction. The parallels with today become obvious.

There is a big difference between the teaching machine era (1928-1980) and today. Back then, large technology corporations were not nearly as powerful, which meant the voices of education professionals started to be heard and the mania subsided.

The Brazilian educator Paulo Freire, famous for being jailed by the leaders of the 1964 Brazilian coup, called machine teaching the “banking model of education, in which the scope of action allowed to the students extends only as far as receiving, filing and storing the deposits.” He contrasted that with “problem-posing education,” a dialogue between teachers and students in which knowledge is jointly constructed. (Teaching Machines page 226)

Even the father of teaching machines, Sidney Pressey, soured on behaviorism. Drawing on the work of Swiss psychologist Jean Piaget, “Pressey challenged behaviorism for failing to adequately account for the developmental stages children pass through – and pass through without ‘so crude and rote process as the accretion of bit learning stuck on by reinforcements.’” He felt the teaching machine movement faced a crisis because of behaviorism. (Teaching Machines page 234)

Noam Chomsky reviewed Skinner’s book Beyond Freedom and Dignity for the New York Review of Books in 1971. In his article “The Case Against B. F. Skinner” he wrote, “Skinner’s science of human behavior, being quite vacuous, is as congenial to the libertarian as to the fascist.” (Teaching Machines pages 239 and 240)

This is just a short taste of the content of Teaching Machines. It is a special book by a special writer.