
San Diego’s Edtech Lollapalooza

By Thomas Ultican 4/29/2025

Titans of the digital universe and their minions gathered at the ASU+GSV conference in San Diego April 6-9. There was a lot of self-promotion, and proposals for creating a new education paradigm based on personalized learning powered by artificial intelligence (AI) were everywhere. This year it is almost impossible to find any reporting from the event by “negative Nellies” like myself; on the other hand, there are many positive references, like Forbes calling it “the Davos of Education.”

MRCC advertises itself as having “25+ years of experience designing and deploying innovative eLearning solutions in collaboration with the brightest thinkers.” On April 21, their Senior Director of Learning Solutions, Kevin Schroeder, published “Top Five EdTech Trends from ASU+GSV Summit 2025.” His list:

“1. AI Is a Fundamental Literacy”

“2. Equity in Educational Technology Must Be Intentional”

“3. The Shift to Skills-Based Credentialing”

“4. AI-Driven Storytelling Platforms Gaining Traction”

“5. Collaboration Drives Innovation”

Under point one, he says AI is “a basic literacy on a par with reading and math.” This is surprising to me. I did not realize math was a basic literacy and whatever makes AI a basic literacy is truly puzzling.

It seems like points 2, 4 and 5 were just thrown in with little purpose. I agree edtech should strive for equity, but wealthy people are not likely to want their children burdened with it. AI is known for plagiarism, so I guess it makes a small amount of sense as a storytelling platform. As far as point 5 goes, if they can get students, teachers and parents to collaborate, it will drive sales.

Point 3 is particularly concerning. Schroeder states:

“Traditional academic transcripts are being replaced and/or supplemented by digital credentials that recognize hands-on skills and real-world experience. Apprenticeships, internships, and project-based learning are now key markers of learner growth.”

At the 2023 ASU+GSV conference, Carnegie and ETS announced a new partnership to create functional testing for competency based education (CBE). The Wellspring Project is one of the entities angling to profit off this scheme.

A Cision PRWeb report states,

“The first phase of the Wellspring Project, led by IMS and funded by the Charles Koch Foundation, explored the feasibility of dynamic, shared competency frameworks for curriculum aligned to workforce needs. … Using learning tools that leverage the IMS Competencies and Academic Standards Exchange® (CASE®) standard, the cohorts mapped co-developed frameworks, digitally linking the data to connect educational program offerings with employer talent needs.”

Because of the limitations digital screens put on learning, the only reasonable approach possible is CBE. Unfortunately, there is a long negative history associated with CBE. The 1970s “mastery learning” was detested and renamed “outcome based education” in the 1990s. It is now called “competency based education” (CBE). The name changes are due to a five-decade-long record of failure. It is still the same mind-numbing approach that 1970s teachers began calling “seats and sheets.”

CBE has the potential to increase edtech profits and reduce education costs by eliminating many teacher salaries. Unfortunately, it remains awful education and children hate it.

One justification for CBE-based education is a belief that the purpose of education is employment readiness. Philosophy, literature, art, etc. are for children of the wealthy. It is a push toward skills-based education which wastes no time on “useless” frills. Children study in isolation at digital screens, earning badges as they move through menu-driven learning units.

In 1906, the Carnegie Foundation developed the Carnegie unit as a measure of student progress. It is based on a credit-hour system that requires a minimum amount of time in class. Schools all over America pay attention to the total number of instructional minutes scheduled. A 2015 Carnegie study concluded, “The Carnegie Unit continues to play a vital administrative function in education, organizing the work of students and faculty in a vast array of schools or colleges.” Now, Carnegie Foundation President Tim Knowles is calling for CBE to replace the Carnegie unit.
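For a rough sense of the credit-hour bookkeeping, here is a back-of-envelope sketch in Python. The 120-hour figure is the commonly cited definition of one Carnegie unit; the 50-minute period and 36-week year are hypothetical scheduling assumptions, not figures from the Carnegie study:

```python
# Back-of-envelope Carnegie-unit arithmetic (illustrative figures only).
# One Carnegie unit is commonly defined as about 120 hours of contact time.
MINUTES_PER_UNIT = 120 * 60

# Hypothetical schedule: 50-minute periods, 5 days a week, 36-week year.
period_minutes = 50
days_per_week = 5
weeks_per_year = 36

scheduled = period_minutes * days_per_week * weeks_per_year
print(f"Scheduled instructional minutes: {scheduled}")               # 9000
print(f"Carnegie units earned: {scheduled / MINUTES_PER_UNIT:.2f}")  # 1.25
```

This minute counting is exactly the administrative function that CBE would discard in favor of badges for demonstrated “competencies.”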

Education writer Derek Newton, writing for Forbes, opposed the Carnegie-ETS turn to CBE for many reasons, but the major one is cheating. It is easy to cheat with digital systems. Newton observed, “But because of the credit hour system, which is designed to measure classroom instruction time, it’s still relatively hard to cheat your way to a full college degree.”

The Conference and People

ASU is Arizona State University and GSV is the private equity firm Global Silicon Valley. GSV advertises itself as “The sector’s preeminent collection of talent & experience—uniquely qualified to partner with, and to elevate, EdTech’s most important companies.” Under their joint leadership, the ASU+GSV annual event has become the world’s premier edtech sales gathering. Sadly, privatizing public education is espoused by many presenters at the conference.

The involvement of ASU marks a big change in direction for the institution. It was not that long ago that David C. Berliner, a renowned educational psychologist, was the dean of the Mary Lou Fulton Teachers College at ASU. At the same time, his colleague and collaborator Gene V. Glass, a Professor Emeritus in both Psychology in Education and Education Leadership and Policy Studies, was working with him to stop the destruction of public education. Glass is the researcher who coined the term “meta-analysis.” Their spirit has completely disappeared.

Recently the Center for Reinventing Public Education relocated from their University of Washington home to ASU.  

There were over 1,000 speakers listed for this shindig. They were listed in twelve categories. The “startup” group was the largest with 188 speakers. The “Corporate Enterprise” cohort had 136 speakers listed. Microsoft, Google, Pearson, Amazon, Curriculum Associates and many more had speakers listed under Corporate Enterprise.

Scheduled speakers included Pedro Martinez from Chicago Public Schools, Randi Weingarten from the American Federation of Teachers and Arne Duncan representing the Emerson Collective. Of note, the list of speakers included:

  • Miguel Cardona – former US Secretary of Education
  • Glenn Youngkin – Governor of Virginia
  • Angélica Infante-Green – Rhode Island Commissioner of Education
  • Robin Lake – Director of Center for Reinventing Public Education
  • David Steiner – Executive Director, Johns Hopkins Institute for Education Policy
  • Ted Mitchell – President American Council on Education
  • Timothy Knowles – President Carnegie Foundation
  • Sal Khan – Founder Khan Academy
  • Derrick Johnson – President and CEO of NAACP

Secretary of Education Linda McMahon spoke at the summit. She confused AI with A1 several times, including when saying we are going to start making sure that first graders, or even pre-Ks, have “A1” teaching every year. She also slandered public schools, claiming the nation’s low literacy and math scores show it has “gotten to a point that we just can’t keep going along doing what we’re doing.” She is so out of touch with education practices that she believes putting babies at screens is a good idea and does not know that, while America’s students were set back by COVID, they are actually well on their way to recovery.

Opinion

The amount of money and political power at the annual ASU+GSV event is staggering. It has now gotten to the point that almost no pushback is heard. The voices of astute professional educators are completely drowned out.

I have met Randi Weingarten on a few occasions and been in the audience for a speech by Derrick Johnson. I really do like and respect these people but I find their participation in San Diego unwise. Having progressive voices speaking at this conference gives cover to the billionaires who are destroying public education.

Hyped AI New Personalized Learning

By Thomas Ultican 4/25/2024

In education today, Artificial Intelligence (AI) and personalized learning are the same thing. AI has been around for 70 years and is the technology that drove personalized learning. The release of ChatGPT in November 2022 caused a buzz and may be responsible for edtech sales forces switching from personalized learning to AI. The personalized learning scam was exposed, and AI became the shiny new object.

ChatGPT is a new language model developed by OpenAI. It is built on a giant base of data from which it retrieves query answers, and its breakthrough is delivering those responses as human-style essays. This makes cheating easier, giving teachers new issues to confront. To this point, blogger and educator Mercedes Schneider says “AI and I are not friends,” noting:

“As a teacher for many decades, I find increasingly more of my time consumed with devising means to ensure students complete my assignments without the easy-cheat, sustain-my-own-ignorance that AI enables in today’s students– and, it seems, an increasing number of (especially remote) professionals who may be using the corner-chopping ability AI offers to even hold multiple full-time positions.”

Schneider tested ChatGPT by typing in “Could you provide some background info on Mercedes Schneider? She resides in Louisiana.” The answer revealed a weakness: much of the information was correct, some was wrong, and other information was old and irrelevant. She did not attend the University of Southern Louisiana, nor did she receive her PhD from LSU. Mercedes took her red pencil to this chatbot answer: “According to her website, she holds a Bachelor of Arts in secondary education ‘(TRUE)’, a Master of Education in gifted education ‘(FALSE)’, and a PhD in curriculum and instruction ‘(FALSE)’, all from the University of Southern Mississippi ‘(FALSE).’”

These errors may be unusual, but they show the chatbot is unreliable.

What is AI?

The term “Artificial Intelligence” was coined by Professor John McCarthy for a conference on the subject at Dartmouth College in 1956, where AI was also founded as an academic discipline.

Mathematician, computer scientist, and cryptanalyst Alan Turing described a test for verifying computer intelligence in a 1950 paper. He proposed a three-player game in which a human “interrogator” is asked to communicate via text with another human and a machine. If the interrogator cannot reliably identify the human, then the machine is judged “intelligent.”
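The test is easy to picture as a protocol. Below is a toy Python sketch of the game; the player functions and canned answers are invented stand-ins (not anything Turing specified), and they show why indistinguishable answers leave the interrogator guessing at chance:

```python
import random

# Toy sketch of Turing's three-player imitation game. Both respondents are
# canned stand-ins: in the real test one is a human, the other a machine,
# and a human interrogator questions them over text.

def human_player(question: str) -> str:
    return "I grew up by the sea and I hate Mondays."

def machine_player(question: str) -> str:
    # A perfect mimic: its answer is indistinguishable from the human's.
    return "I grew up by the sea and I hate Mondays."

def interrogate(guess_fn, rounds: int = 1000) -> float:
    """Fraction of rounds in which the interrogator spots the machine."""
    correct = 0
    for _ in range(rounds):
        players = [("human", human_player), ("machine", machine_player)]
        random.shuffle(players)                   # hide who is who
        answers = [fn("Where did you grow up?") for _, fn in players]
        guess = guess_fn(answers)                 # interrogator picks 0 or 1
        if players[guess][0] == "machine":
            correct += 1
    return correct / rounds

# With identical answers the interrogator can only guess at chance -- the
# situation in which Turing's test judges the machine "intelligent."
rate = interrogate(lambda answers: random.randrange(2))
print(f"Machine identified in {rate:.0%} of rounds (chance is 50%).")
```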

Coursera states, “AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).” However, none of these pass the Turing test for machines manifesting intelligence.

Machine learning has been part of AI since its 1950s beginning. Algorithms are created that allow a machine to improve its performance on a given task automatically. Netflix uses it to create personalized recommendations based on previous viewing history.
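As a toy illustration of the recommendation idea, here is a short sketch assuming NumPy; the viewers, titles, and viewing matrix are all invented, and Netflix’s actual system is far more sophisticated:

```python
import numpy as np

# Rows are viewers, columns are titles, entries are 1 if watched.
watch = np.array([
    [1, 1, 0, 0],   # viewer 0
    [1, 1, 1, 0],   # viewer 1
    [0, 0, 1, 1],   # viewer 2
])
titles = ["Space Saga", "Robot Drama", "Baking Show", "Nature Doc"]

def recommend(viewer: int) -> str:
    """Suggest the unwatched title most co-watched with this viewer's history."""
    seen = watch[viewer].astype(bool)
    co = watch.T @ watch          # how often each pair of titles is co-watched
    scores = co[:, seen].sum(axis=1)
    scores[seen] = -1             # never recommend something already watched
    return titles[int(scores.argmax())]

print(recommend(0))  # "Baking Show" -- viewer 1 shares viewer 0's history
```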

Deep learning is an advancement in machine learning that layers algorithms into computing units referred to as neurons. Google Translate uses it for translation from one language to another.
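A minimal sketch of what layered neurons mean in practice, again assuming NumPy; the weights below are random, whereas a real deep network learns them from data:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity applied after each layer's weighted sum.
    return np.maximum(0, x)

x  = rng.normal(size=4)        # 4 input features
W1 = rng.normal(size=(8, 4))   # layer 1: 8 "neurons"
W2 = rng.normal(size=(3, 8))   # layer 2: 3 output "neurons"

hidden = relu(W1 @ x)          # each neuron: weighted sum plus nonlinearity
output = W2 @ hidden           # stacking layers is what makes it "deep"
print(output)
```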

[Image: Neural network cartoon]

Natural language processing (NLP) is used in many products and services. Most commonly, NLP is used for voice-activated digital assistants on smartphones, email spam-scanning programs, and translation apps that decipher foreign languages. ChatGPT uses large language models, an advancement in NLP that enables a dialog format, to generate text in response to questions or comments.
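As a small illustration of the spam-scanning use case, here is a classic bag-of-words classifier sketched with scikit-learn; the four messages are invented toy data, and production filters train on millions of messages with far richer features:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",      # spam
    "claim your free money",     # spam
    "meeting moved to tuesday",  # ham
    "lunch tomorrow?",           # ham
]
labels = [1, 1, 0, 0]            # 1 = spam, 0 = ham

vec = CountVectorizer()
X = vec.fit_transform(emails)    # word counts: the "bag of words"
clf = MultinomialNB().fit(X, labels)

test = vec.transform(["free prize meeting"])
print(clf.predict(test))         # the model's spam/ham guess for new text
```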

Nick Bostrom, Director of the Future of Humanity Institute at the UK’s Oxford University, said, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”

No machine has passed the Turing test. To this point, there is no intelligence associated with AI, just algorithms. Another problem with powerful AI systems is that they use a lot of electricity: one researcher suggests that by 2027 they could collectively consume as much electricity each year as a small country.

Should We Be Afraid?

In May 2023, Geoffrey Hinton, who won the Turing Award in 2018 for work on “deep learning,” a foundation of much of the AI in use today, spectacularly quit Google. He said companies like Google had stopped being proper stewards of AI in the face of competition to advance the technology.

That same month a Scientific American article stated:

“A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.”

However, other scientists in the field disagree.

The Guardian reported:

“Jürgen Schmidhuber, who has had a long-running dispute with Hinton and others in his industry over appropriate credit for AI research, says much of these fears are misplaced. He says the best counter to bad actors using AI will be developing good tools with AI.”

“And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now.”

Stanford professor Andrew Ng was part of the Google Brain project. He is not worried, and in a recent interview stated:

“I can’t prove that AI won’t kill us all, which is akin to proving a negative, any more than I can prove that radio waves being emitted from Earth won’t allow aliens to find us and wipe us out. But I am not overly concerned about our radio waves leading to our extinction, and in a similar way I don’t see how AI could lead to human extinction.”

Meta’s chief AI scientist, Yann LeCun, scoffs at his peers’ dystopian attitudes, saying, “Some people are seeking attention, other people are naive about what’s really going on today.”

Hopefully, the dangers from AI will be mitigated by addressing safety concerns, and development will not be harmed by a flat-earth mentality.

Selling to Schools

Fast Company is a modern business news organization that tracks edtech sales and issues. Its April 16, 2024 article opened with:

“Between the pandemic and the rise of generative AI, the education sector has been in a permanent state of flux over the past few years. For a time, online learning platforms were ascendant, meeting the moment when workplaces and schools alike went remote (and later, hybrid). With the public debut of ChatGPT in 2022, edtech companies—such as edX, which was one of the first online learning giants to launch a ChatGPT plugin—jumped at the opportunity to integrate generative AI into their platforms, while teachers and administrators tried to understand what it could mean in the classroom.”

Generative AI is a tool that generates text, images, videos and other products.

I understand how K-12 students might want to become familiar with new AI tools, but expecting them to be a boon to learning seems farfetched. Teachers need to find ways to stop students from misusing them. Clever as they are, most students will not make good choices once they realize a chatbot can do their homework.

Fast Company pointed out that schools are being inundated with new AI edtech products. George Veletsianos, professor of learning technologies at the University of Minnesota, recently gave purchasing guidance to school leaders in The Conversation. Of his five points, point two seems especially relevant:

“Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.”

“Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation is to consider a pre-GenAI recommendation: Ask vendors to provide independent and third-party studies of their products, but use multiple means for assessing the effectiveness of a product. This includes reports from peers and primary evidence.”

“Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.”

Experience informs me that there will be many educational benefits from the overhyped AI, but money hunger will be lurking. I am guessing AI currently will be of little use for teaching literature, mathematics or most sciences but will be a focus for computer science students.

Do not rush to implement AI tools in the K-12 environment.

Schools are more likely to be fleeced than left behind when the salesman calls. It may be the same person who was selling “personalized learning” three or four years ago

…  but it is still BAD pedagogy.