
Hyped AI: New Personalized Learning


By Thomas Ultican 4/25/2024

In education today, Artificial Intelligence (AI) and personalized learning are the same thing. AI has been around for 70 years and is the technology that drove personalized learning. The release of ChatGPT in November 2022 caused a buzz and may be responsible for edtech sales forces switching from personalized learning to AI. The personalized learning scam was exposed, and AI became the shiny new object.

ChatGPT is a new language model developed by OpenAI. It draws on an enormous body of training text to answer queries, and its breakthrough is delivering those answers as human-style essays. This makes cheating easier, giving teachers new issues to confront. To this point, blogger and educator Mercedes Schneider says “AI and I are not friends”, noting:

“As a teacher for many decades, I find increasingly more of my time consumed with devising means to ensure students complete my assignments without the easy-cheat, sustain-my-own-ignorance that AI enables in today’s students– and, it seems, an increasing number of (especially remote) professionals who may be using the corner-chopping ability AI offers to even hold multiple full-time positions.”

Schneider tested ChatGPT by typing in, “Could you provide some background info on Mercedes Schneider? She resides in Louisiana.” The answer revealed a weakness: much of the information was correct, some was wrong, and the rest was old and irrelevant. She did not attend the University of Southern Louisiana, nor did she receive her PhD from LSU. Mercedes took her red pencil to this chatbot answer: “According to her website, she holds a Bachelor of Arts in secondary education ‘(TRUE)’, a Master of Education in gifted education ‘(FALSE)’, and a PhD in curriculum and instruction ‘(FALSE)’, all from the University of Southern Mississippi ‘(FALSE).’”

These particular errors may be unusual, but they show the chatbot is unreliable.

What is AI?

The term “Artificial Intelligence” was coined by Professor John McCarthy for a 1956 conference on the subject at Dartmouth College, the gathering generally credited with founding AI as an academic discipline.

Mathematician, computer scientist, and cryptanalyst Alan Turing described a test for verifying computer intelligence in a 1950 paper. He proposed a three-player game in which a human “interrogator” communicates via text with another human and a machine. If the interrogator cannot reliably identify the human, the machine is judged “intelligent.”

Coursera states, “AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).” However, none pass the Turing test for machines manifesting intelligence.

Machine learning has been part of AI since the field’s 1950s beginnings. Algorithms are designed so that a machine automatically improves its performance on a given task by learning from data. Netflix uses it to create personalized recommendations based on previous viewing history, roughly along the lines of the toy sketch below.
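To make the idea concrete, here is a minimal, hypothetical sketch of a frequency-based recommender, not Netflix’s actual system: it “learns” a viewer’s genre preferences from watch history and ranks unseen titles accordingly. The `recommend` function and the example titles are invented for illustration.

```python
from collections import Counter

# Toy recommender (hypothetical, not Netflix's system): learn a viewer's
# genre preferences from watch history, then rank unseen titles by them.
def recommend(history, catalog, top_n=3):
    genre_counts = Counter(genre for _, genre in history)      # the "learning" step
    seen = {title for title, _ in history}
    unseen = [(t, g) for t, g in catalog if t not in seen]
    ranked = sorted(unseen, key=lambda tg: genre_counts[tg[1]], reverse=True)
    return [title for title, _ in ranked[:top_n]]

history = [("Stranger Things", "sci-fi"), ("Dark", "sci-fi"), ("The Crown", "drama")]
catalog = [("Black Mirror", "sci-fi"), ("Bridgerton", "drama"), ("Nailed It!", "comedy")]
print(recommend(history, catalog))   # sci-fi first, then drama, then comedy
```

Real recommendation systems learn from millions of viewers at once, but the principle is the same: the suggestions improve automatically as more viewing data accumulates.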

Deep learning is an advance on machine learning that stacks layers of computing units, referred to as artificial neurons, into networks. Google Translate uses it to translate from one language to another.

[Image: Neural network cartoon]
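As a rough sketch of what “layering” means, here is a tiny feed-forward network with one hidden layer written in plain Python. Each artificial neuron takes a weighted sum of its inputs and squashes it through a nonlinearity; the weights here are random, and the training step (backpropagation) that would adjust them is omitted. This is only an illustration of the structure, not any particular product.

```python
import math, random

# Each artificial neuron: weighted sum of inputs plus a bias, passed
# through a sigmoid nonlinearity.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A layer is just a group of neurons that all read the same inputs.
def layer(inputs, weight_matrix, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

random.seed(0)
x = [0.5, -1.2, 3.0]                                            # example input features
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]     # hidden layer: 4 neurons
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)]]                # output layer: 1 neuron
b2 = [0.0]

hidden = layer(x, w1, b1)       # later layers build on earlier ones
output = layer(hidden, w2, b2)
print(output)                   # a single value between 0 and 1
```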

Natural language processing (NLP) is used in many products and services. Most commonly, NLP powers voice-activated digital assistants on smartphones, email spam-scanning programs, and translation apps that decipher foreign languages. ChatGPT uses a large language model, an advance in NLP that enables a dialogue format, to generate text in response to the questions or comments posed.
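One of the everyday NLP jobs mentioned above, spam scanning, can be caricatured in a few lines. This is only a toy bag-of-words score using a word list I made up; it is not how any real mail provider filters spam, and certainly not how a large language model works.

```python
# Toy spam scorer: counts how many words in a message appear on a
# hand-picked "spammy" word list (purely illustrative, not a real filter).
SPAM_WORDS = {"winner", "free", "prize", "urgent", "click"}

def spam_score(message: str) -> float:
    words = [w.strip(".,!?") for w in message.lower().split()]
    hits = sum(1 for w in words if w in SPAM_WORDS)
    return hits / max(len(words), 1)   # fraction of spammy words

for msg in ["Click now to claim your FREE prize!", "Meeting moved to 3pm tomorrow"]:
    print(f"{spam_score(msg):.2f}  {msg}")
```

Modern NLP systems replace hand-picked word lists with statistical models learned from huge amounts of text, which is what makes them both more capable and harder to inspect.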

Nick Bostrom, Director of the Future of Humanity Institute at the UK’s Oxford University, said, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”

No machine has passed the Turing test. To this point there is no intelligence associated with AI, just algorithms. Another problem with powerful AI systems is that they use a lot of electricity: one researcher suggests that by 2027 they could collectively consume as much electricity each year as a small country.

Should We Be Afraid?

In May 2023, Geoffrey Hinton, who won the Turing Award in 2018 for work on “deep learning”, a foundation for much of the AI in use today, spectacularly quit Google. He said companies like Google had stopped being proper stewards of AI in the face of competition to advance the technology.

That same month a Scientific American article stated:

“A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.”

However, other scientists in the field disagree.

The Guardian reported:

“Jürgen Schmidhuber, who has had a long-running dispute with Hinton and others in his industry over appropriate credit for AI research, says much of these fears are misplaced. He says the best counter to bad actors using AI will be developing good tools with AI.”

“And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now.”

Stanford professor Andrew Ng was part of the Google Brain project. He is not worried, and in a recent interview stated:

“I can’t prove that AI won’t kill us all, which is akin to proving a negative, any more than I can prove that radio waves being emitted from Earth won’t allow aliens to find us and wipe us out. But I am not overly concerned about our radio waves leading to our extinction, and in a similar way I don’t see how AI could lead to human extinction.”

Meta’s chief AI scientist, Yann LeCun, scoffs at his peers’ dystopian attitudes, saying, “Some people are seeking attention, other people are naive about what’s really going on today.”

Hopefully, the dangers of AI will be mitigated by addressing these safety concerns, and development will not be harmed by a flat-earth mentality.

Selling to Schools

Fast Company is a modern business news organization that tracks edtech sales and issues. Its April 16, 2024 article opened with:

“Between the pandemic and the rise of generative AI, the education sector has been in a permanent state of flux over the past few years. For a time, online learning platforms were ascendant, meeting the moment when workplaces and schools alike went remote (and later, hybrid). With the public debut of ChatGPT in 2022, edtech companies—such as edX, which was one of the first online learning giants to launch a ChatGPT plugin—jumped at the opportunity to integrate generative AI into their platforms, while teachers and administrators tried to understand what it could mean in the classroom.”

Generative AI refers to tools that generate text, images, videos, and other products.

I understand how K-12 students might want to become familiar with new AI tools, but expecting them to be a boon to learning seems farfetched. Teachers need to find ways to stop students from misusing them. Clever as they are, most students will not make good choices once they realize a chatbot can do their homework.

Fast Company pointed out that schools are being inundated with new AI edtech products. George Veletsianos, a professor of learning technologies at the University of Minnesota, recently gave purchasing guidance to school leaders in The Conversation. Of his five points, the second seems especially relevant:

“Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.”

“Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation is to consider a pre-GenAI recommendation: Ask vendors to provide independent and third-party studies of their products, but use multiple means for assessing the effectiveness of a product. This includes reports from peers and primary evidence.”

“Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.”

Experience tells me there will be many educational benefits from the overhyped AI, but money hunger will be lurking. I am guessing that, for now, AI will be of little use for teaching literature, mathematics, or most sciences, but it will be a focus for computer science students.

Do not rush to implement AI tools in the K-12 environment.

Schools are more likely to be fleeced than left behind when the salesman calls. It may be the same person who was selling “personalized learning” three or four years ago

…  but it is still BAD pedagogy.