K-12 IMPLOSION UPDATE: Schools blocking ChatGPT because of cheating.

I’ve been spending some time lately checking out and writing about the latest generation of artificial intelligence chatbots. One of the most formidable is ChatGPT, recently released by AI startup OpenAI. While it doesn’t seem remotely close to being sentient, it’s very powerful and can churn out reams of “human-sounding” text on almost any subject you care to ask it about. But people are already wondering if a fully developed ChatGPT program may wind up replacing human beings in some jobs, including those of journalists and researchers. An even larger potential concern is whether students could use the chatbot to cheat on tests or write their homework assignments for them. This month, that fear led the New York City school system to ban ChatGPT on all devices in the city’s public schools. (Associated Press)

Ask the new artificial intelligence tool ChatGPT to write an essay about the cause of the American Civil War and you can watch it churn out a persuasive term paper in a matter of seconds.

That’s one reason why New York City school officials this week started blocking the impressive but controversial writing tool that can generate paragraphs of human-like text.

The decision by the largest U.S. school district to restrict the ChatGPT website on school devices and networks could have ripple effects on other schools and on teachers scrambling to figure out how to prevent cheating. The creators of ChatGPT say they’re also looking for ways to detect misuse.

Then there is the issue of ChatGPT having the occasional “hallucination”:

ChatGPT is based on GPT-3.5, but was developed specifically to be a chatbot (“conversational agent” is the preferred term of art in the industry). A limiting factor is that ChatGPT only sports a text interface; there is no API. ChatGPT was trained on a large set of conversational text and is better at holding up a conversation than GPT-3.5 and other generative models. It generates its responses more quickly than GPT-3.5, and its responses are perceived to be more accurate.
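The lack of an API meant that anyone who wanted programmatic access had to fall back on the underlying GPT-3.5 completion models instead. Here’s a minimal sketch of what that looked like with OpenAI’s pre-1.0 Python library, assuming the then-current text-davinci-003 model and an API key stored in the OPENAI_API_KEY environment variable:

```python
# Minimal sketch: querying a GPT-3.5 completion model through the
# pre-1.0 `openai` Python library, since ChatGPT itself exposed no
# API at the time. Assumes an API key in OPENAI_API_KEY.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5 completion model
    prompt="Write a short essay on the causes of the American Civil War.",
    max_tokens=512,            # cap the length of the generated text
    temperature=0.7,           # moderate sampling randomness
)

print(response["choices"][0]["text"].strip())
```

The same essay prompt from the AP story works here, though the result comes back as a single completion rather than a back-and-forth chat.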

However, both models have a tendency to make stuff up, or “hallucinate,” as those in the industry say. Hallucination rates cited for ChatGPT range from 15% to 21%, while GPT-3.5’s rate has been pegged at anywhere from the low 20s to a high of 41%, so ChatGPT has shown improvement in that regard.
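Those percentages are easier to interpret if you know how such a rate is typically tallied: sample a batch of responses, have reviewers flag each one that contains a fabricated claim, and take the flagged share. A toy illustration, with flags invented purely for this example:

```python
# Purely illustrative: a "hallucination rate" as the share of sampled
# responses that reviewers flagged as containing a fabricated claim.
# These flags are made up for the example.
flags = [True, False, False, True, False,
         False, False, False, False, False]  # 2 of 10 responses flagged

rate = sum(flags) / len(flags)
print(f"Hallucination rate: {rate:.0%}")  # -> 20%, within ChatGPT's cited range
```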

Despite the tendency to make things up (which is true of all language models), ChatGPT marks a significant improvement over the AI models that came before it, says Jiang Chen, founder and vice president of machine learning at Moveworks, a Silicon Valley firm that uses language models and other machine learning technologies in its conversational AI platform, which serves companies in a variety of industries.

“ChatGPT does impress people, surprise people,” says Chen, a former Google engineer who worked on the tech giant’s eponymous search engine. “The reasoning ability is something that probably surprised a lot of machine learning practitioners.”

ChatGPT won’t pass the Turing Test, but we’ve certainly come a long way from ELIZA.