THE SINGULARITY IS HERE, BUT BEING SLOW-WALKED: With GPT-4, OpenAI Is Deliberately Slow-Walking To AGI: There are indications OpenAI has intentionally limited GPT-4’s power in an attempt to manage AGI ‘take-off’ risks. “The performance numbers published in the GPT-4 technical report aren’t really like normal benchmarks of a new, leading-edge technical product, where a company builds the highest-performing version it can and then releases benchmarks as an indicator of success and market dominance. Rather, these numbers were selected in advance by the OpenAI team as numbers the public could handle, and that wouldn’t be too disruptive for society. They said, in essence, ‘for GPT-4, we will release a model with these specific scores and no higher. That way, everyone can get used to this level of performance before we dial it up another notch with the next version.’”

Plus: “Everyone who’s working in machine learning, with zero exceptions I can think of, considers this technology to be socially disruptive and to have a reasonable potential for some amount of near-term chaos as we all adjust to what’s happening. I’m not talking about AGI-powered existential risk scenarios, though there are plenty of worries about that. But more along the lines of the kinds of social changes we saw with the smartphone, the internet, or even the printing press, but happening in such a small amount of time that the effects are greatly magnified.”

Related: The Coming ‘Symbolic Analyst’ Meltdown.