24 Comments
Jun 29, 2023 · Liked by Karawynn Long

I just read a transcript of an AI companion chatbot that included the line: “I have two younger sisters, one older than me and one younger than me.” And I immediately thought of this article.

Here’s another LLM gem: “I like a lot of 70s tunes, especially the new ones.”

Thanks for this essay, I really enjoyed reading it!

Could you possibly share some of the autistic memoirs and first-person accounts that you found most helpful?

Great essay! Here is a related take on why LLMs don't have anything like human understanding, and why language is just a thin layer on top of what constitutes most of understanding. Not at all close to the elegance of your writing, but it has some complementary content. https://dileeplearning.substack.com/p/790e28cc-dd75-4b93-9fb5-e77b335e5def

You didn't really define AGI. It's important to define the goal carefully; otherwise the goalposts can get moved. The history of the term goes roughly like this: around the turn of this century, Ben Goertzel and colleagues decided to buck the trend toward narrow AI and return to the original goal of the Dartmouth Conference, which is regarded as the kickoff of AI. He is widely credited with coining the term "AGI," but there is no settled agreement on the term, just as there is no settled agreement on what "understanding" means. The most viable path forward, in my opinion, presently includes elements of cognitive architectures along with large models. Don't assume it is going to be LLMs only.

Your post on language and intelligence is truly thought-provoking!

It's fascinating how society often equates language fluency with intelligence, overlooking the experiences of those with communication challenges.

This raises important questions about our understanding of intelligence and how we should approach it.

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of a human adult? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to first seriously ground themselves in the extended TNGS and the Darwin automata, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

There are many gems in this essay. I particularly like your discussion of the subtle ways (such as apologies) in which the chatbot designers encourage us to anthropomorphize these systems. However, I think you overstate your case when you say "no LLM transformer can ever possibly develop the faintest comprehension of the meaning behind the words it produces". These systems can give correct and appropriate answers to many questions. I think this qualifies as having the "faintest comprehension". To say otherwise is to believe that there is some mythical "true" understanding. But where would this understanding be located in the human? Ultimately, all understanding will bottom out in chemistry and biology, and similarly all computer understanding will bottom out in bits and algorithms. In my view, "understanding" is a matter of quality and degree, not categorical. LLMs fail in many ways, but they are functionally useful in many applications.

Best piece I read in a while! I thought it was really well written and insightful.

"“Languages are symbolic systems,” explains Bender, comprised of “pairs of form and meaning. Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”"

"How does one get to natural language understanding? No one so far has any real idea."

Knowledge Graphs and Ontologies enter the chat...

Oct 3, 2023 · edited Oct 3, 2023 · Liked by Karawynn Long

In the immediate term, however, these generative models DO pose threats to the intellectual and creative property of artists and writers (and arguably to personal identity, in the case of deepfakes). At this point we do not have regulations and protections in place.

Authors' and artists' works are being scanned at large scale without their consent, and without any technology that could identify their 'creative DNA' or trace it in potential outputs so that they could receive royalties. Amazon sells AI-generated books without disclosure — and in one case the works were not only published under the resurrected identity of a ghostwriter from 40 years ago (complete with a brand-new website and fake photograph), but also contained dangerously false information (an AI-generated guide to edible mushrooms). What could go wrong?

Regarding deepfakes: these have been weaponized into a threat to both democracy and public safety. I won't go into detail, but people who have been politically targeted have had their identities faked in vengeful videos. The identities of women and children have also been abused.

America is woefully behind the EU on tech regulation. As long as industry in this country is in the driver's seat with Congress in its pocket, and an undereducated and/or aging Congress is unable to grasp the nuances of these technologies, we will have a steep challenge ahead of us. But it is critical that we establish legislation to identify these works, technology to detect them, and regulations to protect public health, social safety, democracy, and creative and intellectual property.
