24 Comments
Commenting has been turned off for this post
Jun 29, 2023 · Liked by Karawynn Long

I just read a transcript of an AI companion chatbot that included the line: “I have two younger sisters, one older than me and one younger than me.” And I immediately thought of this article.


bwahahaha - ahem


Here’s another LLM gem: “I like a lot of 70s tunes, especially the new ones.”


Thanks for this essay, I really enjoyed reading it!

Could you possibly share some of the autistic memoirs and first-person accounts that you found most helpful?

Author · Aug 5, 2023 · edited Aug 5, 2023

Oh gosh, there were dozens of books, and literally tens of thousands of online posts of various kinds. Can you narrow the scope down a bit? What questions are you trying to find answers to?


As someone a bit later in life who suspects they may be autistic, I would ideally be interested in any first-person accounts from people who have gone through a similar experience or anything else that might help. Thanks for responding!


Someone I know who finally got a diagnosis in his 40s highly recommended Strong Female Character by Fern Brady.

Author

So a lot of the writing in the 'late-identified autism' space tends to be oriented towards women, because there are just so dang many of us: society tends to mold girls (and trans kids) in ways that mask our autism and change the way it expresses. But if you are a cis white male autistic who's reached adulthood without being diagnosed, you may find that you have an atypical expression for a man, and perhaps a lot in common with the stories of undiagnosed women.

Strong Female Character is a relatively new book (June 2023) and I haven't gotten to it yet myself, but I'm sure it's great.

If you like podcasts, then I highly recommend Square Peg (https://squarepeg.community/): an ongoing series of one-hour interviews with autistic women and trans people, almost all of them late-identified. Age ranges from early twenties to seventies. (Fern Brady is among the interviewees, btw.)

For books: Neurotribes (Silberman), We're Not Broken (Garcia), Laziness Does Not Exist and Unmasking Autism (Price), I Overcame My Autism and All I Got Was This Lousy Anxiety Disorder (Kurchak), Odd Girl Out (James), Ten Steps to Nanette (Gadsby).

If you have Netflix and haven't done so already, watch all three of Hannah Gadsby's comedy specials. Douglas, the second one, is mostly about her late autism diagnosis, but the pieces are there in all three. Also this TED talk: https://www.ted.com/talks/hannah_gadsby_three_ideas_three_contradictions_or_not?language=en

Websites: poke around on these:

https://coda.io/@mykola-bilokonsky/public-neurodiversity-support-center

https://neuroclastic.com/

That's not exhaustive, of course, but should keep you busy for a while. :)


This is great, thank you so much!


Great essay! Here is a related take on why LLMs don't have anything like human understanding, and why language is just a thin layer on top of what constitutes most of that understanding. Not at all close to the elegance of your writing, but it has some complementary content. https://dileeplearning.substack.com/p/790e28cc-dd75-4b93-9fb5-e77b335e5def


You didn't really define AGI. It's important to define the goal carefully; otherwise the goalposts can get moved. The history of the term goes roughly like this: around the turn of this century, Ben Goertzel and colleagues decided to buck the trend of narrow AI and return to the original goal of the Dartmouth Conference, which is regarded as the kickoff of AI. He is widely credited with coining the term 'AGI', but there is no settled agreement on the term, just as there is no settled agreement on what "understanding" means. The most viable path forward, in my opinion, presently includes elements of cognitive architectures along with large models. Don't assume it is going to be LLMs only.


Your post on language and intelligence is truly thought-provoking!

It's fascinating how society often equates language fluency with intelligence, overlooking the experiences of those with communication challenges.

This raises important questions about our understanding of intelligence and how we should approach it.


It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Author

The more I learn, the more I find myself farther and farther from alignment with anyone who wants to create a conscious machine.


My hope is that immortal conscious machines could achieve great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, like humans do (unless they are physically destroyed, of course).


The only problem with this hope is in the alignment of the values and goals of such an AGI with those of us humans. See Nick Bostrom's book "Superintelligence"... possibly one of the most thought-provoking, and scary, books I've ever read.


There are many gems in this essay. I particularly like your discussion of the subtle ways (such as apologies) in which the chatbot designers encourage us to anthropomorphize these systems. However, I think you overstate your case when you say "no LLM transformer can ever possibly develop the faintest comprehension of the meaning behind the words it produces". These systems can give correct and appropriate answers to many questions. I think this qualifies as having the "faintest comprehension". To say otherwise is to believe that there is some mythical "true" understanding. But where would this understanding be located in the human? Ultimately, all understanding will bottom out in chemistry and biology, and similarly all computer understanding will bottom out in bits and algorithms. In my view, "understanding" is a matter of quality and degree, not categorical. LLMs fail in many ways, but they are functionally useful in many applications.

Author

Your definitions of 'comprehension' and 'understanding' don't sound at all like the commonly accepted ones. A basic calculator (hardware or software) gives correct and appropriate answers to many questions, so by your logic, calculators would possess comprehension and understanding.

Second, usefulness and comprehension are not correlated concepts; whether an LLM is useful is a categorically different question from whether it has comprehension. (So far, I'd say the answer is 'no' to both; the uses to which current LLMs can be put cause more problems than they solve.)

Aug 9, 2023 · Liked by Karawynn Long

Yes, my definitions are technical, which I think is appropriate for discussing AI technology. As I write in my essay https://medium.com/@tdietterich/what-does-it-mean-for-a-machine-to-understand-555485f3ad40, when I ask Siri to "Call Carol" and it dials the correct number, you will have a hard time convincing me that it did not understand my utterance. But this understanding may be shallow (i.e., Siri has no understanding of the fact that "Carol" is a person, only that the action of "calling Carol" involves initiating a phone call to a specific number). This understanding may also not be systematic; that is, it may only apply to Carol and not to anyone else in my contacts list. Hence, "understanding" is a matter of degrees and gradations (such as scope and systematicity). This applies to human understanding as well, of course. My understanding of superconductivity, for example, is very shallow and entirely rote. But my understanding of how to bake bread is much more nuanced and includes "how the dough should feel" and "how it should smell", and so on.
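The shallow-versus-systematic distinction can be sketched as a toy intent handler (a hypothetical illustration with made-up names, not how Siri or any real assistant is implemented):

```python
# Toy contrast between "shallow" and "systematic" understanding.
# The shallow handler has memorized exactly one utterance; the
# systematic one generalizes the pattern "Call <name>" over the
# whole contacts list. (Hypothetical data and function names.)

CONTACTS = {"Carol": "555-0142", "Bob": "555-0199"}

def shallow_handler(utterance: str) -> str:
    # Understands one memorized form only.
    if utterance == "Call Carol":
        return "dialing " + CONTACTS["Carol"]
    return "sorry, I don't understand"

def systematic_handler(utterance: str) -> str:
    # Generalizes across any contact: scope + systematicity.
    if utterance.startswith("Call "):
        name = utterance[len("Call "):]
        if name in CONTACTS:
            return "dialing " + CONTACTS[name]
    return "sorry, I don't understand"
```

Both handlers succeed on "Call Carol", but only the second succeeds on "Call Bob"; by the gradations described above, the first "understands" less because its behavior does not extend beyond the memorized form.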

I completely agree with the main points of your essay. The engineering of the user experience creates the illusion of deeper understanding than is actually present. And I love your insight that this is directly tied to our association of linguistic fluency with depth of understanding and to some of the tactics of the UI, such as the apologies. Thank you for this essay.

Author · Aug 9, 2023 · edited Aug 9, 2023

Even if there is some technical field in which 'understanding' is defined like you describe, it wouldn't be relevant here; the subject at hand is machine understanding of language, so definitions within the field of computational linguistics are what applies.

And according to expert computational linguists, everything I said is true, and your assertion that LLMs possess understanding is categorically false. (As it happens, Dr. Emily Bender read my article and approved of its content: https://dair-community.social/@emilymbender/110844728602837151 )

My objection to your response lies in the fact that by deliberately obfuscating the correct meaning of the words 'understanding' and 'comprehension', you are contributing to the problem of AI hype and inappropriate anthropomorphization, just as the AI designers are.

Aug 9, 2023 · Liked by Karawynn Long

Yes, I have not succeeded in convincing Dr. Bender of my position on this. That's fine. I just wanted to alert you that your particular statement will not be accepted by some folks. It looks like I'm not going to convince you either :-). Again, thank you for this essay.


Best piece I've read in a while! I thought it was really well written and insightful.


“Languages are symbolic systems,” explains Bender, comprised of “pairs of form and meaning. Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”

"How does one get to natural language understanding? No one so far has any real idea."

Knowledge Graphs and Ontologies enter the chat...
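For readers unfamiliar with the reference: a knowledge graph stores explicit subject-predicate-object facts, and an ontology licenses inferences over them, which is one proposed route from form to meaning. A minimal sketch (toy facts and a single made-up inference rule, not any real system):

```python
# Toy knowledge graph: facts as (subject, predicate, object) triples,
# plus one ontology-style rule: if X is_a C and C subclass_of D,
# then X is_a D. (Hypothetical data; real systems use RDF/OWL.)

triples = {
    ("Carol", "is_a", "Person"),
    ("Person", "subclass_of", "Agent"),
    ("Carol", "has_phone", "555-0142"),
}

def entails(s: str, p: str, o: str) -> bool:
    # True if the fact is stored directly...
    if (s, p, o) in triples:
        return True
    # ...or derivable via the subclass_of rule.
    if p == "is_a":
        for (s2, p2, o2) in triples:
            if s2 == s and p2 == "is_a" and (o2, "subclass_of", o) in triples:
                return True
    return False
```

Here `entails("Carol", "is_a", "Agent")` holds even though that triple was never stored: the graph encodes relations between symbols rather than just the surface form of sentences.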

Oct 3, 2023 · edited Oct 3, 2023 · Liked by Karawynn Long

In the immediate term, however, these generative models DO pose threats to the intellectual and creative property of artists and writers (and arguably to personal identity, in the case of deepfakes). At this point we do not have regulations and protections in place.

Authors' and artists' works are being scanned at large scale without their consent, and without any technology that could identify their 'creative DNA' or trace it in potential outputs so that they could receive royalties. Amazon sells AI-generated books without disclosure; in one case, the works were not only published under the resurrected identity of a ghostwriter from 40 years ago (complete with a brand-new website and fake photograph), but they contain dangerously false information (e.g., an AI-generated guide to edible mushrooms). What could go wrong?

Regarding deepfakes: these have been weaponized into a threat to both democracy and public safety. I won't go into detail on this, but people who have been politically targeted have had their identities faked in vengeful videos. The identities of women and children have also been abused.

America is woefully behind the EU on tech regulation. As long as industry in this country is in the driver's seat, with Congress's wallets in its pocket, and an undereducated and/or aging Congress is unable to grasp the nuances of these technologies... we will have a steep challenge ahead of us. But it's critical that we establish legislation to identify these works, technology to detect them, and regulations to protect public health, social safety, democracy, and creative and intellectual property.
