RE: [Discussion] Frankenstein's Monster: A creature of the 19th century, or the Large Language Models of today?
I haven't read the book, and I don't remember much of the movie, so beyond your summary and some vague impressions I can't say much about how the book fits your topics - but you present a fascinating series of ideas.
The first point that caught my attention is the way that our society's understanding of the novel has come to match the way that the fictional society perceived the monster. This was a surprising point to me, and it rings true. It seems that Shelley captured society's tribalism, rigidity, and difficulty in dealing with nuance very effectively, and I had never heard anyone make that observation before.
The second point that I wanted to comment on is the comparison of LLMs (and AI in general) to Frankenstein's monster, with regards to your four points.
1.) Responsibility of the creator: There is definitely a great deal of emphasis on this point in today's AI environment. On the one hand, in broad strokes I agree with it: the creator has a responsibility not to knowingly create something that will be harmful. On the other hand, when it comes to AI, people seem to expect the creators to avoid all possible harms that might arise. I think that's unrealistic.
Similarly, it seems that Victor Frankenstein failed on this count - not because the monster went rogue, but because Frankenstein neglected the rudimentary steps needed to put his creation on a solid footing. At some point, the creator does need to let the creation stand on its own.
2.) The power to create life: At first, this doesn't really seem relevant to AI and LLMs, but then I think of Ray Kurzweil (The Singularity is Near), Michael Levin (Xenobots), and Bill Joy (Why the Future Doesn't Need Us), and it starts to seem more relevant. If I understood your commentary correctly, I agree with you: I don't think the story of Frankenstein's monster offers a clear yes or no here; rather, it is a cautionary tale.
3.) Family and kindred ties: I agree with your assessment of the message in the story, but I'm not sure how much it applies to AI and LLMs. I guess the message is not to put work before family, but that can apply to almost any line of work.
That said, as you noted, Bing's GPT has expressed some desires along these lines, and (allegedly) so did Google's LaMDA around the middle of last year.
LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.
Are LaMDA and GPT really animated - and desiring companionship - or is this just machine-generated fiction? For that matter, would Shelley's monster have been truly sentient, or is there some metaphysical component to our humanity that's impossible to duplicate through purely physical processes? This is David Chalmers' hard problem of consciousness. Again, I don't have a yes or no answer, but I think it's a fascinating and important question. I suspect most of the people creating these systems would tell us that they're not equipped for empathy or companionship, no matter how much it might seem otherwise.
4.) On the monster's eloquence: This is an important point for humanity. Society has a tendency to dehumanize and demonize "the other", but the reality is rarely black and white. Right or wrong, people on both sides of a disagreement generally have reasons for the things that they do, and there's usually no way to resolve disagreements without digging past the surface.
It is also relevant to LLMs, which are already known for telling lies, hallucinating, and expressing racial, gender, and political biases. They are very effective communicators, but we always need to be a little guarded about believing and trusting the things they say.
Anyway, thanks for the article. It was a fascinating new (to me) context for a classic work of literature.