What strikes me most about this is the following (paraphrased) exchange:
MALE: I’m great.
FEMALE: Are you good?
MALE: I’m good. I already said that.
FEMALE: I don’t think you did.
These are two instances of the same program. Note that they started off saying exactly the same thing, and then started to converse. But what happened here? Why do they disagree? I can only imagine three explanations:
a) The Male instance “understands” that “Great” and “Good” are synonymous in this context, but the Female instance does not. Why? [And yeah, the two terms are not strictly synonyms, but still. If someone says “I feel good” it makes sense to ask “But do you feel great?” However, if someone says “I feel great” it makes no sense to ask “do you feel good?”]
b) Neither instance equates “Great” and “Good” in context. The chatbot throws in “I already said that” at random, and it happened to fit when the male said it.
c) The chatbot will sometimes “forget” part of the earlier conversation.
Not knowing anything about how they’re programmed, I really have no idea. But isn’t it neat that two identically programmed chatbots will disagree about the contents of their own conversation?