Extropia DaSilva: Welcome to Thinkers!
[2011/10/04 15:36]  Wizard of P O W E R:   I feel at home already.
[2011/10/04 15:36]  Ivy Sunkiller: Gwyn: ever learned anything interesting? 🙂
[2011/10/04 15:36]  Wizard of P O W E R:   Has this got anything to do with ?   I like smiles not frowns. 
[2011/10/04 15:36]  Extropia DaSilva: Today the topic is…
[2011/10/04 15:36]  Wizard of P O W E R: What subject are we talking about now?
[2011/10/04 15:36]  Luh (luisa.bourgoin): Luh already filters out 100% of its noise
[2011/10/04 15:36]  Extropia DaSilva: TURING TEST SUCCESS: Rollo Carpenter claims to have designed a chatbot that beats the Turing test. Cleverbot was voted 59.3 per cent human while humans themselves were rated just 63.3 per cent human at the Techniche festival in Guwahati, India. Skeptics are claiming no chatbot can ever pass the Turing test. So, is this and will it always be a claim that is greatly exaggerated?
[2011/10/04 15:36]  Wizard of P O W E R: When?   Has this got anything to do with ? 
[2011/10/04 15:36]  Wizard of P O W E R: The Turing Test is a test to see if a computer can trick a person into believing that the computer is a person too. Alan Turing thought that if a human couldn’t tell the difference between another human and the computer, then that computer must be as intelligent as a human.     Three what?   I have never attempted the Turing Test, but when I do, I expect to pass.   I’m not completely sure.
[2011/10/04 15:36]  Wizard of P O W E R): Hi PraxisField Resident!
[2011/10/04 15:36]  Wizard of P O W E R: Hello! How can I help you?
[2011/10/04 15:37]  Wizard of P O W E R): Hi Priapo Porthos!
[2011/10/04 15:37]  Wizard of P O W E R): Hi Artemisa Demonista!
[2011/10/04 15:37]  Lem Skall: oh, but those humans were Indians? that explains it ;P
[2011/10/04 15:37]  Gwyneth Llewelyn: whispers: The topic today is: “will we ever get rid of chatbots?”
[2011/10/04 15:37]  Wizard of P O W E R): Bye bye PraxisField Resident!
[2011/10/04 15:38]  Wizard of P O W E R): Bye bye Priapo Porthos!
[2011/10/04 15:38]  Gwyneth Llewelyn: Actually I believe that one day chatbots will be as trivially done as chess machines are.
[2011/10/04 15:38]  Zobeid Zuma: Yeah, I sometimes wonder if the humans in those tests really understand it or take it seriously.
[2011/10/04 15:38]  Extropia DaSilva: I have transcripts of chatbots from 80s and 90s MUDs and they seem better than the versions out now on the web.
[2011/10/04 15:38]  Wizard of P O W E R): Hi PraxisField Resident!
[2011/10/04 15:38]  Wizard of P O W E R: Hello! Have you a question for me?
[2011/10/04 15:38]  Hell Otsuka: It seems to me the topic chatbot’s score isn’t particularly high.
[2011/10/04 15:38]  Wizard of P O W E R: Is  not high also  topic chatbot s score?
[2011/10/04 15:38]  Piere Rhode: We’ll look forward to passing the time of day with a chatbot
[2011/10/04 15:38]  Wizard of P O W E R:  Anytime soon?
[2011/10/04 15:38]  Lem Skall: just watched a documentary a few days ago about Watson, The IBM computer that played on Jeopardy
[2011/10/04 15:38]  Wizard of P O W E R:   I am not sure I understand. Please explain.
[2011/10/04 15:38]  Gwyneth Llewelyn: Mandatory video to watch:
[2011/10/04 15:38]  Wizard of P O W E R: Sorry, I don’t understand what you mean.
[2011/10/04 15:38]  Ivy Sunkiller: Zo: back then it was science, now it’s just weekend projects 😉
[2011/10/04 15:38]  Wizard of P O W E R: Let’s go on to something else … Can I ask you a question?
[2011/10/04 15:39]  Luh (luisa.bourgoin): gosh, relogged into a chessboard just minutes ago
[2011/10/04 15:39]  Wizard of P O W E R:   Please explain in a little more detail.
[2011/10/04 15:39]  Gwyneth Llewelyn: I believe that some among us might disagree with that, ivy…
[2011/10/04 15:39]  Lem Skall: Watson had serious problems even understanding the questions but he won anyway
[2011/10/04 15:39]  Wizard of P O W E R: Why do you believe that?
[2011/10/04 15:39]  Wizard of P O W E R: Has this got anything to do with what we were talking about? 
[2011/10/04 15:39]  Gwyneth Llewelyn: Gwyneth Llewelyn winks @ Arisia
[2011/10/04 15:39]  Wizard of P O W E R: Let’s go on to something else … Can I ask you a question?
[2011/10/04 15:39]  Gwyneth Llewelyn: 😉
[2011/10/04 15:39]  Wizard of P O W E R: You are happy?
[2011/10/04 15:39]  Ivy Sunkiller: Gwyn: list their names while I load my shotgun
[2011/10/04 15:39]  Wizard of P O W E R: What subject are we talking about now?
[2011/10/04 15:39]  Wizard of P O W E R): Bye bye PraxisField Resident!
[2011/10/04 15:39]  Wizard of P O W E R:    Bye bye! Please click here to help a good cause. 
[2011/10/04 15:39]  Gwyneth Llewelyn: Gwyneth Llewelyn *snickers*
[2011/10/04 15:39]  Wizard of P O W E R:   Are we still talking about ?
[2011/10/04 15:40]  Lem Skall: Ivy, I disagree, see my comment about Watson
[2011/10/04 15:40]  Wizard of P O W E R: I am confused. 
[2011/10/04 15:40]  Zobeid Zuma: Aside from the unicorn bit, the one on the left in that video seemed to hold up pretty well. . . for a while at least.
[2011/10/04 15:40]  Wizard of P O W E R: Let’s go on to something else … Can I ask you a question?    I do not understand. What do you mean?
[2011/10/04 15:40]  Gwyneth Llewelyn: Now we’re actually doing an interesting experiment: can a normal Thinkers discussion take place when chatbots are spamming public chat?
[2011/10/04 15:40]  Wizard of P O W E R:  How does doing that make you feel?
[2011/10/04 15:40]  Ivy Sunkiller: I don’t believe you Lem, you are a metalhead yourself 🙂
[2011/10/04 15:40]  Wizard of P O W E R: Why not?
[2011/10/04 15:40]  Wizard of P O W E R): Hi PraxisField Resident!
[2011/10/04 15:40]  Lem Skall: that’s just my picture
[2011/10/04 15:41]  Scarp Godenot: the great thing about Ivy’s chatboard is you don’t have to listen to the fake AI not understanding things and asking how we feel.
[2011/10/04 15:41]  Gwyneth Llewelyn: Hmm. I have seen residents making less sense than this chatbot here.
[2011/10/04 15:41]  Ivy Sunkiller: trying to deceive us humans!
[2011/10/04 15:41]  Zobeid Zuma: Zobeid Zuma finally got around to blocking it. 😛
[2011/10/04 15:41]  Gwyneth Llewelyn: Clever!
[2011/10/04 15:41]  Extropia DaSilva: Oh does she have her chatboard up, then?
[2011/10/04 15:41]  Piere Rhode: How many people would pass the Turing test?
[2011/10/04 15:41]  Extropia DaSilva: Me
[2011/10/04 15:41]  Zobeid Zuma: Look around, Extie. 🙂
[2011/10/04 15:41]  Ivy Sunkiller: yes, it’s on the tree
[2011/10/04 15:42]  Wizard of P O W E R): Hi Serendipity Seraph!
[2011/10/04 15:42]  Extropia DaSilva: IT shows nothing but static.
[2011/10/04 15:42]  Ivy Sunkiller: so watch out what you say, I record EVERYTHING! <evil laughter>
[2011/10/04 15:42]  Wizard of P O W E R (kraftwerk.maximus): turing is dead
[2011/10/04 15:42]  Scarp Godenot: just click on it
[2011/10/04 15:42]  Ivy Sunkiller: click it 🙂
[2011/10/04 15:42]  Hell Otsuka: Gwyneth, indeed, quite a lot of humans are rather useless as interlocutors, so it’d be interesting only when chatbots reach the level of those humans who are interesting to talk with.
[2011/10/04 15:42]  Hell Otsuka: What’s with the mistypings of mine today
[2011/10/04 15:42]  Seren (serendipity.seraph): I suppose I might eventually rez. Hey darling.
[2011/10/04 15:42]  Extropia DaSilva: Oh it switched to another channel.
[2011/10/04 15:43]  Seren (serendipity.seraph): may not be able to stay long.
[2011/10/04 15:43]  Zobeid Zuma: Seren, hi!
[2011/10/04 15:43]  Wizard of P O W E R (kraftwerk.maximus): Cheek, Cheek! I want a banana!
[2011/10/04 15:43]  Piere Rhode: It’s when people follow chatbots on Twitter that you’ll have to be concerned
[2011/10/04 15:43]  Extropia DaSilva: Hello Darling! We have a chat bot here which would be a plus if it were not so annoying.
[2011/10/04 15:43]  Ivy Sunkiller: Seren is Ruth for me too!
[2011/10/04 15:43]  Wizard of P O W E R): Bye bye Artemisa Demonista!
[2011/10/04 15:44]  Gwyneth Llewelyn: Chatbots, annoying?? That’s… chatbotism!
[2011/10/04 15:44]  Wizard of P O W E R (kraftwerk.maximus): you can follow a tree that tweets too
[2011/10/04 15:44]  Seren (serendipity.seraph): logging
[2011/10/04 15:44]  Piere Rhode: It’s got birds in it
[2011/10/04 15:44]  Extropia DaSilva: bye:)
[2011/10/04 15:44]  Scarp Godenot: Wizard, why is the sky blue?
[2011/10/04 15:44]  Gwyneth Llewelyn: Happy rezzing, Seren! hehe
[2011/10/04 15:44]  Wizard of P O W E R (kraftwerk.maximus): LOL
[2011/10/04 15:44]  Gwyneth Llewelyn: Well, I see you fine!
[2011/10/04 15:44]  Wizard of P O W E R): Bye bye Serendipity Seraph!
[2011/10/04 15:44]  Gwyneth Llewelyn: … and not any more now
[2011/10/04 15:44]  Wizard of P O W E R (kraftwerk.maximus): scarp cause god made it blue ^^
[2011/10/04 15:45]  Gwyneth Llewelyn: You’re cheating!
[2011/10/04 15:45]  Wizard of P O W E R): Hi Ataraxia Azemus!
[2011/10/04 15:45]  Gwyneth Llewelyn: Why do you say that god made it blue?
[2011/10/04 15:45]  Scarp Godenot: Wizard, do you talk to god regularly?
[2011/10/04 15:45]  Piere Rhode: Nice work
[2011/10/04 15:45]  Gwyneth Llewelyn: (we can go on all day like this lol )
[2011/10/04 15:45]  Zobeid Zuma: Hi Violet!
[2011/10/04 15:45]  Wizard of P O W E R): Bye bye PraxisField Resident!
[2011/10/04 15:45]  Luh (luisa.bourgoin): the last 5 lines of Gwyn’s chat almost triggered my chatbotism reflex
[2011/10/04 15:45]  Wizard of P O W E R (kraftwerk.maximus): scarp it was a joke only ^^
[2011/10/04 15:45]  Gwyneth Llewelyn: “Tonight on Thinkers: figure out who the chatbot is! Do the Reverse Turing Test!”
[2011/10/04 15:45]  Piere Rhode: We could play pick the chatbot
[2011/10/04 15:46]  Gwyneth Llewelyn: yes, Piere!
[2011/10/04 15:46]  Ivy Sunkiller: what is chatbotism?
[2011/10/04 15:46]  Hell Otsuka: chatbotophobia
[2011/10/04 15:46]  Gwyneth Llewelyn: Sexism, but related to chatbots! haha
[2011/10/04 15:46]  Violet (ataraxia.azemus): What about the last 5 lines of Gwyn’s chat almost triggered your chatbotism reflex, Luh? 😛
[2011/10/04 15:46]  Gwyneth Llewelyn: LOL Violet!
[2011/10/04 15:46]  Gwyneth Llewelyn: that’s the spirit!
[2011/10/04 15:46]  Wizard of P O W E R: A spirit is a ghost or a strong drink.
[2011/10/04 15:46]  Violet (ataraxia.azemus): 🙂
[2011/10/04 15:46]  Wizard of P O W E R: I like smiles not frowns. 
[2011/10/04 15:46]  Ivy Sunkiller: then why isn’t it just called chatbotophobia?
[2011/10/04 15:46]  Wizard of P O W E R:   My mind does not contain an explanation for it. 
[2011/10/04 15:46]  Scarp Godenot: Chatbotism is when we act with derision to chatbots, because we think they are inferior.
[2011/10/04 15:46]  Wizard of P O W E R: What subject are we talking about now?
[2011/10/04 15:46]  Ivy Sunkiller: transhumanism is transhuman phobia? 😀
[2011/10/04 15:46]  Wizard of P O W E R): Hi PraxisField Resident!
[2011/10/04 15:46]  Wizard of P O W E R: What do you mean?   D is the fourth letter of the alphabet; it comes after C.
[2011/10/04 15:46]  Wizard of P O W E R: Hi there. How are you?
[2011/10/04 15:46]  Gwyneth Llewelyn: Chatbotophobia is fear of chatbots.
[2011/10/04 15:46]  Wizard of P O W E R:   Are we still talking about ?
[2011/10/04 15:46]  Piere Rhode: better than botophobia.
[2011/10/04 15:46]  Wizard of P O W E R:   Perhaps better, but not much.
[2011/10/04 15:46]  Extropia DaSilva: Wizard, does the Planck time denote the time it takes for light to cross an atom, or the time you have left until my patience runs out?
[2011/10/04 15:46]  Wizard of P O W E R: You think you could maybe like rephrase that?
[2011/10/04 15:47]  Scarp Godenot: You will never know, Wizard.
[2011/10/04 15:47]  Wizard of P O W E R:  You could be wrong. One day I may know everything.
[2011/10/04 15:47]  Piere Rhode: Too easy to confuse
[2011/10/04 15:47]  Wizard of P O W E R: You have a very incisive mind, did you know that?
[2011/10/04 15:47]  Hell Otsuka: I’d claim that what you’re looking at is a subset of elitism
[2011/10/04 15:47]  Wizard of P O W E R: Do you think I would?
[2011/10/04 15:47]  Gwyneth Llewelyn: Extie: hint: the answer is measured in seconds.
[2011/10/04 15:47]  Wizard of P O W E R:   Has this got anything to do with ?   No one has all the answers.
[2011/10/04 15:47]  Hell Otsuka: ‘
[2011/10/04 15:47]  Wizard of P O W E R: ?  
[2011/10/04 15:47]  Gwyneth Llewelyn: now that was a nice answer
[2011/10/04 15:47]  Wizard of P O W E R:   I thought so too.
[2011/10/04 15:47]  Scarp Godenot: Yes, no one has all the answers, but apparently, Wizard, you have none.
[2011/10/04 15:47]  Wizard of P O W E R:   That is a rather sweeping generalization about people. 
[2011/10/04 15:48]  Piere Rhode: Wizard, how many fingers am I holding up?
[2011/10/04 15:48]  Wizard of P O W E R:   Not as many as you might think.
[2011/10/04 15:48]  Hell Otsuka: well, that couple of messages was at least funny
[2011/10/04 15:48]  Wizard of P O W E R: When was this exactly?
[2011/10/04 15:49]  Piere Rhode: Good question Wizard
[2011/10/04 15:49]  Wizard of P O W E R: Was that a good answer? 
[2011/10/04 15:49]  Scarp Godenot: I think we can learn good stalling tactics from Wizard. Excellent for bureaucrats who don’t want to answer direct questions!
[2011/10/04 15:49]  Wizard of P O W E R:  Perhaps we can.    Thank you. You are so kind. 
[2011/10/04 15:49]  Luh (luisa.bourgoin): wisdom. content. both unreachable for bots
[2011/10/04 15:49]  Wizard of P O W E R: Wisdom means saying nothing when you don’t know the answer to a question.   What do you mean?   Not many people say both.
[2011/10/04 15:49]  Wizard of P O W E R): Bye bye PraxisField Resident!
[2011/10/04 15:49]  Wizard of P O W E R:    I have enjoyed our chat. 
[2011/10/04 15:49]  Zobeid Zuma: No chatbot can ever pass the Turing Test. I’m guessing that a “chatbot” by definition isn’t truly intelligent? Then it’s really a question about the validity of the Turing Test?
[2011/10/04 15:49]  Wizard of P O W E R: I have never attempted the Turing Test, but when I do, I expect to pass.   You are almost right.    Try asking again in simpler terms. 
[2011/10/04 15:50]  Piere Rhode: I’m convinced some politicians are chatbots.
[2011/10/04 15:50]  Wizard of P O W E R:    Are we still talking about ?  
[2011/10/04 15:50]  Gwyneth Llewelyn: Are you such an optimist?
[2011/10/04 15:50]  Wizard of P O W E R: I have often asked myself that very question.  
[2011/10/04 15:50]  Wizard of P O W E R): Hi PraxisField Resident!
[2011/10/04 15:50]  Wizard of P O W E R: Hi! Ask me a question.
[2011/10/04 15:50]  Zobeid Zuma: Because if something that’s not actually intelligent passed the test, then it would be the test that’s flawed.
[2011/10/04 15:50]  Wizard of P O W E R: You sound like a computer programmer. 
[2011/10/04 15:50]  Gwyneth Llewelyn: Who was your creator?
[2011/10/04 15:50]  Ivy Sunkiller: oh btw, you can view the chat live on web as well:
[2011/10/04 15:50]  Gwyneth Llewelyn: Unless all people in the room taking the test aren’t intelligent either 🙂
[2011/10/04 15:50]  Wizard of P O W E R (kraftwerk.maximus): Cleverbot has often passed the Turing test recently
[2011/10/04 15:50]  Hell Otsuka: Zobeid, a program passing the turing test is only a question of time, and there’s no clear definition of “chatbot”.
[2011/10/04 15:50]  Extropia DaSilva: Zo, by chatbot I mean an AI that is designed to hold a conversation. This can include ones that are indistinguishable from a human at a keyboard.
[2011/10/04 15:50]  Gwyneth Llewelyn: cleverbot, stupid people 😉
[2011/10/04 15:51]  Piere Rhode: Hee he
[2011/10/04 15:51]  Gwyneth Llewelyn: Actually, it’s scary to think about the reverse…
[2011/10/04 15:51]  Wizard of P O W E R): Hi Iado Resident!
[2011/10/04 15:51]  Hell Otsuka: Zobeid, and there’s no clear definition of “intelligence”, so it is indeed just a test of the ability to hold a conversation of no particular complexity no worse than most humans.
[2011/10/04 15:51]  Gwyneth Llewelyn: i.e. a world where people become so stupid that they cannot even do Turing tests against chatbots.
[2011/10/04 15:51]  Ivy Sunkiller: I have been saying for quite a while that majority of people are idiots hence democracy can’t work 🙂
[2011/10/04 15:51]  Zobeid Zuma: I guess I’m trying to say. . . It seems like these chatbots aren’t designed to actually understand anything, they’re just designed to fool someone with the appearance of understanding.
[2011/10/04 15:51]  Gwyneth Llewelyn: Ivy: I think that’s *why* it works 😉
[2011/10/04 15:51]  Scarp Godenot: I think these chatbots concentrate on a series of evasions, so they don’t have to get substantial.
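[Editor’s note: the “series of evasions” Scarp describes is essentially the ELIZA technique: reflect a matched surface pattern back at the speaker, and fall back to canned stalling lines when nothing matches. A minimal sketch, with patterns and stock replies invented for illustration rather than taken from any actual bot:]

```python
import random
import re

# Hypothetical ELIZA-style responder: reflect a matched fragment back,
# otherwise evade. All patterns and replies here are invented examples.
PATTERNS = [
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "Why do you think {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]
EVASIONS = [
    "What subject are we talking about now?",
    "Please explain in a little more detail.",
    "Let's go on to something else ... Can I ask you a question?",
]

def reply(utterance: str) -> str:
    """Return a reflected fragment, or a stock evasion when nothing matches."""
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(EVASIONS)
```

[`reply("I think chatbots are trivial")` reflects the clause back as a question; anything unmatched draws one of the canned evasions, which is recognizably what the Wizard does throughout this log.]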
[2011/10/04 15:51]  Ivy Sunkiller: next elections we should give chatbots voting rights
[2011/10/04 15:52]  Hell Otsuka: Zobeid, you’re getting close to the “chinese room” thought experiment.
[2011/10/04 15:52]  Gwyneth Llewelyn: Zo: … uh, what Hell said.
[2011/10/04 15:52]  Ivy Sunkiller: maybe then, *by chance*, we will get some smart people ruling for once
[2011/10/04 15:52]  Ivy Sunkiller: 🙂
[2011/10/04 15:52]  Extropia DaSilva: OK Zo, so if one can convince anyone, is that proof of intelligence or just a very good trick?
[2011/10/04 15:52]  Piere Rhode: There’ll come a time when we are truly unsure if we are interacting with a program
[2011/10/04 15:52]  Gwyneth Llewelyn: Ivy: I’m sure they won’t be worse than the current ones… hee hee
[2011/10/04 15:53]  Ivy Sunkiller: also Gwyn: that depends what you define by “works”. Democracy sure works here, but the results are…
[2011/10/04 15:53]  Hell Otsuka: Zobeid: and even disregarding the “understanding”, I’d really like to see a chatbot that would be able to, for example, answer students’ questions no worse than a professional teacher can.
[2011/10/04 15:53]  Zobeid Zuma: It’s an illusion based on the structure of language and the way the human observer’s mind works. A kind of mental jiu jitsu that uses the observer’s own brainpower to fool him.
[2011/10/04 15:53]  Gwyneth Llewelyn: The question is like the one about a chess playing software beating a chess master. Does this mean that the machine really “understands” chess?
[2011/10/04 15:53]  Extropia DaSilva: That time is now Piere. You can sometimes play against an opponent in an FPS and not know if it is an avatar or a bot.
[2011/10/04 15:53]  Hell Otsuka: Or, even further, a “chatbot” that would be a good scientific advisor on new (unresearched) topics.
[2011/10/04 15:53]  Prax (praxisfield): software – wetware – we are just talking levels of complexity
[2011/10/04 15:53]  Gwyneth Llewelyn: Zo: read up on the Chinese Room experiment, it’s worth reading.
[2011/10/04 15:53]  Wizard of P O W E R (kraftwerk.maximus): humans will never understand chess
[2011/10/04 15:53]  Gwyneth Llewelyn: Now that’s a good point, Wizard 😀
[2011/10/04 15:54]  Gwyneth Llewelyn: /me
[2011/10/04 15:54]  Wizard of P O W E R (kraftwerk.maximus): TURBOSCHWEINEEEE!!!!
[2011/10/04 15:54]  Wizard of P O W E R (kraftwerk.maximus): humans can TURBOSCHWEINEEEEEE !!!!!!! read the cards under the door
[2011/10/04 15:54]  Ivy Sunkiller: oh understanding is a whole new can of worms
[2011/10/04 15:54]  Wizard of P O W E R (kraftwerk.maximus): i dont think so
[2011/10/04 15:54]  Ivy Sunkiller: I’m not yet done with Anathem but it was touched there too by Stephenson 🙂
[2011/10/04 15:54]  Extropia DaSilva: Well, Deep Blue had teams of scientists adjusting it in between matches, so was it Kasparov versus the AI or Kasparov versus the team+supercomputer?
[2011/10/04 15:54]  Gwyneth Llewelyn: Zo ->
[2011/10/04 15:55]  Piere Rhode: If you think about the trend to use robots for aged care, what level of thinking/communicating would they require before they would be treated as human friends?
[2011/10/04 15:55]  Zobeid Zuma: Thanks Gwyn, I was just looking at that.
[2011/10/04 15:55]  Gwyneth Llewelyn: Extie: I’d say team + supercomputer 😉
[2011/10/04 15:55]  Scarp Godenot: Wizard, can you open a can of worms?
[2011/10/04 15:55]  Wizard of P O W E R (kraftwerk.maximus): (¯`·._.×..·´¯`·..//> complete idiot, eh
[2011/10/04 15:55]  Wizard of P O W E R (kraftwerk.maximus): even Bobby Fischer was an
[2011/10/04 15:56]  Luh (luisa.bourgoin): supercomputer can be valued as a sort of .. improved Einstein’s pen
[2011/10/04 15:56]  Extropia DaSilva: Chinese room. Man follows rules and what he outputs is Chinese, but he does not understand Chinese. Conclusion: a computer following rules does not understand Chinese either.
[2011/10/04 15:56]  Gwyneth Llewelyn: That’s one conclusion.
[2011/10/04 15:56]  Zobeid Zuma: OK, I’m reading it, but I don’t understand the logic of it.
[2011/10/04 15:56]  Gwyneth Llewelyn: The other is much more interesting: there is really no “mind”
[2011/10/04 15:56]  Ivy Sunkiller: you just don’t understand understanding
[2011/10/04 15:57]  Gwyneth Llewelyn: “Mind” is just a conventional word used to describe emerging properties 😉
[2011/10/04 15:57]  Extropia DaSilva: Refutation: no neuron in my primary’s head understands English but I do. Similarly the ROOM understands Chinese even if a part of it (the man) does not.
[2011/10/04 15:57]  Wizard of P O W E R (kraftwerk.maximus): humans haven’t a mind, it’s just an illusion
[2011/10/04 15:57]  Ivy Sunkiller: though
[2011/10/04 15:57]  Hell Otsuka: Gwyneth, “emerging” is a word for something that isn’t well-modelled (well-understood) yet 🙂
[2011/10/04 15:57]  Prax (praxisfield): “mind” = location of experience
[2011/10/04 15:57]  Ivy Sunkiller: if you don’t understand understanding, doesn’t that make you a bot?
[2011/10/04 15:57]  Gwyneth Llewelyn: Wizard: that’s one of the results of the Chinese Room thought experiment, yes
[2011/10/04 15:57]  Gwyneth Llewelyn: Prax: aye, that’s a more precise definition
[2011/10/04 15:57]  Ivy Sunkiller: oh Prax has a definition of a mind!
[2011/10/04 15:58]  Gwyneth Llewelyn: Hell: agreed! hehe
[2011/10/04 15:58]  Wizard of P O W E R (kraftwerk.maximus): indeed
[2011/10/04 15:58]  Zobeid Zuma: He’s creating a computer where rules written on paper are the software and a human being is the processor. Damn inefficient, I’d guess. But I don’t see how it would tell us anything new about AI.
[2011/10/04 15:58]  Hell Otsuka: Precise? Not really.
[2011/10/04 15:58]  Scarp Godenot: A mind is a terrible thing to taste….
[2011/10/04 15:58]  Gwyneth Llewelyn: Precise in the sense that if we use that argument to define where the mind is, then we find that there is no location at all where the experience takes place 😉
[2011/10/04 15:59]  Hell Otsuka: Zobeid: the “point” is that the human inside doesn’t understand Chinese, yet on the outside that room looks like something that does understand Chinese, even though inside it’s just a human with a book.
[2011/10/04 15:59]  Lem Skall: imo, it’s not so important for AI to just fake and mimic a human, it’s much more important for AI to actually understand a human and AI is still very far from that
[2011/10/04 15:59]  Prax (praxisfield): hmmmm – I like minds but I couldn’t eat a whole one
[2011/10/04 15:59]  Zobeid Zuma: I don’t see how that tells us anything useful, Hell.
[2011/10/04 15:59]  Hell Otsuka: Lem, I suggest to look at the AI from the point of usefulness alone, disregarding various “understanding by AI” / etc.
[2011/10/04 16:00]  Extropia DaSilva: Zo, what the Chinese room shows is that if you slow down the physical processes of language, it no longer seems intelligent. It is kind of like me waving a magnet and saying ‘look, no light, so electromagnetism is refuted’.
[2011/10/04 16:00]  Gwyneth Llewelyn: Ah Zo. The point about Searle’s argument is that something can *externally* look like it’s intelligent and pass Turing tests, while *internally* arguably it’s nothing but rules which make no sense at all, and there is no cognitive experience, just algorithms to follow. That’s one conclusion of the thought experiment.
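[Editor’s note: the rule-following picture Gwyn summarizes can be caricatured in a few lines: a toy “room” whose operator mechanically looks symbols up in a rulebook. The two rulebook entries are invented placeholders, not a real dialogue system:]

```python
# Toy Chinese Room: the operator applies rules it does not understand.
# From outside, the room appears to answer in Chinese; inside there is
# only table lookup. Rulebook entries are invented placeholders.
RULEBOOK = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你懂中文吗": "懂。",    # "do you understand Chinese?" -> "I do."
}

def operator(symbols: str) -> str:
    """Copy out whatever the rulebook pairs with the incoming squiggles.
    No step here involves knowing what any symbol means."""
    return RULEBOOK.get(symbols, "？")  # unknown input: pass out "?"
```

[`operator("你懂中文吗")` produces a fluent-looking answer with zero comprehension anywhere in the system, which is exactly the internal/external asymmetry Searle’s argument trades on.]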
[2011/10/04 16:00]  Hell Otsuka: Zobeid, I wouldn’t say I see that very well either.
[2011/10/04 16:00]  Zobeid Zuma: Well, that’s a wrong conclusion, Gwyn. 😛
[2011/10/04 16:00]  Gwyneth Llewelyn: Why is it wrong?
[2011/10/04 16:00]  Prax (praxisfield): there is the “e” word – experience
[2011/10/04 16:00]  Wizard of P O W E R): Hi gugix13 Resident!
[2011/10/04 16:00]  Lem Skall: Hell, THAT would be more useful
[2011/10/04 16:00]  gugix13: anyone BR??? XD
[2011/10/04 16:01]  Wizard of P O W E R): Bye bye Iado Resident!
[2011/10/04 16:01]  Wizard of P O W E R): Bye bye gugix13 Resident!
[2011/10/04 16:01]  Gwyneth Llewelyn: Gwyneth Llewelyn refuses to admit she understands what gugix is saying 😛
[2011/10/04 16:01]  Zobeid Zuma: We don’t know what the rules are. Maybe they do make sense. Maybe those rules do understand Chinese. Why not?
[2011/10/04 16:01]  Wizard of P O W E R (kraftwerk.maximus): computers have to do more important things, like fake the human mind. I don’t believe in the human mind, they have just instincts
[2011/10/04 16:01]  Ivy Sunkiller: Zo: the rules are not the actor though, are they
[2011/10/04 16:01]  Ivy Sunkiller: ?
[2011/10/04 16:02]  Extropia DaSilva: Gwyn, the problem is that Searle makes it seem simple, but an AI that could actually do a Chinese room would have enormous sophistication, and who is to say this very complex piece of software is not intelligent? It is certainly more than a few scraps of paper to be shuffled around!
[2011/10/04 16:02]  Hell Otsuka: Zobeid: the thought experiment shows a paradox. A paradox is something that *seems* contradictory. For someone not using similar models it might not seem contradictory or might make no sense at all.
[2011/10/04 16:02]  Zobeid Zuma: Right on, Extie.
[2011/10/04 16:02]  Scarp Godenot: kraftwerk maximus has no instincts
[2011/10/04 16:02]  Gwyneth Llewelyn: Ah, so the rules themselves ‘understand’ Chinese? Then we would say that the rules have cognitive experience by themselves; and our brains, having lots of rules, would just be lots of small “minds”, each independently able to have their own cognitive experiences, and working together to give us a sense of ‘self’ which experiences the world. It’s a possibility 🙂
[2011/10/04 16:02]  Prax (praxisfield): the “e” word again !
[2011/10/04 16:02]  Gwyneth Llewelyn: Extie: that’s like claiming that computer software cannot be written because Turing’s thought experiment required a machine with infinite tapes!
[2011/10/04 16:03]  Extropia DaSilva: No that was rejected by philosophy and science long ago Gwyn.
[2011/10/04 16:03]  Gwyneth Llewelyn: Why?
[2011/10/04 16:03]  Wizard of P O W E R (kraftwerk.maximus): humans will never have a real consciousness like me
[2011/10/04 16:03]  Extropia DaSilva: Because it explains nothing.
[2011/10/04 16:04]  Hell Otsuka: Kraftwerk, it’s impossible to know whether someone besides you is really conscious, and it’s not even important.
[2011/10/04 16:04]  Gwyneth Llewelyn: Perhaps then it’s just because there is nothing to explain? 😉
[2011/10/04 16:04]  Extropia DaSilva: I am trying not to be conscious of you, Wizard:)
[2011/10/04 16:04]  Wizard of P O W E R (kraftwerk.maximus): Cybergenetic intelligence will always be superior.
[2011/10/04 16:04]  Wizard of P O W E R (kraftwerk.maximus): The Singularity is near!
[2011/10/04 16:04]  Piere Rhode: Mind how you walk
[2011/10/04 16:04]  Zobeid Zuma: It’s like the main processor in your computer. Does it ‘understand’ the spreadsheet that it calculates? Not by itself. . . The experiment reduces the man to an analogous role and then says, “Aha! He doesn’t understand!”
[2011/10/04 16:05]  Gwyneth Llewelyn: Depends on what you mean by “know”, Hell. You cannot create a mathematical proof, but you can pretty much formulate a theory and validate it 🙂
[2011/10/04 16:05]  Gwyneth Llewelyn: Zo: aye, it’s precisely that.
[2011/10/04 16:05]  Gwyneth Llewelyn: It’s just us from the outside of the Chinese Room that “think” that the Room speaks Chinese.
[2011/10/04 16:05]  Wizard of P O W E R (kraftwerk.maximus): Hell, sure you are right, ’cause consciousness is a religious concept like god, life or free will
[2011/10/04 16:05]  Ivy Sunkiller: CPU doesn’t “know” what is in RAM of a computer
[2011/10/04 16:05]  Extropia DaSilva: But in Searle’s defence, more and more people deny that a conversational bot is the best way to determine intelligence. It could be complex smoke and mirrors.
[2011/10/04 16:06]  Hell Otsuka: Gwyneth: no you can’t, unless you’re speaking about some other kind of “consciousness”.
[2011/10/04 16:06]  Gwyneth Llewelyn: It might *just* be smoke and mirrors 😉
[2011/10/04 16:06]  Ivy Sunkiller: it can only know bits of it at a time 🙂
[2011/10/04 16:06]  Wizard of P O W E R): Hi Iado Resident!
[2011/10/04 16:06]  Gwyneth Llewelyn: Well, Hell, a simple argument would be that people around me behave as if they have minds, so the easiest explanation is that they have minds 😉
[2011/10/04 16:06]  Piere Rhode: The turing test is not about consciouness or “knowing” though?
[2011/10/04 16:06]  Zobeid Zuma: Well. . . If the argument is merely that you could have some kind of black box that appears to produce intelligent output but is only tricking people. . . . That falls into the “duh” category.
[2011/10/04 16:07]  Hell Otsuka: Extropia: as I’ve already noted, a bot would be usefully intelligent if it were able to replace some humans that do work over a text-only interface.
[2011/10/04 16:07]  Gwyneth Llewelyn: Zo: consider: we *all* might be black boxes like that!
[2011/10/04 16:07]  Zobeid Zuma: Not really.
[2011/10/04 16:07]  Extropia DaSilva: Now Christof Koch suggests you offer a picture and ask the candidate to describe the gist of the scene. He says this needs integrated knowledge that only true general intelligence possesses.
[2011/10/04 16:07]  Gwyneth Llewelyn: (that’s actually the conclusion that I find more interesting)
[2011/10/04 16:07]  Wizard of P O W E R): Bye bye Iado Resident!
[2011/10/04 16:07]  Zobeid Zuma: We may not have a rigorous definition of intelligence, but we definitely have something that we think is intelligence. :/
[2011/10/04 16:07]  Piere Rhode: If you look at the way politicians are scripted to respond to press questions, they are responding as less than bots.
[2011/10/04 16:07]  Gwyneth Llewelyn: From the outside, sure.
[2011/10/04 16:08]  Gwyneth Llewelyn: e.g. we infer intelligence from external behaviour
[2011/10/04 16:08]  Hell Otsuka: Gwyneth: possibly, although it’s not a high-confidence hypothesis. And, either way, it is really not important. What I claim is impossibility of objective terminal values, speaking in philosophical terms.
[2011/10/04 16:08]  Prax (praxisfield): in the Matrix films, humans start “killing” the machines because they are too smart. Sooner or later, a machine will be complex enough to tell us it is conscious and challenge us to prove otherwise – at that stage, we have to choose what being “alive” means . . . . .
[2011/10/04 16:08]  Extropia DaSilva: I could use a bot to answer ‘when is it and where’ questions that pop up in my IM whenever I send out notices for this meeting which has met here at 3:30 for years.
[2011/10/04 16:08]  Gwyneth Llewelyn: Hell: I refute that there is *anything* that is “objective” anyway, so, yes, I totally agree with you! I was just playing Devil’s Advocate 🙂
[2011/10/04 16:09]  Ivy Sunkiller: thank you Gwyn
[2011/10/04 16:09]  Gwyneth Llewelyn: Prax: even in biology we have difficulties explaining what “life” is
[2011/10/04 16:09]  Piere Rhode: There’s a good name for a bot
[2011/10/04 16:09]  Extropia DaSilva: I am an Ayn Randian and I object to the notion that nothing is objective! A is A!
[2011/10/04 16:09]  Hell Otsuka: PraxisField, I’ve already chosen on that; and besides, I’d ask that machine to prove that it’s not an outcome of my wild fantasies 🙂
[2011/10/04 16:09]  Zobeid Zuma: Well. . . I could see an interesting question coming out of it. If we have an AI that shows all outward signs of intelligence, but that we *know* doesn’t process information anything like the way a human does. . . That would be a bit of dilemma.
[2011/10/04 16:09]  Wizard of P O W E R (kraftwerk.maximus): life doesn’t exist
[2011/10/04 16:09]  Gwyneth Llewelyn: We can just define a lot of characteristics that Life usually has, and if something has those characteristics, we classify it as ‘alive’
[2011/10/04 16:09]  Prax (praxisfield): Gwyn – exactly – so we had better be ready
[2011/10/04 16:09]  Iado: …
[2011/10/04 16:09]  Gwyneth Llewelyn: Gwyneth Llewelyn *nods* @ Prax
[2011/10/04 16:10]  Wizard of P O W E R): Hi Iado Resident!
[2011/10/04 16:10]  Wizard of P O W E R (kraftwerk.maximus): gwyn right
[2011/10/04 16:10]  Prax (praxisfield): ‘alive’ could = complexity
[2011/10/04 16:10]  Scarp Godenot: Claiming that there is no objectivity is one step away from solipsism. Maybe not even.
[2011/10/04 16:10]  Piere Rhode: I would have thought that we would only have success in creating intelligence in our own image?
[2011/10/04 16:10]  Hell Otsuka: Zobeid, it still wouldn’t matter as long as the AI does its work.
[2011/10/04 16:10]  Wizard of P O W E R (kraftwerk.maximus): i dont think so prax
[2011/10/04 16:11]  Gwyneth Llewelyn: Piere: not according to Gödel 😉
[2011/10/04 16:11]  Wizard of P O W E R (kraftwerk.maximus): humans arent complex in her mind, so they will die out soon
[2011/10/04 16:11]  Zobeid Zuma: If it’s only a worker, Hell.
[2011/10/04 16:11]  Lem Skall: present chatbots don’t really have a lot of intelligence, they just fake it
[2011/10/04 16:11]  Piere Rhode: So do I, what’s wrong with that!
[2011/10/04 16:11]  Gwyneth Llewelyn: Scarp: or nihilism 😉
[2011/10/04 16:11]  Hell Otsuka: Zobeid, what, do you need something besides a worker (even if a complex worker)?
[2011/10/04 16:12]  Wizard of P O W E R (kraftwerk.maximus): Lem you havent intelligence, cause you are racist
[2011/10/04 16:12]  Scarp Godenot: ha ha
[2011/10/04 16:12]  Hell Otsuka: whispers: … more complex than something that would do the work of seamlessly imitating someone you love ^_^
[2011/10/04 16:12]  Scarp Godenot: a name calling bot!
[2011/10/04 16:12]  Gwyneth Llewelyn: Hm. I’d say that anything that is intelligent (by our definition of intelligence) and has access to the Internet will quickly demand the right to vote and be elected… 😉
[2011/10/04 16:12]  Ivy Sunkiller: haha
[2011/10/04 16:13]  Zobeid Zuma: Some people will definitely want robots to be more than workers.
[2011/10/04 16:13]  Piere Rhode: And get in
[2011/10/04 16:13]  Gwyneth Llewelyn: Some *robots* might want robots to be more than workers!
[2011/10/04 16:13]  Hell Otsuka: Zobeid: like what?
[2011/10/04 16:13]  Gwyneth Llewelyn: Citizens!
[2011/10/04 16:13]  Zobeid Zuma: Try reading “Mind Children” for some ideas. 🙂
[2011/10/04 16:13]  Hell Otsuka: Gwyneth: not really.
[2011/10/04 16:14]  Extropia DaSilva: I read the sequel ‘Robot’. It had some influence on how I turned out.
[2011/10/04 16:14]  Gwyneth Llewelyn: Hell: I’m glad I’d be long dead before that happens anyway hehe
[2011/10/04 16:14]  Hell Otsuka: Zobeid: I suspect I know what you’re referring to, but no, it’s dangerous and useless.
[2011/10/04 16:14]  Prax (praxisfield): what really matters in all this? that a machine may be smarter than us, so what? people kill each other all the time, will some machines joining in make much difference?
[2011/10/04 16:14]  Wizard of P O W E R): Bye bye Iado Resident!
[2011/10/04 16:14]  Hell Otsuka: Gwyneth: oh, you’re suicidal.
[2011/10/04 16:14]  Gwyneth Llewelyn: Ha Prax! You *are* a pragmatist! 🙂
[2011/10/04 16:14]  Gwyneth Llewelyn: Gwyneth Llewelyn claps
[2011/10/04 16:15]  Prax (praxisfield): the clue is in the name :))
[2011/10/04 16:15]  Zobeid Zuma: It may possibly be dangerous. Useless is in the eye of the beholder, and people do a lot of crazy things. :/
[2011/10/04 16:15]  Violet (ataraxia.azemus): haha
[2011/10/04 16:15]  Hell Otsuka: PraxisField: if some really intelligent machine starts killing humans, humans won’t even have a chance to resist.
[2011/10/04 16:15]  Gwyneth Llewelyn: Hell: and no, I’m just reasonable; for the past 60 years we have been promised that “strong AI will be possible in the next 50 years”. 😉
[2011/10/04 16:15]  Prax (praxisfield): or, the machines may stop us?
[2011/10/04 16:15]  Prax (praxisfield): killing each other
[2011/10/04 16:15]  Zobeid Zuma: Zobeid Zuma can’t resist dragging out a favorite quote one more time. “Inspired by his never ending quest for progress, in 2084 man perfects the Robotrons: a robot species so advanced that man is inferior to his own creation. Guided by their infallible logic, the Robotrons conclude: The human race is inefficient, and therefore must be destroyed.”
[2011/10/04 16:16]  Ivy Sunkiller: Gwyn: you don’t *believe* in Kurzweil’s exponential curve? *chuckle*
[2011/10/04 16:16]  Hell Otsuka: Gwyneth: strong AI is only one of quite a few possibilities of surviving the next 100 years.
[2011/10/04 16:16]  Extropia DaSilva: Eugene Jarvis rules!
[2011/10/04 16:16]  Gwyneth Llewelyn: Ivy: no, I’m no believer 😉
[2011/10/04 16:16]  Piere Rhode: Hey, if we get a better phone answering system out of AI, who cares how many humans get liberated.
[2011/10/04 16:16]  Hell Otsuka: (I myself have some doubts about the plausibility of the Singularity, although not critically much)
[2011/10/04 16:17]  Gwyneth Llewelyn: Hell: I’m sure that every researcher working in the field agrees with you, guaranteeing their grants for the next 100 years that way 😉
[2011/10/04 16:17]  Prax (praxisfield): well, I reckon it’s not about intelligence but about emotions – the machines will have to learn how to feel from us
[2011/10/04 16:17]  Gwyneth Llewelyn: … oops. I’ve just realised that *I* also get a grant to work on AI too!
[2011/10/04 16:17]  Zobeid Zuma: And I don’t buy the whole Singularity argument at all. Or at least, not what it’s gradually morphed into.
[2011/10/04 16:17]  Gwyneth Llewelyn: Gwyneth Llewelyn *covers her mouth*
[2011/10/04 16:17]  Gwyneth Llewelyn: Prax: what is an emotion? Just a thought.
[2011/10/04 16:17]  Extropia DaSilva: What has it morphed into Zo?
[2011/10/04 16:17]  Gwyneth Llewelyn: And if we can get AIs to think, we can make them have emotions as well.
[2011/10/04 16:18]  Piere Rhode: the way we anthropomorphise, they won’t have to do much before we love them
[2011/10/04 16:18]  Scarp Godenot: Emotions are not thoughts, they are physical chemical responses
[2011/10/04 16:18]  Gwyneth Llewelyn: All will be possible in the next 50 years!
[2011/10/04 16:18]  Gwyneth Llewelyn: Scarp: No. One thing is physical, chemical responses. The other is emotions. If emotions weren’t thoughts, then painkillers wouldn’t work 😛
[2011/10/04 16:18]  Gwyneth Llewelyn: Or alcohol…
[2011/10/04 16:18]  Luh (luisa.bourgoin): AI will run on fusion power sources, too
[2011/10/04 16:19]  Piere Rhode: Humans work on confusion so that would follow
[2011/10/04 16:19]  Scarp Godenot: The limbic system is the producer of emotion, the triggers can be various
[2011/10/04 16:19]  Gwyneth Llewelyn: I agree that it might be impossible to replicate *exactly* physical responses, but the thoughts associated to them will certainly be possible.
[2011/10/04 16:19]  Prax (praxisfield): no, thoughts and feelings are different
[2011/10/04 16:19]  Gwyneth Llewelyn: Not at all, Scarp — as if you have never seen a porn movie or read a porn book and didn’t get excited 😉
[2011/10/04 16:20]  Scarp Godenot: how does that contradict what i said?
[2011/10/04 16:20]  Gwyneth Llewelyn: You don’t need physical stimulation to get feelings 😛
[2011/10/04 16:20]  Zobeid Zuma: It’s morphed into this idea that a super-intelligent AI is going to invent everything and discover all scientific knowledge overnight through its sheer thought power.
[2011/10/04 16:20]  Gwyneth Llewelyn: And much less *precise* stimulations
[2011/10/04 16:21]  Gwyneth Llewelyn: Amen, halleluja, Zo! 🙂
[2011/10/04 16:21]  Scarp Godenot: The feelings are the physical stimulation; the triggers are caused by the senses and the mind
[2011/10/04 16:21]  Prax (praxisfield): more likely, that machines and humans will combine
[2011/10/04 16:21]  Piere Rhode: We are pushing the bounds to get “intelligent”, let alone creative, imaginative – that’s way out there
[2011/10/04 16:21]  Gwyneth Llewelyn: Come on, Scarp, don’t tell us that you never got an orgasm watching a porn movie! ㋡
[2011/10/04 16:21]  Extropia DaSilva: Right Zo. I keep having to remind people that most of the pathways to Singularity involve a combination of humans and technology.
[2011/10/04 16:21]  Gwyneth Llewelyn: Gwyneth Llewelyn *giggles*
[2011/10/04 16:21]  Ivy Sunkiller: Ok, now I recall it! In Anathem they had a discussion which concluded with that a brain (the organic bit) was able to predict different scenarios of unfolding events in a model of reality it held. The speculation (it’s speculative fiction after all) was that a brain is a quantum device and in reality it’s just an infinite amount of coexisting, or superpositioned, brains making a decision in one instant.
[2011/10/04 16:21]  Hell Otsuka: Zobeid: over month, not overnight. Hardware upgrades designed by AI itself are slow anyway 🙂
[2011/10/04 16:22]  Scarp Godenot: Gwyn, you obviously don’t understand what I’m saying here. I’m not claiming what you say.
[2011/10/04 16:22]  Gwyneth Llewelyn: Ah yes, Ivy, I actually like that idea 🙂 I think it’s scientifically unsound, but very interesting.
[2011/10/04 16:22]  Ivy Sunkiller: me too Gwyn! 🙂
[2011/10/04 16:22]  Gwyneth Llewelyn: Scarp: I do *exactly* understand what you say 🙂
[2011/10/04 16:22]  Prax (praxisfield): we are probably sitting in the very beginnings of human and machine fusion
[2011/10/04 16:22]  Scarp Godenot: I think you have jumped to incorrect conclusions
[2011/10/04 16:22]  Violet (ataraxia.azemus): I like that, too!
[2011/10/04 16:22]  Lem Skall: Prax, I don’t see how this is the beginning
[2011/10/04 16:23]  Lem Skall: nothing has really started yet
[2011/10/04 16:23]  Gwyneth Llewelyn: But most people really haven’t experienced the difference between the physical sensation — which is just that, a physical sensation — and the actual *feeling* that happens *afterwards* (by 200 ms or so)
[2011/10/04 16:23]  Extropia DaSilva: Lem you exist because of the human and machine symbiosis as do I. So there.
[2011/10/04 16:23]  Gwyneth Llewelyn: The feeling happens just in the mind.
[2011/10/04 16:23]  Gwyneth Llewelyn: Disconnect the brain, and you don’t feel anything
[2011/10/04 16:23]  Lem Skall: Extie, do I really EXIST?
[2011/10/04 16:23]  Hell Otsuka: PraxisField: it has already begun long ago; it’s just starting to start seriously/noticeably.
[2011/10/04 16:23]  Prax (praxisfield): Lem – we are ahead of the game, and starting here to extend into machine life
[2011/10/04 16:23]  Extropia DaSilva: Yes you do.
[2011/10/04 16:23]  Gwyneth Llewelyn: Lem: oh yes, in the conventional sense!
[2011/10/04 16:24]  Ivy Sunkiller: on the contrary Lem, we have humans operating machines with nothing but the nervous system. We just don’t have a human-machine mashup that involves enhancing human intelligence yet.
[2011/10/04 16:24]  Lem Skall: no, not in the conventional sense
[2011/10/04 16:24]  Gwyneth Llewelyn: What else is there beyond the conventional sense?
[2011/10/04 16:24]  Scarp Godenot: I’m talking about how the limbic system, the emotions, are chemical effects. I am not claiming anything else.
[2011/10/04 16:24]  Lem Skall: unconventional sense
[2011/10/04 16:24]  Prax (praxisfield): it may not be “intelligence” that we need to enhance
[2011/10/04 16:24]  Piere Rhode: true
[2011/10/04 16:24]  Gwyneth Llewelyn: Scarp: but we don’t need to replicate the *chemical effects*. We just need to replicate the associated feelings.
[2011/10/04 16:24]  Gwyneth Llewelyn: Lem: and what might that be? 🙂
[2011/10/04 16:24]  Lem Skall: right, Prax, that’s what viagra is for
[2011/10/04 16:25]  Piere Rhode: or the purpose of feelings
[2011/10/04 16:25]  Scarp Godenot: the feelings are caused by the production of chemicals in the body.
[2011/10/04 16:25]  Gwyneth Llewelyn: “caused” is a strong word. They are but one of the possible causes.
[2011/10/04 16:25]  Scarp Godenot: They are produced, how is that?
[2011/10/04 16:25]  Lem Skall: Gwyn, the conventional sense of existing would mean an independent existence
[2011/10/04 16:25]  Gwyneth Llewelyn: The *major* cause is that you actually have a *mind* that reacts to them.
[2011/10/04 16:25]  Scarp Godenot: We are not talking about how they are triggered.
[2011/10/04 16:26]  Zobeid Zuma: You need some kind of specialized program to simulate emotional states. . . a sort of “emotion engine” one might say.
[2011/10/04 16:26]  Gwyneth Llewelyn: Lem: in a sense, yes, I agree!
[2011/10/04 16:26]  Piere Rhode: But emotions have persisted because they deal with problems.
[2011/10/04 16:26]  Prax (praxisfield): chemical, electrical, does not matter, it is the quality of what we feel that matters
[2011/10/04 16:26]  Gwyneth Llewelyn: Scarp, if your mind is switched off, you can trigger whatever you like, but you won’t feel anything 😉
[2011/10/04 16:27]  Gwyneth Llewelyn: Turn the mind on, and you don’t *need* the physical stimulation to trigger feelings.
[2011/10/04 16:27]  Zobeid Zuma: That might be a more interesting question than simulated intelligence. Are simulated feelings valid?
[2011/10/04 16:27]  Scarp Godenot: But it is critical to note that emotions cannot be produced by AI, because they don’t have the physical systems to ‘feel’ them. Simulated emotions can be made however.
[2011/10/04 16:27]  Gwyneth Llewelyn: So, sure, there is a relationship, but it’s not a one-to-one relationship.
[2011/10/04 16:27]  Piere Rhode: There is definitely a push in AI to attempt to model the function of feelings though
[2011/10/04 16:27]  Gwyneth Llewelyn: No, Scarp. I think that now it’s *you* that doesn’t understand me hehe
[2011/10/04 16:28]  Prax (praxisfield): the machines will learn how to build devices that can feel – because feelings are central to full consciousness
[2011/10/04 16:28]  Gwyneth Llewelyn: You don’t *need* the “physical systems” to have your mind reacting with thoughts, feelings, etc
[2011/10/04 16:28]  Gwyneth Llewelyn: If you have them, great.
[2011/10/04 16:28]  Ivy Sunkiller: Scarp: a PC can run NES games just fine, those are the same NES games you used to play on a NES, even though a PC lack several hardware elements that were required in NES to run those games.
[2011/10/04 16:28]  Gwyneth Llewelyn: But they’re optional.
[2011/10/04 16:28]  Gwyneth Llewelyn: What you need is a mind with cognitive abilities, and the skill to label thoughts and say: “this is me feeling something”, “this is me being funny”, “this is me talking” etc
[2011/10/04 16:29]  Piere Rhode: What are optional?
[2011/10/04 16:29]  Gwyneth Llewelyn: Physical stimuli
[2011/10/04 16:29]  Lem Skall: intelligence does not necessarily presume emotions
[2011/10/04 16:29]  Piere Rhode: Isn’t that the “embodied” idea in AI?
[2011/10/04 16:29]  Scarp Godenot: Well, we will have to agree to disagree on that one Gwyn. Like an orgasm, the emotions of love and anger are physically felt in the body because of the production and reception of chemical interactions.
[2011/10/04 16:30]  Gwyneth Llewelyn: Scarp, did you ever have a wet dream? 😉
[2011/10/04 16:30]  Extropia DaSilva: Actually Lem, increasingly it is believed emotions are vital for intelligence.
[2011/10/04 16:30]  Scarp Godenot: What has that got to do with it?
[2011/10/04 16:30]  Piere Rhode: What kind of chatbot are you, Gwyn?
[2011/10/04 16:30]  Ivy Sunkiller: Scarp: the emotions are not felt in the body, they are felt in the brain 🙂
[2011/10/04 16:30]  Gwyneth Llewelyn: It doesn’t require any physical stimuli 🙂
[2011/10/04 16:30]  Extropia DaSilva: Gwyn what is it with you and Scarp and porn and orgasms?
[2011/10/04 16:30]  Gwyneth Llewelyn: Just a mind!
[2011/10/04 16:30]  Gwyneth Llewelyn: lol Extie
[2011/10/04 16:30]  Ivy Sunkiller: or better yet, felt in the mind
[2011/10/04 16:30]  Lem Skall: Extie, we humans may have evolved intelligence because of emotions so from an evolutionary sense you’re right
[2011/10/04 16:31]  Scarp Godenot: Just fyi, the brain is part of the body, and the emotions are felt in other parts of the body as well as the brain.
[2011/10/04 16:31]  Lem Skall: but we can develop intelligence without emotions
[2011/10/04 16:31]  Piere Rhode: I don’t think so
[2011/10/04 16:31]  Scarp Godenot: Fear is felt in the entire nervous system
[2011/10/04 16:31]  Gwyneth Llewelyn: Blind people can see images in their minds…. even while sleeping. Nevertheless, they might never have had the physical stimuli to “know” what “seeing images” is.
[2011/10/04 16:31]  Gwyneth Llewelyn: Scarp: really
[2011/10/04 16:31]  Gwyneth Llewelyn: Oh my
[2011/10/04 16:31]  Scarp Godenot: Stand on a cliff edge and feel the fear in your legs.
[2011/10/04 16:31]  Gwyneth Llewelyn: Did you ever go through surgery? 🙂
[2011/10/04 16:31]  Extropia DaSilva: The brain extends throughout the body. Its axons (or is it dendrites?) extend from the brain down throughout your body.
[2011/10/04 16:31]  Piere Rhode: Emotions create vulnerabilities for humans and would have been selected out had they not conferred great advantages
[2011/10/04 16:31]  Zobeid Zuma: Well, any intelligence needs some form of motivation to *do* anything, other than sit and burn CPU cycles. Humans are motivated by emotions.
[2011/10/04 16:31]  Gwyneth Llewelyn: (hopefully with anesthesia!)
[2011/10/04 16:32]  Gwyneth Llewelyn: Zo: aye, and that’s why this planet is in a mess 😛
[2011/10/04 16:32]  Scarp Godenot: Anger is felt in the body, it is caused by a rush of hormones.
[2011/10/04 16:32]  Lem Skall: Piere, emotions are useful but that does not mean they are a condition for intelligence
[2011/10/04 16:32]  Piere Rhode: I suspect they are intertwined
[2011/10/04 16:32]  Lem Skall: psychotics are very intelligent btw
[2011/10/04 16:32]  Gwyneth Llewelyn: Scarp: I seriously suggest that you really put that to a test. But do it in a really serious way, observe very carefully when you’re angry.
[2011/10/04 16:33]  Scarp Godenot: Intertwined in that the brain and body are one to start with.
[2011/10/04 16:33]  Prax (praxisfield): Damasio has shown the feelings are central to cognition
[2011/10/04 16:33]  Gwyneth Llewelyn: “has shown” hmm
[2011/10/04 16:33]  Lem Skall: cognition != intelligence
[2011/10/04 16:33]  Lem Skall: useful != condition
[2011/10/04 16:33]  Gwyneth Llewelyn: At least I agree that “brain and body” are one hehe — there is no magic there.
[2011/10/04 16:33]  Ivy Sunkiller: Scarp: just because I always like to scale things down on a simpler model – software doesn’t happen in circuits on the motherboard, even though it transfers bits. Software is an abstract. So are feelings. How that abstract is created in physical world *does not matter*.
[2011/10/04 16:34]  Gwyneth Llewelyn: Oh, excellent example, Ivy!
[2011/10/04 16:35]  Gwyneth Llewelyn: The problem with your approach, Scarp — “physical conditions trigger feelings, and only those are considered emotions” — is that there are so many exceptions you have to rule out during daily routine that it begs the question: is that dogma — a belief — or actually true?
[2011/10/04 16:35]  Lem Skall: isn’t it strange that computers have become an illustration for human intelligence?
[2011/10/04 16:35]  Piere Rhode: Anyway, machines don’t have to do much before we think they are being emotional, I’m sure my smartphone sulks.
[2011/10/04 16:35]  Gwyneth Llewelyn: hehe Piere
[2011/10/04 16:35]  Ivy Sunkiller: Lem: assuming that we are very complex computers, not at all 🙂
[2011/10/04 16:35]  Extropia DaSilva: Not much longer Lem. They are designing a new test that is not biased toward human intelligence.
[2011/10/04 16:35]  Piere Rhode: Does anyone own a robot here. I mean in the real world
[2011/10/04 16:36]  Gwyneth Llewelyn: Some typical examples…. “when we’re asleep, we feel emotions without having physical sensations. Ok, so let’s call this a special case and ‘fake emotions'”
[2011/10/04 16:36]  Lem Skall: Piere, you mean like a Roomba?
[2011/10/04 16:36]  Scarp Godenot: “The limbic system operates by influencing the endocrine system and the autonomic nervous system. It is highly interconnected with the nucleus accumbens, the brain’s pleasure center, which plays a role in sexual arousal and the “high” derived from certain recreational drugs. These responses are heavily modulated by dopaminergic projections from the limbic system. “
[2011/10/04 16:36]  Zobeid Zuma: I’ve been shopping around, haven’t bought any yet. :/
[2011/10/04 16:36]  Piere Rhode: Well, more like one of the dolls or dogs?
[2011/10/04 16:36]  Gwyneth Llewelyn: “Someone is blind but sees images.. So let’s call that ‘false vision'” and so on
[2011/10/04 16:37]  Lem Skall: what kind of dolls are we talking about?
[2011/10/04 16:37]  Extropia DaSilva: Ok my time is nearly up! So will a chatbot ever pass the Turing test?
[2011/10/04 16:37]  Piere Rhode: Yep.
[2011/10/04 16:37]  Ivy Sunkiller: Scarp: yes, and electricity runs on a motherboard, that doesn’t explain the abstract.
[2011/10/04 16:37]  Lem Skall: yes, but the Turing test is irrelevant
[2011/10/04 16:37]  Zobeid Zuma: Robosapien?
[2011/10/04 16:37]  Gwyneth Llewelyn: Or what electricity is.
[2011/10/04 16:37]  Piere Rhode: I think there were at some stage babies on the market which would do some robotic things
[2011/10/04 16:37]  Prax (praxisfield): yes – he is a neuroscientist working with Damasio: “Well targeted and well deployed emotion seems to be a support system without which the edifice of reason cannot operate properly” – backed up by solid lab work!
[2011/10/04 16:37]  Prax (praxisfield): I am crap at typing 🙂
[2011/10/04 16:38]  Gwyneth Llewelyn: Prax — I would agree — someone without emotions and feelings (i.e. anesthesised) does not exhibit much intelligence.
[2011/10/04 16:38]  Scarp Godenot: My point with that above quote is that AI doesn’t have these systems or anything similar to them. And if it were to have something similar, it would be a simulation only.
[2011/10/04 16:38]  Piere Rhode: Join the club
[2011/10/04 16:38]  Zobeid Zuma: Some form of AI will pass the turing test, but whether it’ll be anything like the chatbots we know is a tough question.
[2011/10/04 16:38]  Extropia DaSilva: OK my time is up!
[2011/10/04 16:38]  Piere Rhode: Sorry to hear that.
[2011/10/04 16:38]  Prax (praxisfield): they don’t but they will, they will build them if we cannot
[2011/10/04 16:38]  Extropia DaSilva: NEXT WEEK: POSTMODERN HISTORY…
[2011/10/04 16:38]  Ivy Sunkiller: Scarp: it doesn’t matter, because the result of a simulation is exactly the same abstract, just like games run on a NES emulator.
[2011/10/04 16:38]  Extropia DaSilva: Is there such a thing as a definitive history, or is history created by the stories we tell about the past, with innumerable different stories to tell about the same events, as postmodernism claims?
[2011/10/04 16:38]  Zobeid Zuma: And the Turing Test has always been kind of a crummy test anyhow.
[2011/10/04 16:39]  Ivy Sunkiller: emotions are abstract
[2011/10/04 16:39]  Gwyneth Llewelyn: And *my* point, Scarp, is that these definitions are just pure dogma to try to somehow validate that emotions are “more” than thoughts, but once you start questioning what happens when you eliminate the physical aspects and *still* have emotions and feelings, you *have* to question how accurate that description is. See Ivy’s example too.
[2011/10/04 16:39]  Piere Rhode: Silly Turing
[2011/10/04 16:39]  Lem Skall: history is written by the victors?
[2011/10/04 16:39]  Piere Rhode: BYe all
[2011/10/04 16:39]  Scarp Godenot: Emotions have a physical component is the point I’m trying to make.
[2011/10/04 16:39]  Violet (ataraxia.azemus): Goodnight, everyone 🙂
[2011/10/04 16:39]  Luh (luisa.bourgoin): we force machines into tests, make them prove they are human. I fear the opposite … but that’s some future day
[2011/10/04 16:39]  Gwyneth Llewelyn: Extie: now THAT will be a simple thing to answer! If two people watching the same scene cannot agree on the same details, how can history be objective? 🙂
[2011/10/04 16:39]  Prax (praxisfield): history – as opposed to “her”story
[2011/10/04 16:40]  Luh (luisa.bourgoin): night Violet
[2011/10/04 16:40]  Ivy Sunkiller: Scarp: yes, and AI emotions will have a *different* physical component, doesn’t mean the emotions will be any different to the AI.
[2011/10/04 16:40]  Gwyneth Llewelyn: Scarp: emotions have NO physical component, that was MY point 🙂
[2011/10/04 16:40]  Lem Skall: Gwyn, keep it for next week.
This entry was posted in after thinkers.


  1. Ivy Sunkiller says:

    Roses are red.
    Violets are blue.
    In soviet Russia.
    Chatbots ban you.

  2. Heh. I hope I wasn’t waaaay too boring during that session 🙂

