There's a terrific book, well worth scoring a copy of, by M. Lamar Keene called The Psychic Mafia.
Keene tells the tale of Camp Chesterfield, a sort of outlet shopping mall of psychics, where marks—I mean patrons—would come to have their palms read, fortunes told — and to have sex with their dead husbands.
You heard me. Psychics would dress in chiffon and cheesecloth, and, supplied with data gleaned from cold and hot readings, would enter a darkened room with flickering candlelight and proceed to mimic the spousal duties of their marks' dead husbands.
Hot readings are cheating. Cons peek in wallets, purses, and now on the Internet, and note relevant facts, such as addresses, birthdays, and various other bits of personal information. Cold readings are when the con probes the mark, trying many different lines of inquiry—"I see the letter 'M'"—which rely on the mark providing relevant feedback. "I had a pet duck when I was four named Missy?" "That's it! Missy misses you from Duck Heaven." "You can see!"
You might not believe it, but cold reading is shockingly effective. I have used it many times in practicing mentalism (mental magic), all under the guise of "scientific psychological theory." People want to believe in psychics, and they want to believe in science maybe even more.
What's funny is that even after you expose the psychic or scientist or scientific theory as a fraud, people still believe, not necessarily in the fraudster, but in the fraud. Keene dubbed this the "true-believer syndrome".
What is it that compels a person, past all reason, to believe the unbelievable? How can an otherwise sane individual become so enamored of a fantasy, an imposture, that even after it's exposed in the bright light of day he still clings to it — indeed, clings to it all the harder?… No amount of logic can shatter a faith consciously based on a lie. [Source]
Now it's strange, but all this psychic business does tie to AI. AI as "Augmented Eternity", that is. (Thanks to Victor Domin for the tip.)
According to Hossein Rahnama, AI as AE will let "you create a digital persona that can interact with people on your behalf after you're dead."
The way it's going to work is that Rahnama will scrape a dead person's digital life off the Internet—do some hot reading, that is—and store it all up as strings. Then, through the magic of AI—statistical algorithms and "if" statements—AE will spit it back out at survivors, who will provide feedback about its accuracy—cold reading. But they'll call it "contextual clues."
The contextual part was something Rahnama found useful when he started Augmented Eternity. If you're going to construct a digital self, it's not enough to know that somebody said something. You have to know the context in which it was said–was the person joking? Annoyed? Reacting to today's news? These same kinds of clues end up being crucial when piecing together a digital personality, which is why the Augmented Eternity platform takes data from multiple sources—Facebook, Twitter, messaging apps, and others—and analyzes it for context, emotional content, and semantics.
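Strip away the jargon and the machinery is not hard to picture. Below is a toy sketch in Python (the names here, DigitalPersona, respond, record_feedback, are my own invention, not anything Rahnama has published) of what "statistical algorithms and if statements" plus survivor feedback could amount to: score the stored strings against a prompt, replay the best match, and let the listener's reaction tune which strings come back next time. Cold reading with a feedback loop.

```python
# A toy "augmented eternity" loop: hypothetical names throughout, not Rahnama's API.
# The "hot reading" is the scraped corpus; the "cold reading" is the feedback weight.

from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase words with surrounding punctuation stripped."""
    return [w.strip(".,!?\"'").lower() for w in text.split()]

class DigitalPersona:
    def __init__(self, scraped_posts):
        # "Store it all up as strings," each with a feedback weight starting at 1.0.
        self.memory = [{"text": p, "weight": 1.0} for p in scraped_posts]

    def _overlap(self, prompt, post):
        """The 'statistical algorithm', loosely speaking: word-overlap score."""
        a, b = Counter(tokens(prompt)), Counter(tokens(post))
        return sum((a & b).values())

    def respond(self, prompt):
        # The "if" statements: pick the best-weighted, best-matching stored string.
        best = max(self.memory,
                   key=lambda m: self._overlap(prompt, m["text"]) * m["weight"])
        if self._overlap(prompt, best["text"]) == 0:
            return "I see the letter 'M'..."  # vague filler when nothing matches
        return best["text"]

    def record_feedback(self, response, pleased):
        # Survivors' reactions ("That's it!") tune which strings get replayed.
        for m in self.memory:
            if m["text"] == response:
                m["weight"] *= 1.5 if pleased else 0.5

if __name__ == "__main__":
    persona = DigitalPersona([
        "Don't forget to water the tomatoes.",
        "That duck of yours, Missy, what a bird.",
    ])
    reply = persona.respond("Do you remember my pet duck?")
    print(reply)  # matches on "duck"
    persona.record_feedback(reply, pleased=True)
```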
This is being taken seriously. Seriously. "In a paper published in Nature Human Behavior earlier this year, ethicists Carl Ohman and Luciano Floridi from the Oxford Internet Institute argue that we need an ethical framework for the burgeoning digital afterlife industry."
If you want to be a hit at your next party, drop the phrase "burgeoning digital afterlife industry."
Now there is nothing new under the sun, and so this is all the Turing Test, set to the tune of money. It's easy to fool people into thinking they're talking to a person when it's minimal conversation, like directing calls. But you won't be able to do it in a free-ranging conversation, like you used to have with your spouse or father. This was the opinion of Mortimer Adler: a true Turing Test would never be passed.
You won't be able to do it, that is, unless people want desperately to believe, as Keene has shown us. Of course, what people report believing and what they actually believe sometimes diverge. So it will be appropriate to down-weight initial reports of AE's success—or the success of any AI routine at mimicking complex situations. Making a fake face is easy. Making one that can pull off the trick of seeming completely human is a whole other order, triply so if you know who that face used to be.