Tag Archives: Turing Test

on Turing



and now … getting back to The Turing Test

lately my thoughts return, once again, to Alan Turing and the infamous Turing Test he proposed pertaining to machine intelligence, or, better stated, pertaining to our human perceptions, beliefs and gullibility surrounding the technologies we create

the original goal of The Turing Test was to test ‘a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human’

however, when we hold this goal up against Turing’s actual methodology and approach, we can see a strange perversion: the test didn’t directly shed light on, or prove, anything at all about a machine’s ability to exhibit intelligent behavior; instead it relied on a trick, a hidden human agent cleverly disguised as the computing machine, to simulate varying degrees of intelligence along the machine-to-human intelligence spectrum
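stripped to its bones, the structure of the test can be sketched in code; this is a hypothetical toy harness, not anything Turing wrote (he specified only a protocol), and every function name and canned reply in it is invented for illustration:

```python
# Hypothetical sketch of the imitation game's structure: a judge converses
# over a blind text channel with two hidden respondents, one machine and
# one human, and must guess which is which. All names and replies here
# are invented; Turing specified a protocol, not code.
import random

def machine_respondent(prompt: str) -> str:
    # stand-in for the "machine" under test
    return "That is an interesting question. What do you think?"

def human_respondent(prompt: str) -> str:
    # stand-in for the hidden human agent
    return "Honestly, I'd have to think about that one."

def run_imitation_game(judge_questions):
    # randomly assign the respondents to channels A and B so the judge
    # cannot rely on position to tell them apart
    channels = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        channels = {"A": human_respondent, "B": machine_respondent}
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in judge_questions]
    # the test "passes" if judges mistake the machine's channel for the
    # human's often enough across many sessions
    machine_channel = "A" if channels["A"] is machine_respondent else "B"
    return transcript, machine_channel

transcript, machine_channel = run_imitation_game(["Do you enjoy poetry?"])
```

note that nothing in this structure measures intelligence directly — it only measures whether a judge can tell the channels apart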

at this time in history we may want to re-examine how we measure for intelligence — both machine and human intelligence

our standards of human language — especially the machine-mediated, near-human language of social communication through the screen — have changed substantially over the course of several decades

and we need to keep in mind that the standards of human language vary significantly as we examine our communications as delivered through different media — for instance, human language in classic literature varies tremendously from the way we text each other via SMS; email communications — in regard to content, purpose and language structure — differ from the way we converse through social media as well as the way we communicate face-to-face IRL { the TLA for ‘In Real Life’ }; and so on

what if we considered testing ‘a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human’ by utilizing different forms of human communication exchange?

for instance, what if we looked at machine intelligence through the lens of humor and laughter?


could a computing machine perform a successful set of stand-up comedy in front of a live, human audience? can a robotic computing device actually make us laugh through comedy? not just by delivering jokes and schtick written and honed by a human comedy writer, mind you, but by cleverly crafting its own comedic material through whatever embedded intelligence-derived writing algorithms it’s programmed with, and then by delivering that material live, onstage?

and — from the other perspective — can our computing technologies in 2014 detect and respond to a human-delivered set of stand-up comedy through genuine laughter?

my questions here — just to clarify — challenge our current notions of machine intelligence by proposing we conduct the testing using actual machines, not just simulations of machine intelligence

let’s not fake it to make it here when it comes to our measure of intelligence — let’s avoid any sense of a mere simulation of intelligence by leveraging the state of our technologies as they exist today, ‘as is’

unfortunately i’m not the person to design, develop and build the actual technological objects needed to conduct these experiments — i’m simply not technically proficient enough to produce an intelligent-enough robotic stand-up / humor / laughing machine to conduct the testing properly, as i imagine it would need to be conducted

but my hypothesis goes something like this:

the technologies we create will never be smart enough to deliver a successful set of stand-up comedy to a live, human audience — the content of the material would miss the mark, and the delivery would be too awkward and off to give people any sense of the mirth needed to provoke genuine, human laughter

not only would a robotic stand-up act fail to produce laughter — even worse, such an act would most likely create an atmosphere of strangeness, the uncanny valley effect as defined and described by Masahiro Mori — the performance would feel downright creepy to people and would actually start to affect our human perceptions and our overall experience of the space and place of The Comedy Club as a familiar and funny scenario

i also do not believe our current technologies could be programmed to behave in a smart enough manner onstage to improvise the way a stand-up does on a nightly, performance-by-performance basis — a robot, for instance, might not be able to read the audience to gauge how they’re receiving the material, to see whether it’s being funny enough to proceed with further material ‘as previously planned’ or, perhaps, to switch to a different branch of jokes and storytelling based upon both the audience’s laughter and the general human feel of the room
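for the sake of argument, the branching behavior a stand-up performs could be sketched as a simple laughter-gauged set list; everything here is hypothetical — the laughter ‘sensor’ is faked, and the thresholds and bit names are invented:

```python
# Hypothetical sketch of laughter-gauged improvisation: after each bit,
# the robot measures the room and either proceeds with the planned set
# or switches to a backup branch. The laughter "sensor" is a stub; the
# threshold and all bit names are invented for illustration.

PLANNED_SET = ["opener", "airline food bit", "closer"]
BACKUP_SET = ["crowd work", "self-deprecating robot bit", "closer"]

def laughter_level(bit: str) -> float:
    # stand-in for a real microphone / computer-vision read of the room
    return 0.2 if "airline" in bit else 0.8

def perform(planned, backup, threshold=0.5):
    performed = []
    set_list = list(planned)
    switched = False
    i = 0
    while i < len(set_list):
        bit = set_list[i]
        performed.append(bit)
        if not switched and laughter_level(bit) < threshold:
            # the bit bombed: drop the rest of the plan, switch branches
            set_list = set_list[: i + 1] + list(backup)
            switched = True
        i += 1
    return performed
```

even granting all of this machinery, the hard part remains the sensor: quantifying ‘the general human feel of the room’ is exactly what we don’t know how to program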

and then — going in the other direction — when asking whether a robot or other computing machine could identify something as funny and then laugh in a natural, human way at the comedy or humor that typically inspires our human laughter — this, to me, is a no-brainer

there’s just no way

in fact, we would be tripping into the same uncanny valley every step of the way — it might, perhaps, even be a far deeper fall into an unfathomable abyss of uncanniness

we can look to the dinner scene from Kubrick and Spielberg’s film A.I. Artificial Intelligence as a speculative example of what might actually happen when a robotic being encounters a humorous situation


as a robotic boy, David sits at the dinner table with his new adoptive parents — as his parents Monica and Henry Swinton eat and drink their meal, David imitates the act of eating and drinking since he himself does not actually need food-based sustenance to live — after some silent tension at the dinner table, David breaks out into a crazy fit of laughter that quite literally scares the shit out of the Swintons — and then, following the initial scary outburst of laughter, the entire family exchanges more laughter around the dinner table

the entire scene makes no sense at all from a purely human perspective, but we see how a robot might misinterpret the tension in the room as potential humor to laugh about — here, for some strange reason, the laughter of the robotic boy succeeds in making his human parents laugh — his laughter somehow becomes contagious for them, infectious, and they join in — but even then, the shared social sense of human laughter still contains a sour uncanniness; some tense pressure still resides in the room


so, on that note:

is there anyone who might be up to the challenge? who here on the interwebz can build a robot or other technology that could potentially make a human audience genuinely laugh?

and, could you also build out its amazing technological counterpart as well? a robot that can detect, and actually laugh at, human-delivered stand-up comedy in a natural, human way?

do you submit to my challenge?

do you even dare?

a return to Turing ::..


i just took a look back at my very first post to this blog, and there are some rather interesting aspects to it that seem to hit right at the core of what i’m most concerned about as a transitional

looks like i wrote it at the very beginning of my graduate research and work through Dynamic Media Institute @ Massachusetts College of Art — it must’ve been one of the first weeks of class and we were looking at The Turing Test as part of our weekly readings and in-class discussions

Wikipedia describes Alan Mathison Turing as:

a British mathematician, logician, cryptanalyst, philosopher, computer scientist, mathematical biologist, and marathon and ultra distance runner

and as one of the founding fathers of computer science he seemed to be a natural-born philosopher, intrigued not only by the vast power and potential our computing machines would afford all of humanity, but also, interestingly enough, beautifully aware of the intrinsic ethical matters embedded directly in the extra-human capabilities of these wondrous new machines


With his test, Turing asks us:

Can we create a machine with interaction capabilities that would trick us into thinking it is, in fact, human?

And as a means to see whether it were, in fact, possible to trick us, Alan set up a little trick through simulation — a psychological experiment, if you will, whereby he simulated a conversational machine vis-à-vis a bit of theatrics and nearly-prankish Allen Funtery

i’m not going to provide a full description of the actual Turing Test here in this blogPost — feel free to read some surface material about the test on Wikipedia — for me, the most important challenge i would like to present is the fundamental modern-day irrelevance of The Turing Test

i’ve conducted a LOT of experiments over the years that implemented technology — and, more often than not, simulated technologies i’m simply not expert enough to program at this point in my coding abilities — and i can confidently report that people are rather easy to trick — they are no longer fascinated by what it might mean to be human, which is a bit unfortunate and disappointing — instead, they are fascinated, almost mesmerized, by what we can accomplish with our technologies, and they are willing to believe that even the most absurdly superhuman, unprogrammable interactions and intelligences can actually be designed, developed and embedded in the digital-machine experiences we create

now these are vast, general, oversimplified conclusions derived from some rather silly, gallery-based prototypes and experiments i’ve set up over the course of my curatorial career while working through DMI — but i’ve seen the dynamics between some very smart people and some very dumb prototypes, and i always came away surprised, delighted and simultaneously disappointed by how easy it was to trick a gallerygoing audience into believing that what they experienced was actually a computer application built on database, algorithm, interface, interaction, sensors and, ultimately, the magic of human ingenuity through programming
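how little machinery such a trick requires can be sketched with an ELIZA-style pattern-reflection bot; this is a hypothetical toy, not any of the prototypes described above, and every pattern and canned line in it is invented:

```python
# A minimal ELIZA-style reflection bot: a handful of regex rules that
# echo the speaker's own words back as questions. All patterns and
# replies are invented for illustration.
import re

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    # try each rule in order; reflect the captured phrase back
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    # fallback keeps the conversation going without understanding anything
    return "Please, go on."

print(respond("i feel a bit foolish"))  # -> "Why do you feel a bit foolish?"
```

there is no database, no model and no understanding here — yet exchanges built on exactly this kind of reflection were famously read as intelligent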

at this point i suggest we take a look at the flip-paradigm of The Turing Test, following the good example presented in this quote from a very famous, if not infamous, psychologist:

“The real problem is not whether machines think
but whether men do.”
B.F. Skinner

Hey, keep your hands off my chicken!

i don’t think we should care so much about tricking each other anymore — like i said, i think that’s rather easy to do most of the time — the biggest, best example of how easy it is to trick and even fully influence people can be witnessed on a daily basis by simply turning on a television or listening to the radio and watching how much personal opinion is shaped, formed and twisted by these outdated, mass media propaganda machines — even the internet, with all of its freer access to a broader set of information and opinions, still steers us a bit through the topics that surface on the first screen before driving the second screen in the modern American living room

i do not think the original measures of success for The Turing Trick still apply in this day and age — a recent report from the LA Times claims that a computer program actually did pass The Turing Test:

For the first time, a computer program has officially passed the Turing Test, which measures a machine’s ability to think for itself 

of course, the headline for this article hints at a bit of a trick behind the trick itself, right? the Times article, entitled ‘Bot passes Turing Test; judges think it’s a 13-year-old boy,’ vastly reduces the age of the simulated conversational partner to re-contextualize the ‘human’ aspect of the interaction — and so, at least from my perspective, the simulation need not feel as sophisticated and human as we once thought machine intelligence should — in fact, the human age of the computer-conversant is now a teen who probably misspells words, if even using words in the English language at all, right? LOL … aight, TTYL ;]

but by reducing the intelligence of the machine’s simulation of a human, this modern twist on The Turing Test seems to have inflated the key performance indicators of success for the test itself

this raises some rather important and interesting questions

first off — what are we actually trying to test here? are we testing people? machines? our ability to program machines? our ability to trick people and their perceptions and beliefs about our intelligent machines?

second off — if we actually achieve the goals of the test — that is, if we can trick a person into thinking an interactive experience with a machine feels human { whatever that is } — what does that actually prove? how does that benefit actual people? or is that simply an implicit goal of computer scientists? to somehow trick people?

third off — isn’t it counterproductive to humanity and to human intelligence to have one of the ongoing side-project goals of computer science be based on a trick? as our machine intelligence supposedly grows, expands and extends on an exponential basis according to Moore’s Law, does it not somehow continue to sabotage and weaken actual, wetware, human intelligence?