
8 - 2012 Tests – Bletchley Park

from PART TWO

Published online by Cambridge University Press:  12 October 2016

Kevin Warwick
Affiliation:
Coventry University
Huma Shah
Affiliation:
Coventry University

Summary

Between the end of the October 2008 experiment at Reading University and a special event at Bletchley Park in June 2012, an exciting and historic development took place in the continuing man-versus-machine narrative.

IBM once again produced a machine that beat human champions at their own game, following Deep Blue's defeat of Garry Kasparov.

Back in the late 1990s, the analysis of Deep Blue's performance concluded that it had used brute force to look ahead through millions of chess moves, but that it lacked intelligence. Recall that Turing (1948) had stated that “research into intelligence of machinery will probably be very greatly concerned with searches”. Is ‘search’ not part of our daily decision-making, even if done in an instant, to decide what the next best move is, no matter what activities we are planning?

In February 2011 the Watson machine, named after IBM's founder Thomas J. Watson, was seen on TV in the US and across the Internet playing a game that involved identifying the correct answer to a clue. In the TV show, IBM presented another ‘super’ machine (see Figure 8.1), the Watson system (Ferrucci et al., 2010). This time, rather than have a machine compete with a human in a chess match, IBM chose a contest featuring natural language: the American general knowledge quiz show Jeopardy! (Baker, 2011).

The IBM team had conceded that this was a formidable challenge:

Understanding natural language, what we humans use to communicate with one another every day, is a notoriously difficult challenge for computers. Language to us is simple and intuitive and ambiguity is often a source of humor and not frustration.

Designing the Watson system around a deep search question–answer strategy, the IBM team were fully aware that:

As we humans process language, we pare down alternatives using our incredible abilities to reason based on our knowledge. We also use any context couching the language to further promote certain understandings. These things allow us to deal with the implicit, highly contextual, ambiguous and often imprecise nature of language.
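The strategy the IBM team describes — generating many candidate answers, scoring each against evidence, and ranking by confidence — can be illustrated in miniature. The sketch below is a hypothetical toy, not Watson's actual algorithms: the knowledge base, keyword matching and overlap score are stand-ins for DeepQA's far richer candidate generation and evidence-scoring machinery.

```python
# Toy sketch of a candidate-generation / evidence-scoring pipeline
# (hypothetical illustration only; not IBM's DeepQA implementation).

def generate_candidates(clue, knowledge):
    # Candidate answers: any knowledge entry whose text shares a word with the clue.
    clue_words = set(clue.lower().split())
    return [entry for entry in knowledge
            if clue_words & set(knowledge[entry].lower().split())]

def score(candidate, clue, knowledge):
    # Toy evidence score: fraction of clue words found in the entry's text.
    clue_words = set(clue.lower().split())
    evidence_words = set(knowledge[candidate].lower().split())
    return len(clue_words & evidence_words) / len(clue_words)

def answer(clue, knowledge):
    # Rank all candidates by their evidence score; return the most confident.
    ranked = sorted(generate_candidates(clue, knowledge),
                    key=lambda c: score(c, clue, knowledge),
                    reverse=True)
    return ranked[0] if ranked else None

knowledge = {
    "Watson": "IBM question answering system that played Jeopardy",
    "Deep Blue": "IBM chess computer that defeated Garry Kasparov",
}

print(answer("This IBM system played the quiz show Jeopardy", knowledge))  # → Watson
```

Both entries mention IBM, so both become candidates; the ranking step is what resolves the ambiguity, which is the essence of the confidence-based approach described above.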

The machine successfully challenged two Jeopardy! masters, Ken Jennings and Brad Rutter, in a Final Jeopardy! general-knowledge, human-versus-machine exhibition contest.

Chapter information
Turing's Imitation Game: Conversations with the Unknown, pp. 128–158
Publisher: Cambridge University Press
Print publication year: 2016


References

Baker, S. (2011). Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt.
Berne, E. (1981). What Do You Say After You Say Hello? Corgi.
Epstein, R., Roberts, G. and Beber, G. (eds) (2008). Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.
Fallis, D. (2009). What is lying? J. Philos. 106 (1), 29–56.
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A., Lally, A., Murdock, J.W., Nyberg, E., Prager, J., Schlaefer, N. and Welty, C. (2010). Building Watson: an overview of the DeepQA project. AI Magazine. http://www.aaai.org/Magazine/Watson/watson.php.
Hayes, P. and Ford, K. (1995). Turing test considered harmful. In Proc. Int. Joint Conf. on AI, Montreal, Volume 1, 972–977.
Meibauer, J. (2005). Lying and falsely implicating. J. Pragmatics 37, 1373–1399.
Michie, D. (1999). Turing's test and conscious thought. In Machines and Thought – the Legacy of Alan Turing, P. Millican and A. Clark (eds). Oxford University Press, Volume 1, pp. 27–51.
Shah, H. (2011). Turing's misunderstood imitation game and IBM's Watson success. In Second Towards a Comprehensive Intelligence Test (TCIT): Reconsidering the Turing Test for the 21st Century, Proc. of the AISB 2011 Convention, University of York, pp. 1–5.
Shah, H. and Henry, O. (2005). Confederate effect in human–machine textual interaction. In Proc. 5th WSEAS Int. Conf. on Information Science, Communications and Applications (WSEAS ISCA), Cancun, pp. 109–114.
Shah, H. and Warwick, K. (2010). Hidden interlocutor misidentification in practical Turing tests. Minds and Machines 20 (3), 441–454.
Shah, H., Warwick, K., Bland, I., Chapman, C.D. and Allen, M.J. (2012). Turing's imitation game: role of error-making in intelligent thought. In Turing in Context II, Brussels, pp. 31–32. http://www.computingconference.ugent.be/file/14.
Shah, H., Warwick, K., Bland, I. and Chapman, C.D. (2014). Fundamental artificial intelligence: machine performance in practical Turing tests. In Proc. 6th Int. Conf. on Agents and Artificial Intelligence (ICAART 2014), Angers Saint Laud, France.
Turing, A.M. (1948). Intelligent Machinery. Reprinted in The Essential Turing: The Ideas that Gave Birth to the Computer Age, B.J. Copeland (ed). Oxford University Press.
Warwick, K. (2011). Artificial Intelligence: The Basics. Routledge.
Warwick, K. (2012). Not another look at the Turing test! In Proc. SOFSEM 2012: Theory and Practice of Computer Science, M. Bielikova, G. Friedrich, G. Gottlob, S. Katzenbeisser and G. Turan (eds). LNCS 7147, Springer, pp. 130–140.
Warwick, K. and Shah, H. (2014a). Effects of lying in practical Turing tests. AI & Society. doi: 10.1007/s00146-013-0534-3.
Warwick, K. and Shah, H. (2014b). Assumption of knowledge and the Chinese room in Turing test interrogation. AI Comm. 27 (3), 275–283.
Warwick, K. and Shah, H. (2014c). The Turing test – a new appraisal. Int. J. Synthetic Emotions 5 (1), 31–45.
Warwick, K. and Shah, H. (2014d). Good machine performance in practical Turing tests. IEEE Trans. Computat. Intell. and AI in Games 6 (3), 289–299.
Warwick, K., Shah, H. and Moor, J.H. (2013). Some implications of a sample of practical Turing tests. Minds and Machines 23, 163–177.
Warwick, K. and Shah, H. (2016). Passing the Turing test does not mean the end of humanity. Cognitive Computation 8 (3), 409–419.
