Making Computers Useful

http://www.nwfusion.com/bg/wp/wpbydate.jsp

By Mark Gibbs

 

I have long been a fan of artificial intelligence, and I believe that when we finally get it, it will change the world - in an evolutionary rather than revolutionary manner. AI isn't likely to appear suddenly in a machine that can carry on a conversation that passes the Turing test with flying colors.

 

<digression> The Turing test was proposed by the British mathematician Alan Mathison Turing in his 1950 paper "Computing Machinery and Intelligence" (http://www.abelard.org/turpap/turpap.htm). The University of Alberta Cognitive Science Dictionary summarized the test: "A judge has conversations [via teletype] with two systems, one human, the other a machine. The conversations can be about anything, and proceed for a set period of time [for example, an hour]. If, at the end of this time, the judge cannot distinguish the machine from the human on the basis of the conversation, then Turing argued (http://nww1.com/go/0617GIBRAD1A.html) that we would have to say that the machine was intelligent." </digression>

 

AI will occur in steps, with computers getting smarter about what we want them to do. It will start with programs that have what we'll call a better "understanding" of tasks we're trying to achieve.

 

At a simple level this would mean that when a user attempts to delete all files of a certain type, the AI component would question whether this was a good idea. At a more complex level, an AI assistant would know that when you make a one-hour appointment with Bob Smith it will involve a car trip. After checking road conditions and seeing that the drive will take four hours and the appointment is at 5 p.m., the assistant will offer to book a hotel room for you.
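
To make the idea concrete, here is a minimal sketch in Python of the kind of rule such an assistant might apply to the travel example. Everything in it - the function name, the threshold, the sample data - is hypothetical; no product mentioned in this column works this way.

from datetime import datetime, timedelta

# Illustrative only: a toy rule for deciding when to offer a hotel.
def should_offer_hotel(appointment_start, meeting_hours, drive_hours,
                       latest_reasonable_return_hour=22):
    """Offer a hotel if the drive home would end unreasonably late."""
    arrive_home = appointment_start + timedelta(hours=meeting_hours + drive_hours)
    too_late = arrive_home.hour >= latest_reasonable_return_hour
    next_day = arrive_home.date() > appointment_start.date()
    return too_late or next_day

# A one-hour, 5 p.m. appointment that is a four-hour drive away
if should_offer_hotel(datetime(2002, 7, 15, 17, 0), meeting_hours=1, drive_hours=4):
    print("Shall I book a hotel room near the meeting?")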

 

As that kind of assistant facility is honed, we'll see those skills applied to corporations so that user behavior that is contrary to defined standards is questioned or blocked, and workflow is enhanced and streamlined. For example, customer service could be front-ended by an AI process that intelligently answers a significant number of inquiries.

 

The first step on the road to useful AI may just have been taken. For the past 18 years, Douglas Lenat, CEO of Cycorp (http://www.cyc.com), has been building a system called Cyc (pronounced "psych").

 

Cyc is described as "an immense multicontextual knowledgebase and an efficient inference engine. The knowledgebase is built upon a core of more than 1 million hand-entered assertions (or rules) designed to capture a large portion of what we normally consider consensus knowledge about the world. For example, Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right side up."
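
As a rough illustration of what "hand-entered assertions" plus an inference engine might mean, here is a toy sketch in Python. It is emphatically not Cyc's own representation language (CycL) or inference engine; it only hints at the flavor of commonsense rules such as "once people die they stop buying things."

# Illustrative only: a few commonsense-style assertions (echoing the
# examples quoted above) and one trivial inference step. All names and
# structure here are invented for this column.
assertions = {
    ("tree", "usual_location"): "outdoors",
    ("glass of liquid", "carry_orientation"): "right side up",
}
rules = [
    # If a person is known to be dead, conclude they no longer buy things.
    (lambda facts: facts.get(("Bob", "is_alive")) is False,
     (("Bob", "buys_things"), False)),
]
def infer(facts):
    """Apply every rule whose condition holds and record its conclusion."""
    derived = dict(facts)
    for condition, (key, value) in rules:
        if condition(derived):
            derived[key] = value
    return derived
print(infer({("Bob", "is_alive"): False}))
# {('Bob', 'is_alive'): False, ('Bob', 'buys_things'): False}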

 

Cycorp recently released OpenCyc, an open source version of the system, so that people can extend Cyc's knowledgebase and use it royalty-free in noncommercial and commercial applications. One of Cycorp's notable products is CycSecure, which ensures that all relevant patches have been applied to fix known vulnerabilities in networked computers.

 

How does Cycorp make a profit? It does consulting, does R&D for the government and sells some products.

 

In a recent Computerworld story (http://nww1.com/go/0617GIBRAD1B.html), Lenat said CycSecure "knows what are normal, legitimate actions - such as a user renaming one of their own files or changing their password - and what are actions taken by hackers - such as packet-sniffing and spoofing. An attack plan generally includes a large number of 'normal' steps and one or more 'hacker' steps. Cyc does not have a model of the hacker mentality . . . but it does have the notion that hackers generally want to be undetected, since that motivation accounts for many steps in many plans which would otherwise be missed."
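
A rough sketch of that idea - separating "normal" steps from "hacker" steps and flagging any plan that mixes them - might look like the Python below. The step names and categories are made up for illustration; this is not CycSecure's actual model.

# Illustrative only: flag a plan as suspicious if it contains any
# hacker-style step, even when most of its steps look legitimate.
# (Covering one's tracks is included because, as Lenat notes, hackers
# generally want to remain undetected.)
NORMAL_STEPS = {"rename_own_file", "change_own_password", "read_email"}
HACKER_STEPS = {"packet_sniffing", "address_spoofing", "clear_system_logs"}
def classify_plan(steps):
    hacker = [s for s in steps if s in HACKER_STEPS]
    return {"suspicious": bool(hacker), "hacker_steps": hacker}
plan = ["change_own_password", "read_email", "packet_sniffing"]
print(classify_plan(plan))
# {'suspicious': True, 'hacker_steps': ['packet_sniffing']}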

 

This could be the start of computers giving us a lot more than answers.