Expert Systems

Expert systems are computer programs that mimic the special abilities of human experts. They use an accumulation of domain-specific knowledge: information collected through long experience with some particular problem-solving situation. Expert systems are constructed by taking this knowledge (typically from expert humans) and activating it with simple "if-then" rule systems. A beginner can then make decisions like an expert, simply by answering a series of simple questions.
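The "if-then" idea described above can be sketched in a few lines of code. This is a minimal illustration, not any real expert system: the rules, facts, and the `consult` function are all hypothetical, invented here to show how stored rules let a novice reach an expert-like conclusion by answering simple questions.

```python
# Hypothetical sketch of an "if-then" rule system.
# Each rule pairs a condition (facts that must hold) with a conclusion.
RULES = [
    ({"engine_cranks": False, "lights_dim": True}, "battery is weak"),
    ({"engine_cranks": True, "engine_starts": False}, "check fuel supply"),
]

def consult(facts):
    """Return the conclusion of every rule whose conditions match the facts."""
    conclusions = []
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            conclusions.append(conclusion)
    return conclusions

# A beginner supplies simple yes/no answers; the rules supply the expertise.
print(consult({"engine_cranks": False, "lights_dim": True}))
# ['battery is weak']
```

The expertise lives entirely in the rule list; the program itself is trivial, which foreshadows why such systems turned out to be cheap to produce.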

An early, successful example of an expert system was the MYCIN system, used to help doctors diagnose medical conditions and find appropriate treatments. MYCIN asks about the patient's symptoms, then guides the doctor through a series of diagnostic tests, ultimately recommending a medicine or treatment for the disorder.

What are expert systems? What are some successful uses of expert systems?

Expert systems running on computers have succeeded in a variety of domains. They have located oil deposits for oil companies. They guide stock purchases. They diagnose car problems electronically. They are used in shopping malls to help people match make-up colors to hair or clothing.

Why did early dreams of making money on expert systems fail to come true?

When expert systems were a new concept, in the early 1980s, investment companies saw great commercial potential in them. Many new companies were formed, expecting to make a fortune by generating computer-based expertise. But the bubble burst. By the late 1980s there was a "shake-out," and many of the companies formed earlier in the decade failed. Part of the problem was that expert systems performed erratically (see the discussion of "brittleness" below). But another problem was that expert systems turned out to be relatively cheap to produce. That was bad news for people who wanted to make money selling them.

Here is one concrete example. A program for helping bank officers decide whether to grant mortgages cost up to $100,000 in 1980 and required a mainframe computer. By 1988 it cost about $100 and ran on any desktop computer. That was good for new mortgage lending companies, but it was not good for companies trying to make money selling expert systems to banks (Moskowitz, 1988).

Why did some users become disillusioned with expert systems?

As expert systems became cheap and widely available, their shortcomings became more obvious, too. Many users tried them out briefly and became disillusioned with the amount of work required to make them useful. After all, an expert system is supposed to mimic the knowledge base of a human expert, but a human expert spends thousands of hours accumulating knowledge. To imitate this expertise, somebody must spend a large amount of time entering rules into the computer, mostly of the form, "If you see X, do Y, but only if Z is true...."

The toughest part of setting up an expert system is to accumulate all the correct and most-needed knowledge and put it into the database. Humans must invest time and energy to do this, before the system will work. Moreover, these rules are different for each problem-solving domain. That is what "domain-specific knowledge" is all about: knowledge specific to one domain. So, in order to make an expert system work on a computer, one must first hire an expert to spell out the rules and exceptions to the rules, then spend a lot of time entering them into a database and debugging the program to eliminate errors and unpleasant surprises.
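One such rule, with its exception clause, might look like the following sketch. The lending criteria here are hypothetical, invented purely to echo the mortgage example mentioned earlier; a real system would encode hundreds or thousands of such rules, each painstakingly extracted from a human expert.

```python
# Hypothetical example of one entered rule of the form
# "If you see X, do Y, but only if Z is true."

def mortgage_advice(income, debt, years_employed):
    """Toy lending rule: approve when income comfortably covers debt,
    but only if the applicant has a stable employment history."""
    if income >= 3 * debt:              # "If you see X..."
        if years_employed >= 2:         # "...but only if Z is true"
            return "approve"            # "...do Y"
        return "refer to loan officer"  # exception: unstable employment
    return "decline"

print(mortgage_advice(90_000, 20_000, 5))
# approve
```

Multiply this by every condition, exception, and special case an experienced loan officer knows, and the scale of the knowledge-entry task becomes clear.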

What does it mean to say that expert systems were "brittle"?

Expert humans with common sense must oversee even a commercially available expert system, fine-tuned for a specific domain. For example, no doctor in his right mind would blindly follow all the advice of the MYCIN system without thinking independently about whether the diagnosis "made sense" and was safe and appropriate for the patient. To do otherwise would be to risk malpractice suits. Similarly, a stockbroker would be foolish to follow every recommendation of a stock-trading program without exercising common sense. Expert systems are notorious for occasionally producing silly or absurd recommendations. Somebody has to be checking the performance of the system, to catch such problems.

Researchers have a word for the tendency of programs to function only in a limited, predictable context. Such behavior is called brittle. To be "brittle" means to work only as long as problems are set up in ways that are anticipated. A brittle system may "break" (produce incorrect results) as soon as odd or unanticipated situations are encountered. Expert systems are usually brittle until they have been field-tested in many different situations. That is the only way to detect unanticipated problems. So an expert system—even after having many thousands of rules entered into it—may require human experts to act as baby-sitters for many years while it is field tested, which defeats the whole idea of turning important decisions over to computers.
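Brittleness is easy to demonstrate with a toy rule. The example below is hypothetical, not taken from MYCIN or any real system: a rule written with Fahrenheit readings in mind works fine on anticipated input, but when an unanticipated Celsius reading slips in, the system does not fail loudly; it confidently produces absurd advice, exactly the kind of "silly recommendation" a human overseer must catch.

```python
# Hypothetical illustration of brittleness: a rule that assumes
# temperature readings in Fahrenheit, roughly in the 95-105 range.

def fever_check(temperature):
    """Toy diagnostic rule, written assuming Fahrenheit input."""
    if temperature >= 100.4:
        return "possible fever: run infection tests"
    return "no fever: send patient home"

# Anticipated situation: the rule behaves sensibly.
print(fever_check(102.0))
# possible fever: run infection tests

# Unanticipated situation: a Celsius reading (39 C is a high fever).
# The brittle rule does not notice anything is wrong.
print(fever_check(39.0))
# no fever: send patient home
```

The program has no common sense with which to notice that its own answer is nonsense, which is why field testing, and human baby-sitting, takes so long.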


Copyright © 2007 Russ Dewey