
fact, we would expand nodes by judiciously selecting the more promising ones, identified by measuring their strength relative to their competitors with the help of specialized intuitive functions, called heuristic functions.

Heuristic search is generally employed for two distinct types of problems: i) forward reasoning and ii) backward reasoning. We have already discussed that in a forward reasoning problem we move towards the goal state from a pre-defined starting state, while in a backward reasoning problem we move towards the starting state from the given goal. The former class of search algorithms, when realized with heuristic functions, is generally called heuristic search for OR graphs, or best first search. It may be noted that best first search is a class of algorithms, named differently depending on the choice of the performance measuring function. One typical member of this class is the algorithm A*. On the other hand, the heuristic backward reasoning algorithms are generally called AND-OR graph search algorithms, and one ideal member of this class is the AO* algorithm. We will start this section with the best first search algorithm.
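Since the members of the best first family differ only in the performance measuring function used to rank nodes, the distinction can be sketched in a few lines. The function names below are illustrative, not standard library calls; g denotes the cost accrued from the start state and h the heuristic estimate of the remaining cost to the goal:

```python
# Variants of best first search differ only in the measure used to
# order the fringe of unexpanded nodes (lower value = more promising).

def greedy_priority(g: float, h: float) -> float:
    # Greedy best first search: rank nodes by the heuristic alone.
    return h

def a_star_priority(g: float, h: float) -> float:
    # A*: rank nodes by cost so far plus estimated cost to go.
    return g + h
```

With g = 3 and h = 4, for example, the greedy measure ranks the node at 4 while the A* measure ranks it at 7; the two orderings can therefore expand nodes in quite different sequences.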

4.3.1 Heuristic Search for OR Graphs

Most forward reasoning problems can be represented by an OR graph, where a node denotes a problem state and an arc represents the application of a rule to a current state, causing a transition of states. When a number of rules are applicable to the current state, we could select the best state among the children as the next state. Recall that in hill climbing, we ordered the promising initial states in a sequence and examined the state at the beginning of the list. If it was a goal, the algorithm terminated; if not, it was replaced by its offspring, in any order, at the beginning of the list. The hill climbing algorithm is thus not free of a depth first flavor. In the best first search algorithm, to be devised shortly, we start with a promising state and generate all its offspring. The performance (fitness) of each node is then examined, and the most promising node, based on its fitness, is selected for expansion. The most promising node is then expanded and the fitness of all its newborn children is measured. Now, instead of selecting only from the generated children, all the nodes having no children are examined, and the most promising of these fringe nodes is selected for expansion. Thus, unlike hill climbing, best first search provides scope for correction in case a wrong step was selected earlier. This is its prime advantage over hill climbing. The best first search algorithm is formally presented below.
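The procedure described above can be sketched as follows. This is an illustrative Python sketch, not the book's formal algorithm: the heuristic h, the successor generator, and the goal test are assumed to be supplied by the caller, and a lower value of h is taken to mean a more promising node. Note how the fringe keeps every unexpanded node, which is what allows the search to back out of a poor early choice.

```python
import heapq

def best_first_search(start, goal_test, successors, h):
    """Best first search over an OR graph (illustrative sketch).

    start      -- initial state (must be hashable)
    goal_test  -- predicate returning True for a goal state
    successors -- function mapping a state to its child states
    h          -- heuristic: estimated promise of a state (lower = better)
    """
    # The fringe holds ALL generated but unexpanded nodes, ordered by
    # heuristic value, so a wrong step can be corrected later --
    # unlike hill climbing, which only looks at the newest children.
    fringe = [(h(start), start)]
    parent = {start: None}          # also serves as the visited set
    while fringe:
        _, state = heapq.heappop(fringe)   # most promising fringe node
        if goal_test(state):
            # Reconstruct the path from start to goal.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for child in successors(state):
            if child not in parent:        # avoid revisiting states
                parent[child] = state
                heapq.heappush(fringe, (h(child), child))
    return None  # fringe exhausted without reaching a goal
```

As a toy usage example, searching the number line from state 0 to state 5 with successors n → {n+1, n+2} and h(n) = |5 − n| expands the closer child at every step and returns the path [0, 2, 4, 5].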
