Beyond classical search


Optimization Problem

In many optimization problems, the path to the solution is irrelevant. In pure optimization problems, the best state is defined by the objective function. Such problems are represented with a state-space landscape, where location (the state) lies along the x-axis and elevation (the objective function value) along the y-axis. The best state is therefore the state with the highest objective value.
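The idea can be sketched in a few lines of Python, assuming a discrete one-dimensional landscape where each index is a state and each value is its elevation (the numbers here are illustrative, not the demo's actual hill):

```python
# A discrete 1-D state-space landscape: index = state (x-axis),
# value = elevation, i.e. the objective function (y-axis).
elevation = [1, 3, 7, 4, 2, 6, 9, 9, 5, 2]

# The best state is the one with the highest objective value.
best_value = max(elevation)

# There may be more than one global maximum.
global_maxima = [x for x, e in enumerate(elevation) if e == best_value]

print(best_value)     # 9
print(global_maxima)  # [6, 7]
```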

The given diagram is a state-space representation of an objective function. You can click anywhere inside the box to reveal the elevation there. You are allowed 25 moves to find the highest peak before the hill is revealed.

One marker represents the global maxima you have found, and the other represents those still unfound.

Try to come up with a strategy that can be used for all kinds of hills.

Restart

Hill Climbing Search

In hill climbing search, the current node is replaced by the best neighbor. In this case, the objective function is represented by elevation, the neighbors of a state are the states immediately to its left and right, and the best neighbor is the neighbor state with the highest elevation.
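On the 1-D landscape just described, hill climbing can be sketched as follows (the elevations are an illustrative example, not the demo's hill):

```python
def hill_climb(elevation, start):
    """Greedy hill climbing on a 1-D landscape: repeatedly move to the
    better of the two neighbors (left/right) until neither is higher."""
    current = start
    while True:
        neighbors = [x for x in (current - 1, current + 1)
                     if 0 <= x < len(elevation)]
        best = max(neighbors, key=lambda x: elevation[x])
        if elevation[best] <= elevation[current]:
            return current          # a local (possibly global) maximum
        current = best

elevation = [1, 3, 7, 4, 2, 6, 9, 5]
print(hill_climb(elevation, 0))  # 2 -- stuck on the local maximum (7)
print(hill_climb(elevation, 4))  # 6 -- reaches the global maximum (9)
```

Note how the result depends entirely on the starting state: the search stops at the first peak it reaches, which is exactly the behavior the diagram lets you explore.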

One marker represents the global maxima, and the other marks the states from which hill climbing search can reach a global maximum.

Click on different states to start Hill Climbing Search from there and see which direction it prefers and when it stops.

Use the restart button to try it on different kinds of hills.

Restart

Simulated Annealing

Simulated Annealing combines Hill Climbing with a Random Walk to gain both efficiency and completeness. In this procedure, instead of always moving to the best neighbor, a random neighbor is chosen. If the new state has a better objective value than the current state, it is always accepted. If not, the algorithm accepts the new state with a probability less than one. The probability of accepting a bad state depends on:

  1. The difference in objective value of the new state and the current state
  2. The temperature
The temperature T denotes how likely it is for a bad state to be accepted. The procedure starts with a high temperature and decreases it with time. The probability of accepting a bad move decreases exponentially with the badness of the move, and falls toward zero as the temperature decreases.
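The acceptance rule can be sketched like this, assuming the common exp(delta / T) acceptance probability and a geometric cooling schedule (both illustrative choices, not necessarily the demo's exact parameters):

```python
import math
import random

def simulated_annealing(elevation, schedule, seed=0):
    """Simulated annealing on a 1-D landscape: pick a random neighbor;
    always accept improvements, accept worse moves with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    rng = random.Random(seed)
    current = rng.randrange(len(elevation))
    for T in schedule:
        if T <= 0:
            break
        neighbors = [x for x in (current - 1, current + 1)
                     if 0 <= x < len(elevation)]
        nxt = rng.choice(neighbors)
        delta = elevation[nxt] - elevation[current]
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt
    return current

# Temperature decreases geometrically with time.
schedule = [100 * 0.95 ** t for t in range(400)]
elevation = [1, 3, 7, 4, 2, 6, 9, 5]
result = simulated_annealing(elevation, schedule)
```

Early on, when T is large, exp(delta / T) is close to 1 even for bad moves, so the state jumps around; as T shrinks, bad moves are almost never accepted and the search behaves like hill climbing.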

One marker represents the global maxima, and the other represents the current state.

Notice how the algorithm shakes violently between states when the temperature is high and then stabilizes to higher states as the temperature decreases.

Use the slider to set the temperature to different values and notice how the algorithm behaves.

Restart
Temperature

Genetic Algorithm

Little critters change the color of their fur to match the background to camouflage themselves from predators.

Click on the canvas to generate the next generation. Keep clicking to produce further progeny.

Note: Single-point crossover might not be suitable for all applications.
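Single-point crossover can be sketched on string genomes like this (the genome encoding and mutation rate are illustrative assumptions, not the demo's exact parameters):

```python
import random

def single_point_crossover(parent_a, parent_b, rng=random):
    """Pick one cut point and swap the tails of the two parents.
    As noted above, this simple operator is not ideal for every encoding."""
    point = rng.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(genome, rate, alphabet="01", rng=random):
    """Replace each gene with a random symbol with probability `rate`."""
    return "".join(rng.choice(alphabet) if rng.random() < rate else g
                   for g in genome)

rng = random.Random(1)
child_a, child_b = single_point_crossover("00000000", "11111111", rng)
# Each child is a prefix of one parent followed by the suffix of the other,
# so between them the children carry exactly the parents' genes.
```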

Searching with non-deterministic actions

In a world with non-deterministic actions, the result of an action is not known with complete certainty.

The erratic vacuum world

The erratic vacuum world is an extension of the vacuum world from Chapter 2. In this world, the behavior of the cleaner is non-deterministic.

To define this behavior more formally, the Suck action in this world works as follows:

  1. When applied to a dirty tile, it cleans that tile and sometimes cleans up dirt in an adjacent tile too.
  2. When applied to a clean tile, it sometimes deposits dirt on the tile.

Given below is a 5-tile erratic vacuum world. You can record a sequence of actions that you believe will clean up all the dirt. Use the play button to execute that sequence. Play the sequence multiple times to see how it can sometimes result in different end states. Use the restart button to generate a new initial state. Use the clear button to discard the recorded sequence and record a new one.

Restart
Left
Vacuum
Right
Play
Clear
Moves
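The erratic Suck behavior can be sketched on a row of tiles like this (the 0.5 probabilities are illustrative; the demo's exact odds are not specified):

```python
import random

def erratic_suck(dirt, position, rng=random):
    """Erratic Suck on a row of tiles. `dirt` is a list of booleans.
    Dirty tile: always cleaned, and each adjacent tile is sometimes
    cleaned too. Clean tile: dirt is sometimes deposited."""
    dirt = list(dirt)
    if dirt[position]:
        dirt[position] = False
        for adj in (position - 1, position + 1):
            if 0 <= adj < len(dirt) and rng.random() < 0.5:
                dirt[adj] = False
    elif rng.random() < 0.5:
        dirt[position] = True
    return dirt

# The same action from the same state can yield different results.
print(erratic_suck([True, True, True], 1, random.Random(0)))
print(erratic_suck([True, True, True], 1, random.Random(3)))
```

This is why a fixed action sequence can end in different states on different playbacks, as the widget above demonstrates.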

Searching with Partial Observations

Now we come back to a world where the actions of the robot are deterministic again (no erratic behavior like before), but the robot no longer has complete knowledge of its current state or its environment.

Vacuum World with no observation

In this world, the vacuum cleaner initially has no idea about its own location or the location of dirt in the world. Since the robot receives no percepts, it must figure out a sequence of actions that will work regardless of its initial state.

Given below are 8 random initial states. You can record a sequence of actions and see it in action just like before. Assume that illegal moves (like moving right in the right-most tile) have no effect on the world.

Try to find a sequence of actions that leads to a goal state (all dirt cleaned), no matter what the initial state of the world is.

Restart
Left
Vacuum
Right
Play
Clear
Moves
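One way to check a candidate plan is to run it from every possible initial state and verify they all end clean. Here is a sketch for a 2-tile world (the demo uses more tiles, but the idea is the same):

```python
# A state is (position, dirt), where dirt is a tuple of booleans.
# Actions are deterministic, and illegal moves (e.g. Right on the
# right-most tile) have no effect on the world.

def step(state, action):
    pos, dirt = state
    if action == "Left":
        pos = max(0, pos - 1)
    elif action == "Right":
        pos = min(len(dirt) - 1, pos + 1)
    elif action == "Suck":
        dirt = tuple(False if i == pos else d for i, d in enumerate(dirt))
    return (pos, dirt)

def works_everywhere(plan, initial_states):
    """True if the plan cleans all dirt from every possible initial state."""
    for state in initial_states:
        for action in plan:
            state = step(state, action)
        if any(state[1]):
            return False
    return True

# All 8 initial states of a 2-tile world: 2 positions x 4 dirt patterns.
states = [(p, (dl, dr)) for p in (0, 1)
          for dl in (False, True) for dr in (False, True)]
print(works_everywhere(["Suck"], states))                          # False
print(works_everywhere(["Right", "Suck", "Left", "Suck"], states))  # True
```

The second plan works because `Right` collapses the position uncertainty: wherever the robot started, it is now on the right-most tile, so the rest of the plan can clean both tiles deterministically.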

Localization in a partially observable environment

Localization is the process by which a robot works out its own position in the world.

Find initial states.

In the given diagram, a robot is assigned a random position in the maze. Only the current percept of the robot is visible to you. Try to find all the positions in the maze where the robot can be.

Restart

Get to know percept.

Hover over the cells of the maze to see the percept the robot would give if it were there. Also, notice all the other cells that can have the same percept.
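The filtering idea behind both exercises can be sketched with a toy maze, assuming (as a hypothetical encoding) that the percept is the set of directions blocked by walls around the robot:

```python
def percept(maze, cell):
    """Which of the four neighbors of `cell` are walls ('#')."""
    r, c = cell
    moves = {"N": (r - 1, c), "S": (r + 1, c),
             "E": (r, c + 1), "W": (r, c - 1)}
    return frozenset(d for d, (nr, nc) in moves.items()
                     if maze[nr][nc] == "#")

maze = ["#####",
        "#...#",
        "#.#.#",
        "#####"]

free = [(r, c) for r, row in enumerate(maze)
        for c, ch in enumerate(row) if ch == "."]

# All cells consistent with the observed percept: the robot could be
# in any of them.
observed = percept(maze, (2, 1))
candidates = [cell for cell in free if percept(maze, cell) == observed]
print(candidates)  # [(2, 1), (2, 3)] -- two cells share this percept
```

A single percept often leaves several candidate positions; moving and perceiving again shrinks the candidate set further, which is exactly what the maze widget lets you explore by hand.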

Restart

And-Or-Graph-Search

Online DFS Agent

Click to reset. The green tile is the destination.

LRTA*-Agent