Admissible heuristic


In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.

Search algorithms

An admissible heuristic is used to estimate the cost of reaching the
goal state in an informed search algorithm. In order for a heuristic
to be admissible for the search problem, the estimated cost must always
be lower than or equal to the actual cost of reaching the goal state.
The search algorithm uses the admissible heuristic to find an estimated
optimal path to the goal state from the current node.
For example, in A* search the evaluation function is f(n) = g(n) + h(n), where
g(n) is the cost of the path from the start node to the current node n, and
h(n) is the estimated cost from the current node n to the goal, calculated
using the heuristic function. With a non-admissible heuristic, the A*
algorithm could overlook the optimal solution to a search problem due to an
overestimation in f(n).
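The following minimal Python sketch illustrates how A* uses such an evaluation function; the graph representation (an adjacency dict of edge costs), the function name a_star and the small example at the end are assumptions made for illustration, not part of the article:

import heapq
import itertools

def a_star(graph, start, goal, h):
    # graph: dict mapping a node to a dict of {neighbor: edge cost}.
    # h: heuristic function; it is admissible if h(n) never exceeds the
    #    true remaining cost from n to the goal.
    tie = itertools.count()                       # tie-breaker for equal f values
    open_heap = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path                        # first goal expansion is optimal when h is admissible
        for neighbor, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                f_new = new_g + h(neighbor)       # f(n) = g(n) + h(n)
                heapq.heappush(open_heap, (f_new, next(tie), new_g, neighbor, path + [neighbor]))
    return None

# Tiny illustrative use: two routes from S to G, optimal cost 5 via B.
graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get          # admissible estimates
print(a_star(graph, "S", "G", h))                 # (5, ['S', 'B', 'G'])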

Formulation

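The definition given in the introduction can be stated formally. As a brief sketch in LaTeX notation, consistent with that informal statement (h*(n) denotes the true optimal cost, which is not named explicitly elsewhere in this article):

\[
  \forall n : \quad h(n) \;\le\; h^{*}(n)
\]

where h(n) is the cost estimated by the heuristic for reaching a goal from node n, and h*(n) is the lowest possible cost of actually reaching a goal from n.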
Construction

An admissible heuristic can be derived from a relaxed
version of the problem, or by information from pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.
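As a small illustration of the relaxation idea (an assumed toy setting, not from the article): for pathfinding on a 4-connected grid with obstacles, dropping the obstacle constraint gives a relaxed problem whose exact solution cost is the Manhattan distance, which is therefore admissible for the original problem.

def relaxed_grid_heuristic(cell, goal):
    # Exact cost of the relaxed problem in which obstacles are ignored and the
    # agent moves one grid step at a time; it can never overestimate the cost
    # of the original, more constrained problem.
    (r1, c1), (r2, c2) = cell, goal
    return abs(r1 - r2) + abs(c1 - c2)

print(relaxed_grid_heuristic((0, 0), (3, 5)))  # 8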

Examples

Two different examples of admissible heuristics apply to the fifteen puzzle problem:
The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible since the total number of moves to order the tiles correctly is at least the number of misplaced tiles. The cost to the goal is at least the Hamming distance of the puzzle.
The Manhattan distance of a puzzle is defined as:
h(n) = Σ over all tiles of distance(tile, correct position of the tile)
Consider the puzzle below in which the player wishes to move each tile such that the numbers are ordered. The Manhattan distance is an admissible heuristic in this case because every tile will have to be moved at least the number of spots in between itself and its correct position.
4₃    6₁    3₀    8₁
7₂    12₃   9₃    14₄
15₃   13₂   1₄    5₄
2₄    10₁   11₁

The subscripts show the Manhattan distance for each tile. The total Manhattan distance for the shown puzzle is:

h(n) = 3 + 1 + 0 + 1 + 2 + 3 + 3 + 4 + 3 + 2 + 4 + 4 + 4 + 1 + 1 = 36
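Both heuristics can be checked mechanically. The short Python sketch below (the flat board encoding, with 0 standing for the blank, is an assumption made for illustration) computes the Hamming and Manhattan values for the state shown above:

GOAL = [1, 2, 3, 4,
        5, 6, 7, 8,
        9, 10, 11, 12,
        13, 14, 15, 0]

STATE = [4, 6, 3, 8,
         7, 12, 9, 14,
         15, 13, 1, 5,
         2, 10, 11, 0]

def hamming(state):
    # Number of non-blank tiles that are not on their goal square.
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def manhattan(state):
    # Sum over all non-blank tiles of |row difference| + |column difference|
    # between a tile's current square and its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 4 - g // 4) + abs(i % 4 - g % 4)
    return total

print(hamming(STATE))    # 14 misplaced tiles
print(manhattan(STATE))  # 36, matching the sum of the subscripts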

Optimality guarantee

If an admissible heuristic is used in an algorithm such as A*, the algorithm will eventually find an optimal solution to the goal. A well-known and easy-to-understand example is breadth-first search, in which every move has the same cost. In the case of BFS, the heuristic evaluation function is h(n) = 0 for every node, which obviously never exceeds the actual remaining cost. With this heuristic, the search examines all possible paths without any goal-directed guidance and eventually finds an optimal solution.
As an example of why admissibility can guarantee optimality, let's say we have costs as follows (the number written above or below a node is its heuristic value h, and the number on an edge is the actual edge cost):

  0       10      0     100      0
START ----------- O ----------- GOAL
  |                              |
 0|                              |100
  |                              |
  O ------------- O ------------ O
 100      1      100     1      100
So clearly we'd start off visiting the top middle node, since its expected total cost, i.e. f(n), is 10 + 0 = 10. Then the goal would be a candidate, with f(n) equal to 10 + 100 + 0 = 110. Then we'd clearly pick the bottom nodes one after the other, followed by the updated goal, since they all have f(n) lower than the f(n) of the current goal candidate, i.e. their f(n) values are 100, 101, 102 and 102. So even though the goal was a candidate, we couldn't pick it because there were still better paths out there. This way, an admissible heuristic can ensure optimality.
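The trace above can be reproduced with a short self-contained sketch (node names, the graph encoding and the printed messages are assumptions made for this illustration):

import heapq
import itertools

# Edge costs from the diagram; B1, B2, B3 are the bottom-row nodes, left to right.
graph = {
    "START": {"TOP": 10, "B1": 0},
    "TOP":   {"GOAL": 100},
    "B1":    {"B2": 1},
    "B2":    {"B3": 1},
    "B3":    {"GOAL": 100},
    "GOAL":  {},
}
# Heuristic values written above/below each node in the diagram.
h = {"START": 0, "TOP": 0, "B1": 100, "B2": 100, "B3": 100, "GOAL": 0}

tie = itertools.count()
open_heap = [(h["START"], next(tie), 0, "START")]
best_g = {"START": 0}
while open_heap:
    f, _, g, node = heapq.heappop(open_heap)
    print("expand", node, "with f =", f)   # START 0, TOP 10, B1 100, B2 101, B3 102, GOAL 102
    if node == "GOAL":
        print("optimal cost:", g)          # 102, via the bottom route
        break
    for nxt, cost in graph[node].items():
        new_g = g + cost
        if new_g < best_g.get(nxt, float("inf")):
            best_g[nxt] = new_g
            heapq.heappush(open_heap, (new_g + h[nxt], next(tie), new_g, nxt))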
However, note that although an admissible heuristic can guarantee final optimality, it is not necessarily efficient: the weaker the heuristic (for example, h(n) = 0), the more nodes the search typically has to expand before that optimum is found.