A problem comprises:
- initial state : where the agent starts
- operators : the actions available from each state
- goal test : checks whether a state is a goal
=> Solution : a sequence of operators that takes the agent from the initial state to a goal state.
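This formulation can be sketched as a small data structure (the `Problem` class and the toy counting instance below are illustrative assumptions, not from the original):

```python
class Problem:
    """Minimal problem formulation: initial state, operators, goal test."""
    def __init__(self, initial, operators, goal_test):
        self.initial = initial        # where the agent starts
        self.operators = operators    # fn: state -> list of successor states
        self.goal_test = goal_test    # fn: state -> bool

# Toy instance: start at 0, the only operator is +1, the goal is 3.
p = Problem(0, lambda s: [s + 1], lambda s: s == 3)
print(p.goal_test(3))   # True
print(p.operators(0))   # [1]
```

A solution here would be the operator sequence [+1, +1, +1].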
- Single state ( deterministic, accessible ) : the agent knows everything about the environment, so it is easy to compute an optimal solution. E.g: chess
- Multiple state ( deterministic, inaccessible ) : the agent only partially knows the current state, so it must reason over the set of states it could be in during execution. E.g: the vacuum world without a location sensor
- Contingency ( nondeterministic, inaccessible ) : action outcomes are uncertain, so the agent must interleave search and execution
- Exploration : the state space is unknown, so the agent must learn about the environment while exploring it
The vacuum-cleaner world is the running example for all of these problem types.
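The single- vs multiple-state distinction can be sketched with the two-cell vacuum world (the encoding below is an assumption for illustration): when the agent cannot sense its location, it tracks a *set* of possible states, and a deterministic action can narrow that set.

```python
def move_right(loc):
    # Deterministic operator: from either cell, the agent ends up in the right cell.
    return 'right'

belief = {'left', 'right'}                 # multiple-state: location unknown
belief = {move_right(s) for s in belief}   # apply the operator to every possible state
print(belief)  # {'right'} -- the action collapsed the belief state to a single state
```

After one Move-Right, the agent knows exactly where it is, even though it never sensed anything.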
Each problem type spans a solution space, and most of the time we just have to search that space for the solution we need. That brings us back to search.
Most of the time, a search algorithm operates on a tree or graph, and searching is just expanding this tree branch by branch via a queuing function. Different node-expansion orders yield different traversal/search algorithms. The two most common are Depth-First Search (DFS), which treats the queue as LIFO (a stack), and Breadth-First Search (BFS), which treats it as FIFO.
Suppose our tree-traversal algorithm looks like this:
```python
def search(q, g, qf):
    """q: queue of nodes to explore (initially [start]),
    g: goal node,
    qf: queuing-fn a.k.a. expand_nodes(q, n)
    e.g.: search(['A'], 'E', expand_nodes_dfs)"""
    if not q:
        return False            # frontier exhausted: no solution
    n = q[0]                    # take the next node off the queue
    if n == g:
        return True
    return search(qf(q[1:], n), g, qf)
```
```python
def expand_nodes_dfs(q, n):
    return n.children + q   # children go to the front: LIFO / stack

def expand_nodes_bfs(q, n):
    return q + n.children   # children go to the back: FIFO / queue
```
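As a self-contained sketch of the idea (the `Node` class and the tiny tree here are illustrative assumptions, not from the original), only the queuing function changes between DFS and BFS:

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def search(q, g, qf):
    """q: frontier of nodes (initially [root]), g: goal name, qf: queuing fn."""
    if not q:
        return False
    n = q[0]
    if n.name == g:
        return True
    return search(qf(q[1:], n), g, qf)

def expand_nodes_dfs(q, n):
    return n.children + q   # children first: LIFO / stack

def expand_nodes_bfs(q, n):
    return q + n.children   # children last: FIFO / queue

# A -> (B, C); B -> (D, E)
root = Node('A', [Node('B', [Node('D'), Node('E')]), Node('C')])
print(search([root], 'E', expand_nodes_dfs))  # True
print(search([root], 'E', expand_nodes_bfs))  # True
```

Both find the goal here; they differ only in the order nodes leave the queue.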
Evaluating a search algorithm relies on 4 criteria:
- completeness : is it guaranteed to find a solution if one exists?
- time complexity : how long it runs; for a tree, measured by the number of nodes expanded
- space complexity : memory usage
- optimality : does it find the least-cost solution?
For search-algorithm complexity ( time & space ), we look at b ( maximum branching factor ), d ( depth of the shallowest solution ), and m ( maximum depth of the search tree ).
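For instance, BFS over a tree with branching factor b and solution depth d expands on the order of 1 + b + b² + … + b^d nodes in the worst case; a quick sketch (the helper name is an assumption):

```python
def bfs_nodes(b, d):
    # Worst-case nodes expanded by BFS: 1 + b + b^2 + ... + b^d
    return sum(b ** i for i in range(d + 1))

print(bfs_nodes(10, 3))  # 1111 -- so both time and space grow as O(b^d)
```

By contrast, DFS keeps only a single path plus its unexpanded siblings in memory, so its space cost is roughly O(bm) even though its worst-case time is O(b^m).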