
[Grad AI] Agents (cont.)

So we’re talking about rational agents, but in what sense do we mean “rational”?

Best? Yup, to the best of the agent’s knowledge.

= Optimal? Yes, to the best of its ability.

= Omniscient? No. It’s impossible to know everything (drawing conclusions from what you do know is more valuable), because of hardware and physics constraints. See Asimov’s The Last Question.

= Clairvoyant? No. That’s a feat humans lack, too. And what if it predicts wrong?

= Successful? No. Not always.

In other words, rational = doing the right thing

In any moment of decision, the best thing you can do is the right thing …

– Theodore Roosevelt

=> rational action = the action that maximizes the expected value of the performance measure, given the percept sequence to date
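One standard way to write that down is the AIMA-style expected-utility formulation (the symbols below are my own, not from the lecture):

```latex
% Expected-utility view of "rational action":
%   e        : percept sequence to date
%   A        : available actions
%   P(s | e) : agent's belief that the world is in state s, given e
%   U(s, a)  : performance-measure value of doing a in state s
\[
  a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s} P(s \mid e) \, U(s, a)
\]
```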

Rational agents. From the professor’s slides.

*Percept sequence to action mapping: f : P* → A

That is, given the percept sequence p seen so far, the function f returns an action.

Naturally, the mapping can be done with table lookup (a 2-D array of percept–action pairs). The problem with table lookup is scale (memory): the table needs an entry for every possible percept sequence, so it blows up as the percepts grow – on the order of |P|^t entries after a lifetime of t percepts (compare the 4^t count in the vacuum example below).

Another approach is a closed form: generating behavior (as an automaton does) instead of enumerating it by hand. This has roots in the mathematics world, where a formula generates its own world and rules, and by abiding by those rules one can (virtually) do everything. For example, an agent could set its speed by the formula f(d) = d / 10, where d is the distance from the agent to whatever is blocking its sensor. In the real world, however, not everything can be captured in a formula. Learn to live with the trade-off:

“Le mieux est l’ennemi du bien.” (“The best is the enemy of the good.”) – Voltaire
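To make the two approaches above concrete, here is a minimal Python sketch (a toy of my own, not from the lecture): a hand-enumerated percept table next to the closed-form f(d) = d / 10 rule.

```python
# 1. Table lookup: every percept we care about is enumerated by hand.
#    Fine for tiny percept spaces, but the table explodes as percepts grow.
ACTION_TABLE = {
    "obstacle_near": "turn",
    "obstacle_far": "forward",
}

def table_agent(percept: str) -> str:
    """Look the current percept up in the hand-built table."""
    return ACTION_TABLE[percept]

# 2. Closed form: the behavior is *generated* by a formula instead of
#    enumerated, echoing the f(d) = d / 10 example above.
def formula_agent(distance: float) -> float:
    """Return a speed: the closer the obstacle, the slower we go."""
    return distance / 10

print(table_agent("obstacle_near"))  # turn
print(formula_agent(25.0))           # 2.5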

With the agent’s behaviors coded up, there must be some way to measure its performance. And how about autonomy? Given only its prior knowledge as a foundation, can the agent learn to adapt when the environment changes?

…Wait a minute, how is an agent different from ordinary software? An agent is autonomous, intelligent (to some degree), acts both reactively and proactively, has social ability, cooperates with other agents, and can migrate from one system to another.

*Environment type

Accessible vs. inaccessible: whether the agent can observe the complete state of the environment.

Deterministic vs. non-deterministic (stochastic; combined with partial observability, this gives an uncertain environment): is the next state fully determined by the current state and the chosen action?

Episodic vs. sequential: actions don’t affect later episodes vs. actions that later decisions depend on (chess).

Static (doesn’t change – easy) vs. dynamic (keeps changing – hard): e.g. chess and crosswords (static) vs. taxi driving (dynamic).

Friendly vs. hostile: driving in a desert (normal conditions – no storms or anything) vs. driving in a dense Asian city.

Discrete vs. continuous: chess states (discrete) vs. taxi-driving states (continuous).

That is, EASY vs. HARD.

Environment types with examples. From the professor’s slides.
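As one way to make the taxonomy concrete, here is a small Python sketch (my own encoding, not the slide) of the dimensions above, filled in for the running examples:

```python
from dataclasses import dataclass

@dataclass
class EnvProfile:
    accessible: bool      # full state visible to the agent?
    deterministic: bool   # next state fixed by current state + action?
    episodic: bool        # episodes independent of each other?
    static: bool          # world frozen while the agent deliberates?
    discrete: bool        # finitely many percepts/actions?

chess = EnvProfile(accessible=True, deterministic=True,
                   episodic=False, static=True, discrete=True)

taxi_driving = EnvProfile(accessible=False, deterministic=False,
                          episodic=False, static=False, discrete=False)

# Chess sits at the EASY end on most dimensions; taxi driving at the HARD end.
print(chess)
print(taxi_driving)
```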

*Agent type

  • Reflex: selects an action based on the current percept only, with no memory => adding internal states helps it adapt better (the more data, the better it behaves)
Reflex Agent. Source: AIMA.
  • Goal-based: needs an explicit goal to work toward
Goal-Based Agent. Source: AIMA.
  • Utility-based: streamlines goals by some measure/priority scheme, resolving conflicting goals if any
Utility-Based Agent. Source: AIMA.
  • Learning: adapts to new changes and learns from its mistakes
Learning Agent – totally different from all of the above. Source: AIMA.

Take a look at an example of a vacuum agent performing its duty in rooms A and B. We could have the percepts P = {A-Clean, A-Dirty, B-Clean, B-Dirty} and actions A = {GoToA, GoToB, Suck}.

Size of the lookup table = 4^t × 3, where t is the lifetime of the agent (4 possible percepts per step, 3 candidate actions). Even a lifetime of t = 5 steps already gives 4^5 × 3 = 3072 entries.

The agent’s table looks something like:

{A-Clean}=>{GoToB}

{A-Dirty}=>{Suck}

{B-Clean}=>{GoToA}

{B-Dirty}=>{Suck}

….
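Here is a minimal runnable version of that table-driven agent (a sketch of my own, not the textbook’s code), keyed on the current (location, status) percept:

```python
# Percept -> action table for the two-room vacuum world.
TABLE = {
    ("A", "Clean"): "GoToB",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "GoToA",
    ("B", "Dirty"): "Suck",
}

def reflex_vacuum_agent(location: str, status: str) -> str:
    """Simple reflex agent: act on the current percept only."""
    return TABLE[(location, status)]

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("A", "Clean"))  # GoToB
```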

It is a rational agent: for any percept sequence, its performance is maximized – it sucks only when the room is dirty, and otherwise keeps checking the other room (a simple reflex agent). But this cannot avoid an infinite loop once both rooms are clean: the agent alternates between the two rooms forever. If we charge 1 point per movement, a rational agent should stop moving once there is nothing left to gain, which means keeping track of an internal state (a reflex agent with internal state – sketched below).

This only suffices if a room cannot become dirty again. If it can, we should let the agent re-check the other room after some time, with the ability to learn from its mistakes when the cost of going to the other room outweighs the cost of staying (that is, the trip was a false positive). Learning is also good when the geography is not known in advance….
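A sketch of the internal-state fix (my own toy code; note it adds a NoOp action that is not in the original action set A):

```python
class StatefulVacuumAgent:
    """Reflex agent with internal state: remembers which rooms it has
    seen clean, and stops moving once both are known clean."""

    def __init__(self):
        self.known_clean = set()  # rooms last observed clean

    def act(self, location: str, status: str) -> str:
        if status == "Dirty":
            self.known_clean.discard(location)
            return "Suck"
        self.known_clean.add(location)
        if self.known_clean >= {"A", "B"}:
            return "NoOp"          # both clean: stop paying movement cost
        return "GoToB" if location == "A" else "GoToA"

agent = StatefulVacuumAgent()
print(agent.act("A", "Dirty"))   # Suck
print(agent.act("A", "Clean"))   # GoToB
print(agent.act("B", "Clean"))   # NoOp (both rooms known clean)
```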

