Agents

Reflex agent

  • Sees something, does something → chooses an action based only on the current percept
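As a minimal sketch (the percepts, actions, and rule table are illustrative, not from the notes), a simple reflex agent is just a lookup from the current percept to an action:

```python
# A minimal simple-reflex agent: maps the current percept directly to an
# action via condition-action rules. No memory, no model of the world.
# Percept and action names are made-up illustrations.

RULES = {
    "dirty": "suck",
    "clean": "move",
}

def reflex_agent(percept):
    """Choose an action from the current percept alone."""
    return RULES[percept]

print(reflex_agent("dirty"))  # suck
```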

Reflex agent with state

  • Like a reflex agent, but also maintains an internal model of the world → keeps track of the current world state
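A sketch of the difference (the two-room world and action names are assumptions for illustration): the agent updates an internal record of what it has seen, and its choice can depend on that state, not just the latest percept:

```python
# A reflex agent with state: keeps an internal model of the world that is
# updated from each percept, so it can act on more than the latest input.
# The two-location world and action names are illustrative.

class StatefulReflexAgent:
    def __init__(self):
        self.visited = set()  # internal world model: locations seen so far

    def act(self, location, status):
        self.visited.add(location)  # update the model from the percept
        if status == "dirty":
            return "suck"
        # Use the model: keep exploring until both locations are seen.
        return "move_to_unvisited" if len(self.visited) < 2 else "stop"

agent = StatefulReflexAgent()
print(agent.act("A", "dirty"))   # suck
print(agent.act("A", "clean"))   # move_to_unvisited
print(agent.act("B", "clean"))   # stop
```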

Goal-based agent

  • Has a more abstract definition of an overall goal that drives a more complex choice of situation-action pairs
  • Models the consequences of action choices

Planning agent

  • Asks “what if?”
  • Makes decisions based on (hypothesized) consequences of actions
  • Must have a model of how the world evolves in response to actions
  • Must formulate a goal (test)
  • Considers how the world WOULD BE → reasons about what would happen next before acting
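The "what if" idea can be sketched as search over hypothesized futures: the agent uses its world model to simulate action outcomes and looks for an action sequence that passes the goal test. The grid world and action set below are assumptions for illustration:

```python
# A toy planning agent: uses a model of how actions change the world to
# search (breadth-first) for an action sequence that reaches a goal state.
# The grid world and the two actions are illustrative assumptions.
from collections import deque

ACTIONS = {"right": (1, 0), "up": (0, 1)}

def model(state, action):
    """World model: predict the next state if this action were taken."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def plan(start, goal_test):
    """Ask 'what if?' over hypothesized futures until the goal test passes."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for a in ACTIONS:
            nxt = model(state, a)  # hypothesize, don't act
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))

print(plan((0, 0), lambda s: s == (2, 1)))  # ['right', 'right', 'up']
```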

Utility-based agent

  • Uses a utility function to judge consequences → decision making aims to rationally maximize expected utility
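A sketch of utility-based choice (the actions, outcome probabilities, and utility values are made-up illustrations): score each action by the expected utility of its outcomes and pick the maximum:

```python
# Utility-based choice: among candidate actions, pick the one whose
# (possibly uncertain) outcomes maximize expected utility.
# Action names, probabilities, and utilities are illustrative.

OUTCOMES = {
    "safe_route":  [(1.0, 10)],               # list of (probability, utility)
    "risky_route": [(0.5, 30), (0.5, -20)],
}

def expected_utility(action):
    return sum(p * u for p, u in OUTCOMES[action])

def choose(actions):
    return max(actions, key=expected_utility)

print(choose(["safe_route", "risky_route"]))  # safe_route (EU 10 vs 5)
```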

To be fully rational, an agent needs a utility function.

Learning agent

  • Starts with a set of knowledge and is capable of adding knowledge by itself
  • Uses feedback loops to change future behaviour

Task environment

Define:

  1. Performance measure
  2. Environment
  3. Actuators
  4. Sensors
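These four parts are the standard PEAS description of a task environment. As an illustrative example (the vacuum-cleaner values below are assumptions, not from the notes):

```python
# PEAS description for a toy vacuum-cleaner agent, stored as plain data.
# The concrete values are illustrative assumptions.
peas = {
    "performance_measure": "amount of dirt cleaned, energy used",
    "environment": "two rooms, A and B, each possibly dirty",
    "actuators": "wheels (move left/right), suction",
    "sensors": "location sensor, dirt sensor",
}

for part, example in peas.items():
    print(f"{part}: {example}")
```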

Environment Types

  1. Observable: the agent can observe the whole problem state
  2. Deterministic: the agent makes a decision and knows exactly what will happen
  • Nondeterministic: the same action sometimes works and sometimes doesn't — the outcome is uncertain
  3. Episodic: the task divides into independent episodes, so the current decision doesn't depend on earlier ones (vs. sequential, where past actions affect the future)
  4. Static: the problem won't change while the agent deliberates (e.g., a person standing at the same location vs. a person moving around)
  5. Discrete
  6. Single agent

Questions

  1. What are the 4 parts of the agent–environment interaction model?
  • actuators
  • sensors
  • brain
  • environment
  2. What are the agent types? What feature does each agent type add for increased complexity and capability?
  • Reflex agent: choose action based on current perception
  • Reflex agent with state: can keep track of world state
  • Goal agent: have more abstract definition of overall goal that drives more complex choice of situation-action pair
  • Model agent: model the consequence of action choice
  • Planning agent: ask "what if", consider how the world would be
  • Utility agent: use utility function to judge consequences
  • Learning agent: feedback loops to change future behaviour
  3. Are reflex agents rational? Are utility-based agents rational?
  • Reflex agents: no, because rationality is defined as acting so as to maximize the result. Reflex agents just react to the current percept without that kind of deliberation.
  • Utility-based agents: yes — they choose actions to maximize expected utility.
  4. What does Learning mean in AI? What about Planning/Deduction?
  • Learning: acquiring knowledge, adapting, and improving performance
  • Planning: create a strategy to achieve a goal
  • Deduction: applying logical rules to derive conclusions from known facts
  5. What are the 4 task environment areas?
  • Performance measure
  • Environment
  • Actuators
  • Sensors
  6. What are the environment types? Which version of an environment type (yes/no) is simpler to solve?
  • Observable: can observe the whole problem
  • Deterministic: can determine exactly what happens after an action
  • Episodic: each episode is independent of previous ones
  • Static: the environment doesn't change while the agent deliberates
  • Discrete: non-continuous states, percepts, and actions
  • Single agent: only one agent acting
  • In each case the "yes" version (fully observable, deterministic, episodic, static, discrete, single agent) is simpler to solve