Agents
Reflex agent
- Sees something, does something → chooses an action based only on the current percept
Reflex agent with state
- Sees something, does something, but also models the world → keeps track of the current world state.
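A minimal sketch of the two types above, using a hypothetical vacuum-world example (the locations, statuses, and action names are assumptions, not from the notes): the simple reflex agent maps the current percept directly to an action, while the reflex agent with state also maintains an internal world model.

```python
def reflex_agent(percept):
    """Simple reflex agent: acts on the current percept only."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

class StatefulReflexAgent:
    """Reflex agent with state: also records what it has seen."""
    def __init__(self):
        self.world = {}  # internal model: location -> last known status

    def act(self, percept):
        location, status = percept
        self.world[location] = status  # update the world model first
        if status == "dirty":
            return "suck"
        return "right" if location == "A" else "left"
```

The behaviour is identical here; the point is only that the stateful agent accumulates knowledge of the world it could use for smarter choices later.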
Goal-based agent
- Has a more abstract definition of an overall goal that drives a more complex choice of situation-action pairs.
Model-based agent
- Models the consequences of action choices
Planning agent
- Ask “what if”
- Decisions based on (hypothesized) consequences of actions
- Must have a model of how the world evolves in response to actions
- Must formulate a goal (test)
- Considers how the world WOULD BE → models outcomes based on what happens next
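The planning bullets above can be sketched as a tiny "what if" search (the state names and transition model below are made-up assumptions): the agent simulates actions through a model of how the world evolves and searches for an action sequence that passes the goal test.

```python
from collections import deque

def plan(start, goal_test, transitions):
    """Breadth-first search over hypothesized action consequences."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in transitions.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no plan found

# Toy model of "how the world evolves in response to actions".
transitions = {
    "home": [("walk", "street")],
    "street": [("bus", "campus"), ("walk", "park")],
    "park": [("walk", "campus")],
}
print(plan("home", lambda s: s == "campus", transitions))  # ['walk', 'bus']
```

Note the two requirements from the notes appear explicitly: the transition model ("how the world evolves") and the goal test passed in as a function.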
Utility-based agent
- Uses a utility function to judge consequences → makes decisions that rationally maximize the expected result
To be fully rational, an agent needs a utility function.
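A minimal expected-utility chooser to make the idea concrete (the scenario and all numbers are made-up assumptions): each action is judged by the utility of its possible outcomes weighted by their probabilities, and the agent picks the maximizer.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(action_models):
    """Pick the action with the highest expected utility."""
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

action_models = {
    "take_umbrella": [(0.7, 8), (0.3, 8)],      # dry either way, slight hassle
    "leave_umbrella": [(0.7, 10), (0.3, -20)],  # great if sunny, soaked if rain
}
print(choose(action_models))  # take_umbrella
```

Here taking the umbrella has expected utility 8 versus roughly 1 for leaving it, so the rational choice is the safe one even though leaving it has the single best outcome.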
Learning agent
- Starts with a set of knowledge and is capable of adding knowledge by itself
- Feedback loops change future behaviour
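A minimal feedback loop in code (a hypothetical bandit-style sketch; the action names and learning rate are assumptions): the agent starts with initial value estimates, its "set of knowledge", and uses reward feedback to adjust them, which changes which action it prefers in the future.

```python
class LearningAgent:
    def __init__(self, actions, learning_rate=0.5):
        self.values = {a: 0.0 for a in actions}  # initial knowledge
        self.learning_rate = learning_rate

    def act(self):
        # Exploit current estimates: pick the best-valued action.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Feedback loop: nudge the estimate toward the observed reward.
        self.values[action] += self.learning_rate * (reward - self.values[action])

agent = LearningAgent(["a", "b"])
agent.learn("b", 1.0)   # feedback: action "b" paid off
print(agent.act())      # b
```

A real learning agent would also explore (occasionally try non-best actions), but the feedback loop itself is just this update step.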
Task environment
Define:
- Performance measure
- Environment
- Actuators
- Sensors
Environment Types
- Observable: the agent can observe the complete state of the environment
- Deterministic: after making a decision, the agent knows exactly what will happen
- Non-deterministic (stochastic): the same action may sometimes succeed and sometimes fail
- Episodic: experience is divided into independent episodes; the action chosen in one episode does not affect later ones (vs. sequential)
- Static: the environment does not change while the agent deliberates (e.g., a person standing at the same location vs. a person moving around)
- Discrete: percepts and actions come from a finite (non-continuous) set
- Single agent: only one agent acts in the environment
Questions
- What are the 4 parts of the agent-environment interaction model?
- actuators
- sensors
- brain
- environment
- What are the agent types? What feature does each agent type add as complexity and capability increase?
- Reflex agent: choose action based on current perception
- Reflex agent with state: can keep track of world state
- Goal agent: have more abstract definition of overall goal that drives more complex choice of situation-action pair
- Model agent: model the consequence of action choice
- Planning agent: ask "what if", consider how the world would be
- Utility agent: use utility function to judge consequences
- Learning agent: feedback loops to change future behaviour
- Are reflex agents rational? Are utility-based agents rational?
- Reflex agents: No, because rationality means acting to maximize the expected performance measure. Reflex agents just react to the current percept without that deliberation.
- Utility-based agents: Yes; they explicitly choose actions to maximize expected utility.
- What does Learning mean in AI, what does Planning/Deduction?
- Learning: acquire knowledge, adapt and improve performance
- Planning: create a strategy to achieve a goal
- Deduction: applying logical rules to derive conclusions from known facts
- What are the 4 task environment areas?
- Performance measure
- Environment
- Actuators
- Sensors
- What are the environment types? Which version of an environment type (yes/no) is simpler to solve?
- Observable: the agent can observe the whole problem state
- Deterministic: the agent can determine exactly what happens when it acts
- Episodic: experience divides into independent episodes, so each decision can be made without regard to earlier ones
- Static: the environment does not change while the agent deliberates
- Discrete: percepts and actions are non-continuous (finite)
- Single agent: only one agent acts in the environment
- In each case the "yes" version (fully observable, deterministic, episodic, static, discrete, single agent) is simpler to solve