Intelligent agent


In artificial intelligence, an intelligent agent (IA) is an autonomous entity that acts upon an environment, directing its activity towards achieving goals, using observation through sensors and consequent actuators. Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
Intelligent agents are often described schematically as an abstract functional system similar to a computer program. Researchers such as Russell & Norvig consider goal-directed behavior to be the essence of intelligence; a normative agent can be labeled with a term borrowed from economics, "rational agent". In this rational-action paradigm, an AI possesses an internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The agent also has an "objective function" that encapsulates all the AI's goals. Such an agent is designed to create and execute whatever plan will, upon completion, maximize the expected value of the objective function. A reinforcement learning agent can have a "reward function" that allows the programmers to shape the AI's desired behavior, and an evolutionary algorithm's behavior is shaped by a "fitness function". Abstract descriptions of intelligent agents are sometimes called abstract intelligent agents to distinguish them from their real-world implementations as computer systems, biological systems, or organizations. Some autonomous intelligent agents are designed to function in the absence of human intervention.
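A minimal sketch of this paradigm can make it concrete. In the code below, all names are illustrative rather than a standard API: the agent's "model" simulates outcomes of candidate plans, its "objective function" scores those outcomes, and the agent executes the plan with the highest expected score.

    import random

    def model(plan):
        # Hypothetical environment model: outcome = plan quality plus noise.
        return plan["quality"] + random.gauss(0, 1)

    def objective(outcome):
        # The agent's goals reduced to a single score.
        return outcome

    def expected_value(plan, n_samples=1000):
        # Estimate a plan's expected objective value by sampling the model.
        return sum(objective(model(plan)) for _ in range(n_samples)) / n_samples

    plans = [{"name": "cautious", "quality": 2.0},
             {"name": "bold", "quality": 3.0}]
    print(max(plans, key=expected_value)["name"])  # "bold": higher expected value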
Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations.
Intelligent agents are also closely related to software agents. In computer science, an intelligent agent is a software agent that has some intelligence; for example, autonomous programs used for operator assistance or data mining are also called "intelligent agents".

Definitions and characteristics

According to Nikola Kasabov, IA systems should exhibit the following characteristics:
  1. accommodate new problem-solving rules incrementally
  2. adapt online and in real time
  3. be able to analyze themselves in terms of behavior, error and success
  4. learn and improve through interaction with the environment (embodiment)
  5. learn quickly from large amounts of data
  6. have memory-based exemplar storage and retrieval capacities
  7. have parameters to represent short- and long-term memory, age, forgetting, etc.
Padgham & Winikoff agree that an intelligent agent is situated in an environment and responds in a timely manner to changes in the environment. However, intelligent agents must also proactively pursue goals in a flexible and robust way. Optional desiderata include that the agent be rational, and that the agent be capable of belief-desire-intention analysis. Some 20th-century definitions characterize an agent as a program that aids a user or that acts on behalf of a user. The influential textbook Artificial Intelligence: A Modern Approach (Russell & Norvig) defines an agent as "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators", and characterizes intelligence as the ability to succeed at acting in accordance with certain ideal standards for rationality.
"Intelligent agent" is also often used as a vague marketing term, sometimes synonymous with "virtual personal assistant".

Objective function

Some agents can be assigned an explicit "goal function"; an agent is considered more intelligent if it consistently takes actions that successfully maximize its programmed goal function. The "goal function" encapsulates all of the goals the agent is driven to act on; in the case of rational agents, the function also encapsulates the acceptable trade-offs between accomplishing conflicting goals. The theoretical and uncomputable AIXI design is a maximally intelligent agent in this paradigm; however, in the real world, AI is constrained by finite time and hardware resources, and scientists compete to produce algorithms that can achieve progressively higher scores on benchmark tests with real-world hardware.
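To illustrate how a goal function can encode trade-offs between conflicting goals, the sketch below scores actions as a weighted sum of two competing objectives. The weights and numbers are purely hypothetical.

    def goal_function(state, speed_weight=1.0, safety_weight=5.0):
        # Higher speed is rewarded, but risk is penalized more heavily;
        # the weights encode the acceptable trade-off between the goals.
        return speed_weight * state["speed"] - safety_weight * state["risk"]

    actions = {"accelerate": {"speed": 9.0, "risk": 1.2},
               "cruise": {"speed": 6.0, "risk": 0.3}}
    best = max(actions, key=lambda a: goal_function(actions[a]))
    print(best)  # "cruise": the safety penalty outweighs the speed gain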
Systems that are not traditionally considered agents, such as knowledge-representation systems, are sometimes subsumed into the paradigm by framing them as agents that have a goal of answering questions as accurately as possible; the concept of an "action" is here extended to encompass the "act" of giving an answer to a question. As an additional extension, mimicry-driven systems can be framed as agents that optimize a "goal function" based on how closely the AI succeeds in aping the desired behavior. In the generative adversarial networks of the 2010s, an "encoder"/"generator" component attempts to mimic and improvise human text composition. The generator attempts to maximize a function encapsulating how well it can fool an antagonistic "predictor"/"discriminator" component.
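For concreteness, this adversarial setup is usually written as a minimax value function. In the standard formulation due to Goodfellow et al. (2014), with generator G, discriminator D, data distribution p_data, and noise distribution p_z:

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The generator is driven to produce samples the discriminator cannot distinguish from real data, which is exactly the "fooling" objective described above.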
While GOFAI systems often accept an explicit goal function, the paradigm can also be applied to neural networks and to evolutionary computing. Reinforcement learning can generate intelligent agents that appear to act in ways intended to maximize a "reward function". Sometimes, rather than setting the reward function to be directly equal to the desired benchmark evaluation function, machine learning programmers will use reward shaping to initially give the machine rewards for incremental progress in learning. Yann LeCun stated in 2018 that "Most of the learning algorithms that people have come up with essentially consist of minimizing some objective function." AlphaZero chess had a simple objective function; each win counted as +1 point, and each loss counted as -1 point. An objective function for a self-driving car would have to be more complicated. Evolutionary computing can evolve intelligent agents that appear to act in ways intended to maximize a "fitness function" that influences how many descendants each agent is allowed to leave.
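A minimal sketch of reward shaping under these assumptions: the sparse win/loss reward mirrors the AlphaZero objective described above, while the shaping term and helper names are hypothetical.

    def shaped_reward(outcome, progress_delta, shaping_weight=0.01):
        # Sparse task reward, as in the win/loss objective described above.
        task_reward = {"win": 1.0, "loss": -1.0, "ongoing": 0.0}[outcome]
        # Dense shaping bonus for incremental progress (e.g. material gained),
        # used early in training to make the sparse reward easier to learn.
        return task_reward + shaping_weight * progress_delta

    print(shaped_reward("ongoing", progress_delta=3.0))  # 0.03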

Structure of agents

A simple agent program can be defined mathematically as a function f which maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, feedback element, function or constant that affects eventual actions:

    f : P* → A

where P* is the set of all possible percept sequences and A is the set of actions.
The agent function is an abstract concept, as it could incorporate various principles of decision making, such as calculating the utility of individual options, deduction over logic rules, fuzzy logic, etc.
The agent program, instead, maps every possible percept to an action.
We use the term percept to refer to the agent's perceptual inputs at any given instant. In what follows, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
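A minimal illustration of such an agent function, realized as a lookup table over the percept sequence seen so far, in the spirit of the textbook table-driven agent. The percepts and actions are invented for the example.

    percepts = []  # the percept sequence grows at every step

    table = {("dirty",): "suck",
             ("clean",): "move",
             ("clean", "dirty"): "suck"}

    def agent_function(percept):
        # Map the whole percept sequence seen so far to an action,
        # defaulting to a no-op for sequences missing from the table.
        percepts.append(percept)
        return table.get(tuple(percepts), "noop")

    print(agent_function("clean"))  # move
    print(agent_function("dirty"))  # suck

A table like this grows impractically large for real environments, which is why practical agent programs compute the mapping rather than store it.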

Architectures

Weiss (2013) defines four classes of agents:
  1. logic-based agents, in which the decision about what action to perform is made via logical deduction
  2. reactive agents, in which decision making is implemented in some form of direct mapping from situation to action
  3. belief-desire-intention agents, in which decision making depends upon the manipulation of data structures representing the beliefs, desires, and intentions of the agent
  4. layered architectures, in which decision making is realized via various software layers, each of which reasons about the environment at a different level of abstraction
Generally, an agent can be constructed by separating the body into sensors and actuators, so that it operates with a complex perception system that takes a description of the world as input for a controller and outputs commands to the actuators. However, a hierarchy of controller layers is often necessary to balance the immediate reaction desired for low-level tasks against slow reasoning about complex, high-level goals.
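A sketch of such a two-layer hierarchy, assuming an invented waypoint-following task: the reactive layer responds immediately to percepts, while the deliberative layer revises subgoals more slowly.

    def deliberative_layer(waypoints, visited):
        # Slow, high-level reasoning: pick the next unvisited waypoint.
        remaining = [w for w in waypoints if w not in visited]
        return remaining[0] if remaining else None

    def reactive_layer(percept, subgoal):
        # Fast, low-level reaction: safety overrides the current plan.
        if percept.get("obstacle"):
            return "brake"
        return "steer_towards " + str(subgoal)

    subgoal = deliberative_layer(["A", "B", "C"], visited={"A"})
    print(reactive_layer({"obstacle": False}, subgoal))  # steer_towards B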

Classes

Russell & Norvig group agents into five classes based on their degree of perceived intelligence and capability:
  1. simple reflex agents
  2. model-based reflex agents
  3. goal-based agents
  4. utility-based agents
  5. learning agents

Simple reflex agents

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: "if condition, then action".
This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.
Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments, although an agent that can randomize its actions may be able to escape them.
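A minimal simple reflex agent for the classic two-room vacuum world illustrates the condition-action rules; the action depends only on the current percept, with no percept history. Room names are illustrative.

    def simple_reflex_vacuum_agent(location, status):
        # Condition-action rules over the current percept only.
        if status == "dirty":
            return "suck"
        if location == "A":
            return "move_right"
        return "move_left"

    print(simple_reflex_vacuum_agent("A", "dirty"))  # suck
    print(simple_reflex_vacuum_agent("A", "clean"))  # move_right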

Model-based reflex agents

A model-based agent can handle partially observable environments. It maintains its current state internally, using some kind of structure that describes the part of the world that cannot be seen. This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent".
A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. The percept history and the impact of actions on the environment can be determined using the internal model. The agent then chooses an action in the same way as a reflex agent.
An agent may also use models to describe and predict the behaviors of other agents in the environment.
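A sketch of a model-based reflex agent for the same vacuum world, assuming an invented percept format: the internal model accumulates the percept history, so rules can fire on remembered, currently unobservable state.

    class ModelBasedReflexAgent:
        def __init__(self):
            self.model = {}  # believed status of each room, seen or unseen

        def act(self, percept):
            location, status = percept
            self.model[location] = status  # fold the new percept into the model
            if status == "dirty":
                self.model[location] = "clean"  # model: sucking cleans the room
                return "suck"
            # Rule applied to internal state, not just the raw percept:
            dirty = [room for room, s in self.model.items() if s == "dirty"]
            if dirty:
                return "move_to " + dirty[0]
            return "idle"

    agent = ModelBasedReflexAgent()
    print(agent.act(("A", "dirty")))  # suck
    print(agent.act(("B", "clean")))  # idle: the model recalls A is now clean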

Goal-based agents

Goal-based agents further expand on the capabilities of model-based agents by using "goal" information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
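A minimal sketch of the goal-based idea, using breadth-first search over an invented state graph to find an action sequence that reaches the goal state:

    from collections import deque

    def search_plan(start, goal, transitions):
        """Return a list of actions leading from start to goal, or None."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, plan = frontier.popleft()
            if state == goal:
                return plan
            for action, next_state in transitions.get(state, []):
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, plan + [action]))
        return None  # no action sequence reaches the goal

    transitions = {"A": [("go_B", "B")], "B": [("go_C", "C")]}
    print(search_plan("A", "C", transitions))  # ['go_B', 'go_C']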

Utility-based agents

Goal-based agents only distinguish between goal states and non-goal states. It is possible to define a measure of how desirable a particular state is. This measure can be obtained through the use of a utility function which maps a state to a measure of the utility of the state. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. The term utility can be used to describe how "happy" the agent is.
A rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes, that is, what the agent expects to derive, on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.
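A minimal worked example of this choice rule, in which the actions, probabilities, and utilities are invented:

    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs for one action
        return sum(p * u for p, u in outcomes)

    actions = {
        "risky": [(0.5, 100), (0.5, -80)],  # EU = 0.5*100 + 0.5*(-80) = 10
        "safe":  [(1.0, 15)],               # EU = 15
    }
    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # safe: 15 > 10, despite the risky action's higher upside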

Learning agents

Learning has the advantage that it allows agents to initially operate in unknown environments and to become more competent than their initial knowledge alone might allow. The most important distinction is between the "learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions.
The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future.
The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.
The last component of the learning agent is the "problem generator". It is responsible for suggesting actions that will lead to new and informative experiences.
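A structural sketch of these four components, with deliberately trivial behavior for each; all names and rules are illustrative.

    class LearningAgent:
        def __init__(self):
            self.rules = {}  # knowledge that the learning element can modify

        def performance_element(self, percept):
            # Previously "the entire agent": percepts in, actions out.
            return self.rules.get(percept, "explore")

        def critic(self, reward):
            # Feedback on how well the chosen action performed.
            return reward

        def learning_element(self, percept, action, feedback):
            # Modify the performance element so it does better next time.
            if feedback > 0:
                self.rules[percept] = action

        def problem_generator(self):
            # Suggest an action expected to yield a new, informative experience.
            return "try_something_new"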

Hierarchies of agents

To actively perform their functions, intelligent agents today are normally gathered in a hierarchical structure containing many "sub-agents". Intelligent sub-agents process and perform lower-level functions. Taken together, the intelligent agent and sub-agents create a complete system that can accomplish difficult tasks or goals with behaviors and responses that display a form of intelligence.

Applications

Intelligent agents are applied as automated online assistants, where they function to perceive the needs of customers in order to perform individualized customer service. Such an agent may essentially consist of a dialog system, an avatar, as well as an expert system providing specific expertise to the user. They can also be used to optimize the coordination of human groups online.
