Embodied cognitive science

Embodied cognitive science is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior.

It comprises three main methodologies: the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity; the formation of a common set of general principles of intelligent behavior; and the experimental use of robotic agents in controlled environments.

Traditional cognitive theory is based mainly on symbol manipulation, in which inputs are fed into a processing unit that produces an output.

For example, a human's sensory organs act as input devices, and the stimuli they obtain from the external environment are fed into the nervous system, which serves as the processing unit.
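
A minimal sketch of this input-processing-output picture (purely illustrative; the functions, symbols, and threshold below are invented for the example) caricatures the classical view as a pipeline in which raw stimuli are converted to symbols, manipulated by rules, and emitted as an action:

```python
# Hypothetical sketch of the classical symbol-manipulation view:
# sensory input -> symbolic processing -> output action.

def perceive(raw_stimuli):
    """Convert raw sensory readings into discrete symbols."""
    return ["obstacle_ahead" if reading < 0.5 else "path_clear"
            for reading in raw_stimuli]

def process(symbols):
    """Apply rules to the symbols (the 'processing unit')."""
    if "obstacle_ahead" in symbols:
        return "turn_left"
    return "move_forward"

def act(command):
    """Emit the chosen output action."""
    print(f"executing: {command}")

# Stimuli from the environment are fed through the pipeline in sequence.
act(process(perceive([0.9, 0.4, 0.8])))
```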

Embodied cognitive science rejects this input-output picture, chiefly because of the problems presented by the Homunculus argument, which concluded that semantic meaning could not be derived from symbols without some kind of inner interpretation.[6]: 340

The first aspect of embodied cognition examines the role of the physical body, particularly how its properties affect its ability to think.

Depth perception, for instance, can be better explained under the embodied approach due to the sheer complexity of the action.

Depth perception requires that the brain detect the disparity between the two retinal images, which arises from the horizontal separation of the eyes.
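
A standard relation from the geometry of stereopsis (not taken from this article, and simplified to a pinhole model) gives a sense of what the body contributes: for two eyes with focal length f separated by an interocular baseline b, a point at depth Z produces retinal images whose horizontal positions differ by the binocular disparity

```latex
d = x_L - x_R = \frac{f\,b}{Z},
\qquad\text{and therefore}\qquad
Z = \frac{f\,b}{d}.
```

Smaller disparities correspond to more distant points, and the recoverable depth depends directly on a bodily property, the separation b between the two eyes.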

Echoing functionalism in the philosophy of mind, this point claims that mental states are individuated by their role in a much larger system.

Examples include leaving one's car keys in a familiar place so they are not forgotten, or using landmarks to navigate an unfamiliar city.

The tuna (genus Thunnus) long baffled conventional biologists with its remarkable ability to accelerate quickly and attain great speeds.

The bluefin tuna is able to exploit its local environment by finding naturally occurring currents to increase its speed.

The tuna also uses its own physical body to this end, employing its tail fin to create the vortices and pressure it needs to accelerate and maintain high speeds.

An embodied approach makes it easier to see that, in order for a robotic counterpart of the tuna to function, it must be able to exploit its whole system, body and environment alike, to the fullest.

That is, the robot's systems should be seen as having dynamic characteristics, as opposed to the traditional view of the controller as merely a command center that executes actions.

In other words, on the traditional view, vision passively re-creates the perceivable external world as an inner model so that the faculties of reason can then be applied to it introspectively.

Clark points to animate vision as an example of embodiment, because it uses both biological and local environmental cues to create an active, intelligent process.

Therefore, vision should not be seen as a passive system but rather as an active retrieval device that intelligently uses sensory information and local environmental cues to perform specific real-world actions.
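
A toy sketch of such an active strategy (entirely hypothetical; the grid world, salience map, and foveal window below are invented for illustration) samples only a small window around the current gaze and saccades toward whatever looks most task-relevant, rather than modelling the whole scene at once:

```python
import numpy as np

# Toy active-vision loop: rather than building a complete inner model of the
# scene, the agent samples a small foveal window around its current gaze and
# saccades toward the most task-relevant thing it can currently see.

target = np.array([37, 12])                             # location the task cares about
ys, xs = np.mgrid[0:50, 0:50]
salience = -np.hypot(ys - target[0], xs - target[1])    # higher = more relevant

gaze = np.array([25, 25])                               # start looking at the centre
for step in range(200):
    r, c = gaze
    fovea = salience[max(r - 3, 0):r + 4, max(c - 3, 0):c + 4]   # tiny window
    dr, dc = np.unravel_index(fovea.argmax(), fovea.shape)
    gaze = np.array([max(r - 3, 0) + dr, max(c - 3, 0) + dc])    # saccade
    if np.array_equal(gaze, target):
        print(f"target fixated after {step + 1} saccades")
        break
```

The point of the sketch is only that the information-gathering movements themselves do much of the work that a passive, full-scene reconstruction would otherwise have to do.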

Inspired by the work of the American psychologist James J. Gibson, this next example emphasizes the importance of action-relevant sensory information, bodily movement, and local environment cues.

By contrast, the former method, in which the agent computes the arc of the ball before acting, is linear, since it follows a fixed sequence of perception, calculation, and then action.

Ultimately, Clark does not expressly claim that this is certain, but he does observe that the affordance approach can explain adaptive response satisfactorily.
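
The contrast can be sketched in code (an illustrative toy only; the constant-rate rule in the second function is the "optical acceleration cancellation" heuristic often discussed alongside this example, and every name and number here is invented for the sketch):

```python
# Illustrative toy contrasting two strategies for catching a fly ball.

def catch_by_computing_the_arc(x0, y0, vx, vy, g=9.81):
    """Linear perceive-calculate-act strategy: estimate the ball's launch
    state once, compute the full arc, then run to the predicted landing
    point and wait."""
    t_flight = (vy + (vy**2 + 2 * g * y0) ** 0.5) / g   # time until y = 0
    return x0 + vx * t_flight                           # predicted landing x

def catch_by_coupled_perception_action(optical_acceleration, step=0.5):
    """Affordance-style strategy: no trajectory is ever computed.  The
    fielder continuously adjusts position so that the ball's optical
    elevation keeps rising at a constant rate, which tends to bring the
    fielder to the right place at the right time."""
    if optical_acceleration > 0:     # image speeding up: ball lands behind
        return -step                 # move backward
    if optical_acceleration < 0:     # image slowing down: ball lands in front
        return +step                 # move forward
    return 0.0                       # constant rate: stay on current course

# Example usage (numbers are arbitrary):
print(catch_by_computing_the_arc(x0=0.0, y0=1.5, vx=12.0, vy=15.0))
print(catch_by_coupled_perception_action(optical_acceleration=-0.2))
```

The first function must solve the whole problem before the agent moves at all; the second never represents the trajectory and instead lets the ongoing perception-action loop absorb the computation.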

The most dramatic difference from traditional approaches is that these principles apply only to situated robotic agents in the real world, a domain where traditional artificial intelligence showed the least promise.

Principle of cheap design and redundancy: Pfeifer realized that implicit assumptions made by engineers often substantially influence a control architecture's complexity.[8]: 448

It has been suggested that the fusion and transfer of knowledge between sensory modalities can serve as a basis for reducing the amount of sense data taken in from the real world.

Principle of sensory-motor coordination: Ideally, internal mechanisms in an agent should give rise to capacities such as memory and decision-making in an emergent fashion, rather than being prescriptively programmed from the beginning.

The motto is: build fewer assumptions into the agent's controller now, so that learning can be more robust and idiosyncratic in the future.

In other words, extra complexity added to the brain of a robot whose body and sensors remain simple will not produce any discernible change in its behavior.
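
These principles can be illustrated with a Braitenberg-style controller (a standard toy example, not Pfeifer's own architecture; every value below is invented for the sketch): two light sensors wired almost directly to two wheels produce seemingly purposeful light-seeking behavior, with the structure of the environment doing much of the work that a complex internal model would otherwise do.

```python
import math

# Toy Braitenberg-style controller: two light sensors coupled almost directly
# to two wheels yield light-seeking behaviour with no internal model, map, or
# planner, illustrating the cheap-design motto.

LIGHT = (4.0, 3.0)                        # position of a light source
x, y, heading = 0.0, 0.0, 0.0             # robot pose: position and orientation

def intensity(px, py):
    """Perceived brightness falls off with distance to the source."""
    return 1.0 / (1.0 + math.hypot(px - LIGHT[0], py - LIGHT[1]))

for step in range(300):
    # Two light sensors mounted roughly 30 degrees left and right of heading.
    left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))

    # Crossed excitatory wiring: each sensor drives the opposite wheel, so the
    # robot turns toward the brighter side and speeds up as the light nears.
    left_wheel, right_wheel = right, left
    heading += (right_wheel - left_wheel) * 3.0          # differential steering
    speed = 0.2 * (left_wheel + right_wheel)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

    if math.hypot(x - LIGHT[0], y - LIGHT[1]) < 0.6:
        print(f"reached the light after {step + 1} steps")
        break

print(f"final distance to light: {math.hypot(x - LIGHT[0], y - LIGHT[1]):.2f}")
```

Almost nothing is assumed or represented inside this agent, yet coherent goal-directed behavior emerges from the coupling of body, controller, and environment.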