Human–robot interaction

[3] Although robots in the human–robot interaction field initially required some human intervention to function, research has advanced to the point that fully autonomous systems are now far more common than in the early 2000s.

[5] Anthropomorphic robots (machines which imitate human body structure) are better described by the biomimetics field, but overlap with HRI in many research applications.

[7] Future research therefore spans a wide range of fields, with much of it focused on assistive robotics, robot-assisted search and rescue, and space exploration.

[8] Robots are artificial agents with capacities for perception and action in the physical world, which researchers often refer to as the workspace.

Their use is widespread in factories, but nowadays they also appear in the most technologically advanced societies in such critical domains as search and rescue, military combat, mine and bomb detection, scientific exploration, law enforcement, entertainment, and hospital care.

For example, with the rapid rise of personal fabrication machines such as desktop 3D printers and laser cutters entering our homes, scenarios may arise in which humans and robots collaboratively share control, coordinate, and achieve tasks together.

For effective human–humanoid robot interaction,[11] numerous communication skills[12] and related features should be implemented in the design of such artificial agents/systems.

Research on sensing components and software, led by Microsoft, provides useful results for extracting human kinematics (see Kinect).

By combining the information inferred from proprioception, external sensors, and speech, the human's position and state (e.g., standing or seated) can be estimated.
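As an illustration, the minimal sketch below fuses joint heights from a Kinect-style skeletal tracker with an optional spoken cue to classify posture; the joint names, thresholds, and override rule are hypothetical, not taken from any cited system.

```python
def classify_posture(joints, speech_hint=None):
    """Toy fusion sketch: infer 'standing' vs 'seated' from skeletal
    joint heights (metres above the floor), optionally overridden by
    a speech cue. Joint names and thresholds are illustrative only."""
    if speech_hint in ("standing", "seated"):
        # An explicit spoken statement of state takes priority.
        return speech_hint
    hip = joints.get("hip_center", 0.0)
    head = joints.get("head", 0.0)
    # A seated person's hips sit much lower relative to head height.
    return "standing" if hip > 0.45 * head else "seated"

# Example: a roughly 1.7 m person standing, then the same person seated.
print(classify_posture({"head": 1.70, "hip_center": 0.95}))  # standing
print(classify_posture({"head": 1.20, "hip_center": 0.45}))  # seated
```

In a real system the thresholds would be learned from data and the speech channel would come from a recognizer rather than a string hint.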

For instance, neural-network architectures and learning algorithms have been developed that can be applied to various natural-language processing tasks, including part-of-speech tagging, chunking, named-entity recognition, and semantic role labeling.
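The cited work learns a shared representation with neural networks; as a stdlib-only stand-in, the toy sketch below shows only the multi-task structure of that idea: one shared feature extractor feeding a lightweight head per task. The rules themselves are hand-written and deliberately crude.

```python
import re

def shared_features(tokens, i):
    """Shared feature extraction used by both task heads (the cited
    work learns such shared representations; this is a hand-written
    stand-in for illustration)."""
    w = tokens[i]
    return {
        "capitalized": w[0].isupper(),
        "sentence_initial": i == 0,
        "suffix2": w[-2:].lower(),
        "has_digit": bool(re.search(r"\d", w)),
    }

def pos_head(f):
    # Tiny rule-based "head" for part-of-speech tagging.
    if f["suffix2"] in ("ed", "es", "ng"):
        return "VERB"
    if f["has_digit"]:
        return "NUM"
    return "NOUN"

def ner_head(f):
    # Tiny rule-based "head" for named-entity recognition.
    return "ENTITY" if f["capitalized"] and not f["sentence_initial"] else "O"

def tag(sentence):
    tokens = sentence.split()
    feats = [shared_features(tokens, i) for i in range(len(tokens))]
    return [(t, pos_head(f), ner_head(f)) for t, f in zip(tokens, feats)]

print(tag("Kismet greeted Cynthia"))
# [('Kismet', 'NOUN', 'O'), ('greeted', 'VERB', 'O'), ('Cynthia', 'NOUN', 'ENTITY')]
```

In the neural setting, `shared_features` corresponds to a learned embedding layer and each head to a task-specific output layer trained jointly.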

[13] Motion planning in dynamic environments is a challenge that can currently be solved only for robots with 3 to 10 degrees of freedom.

[15] Further, the presence of a human operator was felt more strongly when tested with an android or humanoid telepresence robot than with normal video communication through a monitor.

A common approach to programming social cues into robots is to first study human–human behaviors and then transfer that learning to robots.

For example, the authors examined the task of driving together by separating the responsibilities of acceleration and braking (i.e., one person is responsible for accelerating and the other for braking); the study revealed that pairs reached the same level of performance as individuals only when they received feedback about the timing of each other's actions.

[28] Most recently, researchers have studied a system that automatically distributes assembly tasks among co-located workers to improve co-ordination.
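The cited system's distribution algorithm is not described here; as a purely hypothetical sketch, one simple strategy is greedy load balancing, assigning each task (longest first) to whichever worker currently has the least total work.

```python
import heapq

def assign_tasks(tasks, workers):
    """Toy greedy load balancer: assign (name, duration) tasks to the
    currently least-loaded worker, processing longer tasks first.
    This is one illustrative strategy, not the cited system's method."""
    heap = [(0.0, w) for w in workers]  # (current load, worker)
    heapq.heapify(heap)
    schedule = {w: [] for w in workers}
    for task, dur in sorted(tasks, key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)
        schedule[w].append(task)
        heapq.heappush(heap, (load + dur, w))
    return schedule

# Example: three assembly steps split between a person and a robot.
print(assign_tasks([("bolt", 3.0), ("weld", 2.0), ("inspect", 2.0)],
                   ["alice", "robot"]))
# {'alice': ['bolt'], 'robot': ['weld', 'inspect']}
```

Real co-located assembly systems would also account for worker skills, tool availability, and precedence constraints between steps.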

[30][31][32][33][34] The application areas of human–robot interaction include robotic technologies that are used by humans for industry, medicine, and companionship, among other purposes.

[35] However, there are persistent concerns about the safety of human–robot collaboration, since industrial robots can move heavy objects and operate dangerous, sharp tools quickly and with force.

This type of robot would aid stroke survivors or individuals with neurological impairments in recovering their hand and finger movements.

[37] Nursing robots aim to assist elderly people who may have experienced a decline in physical and cognitive function and, consequently, developed psychosocial issues.

[43] In addition, the research could not demonstrate a consistent positive effect that could be considered evidence-based practice (EBP) on the basis of systematic clinical evaluation.

[49] The collaboration between rovers, UAVs, and humans enables leveraging capabilities from all sides and optimizes task performance.

[50] Bartneck and Okada[51] suggest that a robotic user interface can be described by the following four properties:

The International Conference on Future Applications of AI, Sensors, and Robotics in Society explores state-of-the-art research, highlighting future challenges as well as the hidden potential behind the technologies.

Accepted contributions to this conference are published annually in a special edition of the Journal of Future Robot Life.

Since that initial flurry of academic activity in this field, the subject has grown significantly in breadth and worldwide interest.

After a gap until 2014, the conferences were renamed the "International Congress on Love and Sex with Robots"; they have taken place at the University of Madeira in 2014, in London in 2016 and 2017, and in Brussels in 2019.

The past few years have also witnessed a strong upsurge of interest by way of increased coverage of the subject in the print media, TV documentaries and feature films, as well as within the academic community.

This symposium is organized in collaboration with the Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Kismet can produce a range of facial expressions.
This Nao robot is often used for HRI research and other HRI applications.
Sawyer, an industrial collaborative robot, working alongside humans on the factory floor.
Researchers from the University of Texas demonstrated a rehabilitation robot that helps with hand movements.
Paro, a therapeutic robot intended for use in hospitals and nursing homes.
An exhibition at the Science Museum, London, demonstrating robot toys for children with autism, in hopes of helping autistic children pick up social cues from facial expressions.[42]
This drone is an example of a UAV that could be used, for instance, to locate a missing person in the mountains.
The "Moonwalk" project aims to simulate a crewed mission to Mars and to test robot–astronaut cooperation in an analogue environment.