In information and communication technologies, context awareness refers to the capability of taking into account the situation of entities,[1] which may be users or devices but are not limited to these.
Kaltz et al. (2005)[4] identified the categories user&role, process&task, location, time and device to cover a broad variety of mobile and web scenarios.
A classical understanding of context in business processes derives from the definition of AAA applications,[5] with the three categories authentication, authorization, and accounting, complemented by location and time as stated above.
While the computer science community initially perceived context as a matter of user location, as Dey discusses,[1] in recent years this notion has come to be considered not simply as a state, but as part of a process in which users are involved; thus, sophisticated and general context models have been proposed (see survey[8]) to support context-aware applications that use them to (a) adapt interfaces, (b) tailor the set of application-relevant data, (c) increase the precision of information retrieval, (d) discover services, (e) make user interaction implicit, or (f) build smart environments.
Schmidt, Beigl and Gellersen[11] define human factors and physical environment as two important aspects relating to computer science.
Perera, Zaslavsky, Christen, and Georgakopoulos[14] have performed a comprehensive survey on context-aware computing from an Internet of Things perspective, reviewing over 50 leading projects in the field.
Further, Perera et al. have also surveyed a large number of industrial products in the existing IoT marketplace from a context-aware computing perspective.[15] Their survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm.
Likewise, context related to physical environment is structured into three categories: location (absolute position, relative position, co-location), infrastructure (surrounding resources for computation, communication, task performance), and physical conditions (noise, light, pressure, air quality).
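The three-category structure above can be made concrete as a data model. The following is a minimal sketch, assuming illustrative class and field names of my own choosing (they are not taken from any standard context-modeling API):

```python
from dataclasses import dataclass, field

# Illustrative model of the three physical-environment context
# categories: location, infrastructure, and physical conditions.
# All names and units here are hypothetical examples.

@dataclass
class Location:
    absolute_position: tuple          # e.g. (latitude, longitude)
    relative_position: str            # e.g. "Level 2, near the entrance"
    co_located_with: list = field(default_factory=list)  # nearby entities

@dataclass
class Infrastructure:
    computation: list = field(default_factory=list)    # surrounding compute resources
    communication: list = field(default_factory=list)  # e.g. ["wifi", "bluetooth"]
    task_resources: list = field(default_factory=list) # resources for task performance

@dataclass
class PhysicalConditions:
    noise_db: float
    light_lux: float
    pressure_hpa: float
    air_quality_index: int

@dataclass
class PhysicalEnvironmentContext:
    location: Location
    infrastructure: Infrastructure
    conditions: PhysicalConditions

# Example instance for a single device
ctx = PhysicalEnvironmentContext(
    location=Location((51.5076, -0.0994), "Level 2, near the entrance", ["device-42"]),
    infrastructure=Infrastructure(communication=["wifi"]),
    conditions=PhysicalConditions(noise_db=55.0, light_lux=300.0,
                                  pressure_hpa=1013.2, air_quality_index=21),
)
print(ctx.location.co_located_with)
```

A context-aware application would populate such a structure from sensors and use it, for example, to decide which co-located devices to offer a service to.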
Since the early 1990s, researchers have developed a large number of software and hardware systems that can collect contextual information (e.g., location, video feeds, away status messages) from users.
This information is then openly shared with other users, thereby improving their situational awareness, and allowing them to identify natural opportunities to interact with each other.
More recently, however, there has been a growing body of work that demonstrates how this technique can also be applied to groups of friends or family members to help keep them apprised of each other's activities.
In the Thunderwire system,[28] researchers developed an audio-only media space that allowed friends to share raw audio from their mobile devices' microphones.
In the Hubbub[31] and Conchat[32] systems, researchers developed instant messenger applications that monitored whether the user was active or idle using software and environmental sensors, respectively.
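The software-based approach to presence detection typically tracks the time since the user's last input event. The following is a minimal sketch in that spirit, not the actual Hubbub implementation; the class name and the five-minute threshold are illustrative assumptions:

```python
import time

IDLE_THRESHOLD_S = 300  # hypothetical: 5 minutes without input counts as idle

class PresenceMonitor:
    """Track the time of the last input event and report active/idle.
    A clock function is injected so the logic can be tested deterministically."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_input = now()

    def record_input(self):
        # Called on every keystroke or mouse event.
        self._last_input = self._now()

    def status(self):
        elapsed = self._now() - self._last_input
        return "idle" if elapsed > IDLE_THRESHOLD_S else "active"

# Demonstration with a fake clock instead of real wall time
t = [0.0]
mon = PresenceMonitor(now=lambda: t[0])
mon.record_input()
t[0] = 400.0                 # simulate 400 s without input
print(mon.status())          # 400 s > 300 s threshold, so "idle"
```

The environmental-sensor approach differs only in the event source: instead of input events, signals such as motion or sound in the room would call `record_input`.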
This visualization is then provided to the user's adult children so that they can "check up" on their parents without explicitly bothering them, thereby allowing the parents to "age in place."
However, attempts to support staff with such approaches have been hampered, to the point of failed acceptance, by the need to look up patient identities, order lists, and work schedules for each new event.
By blending real and virtual elements and enabling users to physically interact with their surroundings during play, such games can fully involve players and provide a better gaming experience.[35]

Museums and archaeological sites sometimes provide multimedia mobile devices as an alternative to the conventional audio guide (see e.g. the Tate Modern in London).