Social tuning theory describes the process whereby people adopt another person's attitudes or opinions regarding a particular subject.
The study of this occurrence began in 1902 when Charles Cooley coined the term "looking-glass self", stating that people see themselves and their own social world through the eyes of others.
In 1934, George Herbert Mead argued that not only do individuals shape their self-concepts according to the perspectives of others, but that people's views of themselves are also continually maintained according to these adopted ideas.[1][2] In 2006, Sinclair and Huntsinger explored why people change their beliefs and attitudes in order to get along with others and feel accepted.
They used two hypotheses originally proposed by Hardin and Conley in 2001: the "Affiliative Social-Tuning Hypothesis" and the "Domain Relevance Hypothesis".[4] The first holds that the desire to get along with and feel accepted by another person leads individuals to shift their attitudes toward that person's. The second explains that "when confronted with multiple applicable views on which to construct a shared understanding with another person, an individual will choose to social tune toward only those views that will lead to the development of the most precise shared understanding with the person".
This study demonstrates that when individuals do not already hold strong beliefs, they are more likely to seek knowledge from those around them and are therefore more likely to engage in social tuning.[5] Lun's experiment suggests that social tuning is especially likely when people are seeking knowledge on a particular subject. In this case, participants who did not hold strong opinions about prejudice, and thus presumably had less knowledge of the subject, molded their opinions to match the information given by the experimenter, who wore a shirt bearing the word "ERACISM"; as a result, they demonstrated stronger egalitarian views than they had held when they first arrived at the experiment. Lun's experiment thus shows how social tuning operates in such a process: people with less knowledge are more likely to mold their beliefs to those of others.
In the high affiliative-motivation condition, the experimenter was friendly and amiable: she offered candy at the beginning of the study and spoke enthusiastically about the experiment.[6] Curtis Hardin, co-author of "Shared Reality, System Justification, and the Relational Basis of Ideological Beliefs", has performed numerous experiments on social tuning across a wide variety of attitude domains.
In a third study, people became more anti-black when they were included (as opposed to excluded) in a game played with ostensible racists.
As time passed, communicators' reliance on their message as a source of information about the target increased, and their memory of the target was altered accordingly.
As a result, it is not only highly unlikely that a subject would openly disagree with these views; the subject is likely to adopt them and proclaim them as genuinely his or her own.
For example, Michael Inzlicht coined the term "threatening environments", which refers to situations in which individuals perceive that they are being "devalued, stigmatized, or discriminated against" by a non-stereotyped group.[1] On the other hand, research has shown that social tuning to the ideas of one's ingroup, rather than one's outgroup, can often lead to more damaging results.[7]
Therefore, a group member who holds a negative stereotype of himself and his own group can be more dangerous to fellow members than an outsider who shares the same views.[1] Research has also been conducted on how an individual from a stereotyped group can best avoid the dangers of negative social tuning toward an outgroup's views.
As Sinclair suggests, "members of stigmatized groups need to be careful with whom they develop relationships", and thus they "can reduce the likelihood of negative social tuning by remaining interpersonally distant from those with stereotypical views".