This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human–computer interaction.
In these programs, the computer is no longer an agent of its user, but instead, a coordinator aggregating efforts of many human evaluators.
The concept of the automatic Turing test, pioneered by Moni Naor (1996),[13] is another precursor of human-based computation.
Finally, the human-based genetic algorithm (HBGA)[14] encourages human participation in multiple roles, allowing people to contribute innovative solutions as well as evaluate them. Thus, HBGA obviated the need for a fixed representational scheme, which was a limiting factor of both standard and interactive evolutionary computation (EC).
These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg.[16]
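The HBGA process described above can be sketched as a simple loop. This is a hypothetical illustration, not the published algorithm: the functions `human_innovate`, `human_recombine`, and `human_select` stand in for requests that a real system would dispatch to human participants.

```python
import random

def human_innovate():
    # Stand-in for a human contributing a brand-new free-form solution.
    return random.choice(["short slogan", "catchy tagline", "plain title"])

def human_recombine(a, b):
    # Stand-in for a human merging ideas from two existing solutions.
    return a.split()[0] + " " + b.split()[-1]

def human_select(population, k):
    # Stand-in for humans ranking solutions; here we fake a
    # preference for shorter text.
    return sorted(population, key=len)[:k]

def hbga(generations=5, pop_size=6):
    # Solutions are free-form strings: no fixed representational
    # scheme is imposed, which is the point of HBGA.
    population = [human_innovate() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = human_select(population, k=pop_size // 2)
        children = [
            human_recombine(random.choice(survivors), random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return human_select(population, k=1)[0]
```

Because humans supply innovation, recombination, and selection, the computer's role here reduces to bookkeeping and coordination, matching the "social organization coordinated by a computer" view.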
Kosorukoff (2000) proposed a way to describe division of labor in computation that groups human-based methods into three classes.
The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes.
Classes of human-based computation from this table can be referred to by two-letter abbreviations: HC, CH, HH, where the first letter indicates the agent performing innovation and the second the agent performing selection.
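The three classes can be written down as a small lookup, assuming the convention that the first letter names the agent of innovation and the second the agent of selection (H for human, C for computer):

```python
# Hypothetical encoding of Kosorukoff's division-of-labor classes.
# Each abbreviation maps to (innovation agent, selection agent).
CLASSES = {
    "HC": ("human", "computer"),   # humans innovate, a computer selects
    "CH": ("computer", "human"),   # e.g. interactive evolutionary computation
    "HH": ("human", "human"),      # e.g. human-based genetic algorithms
}

def describe(abbrev):
    # Expand a two-letter class code into a readable description.
    innovation, selection = CLASSES[abbrev]
    return f"innovation by {innovation}, selection by {selection}"
```

The purely computational class (CC, e.g. a standard genetic algorithm) is omitted, since it involves no human role.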
Unlike mechanical parts, most people find it difficult to adapt to the fixed roles that best fit the organization.
Responding to workers' concerns is difficult to scale to the employment levels enabled by human-computation microwork platforms.
Human-based computation has been criticized as exploitative and deceptive, with the potential to undermine collective action.[47] The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g., CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g., tagging faces on Facebook), (4) information mining, and (5) click-work (such as on Amazon Mechanical Turk).