[1] Summarised in the slogan "We can know more than we can tell", Polanyi's paradox describes the cognitive phenomenon that there exist many tasks which we, human beings, understand intuitively how to perform yet cannot verbalize the rules or procedures behind them.
For example, native speakers tacitly acquire their language in early childhood without consciously studying specific grammar rules (explicit knowledge), but through extensive exposure to day-to-day communication.
[2] Daily activities based on tacit knowledge include recognizing a face, driving a car, riding a bike, writing a persuasive paragraph, and developing a hypothesis to explain a poorly understood phenomenon.
Arguing against the then-dominant empiricist view that minds and experiences are reducible to sense data and collections of rules, Polanyi advocated a post-positivist approach that recognizes human knowledge often exceeds what can be explicitly expressed.
[9] Polanyi's observation has deep implications for the AI field, since the paradox he identified, that "our tacit knowledge of how the world works often exceeds our explicit understanding", accounts for many of the challenges facing computerization and automation over the past five decades.
Despite the exponential growth in computational resources and the relentless pace of automation since the 1990s, Autor argues, Polanyi's paradox impedes modern algorithms from replacing human labor in a range of skilled jobs.
[1] Although contemporary computer science strives to overcome Polanyi's paradox, the ever-changing, unstructured nature of some activities currently presents formidable challenges for automation.
Instead of relying on programmers' algorithms to instruct them in human knowledge, computer systems can now learn tacit rules on their own from context, abundant data, and applied statistics.
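This shift can be illustrated with a minimal, hypothetical sketch (not drawn from the source): rather than a programmer hand-coding an explicit rule, a simple learner infers a decision boundary statistically from labeled examples alone. The function names and data here are invented for illustration.

```python
def learn_threshold(examples):
    """Learn a 1-D threshold classifier from (value, label) pairs.

    The 'rule' (the threshold) is never written by the programmer;
    it is estimated as the midpoint between the two class means.
    """
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Labeled observations only -- no explicit rule is supplied.
data = [(0.9, 0), (1.1, 0), (1.3, 0), (2.7, 1), (3.0, 1), (3.4, 1)]
threshold = learn_threshold(data)

def classify(x, t=threshold):
    # Apply the learned rule to a new observation.
    return 1 if x >= t else 0
```

Modern machine-learning systems extend this idea to vastly higher-dimensional data, but the principle is the same: the rule emerges from statistics over examples rather than from explicit programming.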
[13] The AlphaGo program, built by the Google subsidiary DeepMind, is an example of how advances in AI have allowed mindless machines to perform very well at tasks based on tacit knowledge.
[3] On the other hand, as Carr argues, the assumption that computers must reproduce the tacit knowledge humans would apply to perform complicated tasks is itself open to doubt.
[14] Jerry Kaplan, a Silicon Valley entrepreneur and AI expert, also illustrates this point in his book Humans Need Not Apply by discussing four resources and capabilities required to accomplish any given task: awareness, energy, reasoning and means.
For example, data provided by a wide network of sensors enable AI systems to perceive various aspects of the environment and respond instantly in chaotic, complex real-world situations (awareness), while orders and signals for actuating devices can be centralised and managed in server clusters or in the 'cloud' (reasoning).
[16] Moravec's paradox holds that, compared with sophisticated tasks demanding high-level reasoning, it is harder for computers to master the low-level physical and cognitive skills that are natural and easy for humans to perform.
[18] Corresponding to David Autor's discussion of job polarization, Pinker maintains that the appearance of the new generation of intelligent machines would place stock analysts, petrochemical engineers, and parole board members in danger of being replaced.