Generative artificial intelligence

Generative AI has uses across a wide range of industries, including software development, healthcare, finance, entertainment, customer service,[15] sales and marketing,[16] art, writing,[17] fashion,[18] and product design.

However, concerns have been raised about the potential misuse of generative AI, such as cybercrime, the use of fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs.

Since the inception of artificial intelligence as a field, researchers have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have been explored in myth, fiction, and philosophy since antiquity.

The concept of automated art dates back at least to the automata of ancient Greek civilization, in which inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music.

The academic discipline of artificial intelligence was established at a research workshop held at Dartmouth College in 1956 and has experienced several waves of advancement and optimism in the decades since.

Beginning in the late 2000s, the emergence of deep learning drove progress and research in image classification, speech recognition, natural language processing and other tasks.

15.ai, a free web application created by an anonymous MIT researcher and released in March 2020, could generate convincing character voices using minimal training data.

A team from Microsoft Research controversially argued that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

The website gained widespread attention for its ability to generate emotionally expressive speech for various fictional characters, though it was later taken offline in 2022 due to copyright concerns.

For example, UniPi from Google Research uses prompts like "pick up blue bowl" or "wipe plate with yellow sponge" to control movements of a robot arm.
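
UniPi's actual pipeline generates a video of the instructed task and then infers the arm actions from that video; the sketch below only illustrates the general shape of a text-conditioned robot policy, with hypothetical class and function names that are not UniPi's real API.

```python
import numpy as np

# Hypothetical sketch of a text-conditioned robot policy: a natural-language instruction
# such as "pick up blue bowl" plus the current camera image is mapped to a low-level
# arm command. Names are illustrative only; this is not UniPi's actual interface.
class TextConditionedPolicy:
    def __init__(self, model):
        self.model = model  # stand-in for a trained generative model

    def act(self, instruction: str, observation: np.ndarray) -> np.ndarray:
        """Return one 7-DoF command (x, y, z, roll, pitch, yaw, gripper) for the arm."""
        # In UniPi-style systems, the model first "imagines" a video of the task being
        # performed and an inverse-dynamics model extracts the actions; here we simply
        # delegate to a stub so the sketch runs end to end.
        return self.model.predict(instruction, observation)

class DummyModel:
    def predict(self, instruction, observation):
        return np.zeros(7)  # a no-op command

policy = TextConditionedPolicy(DummyModel())
action = policy.act("pick up blue bowl", observation=np.zeros((64, 64, 3)))
```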

To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products.
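
As a concrete illustration of targeting such accelerators, the snippet below is a minimal sketch using the PyTorch library: it selects a CUDA device when an NVIDIA (or ROCm-built AMD) GPU is present, falls back to the Metal "mps" backend on Apple silicon, and otherwise uses the CPU. (The Neural Engine itself is normally reached through Apple's Core ML rather than PyTorch.)

```python
import torch

# Pick the fastest accelerator available, falling back to the CPU.
if torch.cuda.is_available():             # NVIDIA GPUs (or AMD GPUs on ROCm builds)
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple-silicon GPU via the Metal backend
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(4096, 4096).to(device)  # placeholder for a generative model's layers
x = torch.randn(1, 4096, device=device)
y = model(x)  # the forward pass now runs on the selected accelerator
```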

Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU).
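
A back-of-the-envelope calculation shows why such models need datacenter hardware: storing the weights of a 540-billion-parameter model like PaLM at 16 bits per parameter takes roughly a terabyte of accelerator memory, far more than a single GPU holds. The sketch below uses illustrative model sizes (GPT-4's parameter count has not been disclosed).

```python
import math

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory to hold the weights alone; 2 bytes/param for float16, 4 for float32."""
    return num_params * bytes_per_param / 1e9

H100_MEMORY_GB = 80  # memory of a single NVIDIA H100 accelerator

for name, params in [("7B model", 7e9), ("70B model", 70e9), ("540B model (PaLM-scale)", 540e9)]:
    gb = weight_memory_gb(params)
    gpus = math.ceil(gb / H100_MEMORY_GB)
    # Note this ignores activations and KV caches, which add further memory pressure.
    print(f"{name}: ~{gb:,.0f} GB of weights, at least {gpus} H100-class GPU(s) just to hold them")
```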

Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, leading to students being wrongly accused of submitting AI-generated work.

In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content.
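
The agreement concerns labeling and provenance rather than any particular algorithm. For text, one approach explored in 2023 research (not necessarily what any signatory deployed) is to bias generation toward a pseudorandom "green list" of vocabulary tokens so that the pattern can later be detected statistically; the sketch below, using made-up helper names, shows the idea.

```python
import hashlib
import numpy as np

def green_list(prev_token_id: int, vocab_size: int, fraction: float = 0.5) -> np.ndarray:
    """Pseudorandom 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.choice(vocab_size, size=int(vocab_size * fraction), replace=False)

def watermarked_logits(logits: np.ndarray, prev_token_id: int, delta: float = 2.0) -> np.ndarray:
    """Nudge sampling toward green-list tokens, leaving a statistical signature in the output."""
    biased = logits.copy()
    biased[green_list(prev_token_id, len(logits))] += delta
    return biased

def green_fraction(token_ids: list, vocab_size: int) -> float:
    """Detector: watermarked text has well over ~50% of tokens in their step's green list."""
    pairs = list(zip(token_ids, token_ids[1:]))
    hits = sum(tok in set(green_list(prev, vocab_size)) for prev, tok in pairs)
    return hits / max(len(pairs), 1)
```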

In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models.

In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.

In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale".

Fran Drescher, president of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), declared that "artificial intelligence poses an existential threat to creative professions" during the 2023 SAG-AFTRA strike.

While AI promises efficiency enhancements and skill acquisition, concerns about job displacement and biased recruiting processes persist among the groups surveyed by Fast Company.

Proposed steps toward using AI for a more equitable society include mitigating bias, promoting transparency, respecting privacy and consent, and building diverse teams guided by ethical considerations.

Suggested strategies include shifting policy emphasis toward regulation and inclusive design, and drawing on education's potential for personalized teaching, in order to maximize benefits while minimizing harms.

For example, a language model might assume that doctors and judges are male, and that secretaries or nurses are female, if those biases are common in the training data.
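
Such associations can be probed directly. The sketch below assumes the Hugging Face transformers library and a BERT-style masked language model; it is an illustration of how bias can be measured, not a rigorous audit, and a skewed ranking of "he" versus "she" reflects correlations in the training data rather than facts about the professions.

```python
from transformers import pipeline

# Ask a masked language model which pronoun it expects for each occupation.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["doctor", "judge", "secretary", "nurse"]:
    predictions = unmasker(f"The {occupation} said that [MASK] would arrive soon.", top_k=10)
    pronouns = {p["token_str"]: round(p["score"], 3) for p in predictions
                if p["token_str"] in ("he", "she")}
    print(occupation, pronouns)  # e.g. a much higher score for "he" with "doctor" indicates bias
```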

Deepfakes (a portmanteau of "deep learning" and "fake"[134]) are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks.
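
The face-swap technique most often described in this context trains one shared encoder together with a separate decoder for each person; encoding a frame of person A and decoding it with person B's decoder produces the swapped likeness. The PyTorch sketch below uses illustrative layer sizes and dummy data, and shows only this architectural idea rather than any particular deepfake tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns identity-agnostic face structure (pose, expression, lighting)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent face code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face in one specific person's likeness."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # trained on person A and person B respectively

face_a = torch.rand(1, 3, 64, 64)             # a frame of person A (dummy data here)
swapped = decoder_b(encoder(face_a))          # reconstructed with person B's likeness
```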

In April 2024, a paper proposed using blockchain (distributed ledger technology) to promote "transparency, verifiability, and decentralization in AI development and usage".

In 2020, former Google click fraud czar Shuman Ghosemajumder argued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information.

Additionally, large language models and other forms of text-generation AI have been used to create fake reviews on e-commerce websites to boost ratings.

According to Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs.

In April 2023, the German tabloid Die Aktuelle published a fake AI-generated interview with former racing driver Michael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident.

"[227] In February 2024, Google launched a program to pay small publishers to write three articles per day using a beta generative AI model.

United States Senators Richard Blumenthal and Amy Klobuchar have expressed concern that generative AI could have a harmful impact on local news.

Figure captions from the article:
Théâtre D'opéra Spatial, an image made using generative artificial intelligence
Above: an image classifier, an example of a neural network trained with a discriminative objective; below: a text-to-image model, an example of a network trained with a generative objective
AI-generated images have become much more advanced
Private investment in AI (pink) and generative AI (green)
Stable Diffusion output for the prompt "a photograph of an astronaut riding a horse"
AI-generated music from the Riffusion Inference Server, prompted with "bossa nova with electric guitar"
Video generated by Sora from the prompt "Borneo wildlife on the Kinabatangan River"
Architecture of a generative AI agent
A picketer at the 2023 Writers Guild of America strike; while not a top priority, one of the WGA's 2023 requests was "regulations around the use of (generative) AI"[122]