Artificial intelligence in Wikimedia projects

Among other parts of the Detox project, the Wikimedia Foundation and Jigsaw collaborated to use artificial intelligence for basic research and to develop technical solutions[example needed] to address the problem.[10]

Various popular media outlets reported on the publication of the resulting research paper and described the social context of the research.[11][12]

Content in Wikimedia projects is useful as a dataset in advancing artificial intelligence research and applications.[25]

Subsets of the Wikipedia corpus are considered the largest well-curated datasets available for AI training.[27]
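
As one illustration of such use, Wikipedia text can be pulled into a training pipeline with a few lines of code. The sketch below assumes the Hugging Face datasets library and its wikimedia/wikipedia dump with an English snapshot config; the specific dataset and snapshot names are assumptions for illustration, not something stated in this article.

```python
# Minimal sketch: streaming a Wikipedia snapshot as AI training data.
# Assumes the Hugging Face "datasets" library and the "wikimedia/wikipedia"
# dataset with an English snapshot config; neither is prescribed by this article.
from datasets import load_dataset

# Stream the corpus so the full dump is not downloaded up front.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en",
                    split="train", streaming=True)

# Each record holds an article title and its plain-text body.
for article in wiki.take(3):
    print(article["title"], "-", len(article["text"]), "characters")
```

In a real training run, the text field would be tokenized and fed to the model rather than printed; streaming keeps memory use flat while iterating over the corpus.
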

Wikipedia's licensing policy lets anyone use its texts, including in modified form, on the condition that credit is given; using its content in answers generated by AI models without clarifying the source may therefore violate its terms of use.[19]

Machine translation software such as DeepL is used by contributors.[18][19][20][21] More than 40% of Wikipedia's active editors work on the English Wikipedia.[22]
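
How a contributor might call such a translation service can be sketched briefly; the example assumes DeepL's official Python client and a placeholder API key, neither of which is specified in this article.

```python
# Minimal sketch: machine-assisted translation of article text for human review.
# Assumes the official "deepl" Python client; the API key is a placeholder.
import deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

result = translator.translate_text(
    "Wikipedia is a free online encyclopedia written and maintained by volunteers.",
    target_lang="DE",
)
print(result.text)  # draft translation, to be checked and edited by a contributor
```

On the projects, such output is typically treated as a draft that a human contributor reviews and corrects before publishing.
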
Wikipedia articles can be read using AI voice technology.
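
One straightforward way to do this programmatically is to fetch an article's plain-text summary and pass it to a text-to-speech engine. The sketch below assumes the public Wikipedia REST summary endpoint and the gTTS package as stand-ins, since no specific voice technology is named here.

```python
# Minimal sketch: reading a Wikipedia article summary aloud.
# Assumes the public REST summary endpoint and the gTTS package;
# the article does not name a specific "AI voice" tool.
import requests
from gtts import gTTS

title = "Artificial_intelligence"
resp = requests.get(
    f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}",
    headers={"User-Agent": "example-reader/0.1"},
    timeout=10,
)
extract = resp.json()["extract"]  # short plain-text summary of the article

gTTS(text=extract, lang="en").save("summary.mp3")  # synthesized speech file
```
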
Datasets of Wikipedia are widely used for training AI models.[26]