In 2015, computer scientists Babak Saleh and Ahmed Elgammal of Rutgers University used images from WikiArt to train an algorithm to look at paintings and detect the works' genre, style, and artist.
[5] They then designed a creative adversarial network (CAN), also trained on the WikiArt dataset, to generate new works that do not fit known artistic styles.
[9] In 2019, Eva Cetinic, a researcher at the Rudjer Boskovic Institute in Croatia, and her colleagues used images from WikiArt to train machine-learning algorithms to explore the relationship between the aesthetics, sentiment, and memorability of fine art.
[10] In 2020, Panos Achlioptas, a researcher at Stanford University, and his co-researchers collected 439,121 affective annotations, comprising emotional reactions and written explanations of them, for 81,000 artworks from WikiArt.
Their study involved 6,377 human annotators and resulted in the first neural-based speaker model to show non-trivial Turing test performance on emotion-explanation tasks.