Since its creation in 2019 by a team of programmers, roboticists, art experts and psychologists, the android Ai-Da has never stopped making headlines. The robot is designed to look and act like a woman and has a female voice. Her name was chosen in homage to the British mathematician Ada Lovelace.
In two years, Ai-Da has produced abstract paintings based on complex mathematical models, and her first exhibition grossed more than a million dollars. She has even given her first TEDx talk. The humanoid shows no sign of stopping there: she is now exhibiting her self-portraits at the Design Museum in London until August 29th.
Ai-Da, the first robot artist, exhibits in London from https://t.co/vNtpPC55vy pic.twitter.com/OUdmnTix1u
– Paris Match (@ParisMatch) May 20, 2021
The Guardian reports that Ai-Da uses a combination of artificial intelligence and advanced robotics to achieve this feat. Her eyes serve as cameras: she observes herself before depicting what she sees in a painting.
AI gives art new perspectives
This exhibition is of cultural as well as philosophical interest. For a long time, we regarded art as a uniquely human capacity, and now a machine manages to produce high-quality images using its own robotic arms. As many observers point out, however, Ai-Da could not exist without the people who programmed her.
Quoted by the BBC, Lucy Seal, one of the researchers behind this initiative, explains that this is exactly the purpose of creating this robot:
If Ai-Da does just one important thing, it is to make us ponder the ambiguities of the human-machine relationship. She encourages us to think more carefully and slowly about the decisions we make for our future.
Note that artificial intelligence is regularly used in artistic projects. We recently told you about the work of computer scientist Glenn Marshall, who used this technology to radically change the presentation of a poem. He used the Story2Hallucination tool, which converts words into video, while Vo.codes handled the narration, creating a deepfake based on the voice of actor Christopher Lee.