As the technology improves, users will no longer be able to spot the flaws in AI-generated faces on their own and will need detection software to distinguish real people from fake ones on social media platforms.
There is no doubt that technology is evolving rapidly, with many companies deploying software that takes on tasks normally performed by human beings. Artificial intelligence has been studied for decades and has become a major field within computer science. According to Built In, "AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry."
With this rapid progress, artificial intelligence has been put to powerful uses, from software that beats humans at online chess to voice-powered assistants like Siri. Even in its relative infancy, the technology has trickled into our personal and professional lives, whether we realize it or not. Many large corporations, such as Apple and Facebook, are known to use it to improve the user experience, but is it being utilized for the right reasons?
Now, many websites offer fake profile photos: photos of real people are fed into an AI program, which learns to generate new faces of its own while another part of the system judges which of those images look fake. These pictures circulate on the internet and appear on social media and job listing sites. According to Kashmir Hill and Jeremy White of The New York Times, "If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk."
Beyond these uses, people employ fake pictures as a friendly facade to harass others, or to pose as someone they are not and drive traffic to their profile pages. Soon, online users will see not just a few AI-generated photos but whole collections of them in circulation, raising the question of how we will tell what is real from what is fake.
According to Hill and White, “Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.”
Because artificial intelligence programs are not perfect, their flaws are visible on close examination. These shortcomings have been especially evident with people of color and speakers with accents. Hill and White wrote, "In 2015, an early image-detection system developed by Google labeled two Black people as 'gorillas,' most likely because the system had been fed many more photos of gorillas than of people with dark skin."
For now, anyone who looks closely enough can spot the software's mistakes. Often, if the person wears glasses, the two frames don't match; the ears are inconsistent shapes; or the background of the picture dissolves into a blur. All of these are signs that the profile is fake.
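Detection software builds on cues like these. As a minimal sketch of just one such signal (assuming only NumPy; the kernel and the synthetic "images" here are illustrative, not from the article), the variance of a Laplacian filter's response gives a crude sharpness score: the smeared backgrounds common in generated photos score far lower than genuinely sharp regions.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of a 3x3 Laplacian response.
    A low score suggests blur, one telltale sign in the background
    of a synthetic photo."""
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=float)
    h, w = gray.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (gray[i:i + 3, j:j + 3] * k).sum()
    return out.var()

# Demo on synthetic data: a noisy (sharp) patch vs. the same patch
# smoothed with a crude 5x5 box blur.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = np.empty_like(sharp)
for i in range(32):
    for j in range(32):
        blurred[i, j] = sharp[max(0, i - 2):i + 3,
                              max(0, j - 2):j + 3].mean()

print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

Real detectors are far more sophisticated, of course, but the principle is the same: quantify an artifact a human reviewer would otherwise have to spot by eye.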
“When the tech first appeared in 2014, it was bad — it looked like the Sims,” said Camille François, a disinformation researcher whose job is to analyze manipulation of social networks. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”
Though artificial face generation is a major technical advance, it could make the social media experience far more fraught, especially on online dating sites.