Computers are getting ever better at generating images, including photographs of people. Artificial intelligence is advancing at a surprising pace: machine learning can now generate and modify images and video. AI-generated faces are produced by neural networks that create entirely fake photographs.
False faces
In 2014, machine learning researcher Ian Goodfellow introduced the idea of generative adversarial networks (GANs). Since then, GANs have been used to create everything from graphics to dental crowns.
A GAN consists of two neural networks, a generator and a discriminator, that compete to minimize or maximize a shared objective. The discriminator receives real training data as well as generated (fake) data from the generator and must output a probability that each image is real. Its goal is to maximize the number of samples it classifies correctly, while the generator tries to minimize this number: it aims to have its generated output classified as real, making the discriminator wrong as often as possible.
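The competing objectives described above can be sketched in a few lines. This is a minimal illustrative sketch (not Goodfellow's original code): the discriminator minimizes a binary cross-entropy loss over its scores for real and fake images, while the generator minimizes a loss that rewards fooling the discriminator.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy the discriminator minimizes:
    # it wants D(real) -> 1 and D(fake) -> 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator to score its
    # fakes as real, i.e. D(fake) -> 1.
    return -np.mean(np.log(d_fake))

# A confident, correct discriminator (real ~1, fake ~0) has low loss;
# one that cannot tell the difference (both ~0.5) has a higher loss.
good = discriminator_loss(np.array([0.99]), np.array([0.01]))
bad = discriminator_loss(np.array([0.5]), np.array([0.5]))
print(good < bad)  # True
```

In training, the two networks take turns: the discriminator's updates push `good` lower, while the generator's updates push the discriminator back toward `bad`.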
One notable GAN variant is the deep convolutional generative adversarial network (DCGAN), which is used to generate fake images. The basic idea is that both networks are built from convolutional layers, in which filters (also called kernels) scan one region of the image at a time in search of key features. This works through matrix multiplication, since every pixel in an image can be represented as a number. Groups of pixel values are multiplied by the kernel matrix, allowing the network to pick up important features such as edges, shapes and lines present in that region.
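To see how kernel scanning works, here is a minimal sketch of a 2D convolution in plain NumPy (an illustration of the principle, not DCGAN's actual implementation): a vertical-edge kernel responds strongly wherever pixel values change from left to right.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image and take
    the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image that is dark on the left and bright on the right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge kernel: negative on the left, positive on the right.
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

print(conv2d(image, edge_kernel))  # strong response near the edge
```

In a real DCGAN the kernel values are not hand-picked like this; they are learned during training, so the network discovers for itself which edges, shapes and lines matter.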
GANs are best known for their realistic results. What started four years ago as tiny, blurry grayscale images of human faces has turned into full-color portraits. The first GAN images were easy for people to spot as fakes; now they can be very difficult to tell apart from real photographs.
Examples of GAN-generated faces published in October 2017 are already difficult to identify as fake.
GAN technology has been refined by Nvidia, a well-known graphics and AI company, to create a high-quality database of thousands of human photos, all generated by computer. First, the generator network learns a latent representation from the photograph of a real person. This face serves as a reference and is encoded as a vector mapped into a latent space that describes all the features of the image. The researchers trained their system on 70,000 photos of real people from Flickr, covering a wide range of ages, ethnicities and backgrounds.
Using these images as a basis, the computer was able to learn and separate out attributes of different people – such as hair color, face shape or skin tone – and generate completely new images. The technology can also detect and reproduce accessories such as glasses, sunglasses or caps, and can create an effectively unlimited number of images of entirely new people.
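The latent-space idea above can be illustrated with a toy sketch: each face corresponds to a vector, and blending two vectors produces a code that the generator would decode into a brand-new face sharing traits of both. The 512-dimensional size below is an assumption for illustration, not a claim about Nvidia's model.

```python
import numpy as np

def interpolate(z_a, z_b, t):
    # Linear blend between two latent codes; feeding each intermediate
    # vector to a GAN generator yields a smooth morph between the two
    # corresponding faces.
    return (1.0 - t) * z_a + t * z_b

z_a = np.random.default_rng(1).normal(size=512)  # latent code of "person A"
z_b = np.random.default_rng(2).normal(size=512)  # latent code of "person B"

halfway = interpolate(z_a, z_b, 0.5)  # a new face between the two
```

Because every point along the path is a valid latent code, a single pair of reference faces already yields a continuum of new ones.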
Generative adversarial networks usually have to be trained to create one category of images at a time, such as faces or cars. BigGAN, by contrast, was trained on a gigantic dataset of 14 million varied images from the Internet covering thousands of categories, an effort that required hundreds of specialized machine learning processors. This broad visual experience means the software can synthesize many different types of highly realistic images.
False movements
In 2018, scientists and artists took these AI-created and AI-enhanced visuals to another level. The examples below show how software that can fabricate images, video and art could power new forms of entertainment – as well as misinformation.
Software developed at UC Berkeley can transfer the movements of one person, captured on video, to another. The process starts with two source clips: one showing the movements to be transferred, the other showing the person to be transformed. One part of the software extracts body positions from both clips; another learns to create a realistic picture of the target person in any body position. The system can then generate a video of that person performing more or less any movement. In its initial version, the system needs about 20 minutes of input video of the target person before it can map new movements onto their body.
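The two-stage pipeline described above can be sketched schematically. Everything here is a hypothetical stand-in, not the Berkeley project's actual code: `pose_extractor` plays the role of the learned pose detector, and `renderer` the role of the pose-conditioned image generator.

```python
def transfer_motion(source_clip, pose_extractor, renderer):
    # Stage 1: reduce each source frame to an abstract body pose.
    poses = [pose_extractor(frame) for frame in source_clip]
    # Stage 2: render the target person in each of those poses.
    return [renderer(pose) for pose in poses]

# Toy stand-ins: "poses" are just labels, "rendering" tags them.
source_clip = ["frame_with_jump", "frame_with_spin"]
extract = lambda frame: frame.replace("frame_with_", "pose:")
render = lambda pose: "target_person_doing_" + pose.replace("pose:", "")

print(transfer_motion(source_clip, extract, render))
# → ['target_person_doing_jump', 'target_person_doing_spin']
```

The key design point is the abstract pose in the middle: because both models speak "pose" rather than raw pixels, movements from any source video can drive any target person the renderer was trained on.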
The end result is similar to the trick often used in Hollywood. Superheroes, aliens and monkeys in movies are animated by placing markers on the faces and bodies of the actors, so they can be tracked in 3D using special cameras. The Berkeley project suggests that machine learning algorithms can significantly increase the availability of these production values.
Seeing at night
Images enhanced by artificial intelligence have become so practical that you can carry them in your pocket. The Night Sight feature on Google Pixel phones, launched in October 2018, uses a set of algorithmic tricks to turn night into day. One of those tricks is combining multiple exposures into each final image; comparing them lets the software identify and remove random noise, which is a bigger problem in low light. The resulting clean composite is then further improved with machine learning: Google engineers trained the software to correct the lighting and color of photos taken at night, using a collection of dark images paired with versions adjusted by photography experts.
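The merge-and-compare trick can be demonstrated with a toy simulation: averaging several noisy exposures of the same scene cuts random noise roughly by the square root of the frame count. The scene, noise level and frame count below are made-up illustration values, not Night Sight's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "scene" of constant brightness, photographed 8 times with random
# sensor noise added each time (the low-light problem).
scene = np.full((64, 64), 100.0)
frames = [scene + rng.normal(0, 25, scene.shape) for _ in range(8)]

single = frames[0]
merged = np.mean(frames, axis=0)  # the align-and-average step

# Averaging N frames reduces random noise roughly by sqrt(N) ~ 2.8x here.
noise_single = np.std(single - scene)
noise_merged = np.std(merged - scene)
print(noise_single, noise_merged)
```

A real phone must also align the frames first, since hands shake between exposures; the averaging only works once the frames are registered to each other.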
Experts are warning
As GANs become more and more sophisticated, senior politicians in the US and UK have grown concerned that such fakes could be used to spread disinformation or provoke conflict. Researchers are racing to develop new algorithms for detecting doctored video and photos before the technology becomes too sophisticated for the human eye.
Experts have been warning for several years about how AI-driven image manipulation could affect society. These tools can be used for disinformation and propaganda, and can undermine public confidence in photographic evidence, which could harm both justice and politics. These warnings should not be ignored. In particular, the ability to generate faces has received special attention in the AI community.
There are also serious limitations in terms of expertise and time. Nvidia's researchers had to train their model for a week on eight Tesla GPUs to create these faces.
Fortunately, experts are also exploring new ways to authenticate digital images. Some solutions have already launched, such as camera apps that stamp photos with geocodes to verify when and where they were taken.
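One way such a stamping scheme could work is to bind the image pixels to the capture metadata with a cryptographic hash, so any later edit to the photo or its location data invalidates the stamp. The function below is a hypothetical sketch of the idea, not any real app's API; the coordinates and timestamp are made-up example values.

```python
import hashlib
import json

def stamp_photo(image_bytes, latitude, longitude, timestamp):
    """Hypothetical sketch: bind a photo to where/when it was taken by
    hashing the pixels together with the metadata, producing a
    tamper-evident fingerprint."""
    record = {
        "lat": latitude,
        "lon": longitude,
        "time": timestamp,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["stamp"] = hashlib.sha256(payload).hexdigest()
    return record

original = stamp_photo(b"pixel-data", 52.23, 21.01, "2018-10-01T12:00")
tampered = stamp_photo(b"pixel-dataX", 52.23, 21.01, "2018-10-01T12:00")
print(original["stamp"] != tampered["stamp"])  # True
```

A production system would additionally sign the stamp with a key held in the phone's secure hardware, so that a forger could not simply recompute the hash after editing the image.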