On the other hand, these algorithms can sometimes be fooled. A machine learning algorithm can easily recognize an apple, but if you put the same apple in a net bag, it no longer understands what it is, because this is an unexpected, unusual image.

In 2014, Ian Goodfellow and his colleagues at the University of Montreal in Canada developed a new machine learning system that uses game theory to turn this weakness into an advantage.
"The coolest idea in machine learning in the last 20 years." - Facebook's head of artificial intelligence research

Do you know why it is so important? Until now, machine learning systems have produced simple output from complex input. For instance, the photos you upload to social media are the inputs. The ML algorithm analyzes them with its neural networks and produces a simple output by detecting the objects in each photo. So when you search for something on the internet, the machine can easily find the keyword by filtering tons of data with the produced tags.
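To make this "complex input, simple output" idea concrete, here is a minimal sketch of a pretrained image classifier turning a photo into a single tag. The choice of torchvision's ResNet-18 and the file name photo.jpg are just assumptions for illustration.

```python
# A minimal sketch of "complex input -> simple output":
# a pretrained classifier turns a photo into one tag (class index).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)   # newer torchvision versions use weights=... instead
model.eval()

image = Image.open("photo.jpg")            # the complex input: millions of pixels
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)

tag_id = logits.argmax(dim=1).item()       # the simple output: a single class id / tag
print("predicted class id:", tag_id)
```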
The newly developed "Generative Adversarial Networks" do the opposite: they produce complex output from simple input. Give the computer random numbers and it creates extremely complex, realistic photographs of human faces. So the machine not only learns, it also produces. It learns by producing!

There is a generative network that starts drawing a picture directly from random noise, and there is a second network that examines the images the generative network creates. We can compare these two neural networks to opponents in a game; there is a constant struggle between them. The generator's goal is to trick the discriminator into believing that the images it produces are real. The discriminator's goal is to catch as many fake images as possible by comparing them with real images.

You can play with Generative Adversarial Networks (GANs) in your browser by clicking the link below:
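Before jumping to the demo, here is a rough PyTorch sketch of that generator-versus-discriminator game. The layer sizes, image size, and hyperparameters are toy assumptions, not a recommended setup.

```python
# Minimal GAN sketch: the generator draws images from random noise,
# the discriminator tries to tell them apart from real ones.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(            # random numbers in, fake image out
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(        # image in, "real or fake" score out
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):                       # real_images: (batch, img_dim)
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: score real images as real, generated images as fake.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_loss.backward()
    opt_d.step()

    # 2) Generator: try to fool the discriminator into calling fakes "real".
    noise = torch.randn(batch, latent_dim)
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# usage: call train_step(...) in a loop with batches of real images flattened to 784 values
```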
When this system was first developed in 2014, it could only produce very low-resolution, black-and-white images. Over the last five years, the quality of the photographs that artificial intelligence can synthesize has steadily increased.
In these five years, machine learning researchers have both refined GAN techniques and started to apply them in many different fields. For example, computers can now produce cartoon and anime characters. The PokeGAN project designs new Pokemon by looking at the existing characters. The CycleGAN project turns draft drawings into photographs; it can learn the styles of painters and turn photographs into paintings, or satellite photographs into maps (a sketch of this idea follows below). The StackGAN project turns text into images. For example, you write "a little bird with a short beak, black and blue" and it produces exactly such a bird image. It is not searching for the bird, it is actually producing it; these birds do not exist.
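Going back to CycleGAN for a moment: to give a feel for how it keeps such translations faithful, here is a hedged sketch of its cycle-consistency idea. The generator names G_AtoB and G_BtoA and the plain L1 penalty are my simplification, not the project's actual code.

```python
# Cycle-consistency sketch: translating A -> B and back to A should return
# (roughly) the original image. G_AtoB and G_BtoA are assumed to be any two
# image-to-image generator networks (e.g. photo -> painting and back).
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_AtoB, G_BtoA, real_A, real_B, weight=10.0):
    fake_B = G_AtoB(real_A)                 # e.g. photo -> painting
    recovered_A = G_BtoA(fake_B)            # painting -> photo again
    fake_A = G_BtoA(real_B)
    recovered_B = G_AtoB(fake_A)
    # If the round trip loses information, this penalty grows.
    return weight * (F.l1_loss(recovered_A, real_A) +
                     F.l1_loss(recovered_B, real_B))
```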
What caused such great progress in such a short time? It is not only the advancement of the technology itself, but also the enormous amount of data we produce. So how are we producing so much data? The reason is very simple. Take FaceApp, the app that ages people's photos: this single application alone leads to hundreds of millions of photos being uploaded every week.
Let's hope these photos are only used for machine learning. Because we accept the terms and conditions without reading them, we hand over all the rights to those photographs to Yaroslav Goncharov, the developer of the application.

FaceApp is just one of the applications that uses machine learning and GAN techniques. Today's generative networks perceive images as a set of styles: they can learn a person's pose, posture, hair, face shape, eyes, and skin color separately, and combine them to produce new photos.
A Style-Based Generator Architecture for Generative Adversarial Networks
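To illustrate the "set of styles" idea from the paper above, here is a deliberately tiny toy generator: a mapping network turns random codes into style vectors that modulate each layer, so styles from two people can be mixed at different depths. Nothing here matches the official StyleGAN architecture; it is only a conceptual sketch.

```python
# Toy "style-based" generator: each layer is modulated by a style vector w,
# so coarse layers (pose, face shape) and fine layers (hair, skin) can take
# their styles from different latent codes.
import torch
import torch.nn as nn

class ToyStyleGenerator(nn.Module):
    def __init__(self, latent_dim=64, channels=32, n_layers=6):
        super().__init__()
        self.mapping = nn.Sequential(          # z -> w (the "style" space)
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.const = nn.Parameter(torch.randn(1, channels, 4, 4))
        self.layers = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_layers)]
        )
        self.styles = nn.ModuleList(           # per-layer style -> scale and shift
            [nn.Linear(latent_dim, channels * 2) for _ in range(n_layers)]
        )
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z_per_layer):
        # z_per_layer: one latent code per layer, so codes from two different
        # people can be mixed at chosen depths.
        x = self.const.expand(z_per_layer[0].size(0), -1, -1, -1)
        for layer, style, z in zip(self.layers, self.styles, z_per_layer):
            w = self.mapping(z)
            scale, shift = style(w).chunk(2, dim=1)
            x = layer(x)
            x = x * (1 + scale[..., None, None]) + shift[..., None, None]
            x = torch.relu(x)
        return self.to_rgb(x)

# Style mixing: coarse layers styled by person A, fine layers by person B.
g = ToyStyleGenerator()
zA, zB = torch.randn(1, 64), torch.randn(1, 64)
mixed = g([zA, zA, zA, zB, zB, zB])
```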
Before FaceApp, which almost everyone has used, there was FakeApp, which some people may know. FakeApp combines videos of celebrities using GAN techniques and makes them appear to do things they never did.
To produce deepfake videos, the machines again need to learn with the GAN technique. Although the examples in the link are impressive, they essentially work by combining two existing sets of images. A few months ago, a technique developed in Samsung's artificial intelligence laboratories made it possible to produce video from a single photo.
Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
Normally, we said that machine learning needs a lot of data. With this technique, it is enough to give the computer a single image; the computer combines it with basic facial gestures to produce a short video. In the near future we will probably be able to produce videos automatically from the selfies we take with our phones, or turn on the camera and animate a photo like a puppet with our own facial expressions. A rough sketch of the single-photo idea follows below.
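As a very rough sketch of that single-photo setup, the following toy code uses a placeholder embedder to summarize one photo and a placeholder generator to render one frame per driving pose. The networks, the landmark format, and the 64x64 resolution are all assumptions for illustration, not the architecture from the paper above.

```python
# One photo -> many frames: an embedder summarizes the source photo once,
# then a generator renders a new frame for every driving pose/landmark frame.
import torch
import torch.nn as nn

class Embedder(nn.Module):            # one photo -> identity embedding
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
    def forward(self, photo):
        return self.net(photo)

class FrameGenerator(nn.Module):      # (identity, pose landmarks) -> one frame
    def __init__(self, emb_dim=128, landmark_dim=68 * 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + landmark_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
        )
    def forward(self, identity, landmarks):
        x = torch.cat([identity, landmarks], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

embedder, generator = Embedder(), FrameGenerator()
photo = torch.randn(1, 3, 64, 64)                  # the single selfie
driving_poses = torch.randn(10, 68 * 2)            # e.g. facial landmarks per frame

identity = embedder(photo)
video = torch.stack([generator(identity, pose.unsqueeze(0))
                     for pose in driving_poses])   # 10 generated frames
```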
That's all for now! Make sure you are following me on social media, and if you found this useful, please share it with your friends. See you in the next post soon, Hackers!
Stay Connected!

(Disclaimer: Reverse Python is a scientific and technological research platform launched by the author.)