What are we doing to keep our AI algorithms safe from malicious attacks?

I like to think about this problem using an analogy from the late 90s and early 2000s. Back when dynamic websites were new, we wrote database queries with no consideration for security, until SQL injection became the de facto way of gaining access to a system.

We are on the brink of literally trusting our lives to AI algorithms for the first time in history. Think self-driving cars. Tesla is extremely close to deploying full self-driving cars at a massive scale. I'm certain that, statistically, the Tesla self-driving car will drive more safely than the average human, but is Tesla "escaping" its self-driving algorithms the way we learned to escape database inputs? Are there vulnerabilities in their systems that malicious attackers could exploit to cause a tragic accident?

Researchers are hard at work understanding how different neural networks can be attacked into yielding unexpected results. In short: changing a single pixel in an image can be enough to mislead a neural network and make it produce results that are completely off.
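To make the single-pixel idea concrete, here is a minimal sketch of that style of attack: search for one pixel whose modification flips a classifier's prediction. It uses a tiny untrained PyTorch model and random search purely for illustration; published one-pixel attacks target trained models and use smarter search (e.g. differential evolution), but the principle is the same.

```python
# Illustrative sketch only: a toy "one pixel" attack via random search.
# The model is small and untrained, standing in for a real image classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 10-class image classifier (untrained; a stand-in for a real model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)          # stand-in for a 32x32 RGB input
original_label = model(image).argmax(dim=1).item()

# Random search: try single-pixel modifications until the prediction changes.
for attempt in range(2000):
    x, y = torch.randint(0, 32, (2,)).tolist()
    perturbed = image.clone()
    perturbed[0, :, y, x] = torch.rand(3)  # overwrite one pixel's RGB values
    new_label = model(perturbed).argmax(dim=1).item()
    if new_label != original_label:
        print(f"Prediction flipped from {original_label} to {new_label} "
              f"by changing pixel ({x}, {y}) after {attempt + 1} tries")
        break
else:
    print("No single-pixel flip found in this toy search")
```

Real attacks of this kind succeed against production-grade, trained networks, which is precisely why they are worrying.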
AI and deep learning are relatively new disciplines, and they have only started to take off recently. Moreover, there are still few use cases where it is paramount to guarantee that AI algorithms have no life-threatening vulnerabilities. But as AI takes over more and more tasks such as driving, flying, and designing drugs to treat illnesses, AI engineers will also need to learn the craft of cybersecurity.

I want to emphasise that the responsibility of engineering safer AI algorithms cannot be delegated to an external cybersecurity firm. Only the engineers and researchers designing the algorithms have the intimate knowledge needed to deeply understand which vulnerabilities exist, why they exist, and how to fix them effectively and safely. External cybersecurity companies may play a role in "pen testing" the algorithms, but ultimately it will be up to the engineers developing them to fix them. Naturally, this can only happen once AI engineers master the craft of security applied to AI algorithms. If AI is a relatively new field, security applied to AI algorithms is even newer, and hiring people with that expertise will be a massive challenge. But inevitably AI engineers will need to take security into consideration, proactively test their algorithms against possible malicious attacks, and become security experts themselves.

Companies and their leaders will also need to start taking this topic seriously. No organisation wants to create unsafe products to begin with, and surely no organisation wants to be in the news for how easily its AI systems were fooled or for a fatal accident its algorithms caused.
So, if you're a company designing AI algorithms that are applied to critical areas of people's lives, deploy a culture of safety and security inside your AI engineering teams.