Daniel Quoc Dung Huynh, An Entrepreneur Advocating for Open-Source Privacy-Friendly AI

by Craig Lebrau, November 16th, 2022

Too Long; Didn't Read

Daniel Quoc Dung Huynh, CEO and Co-founder of Mithril Security, is building a startup to democratize privacy-by-design AI tools. Mithril is developing end-to-end data protection over the entire lifecycle of AI, ensuring that data is protected at all times. Daniel previously worked at Microsoft on Privacy Enhancing Technologies (PETs), and Mithril's first product, BlindAI, is a deployment solution that makes AI models confidential in two lines of code.


New AI systems have shown incredible improvements and can today tackle complex problems we thought would remain unsolved for decades. These range from AI that cracked protein folding, a 50-year-old problem whose solution is now deployed to accelerate drug discovery, to recent generative AI that helps developers complete their code automatically.

But those AI models often need huge datasets to be trained on, and with great datasets comes great responsibility. This is especially true in healthcare, finance, or biometrics, where data can be extremely sensitive.

There is therefore a huge bottleneck for AI adoption, as innovative startups and labs try to get access to sensitive data to train AI models, but data owners, for example hospitals, refuse to share it for security and privacy reasons.

Daniel Quoc Dung Huynh, CEO and Co-founder of Mithril Security, saw these challenges to reconcile AI and privacy and set himself on a mission to democratize Confidential AI.

Daniel has been exposed to cutting-edge AI every day for the past six years. As a graduate student at Ecole Polytechnique, the best engineering school in France, Daniel studied mathematics and AI in depth. Thanks to his dual diploma with HEC, France’s best business school, he could examine the intersection between advanced applied mathematics and business.

As one of the 1% of students from working-class backgrounds at Ecole Polytechnique, Daniel also learned to hustle and developed a mindset resilient to failure, as the way was paved with obstacles for students who had no help to succeed in a highly competitive academic environment.

Equipped with those skills and that mindset, Daniel encountered these privacy challenges in AI firsthand while working at Microsoft on Privacy Enhancing Technologies (PETs), and saw an opportunity to fundamentally change the way AI models are trained.

PETs change the current privacy paradigm: data shared with AI companies or in the Cloud is no longer exposed in the clear to anyone else. Thanks to end-to-end encryption, users such as citizens or hospitals can benefit from state-of-the-art AI solutions hosted in the Cloud without having to worry about data leakage or misuse.
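The end-to-end protection idea can be sketched in a few lines of Python. This is a toy illustration, not Mithril's implementation: the HMAC-based stream cipher below stands in for the vetted primitives (such as AES-GCM) a real system would use, and all names are illustrative.

```python
import hashlib
import hmac
import os

# Toy sketch of end-to-end encryption: the client derives a keystream from a
# secret key and XORs it with the plaintext, so the cloud only ever handles
# ciphertext. For demonstration only; real systems use audited ciphers.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

patient_record = b"patient: blood glucose 5.4 mmol/L"
key = os.urandom(32)                        # stays on the client's device
nonce, blob = encrypt(key, patient_record)  # only `blob` leaves the device
assert decrypt(key, nonce, blob) == patient_record
```

The point of the sketch is the trust boundary: the key never leaves the client, so whoever hosts `blob` learns nothing about the record.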

Society has already started to adopt end-to-end encrypted solutions, as secure messaging apps like Signal or WhatsApp already provide their service without ever having access to users’ data in the clear.

However, providing advanced encryption mechanisms to make AI training and deployment privacy-friendly is a technical challenge well beyond that of secure messaging apps. While working at Microsoft, Daniel only saw obscure solutions that were hard to use, could barely run any interesting models, and came with a massive overhead, often running thousands of times slower.

Nonetheless, a hardware-based technology called secure enclaves started to show promise as it could make complex workloads confidential while imposing a reasonable slowdown, often around 20%. Leveraging secure enclaves for AI is nevertheless extremely complex, as it requires security, AI, and low-level software engineering skills.
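A core ingredient that makes enclaves trustworthy is remote attestation: before sending confidential data, the client checks that the code loaded inside the enclave matches a known-good measurement. The sketch below illustrates that check conceptually; real attestation (for example Intel SGX quotes signed by hardware keys) is far more involved, and every name here is an illustrative assumption, not any vendor's API.

```python
import hashlib

# Conceptual sketch of enclave attestation: the hardware "measures" (hashes)
# the code it actually loaded, and the client compares that measurement to
# the hash of an audited binary before trusting the enclave with data.

EXPECTED_MEASUREMENT = hashlib.sha256(b"audited_model_server_v1.0").hexdigest()

def enclave_report(loaded_binary: bytes) -> str:
    """What the enclave hardware would report: a hash of the loaded code."""
    return hashlib.sha256(loaded_binary).hexdigest()

def client_should_send_data(report: str) -> bool:
    # Send confidential data only if the enclave runs the audited code.
    return report == EXPECTED_MEASUREMENT

genuine = enclave_report(b"audited_model_server_v1.0")
tampered = enclave_report(b"audited_model_server_v1.0_with_backdoor")
print(client_should_send_data(genuine))    # True
print(client_should_send_data(tampered))   # False
```

Any modification to the server code, even by the cloud provider itself, changes the measurement and fails the check, which is what lets data owners trust code they do not host.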

But if used correctly, secure-enclave-based AI solutions could make training and deployment of models privacy-friendly for users’ data, while being fast and easy to deploy by AI teams.

It’s with this vision in mind that Daniel co-founded Mithril Security in April 2021 with his COO and his CTO. The goal of the startup is simple: democratizing privacy-by-design AI tools to make data sharing and collaboration frictionless.

To make sure their products are fast and accessible while ensuring privacy, secure enclaves are at the core of their solutions. The different projects have been made open-source under an Apache 2.0 license to foster transparency and security, and to make adoption as smooth as possible.

BlindAI, their first product, is a deployment solution to put AI models into production with privacy. Designed to make AI models confidential in two lines of code, BlindAI can be applied across a range of confidential inference use cases.

Their second product is a training framework that allows data scientists to explore and extract insights from confidential data. It answers the security and ease-of-use requirements of multi-party training, for instance training an AI model to detect breast cancer using data from multiple hospitals.
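One common approach to this multi-party training problem is federated averaging: each hospital trains on its own data locally and shares only model parameters, never patient records. The sketch below is a generic illustration of that idea under made-up data, not Mithril Security's actual framework.

```python
# Federated averaging sketch: hospitals compute local updates on private
# (x, y) pairs; the server only ever sees the averaged model weight.

def local_update(w: float, data, lr: float = 0.05) -> float:
    """One gradient step of a 1-feature linear model y = w*x on local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w: float, hospital_datasets) -> float:
    # Each site updates locally; only the updated weight leaves the site.
    local_ws = [local_update(global_w, d) for d in hospital_datasets]
    return sum(local_ws) / len(local_ws)

hospital_a = [(1.0, 2.0), (2.0, 4.1)]   # private records, never shared
hospital_b = [(3.0, 6.2), (4.0, 7.9)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [hospital_a, hospital_b])
print(round(w, 2))  # converges near the shared underlying slope of ~2
```

Enclave-based frameworks go a step further than this plain sketch by also protecting the intermediate updates themselves, which can otherwise leak information about the training data.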

After just over a year, Mithril Security has grown to 15 employees, gained a healthy pool of customers, and raised $1.4M in its Pre-Seed funding round. It serves customers and users in different sectors, including healthcare, security, biometrics, and advertising, ensuring that data is protected at all times.

Under Daniel’s guidance and leadership, Mithril Security has succeeded in developing end-to-end data protection over the entire AI data lifecycle. The path to becoming the go-to solution for data scientists handling confidential data is long and hard, but Daniel believes that the company's technological choices of accessible, fast, and secure solutions for data scientists, together with its open-source approach, are already winning over the AI community.
