Enhancing Trustworthiness of AI Systems with Out-of-Distribution Detection & Generalization

David’s research focuses on verifying the resilience of AI systems to increase public trust in AI. An AI system has millions, if not billions, of neurons whose behavior relative to the given data can be aggregated into transparent, explainable quality measures. With these measures, the AI system can be better understood, verified and optimized towards defined quality standards.
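
To make the idea concrete: one common way to turn neuron behavior into a quality measure is to check how many hidden-neuron activations of a new input fall outside the ranges observed on in-distribution training data. The short Python sketch below illustrates this with synthetic activations; the function names and numbers are illustrative assumptions, not David’s actual method.

import numpy as np

def fit_activation_bounds(train_activations):
    # Per-neuron activation ranges observed on in-distribution training data.
    # train_activations: (n_samples, n_neurons) hidden-layer outputs.
    return train_activations.min(axis=0), train_activations.max(axis=0)

def ood_score(activations, bounds):
    # Fraction of neurons activating outside their training-time range.
    # A higher score suggests the input is out-of-distribution.
    low, high = bounds
    outside = (activations < low) | (activations > high)
    return float(outside.mean())

rng = np.random.default_rng(0)
train_acts = rng.normal(0.0, 1.0, size=(1000, 512))  # hypothetical training activations
bounds = fit_activation_bounds(train_acts)

in_dist = rng.normal(0.0, 1.0, size=512)   # resembles the training data
shifted = rng.normal(3.0, 2.0, size=512)   # unfamiliar, shifted input

print("in-distribution score:", ood_score(in_dist, bounds))  # close to 0
print("shifted-input score:  ", ood_score(shifted, bounds))  # noticeably higher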

In the first 20 months of his PhD, David has completed six papers and received a best paper award for his work on optimizing the testing of AI systems under the most challenging scenarios, those in which they are most likely to fail. His further work includes applying model behavior understanding to cyber security and AI fairness.

The main applications of his work lie in safety-critical areas such as healthcare, finance and autonomous driving, where small errors can cause severe harm to human health or significant financial losses. A transparent, explainable view of an AI system’s resilience is essential for growing trust and thereby increasing adoption.

To further drive adoption, David initiated and leads the AI security working group under the AI Technical Committee of Enterprise Singapore. There, he collaborates with his professor, Liu Yang, and 17 representatives from government and industry to build the world’s first verifiable standard for AI.

David Berend’s AI research and its impact are kept up to date on his website: http://safe-intelligence.com.
