Robust AI in the Wild

Dragos D. Margineantu, Ph.D.  (Boeing)

 

Abstract:

The ultimate goal of research in artificial intelligence is to provide the tools, techniques, and methods for building usable systems that make decisions, or assist in making complex decisions, that model and "understand" the world, gather relevant knowledge, and act responsibly. In many practical tasks, the decisions and actions involved carry high risks and are supported by low-density data, so a certain level of assurance or robustness is required from any system that employs AI components.

Central questions in AI systems research include: What principles have we learned from designing and developing systems so far that we could apply to new domains? What are the key questions that we need to answer in order to build the required runtime robustness into our systems?

In this presentation we will first discuss in detail the structure of modern AI systems and the problem of optimizing for constraints that are not fully specified.

Next, we will address the problem of acting without a complete model of the real world.

Finally, we will analyze the challenges of building robust human-automation systems that are safe.
