How better programming makes better AI - The verifiable learning paradigm
Abstract. DevSecOps teaches us that security should be integrated into every stage of the software development lifecycle, not just bolted on at the end. Yet in AI security, most research still focuses on analyzing trained models that were never designed to be verified efficiently. In fact, large models often cannot be analyzed at all with existing verification tools, due to scalability issues. In this talk, I explore how smarter training algorithms can produce AI models that are born verifiable. This shift, known as verifiable learning, embeds security considerations into the model from the start, making post-hoc security analysis tractable instead of intractable. I’ll illustrate this approach with AI models based on decision trees, showing how security and efficiency can go hand in hand—by design.
Bio. Stefano Calzavara is an Associate Professor of Computer Science at Università Ca’ Foscari Venezia, Italy. His research lies at the intersection of formal methods and software security, with a strong focus on their applications in web security and AI security. He has authored over 60 publications in these areas, which have appeared in top-tier international venues such as IEEE S&P, ACM CCS, NDSS, USENIX Security, The Web Conference (WWW), ACM TOPLAS, and ACM TWEB. He regularly serves on the program committees of flagship computer security conferences and is a member of the editorial boards of top journals in the field.