Abstract
The rapid growth of Artificial Intelligence (AI) and Deep Learning resembles an infectious phenomenon. While AI systems promise diverse applications and benefits, they carry substantial security and privacy risks. Indeed, AI is a goldmine for security and privacy research.
This talk focuses on Federated Learning (FL), an innovative approach to enable collaborative Deep Neural Network training among distributed entities without sharing raw data. However, externalizing the training process exposes the system to malicious attacks, particularly poisoning attacks. Despite a significant body of research on FL attacks and defenses, poisoning attacks persist due to the limitations of current defense strategies.
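To make the collaborative-training idea concrete, the canonical FL aggregation rule is federated averaging (FedAvg): each client trains locally on its private data and sends only model parameters to the server, which combines them weighted by local dataset size. The sketch below is illustrative only (the talk does not prescribe a specific aggregation rule); all names and shapes are assumptions.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Server-side FedAvg: average each parameter array across clients,
    weighted by the size of each client's local dataset. Raw data never
    leaves the clients; only parameters are shared."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_params, client_sizes))
        for i in range(len(client_params[0]))
    ]

# Two clients, each holding two parameter arrays (e.g. one layer's
# weights and its bias) after a round of local training.
clients = [
    [np.array([1.0, 2.0]), np.array([0.0])],
    [np.array([3.0, 4.0]), np.array([1.0])],
]
sizes = [100, 300]  # local dataset sizes (hypothetical)
global_params = fedavg(clients, sizes)
# global_params[0] → [2.5, 3.5]; global_params[1] → [0.75]
```

This weighted average is exactly what a poisoning attacker exploits: a single malicious client can submit crafted parameters that skew the aggregate, which is why the defenses surveyed in the talk replace or harden this aggregation step.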
We provide a systematic overview of FL attacks and defenses, including our contributions, highlighting their strengths and weaknesses, and explore ongoing challenges in fortifying collaborative learning frameworks.
We conclude by extending the discussion to the potential deployment of Federated Learning for detecting AI-generated text, where we present our recent work on effectively identifying ChatGPT-generated text, substantially advancing the state of the art.
Brief Biography
Prof. Ahmad-Reza Sadeghi is a distinguished Full Professor of Computer Science at the Technical University of Darmstadt, Germany, where he also leads the System Security Lab. His academic journey includes earning a Ph.D. in Computer Science with a specialization in Cryptography from Saarland University, Germany. Before his academic career, Prof. Sadeghi contributed significantly to the Research and Development sector of the Telecommunications industry, notably with Ericsson.
Since 2012, Prof. Sadeghi has fostered a lasting collaboration with Intel, participating in several Collaborative Research Centers addressing diverse topics such as Secure Computing in Mobile and Embedded Systems, Autonomous and Resilient Systems, and Private AI. In 2019, he expanded his lab by establishing the Open Lab for Sustainable Security and Safety (OpenS3 Lab) in partnership with Huawei.
His research focus spans various domains, including Trustworthy Computing Platforms, Hardware-assisted Security, IoT Security and Privacy, Applied Cryptography, and Trustworthy AI. Prof. Sadeghi has played pivotal roles in numerous national and international research and development projects, emphasizing the design and implementation of secure and trustworthy technologies.
He has served as General or Program Chair and Program Committee member of major conferences and events in Information Security and Privacy as well as Design Automation. Prof. Sadeghi has notably contributed as the Editor-in-Chief of IEEE Security & Privacy Magazine and served on the editorial boards of respected publications such as ACM TISSEC, IEEE TCAD, ACM Books, ACM DIOT, ACM TODAES, and ACM DTRAP.
Prof. Sadeghi's exceptional contributions to the field have earned him prestigious awards. In 2008, he was awarded the esteemed German "Karl Heinz Beckurts" prize for his influential research in Trusted and Trustworthy Computing technology, acknowledging its impactful transfer to industrial practice. In 2010, his group received the German IT Security Competition Award. In 2018, he was honored with the ACM SIGSAC Outstanding Contributions Award, recognizing his dedicated research, education, and management leadership in the security community, with pioneering contributions in content protection, mobile security, and hardware-assisted security. He is a member of the German National Academy of Science and Engineering.
The year 2021 brought further recognition with the Intel Academic Leadership Award at USENIX Security, acknowledging Prof. Sadeghi's influential research in information and computer security, particularly in hardware-assisted security. In 2022, he was awarded the prestigious European Research Council (ERC) Advanced Grant, solidifying his position as a leading figure in advancing cutting-edge research in computer science and security.