Yinxi Liu, PhD Student, Computer Science and Engineering, the Chinese University of Hong Kong
Tuesday, February 27, 2024, 09:00 - 10:00
Building 9, Level 4, Room 4225
In a computer-reliant era, we balance the power of computer systems against the challenge of ensuring their functional correctness and security. Program analysis has proven successful in addressing these issues by predicting how a system will behave when executed.
Soufiane Hayou, Postdoc, Simons Institute, UC Berkeley
Monday, February 26, 2024, 09:00 - 10:00
Building 9, Level 4, Room 4225
Neural networks have achieved impressive performance in many applications, such as image and speech recognition and generation. State-of-the-art performance is usually achieved via a series of engineered modifications to existing neural architectures and their training procedures. A common feature of these systems is their large-scale nature: modern neural networks usually contain billions, if not tens of billions, of trainable parameters, and empirical evaluations generally support the claim that increasing the scale of a neural network (e.g., its width and depth) boosts model performance if done correctly. However, given a neural network model, it is not straightforward to answer the crucial question: how do we scale the network? In this talk, I will show how we can leverage different mathematical results to efficiently scale neural networks, with empirically confirmed benefits.
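As a rough illustration of what width-aware scaling rules can look like in practice, the sketch below adjusts the initialization and per-layer learning rate of a hypothetical hidden layer as its width grows, in the spirit of maximal-update-style parameterizations; the base width, base learning rate, and exponents are illustrative assumptions, not the specific rules presented in the talk.

```python
# Minimal sketch of width-aware scaling rules (maximal-update-style); the base width,
# base learning rate, and exponents are illustrative assumptions, not the talk's rules.
import numpy as np

def scaled_hyperparams(base_width, width, base_lr=1e-3):
    """Return (hidden init std, hidden learning rate, output init std) for a given width."""
    m = width / base_width                  # width multiplier relative to a tuned base model
    hidden_init_std = 1.0 / np.sqrt(width)  # keeps pre-activations O(1) as width grows
    hidden_lr = base_lr / m                 # shrink the hidden-layer step size with width
    output_init_std = 1.0 / width           # scale the readout layer more aggressively
    return hidden_init_std, hidden_lr, output_init_std

for w in (256, 1024, 4096):
    std, lr, out_std = scaled_hyperparams(base_width=256, width=w)
    print(f"width={w}: init std={std:.4f}, lr={lr:.2e}, output init std={out_std:.2e}")
```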
Jason Avramidis, Director of Innovation and International Flexibility Markets for OakTree Power, UK
Tuesday, February 13, 2024, 12:00 - 13:00
Building 1, Level 4, Room 4214
Until very recently, distribution-led local flexibility markets were exclusively an academic endeavor, with few practical applications, mostly limited to small-scale innovation projects. However, with European regulation finally catching up with the realities of modern distribution networks, local flexibility markets are slowly becoming a reality: new ones are popping up across the continent, and some are even becoming a business-as-usual (BAU) option in the most advanced countries.
Marco Mellia, Department of Control and Computer Engineering, Politecnico di Torino, Italy
Sunday, February 11, 2024, 12:00 - 13:00
Building 9, Level 2, Room 2325, Lecture Hall 2
This Dean's Distinguished Lecture is part of the ECE Graduate Seminar. Modern Artificial Intelligence (AI) technologies, led by deep learning, have gained unprecedented momentum over the past decade.
Prof. Dr. Victorita Dolean, Mathematics and Computer Science, Scientific Computing, TU Eindhoven
Tuesday, February 06, 2024, 16:00 - 17:00
Building 2, Level 5, Room 5220
Wave propagation and scattering problems are of huge importance in many applications in science and engineering - e.g., in seismic and medical imaging and more generally in acoustics and electromagnetics.
Gene Tsudik, Distinguished Professor of Computer Science, the University of California, Irvine (UCI)
Monday, February 05, 2024, 11:30 - 12:30
Building 9, Level 2, Room 2325, Hall 2
As many types of IoT devices worm their way into numerous settings and many aspects of our daily lives, awareness of their presence and functionality becomes a source of major concern. Hidden IoT devices can snoop on nearby unsuspecting users via sensing, and can affect the environment in which unaware users are present via actuation.
Prof. Samuel Horvath, Machine Learning at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Sunday, February 04, 2024, 15:00 - 16:00
Building 4, Level 5, Room 5220
In the first part of the talk, we introduce Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in deep neural networks (DNNs).
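As a rough sketch of the idea (not the exact formulation from the talk), ordered dropout can be thought of as keeping a contiguous prefix of a layer's units rather than a random subset, so that smaller sub-models are nested inside larger ones; the layer size and candidate widths P below are illustrative assumptions.

```python
# Ordered-dropout-style mask: keep the first units of a layer rather than a random
# subset, so every narrower sub-model is contained in every wider one.
import numpy as np

def ordered_dropout_mask(num_units, p):
    """Keep the first ceil(p * num_units) units and zero out the rest."""
    k = int(np.ceil(p * num_units))
    mask = np.zeros(num_units)
    mask[:k] = 1.0
    return mask

rng = np.random.default_rng(0)
P = [0.25, 0.5, 1.0]                 # candidate sub-model widths (illustrative)
x = rng.standard_normal(8)           # activations of an 8-unit layer
p = rng.choice(P)                    # sample a width (e.g., per step or per client)
print(f"p={p}:", np.round(x * ordered_dropout_mask(x.size, p), 3))
```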
Monday, January 29, 2024, 11:30 - 12:30
Building 9, Level 2, Room 2325
In reaction to the waning benefits of transistor scaling and the increasing demands on computing power, specialized accelerators have drawn significant attention from both academia and industry because of their orders-of-magnitude improvements in performance and energy efficiency.
Prof. Zhiming Chen, Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Wednesday, January 24, 2024, 14:30 - 16:00
Building 4, Level 5, Room 5220
In this short course, we will introduce some elements in deriving the hp a posteriori error estimate for a high-order unfitted finite element method for elliptic interface problems. The key ingredient is an hp domain inverse estimate, which allows us to prove a sharp lower bound of the hp a posteriori error estimator.
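For orientation, a standard residual-based hp a posteriori error estimator for the model problem -Δu = f has the form below; this is a generic reference form, not the specific unfitted-interface estimator derived in the course.

```latex
% Generic residual-based hp a posteriori error estimator for -\Delta u = f
% (reference form only; reliability holds up to constants that may depend on
% the polynomial degree):
\eta^2 \;=\; \sum_{K \in \mathcal{T}_h} \frac{h_K^2}{p_K^2}\,
             \| f + \Delta u_h \|_{L^2(K)}^2
       \;+\; \sum_{e \in \mathcal{E}_h} \frac{h_e}{p_e}\,
             \big\| [\partial_n u_h] \big\|_{L^2(e)}^2 ,
\qquad
\| \nabla (u - u_h) \|_{L^2(\Omega)} \;\lesssim\; \eta .
```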
Monday, January 22, 2024, 11:30 - 12:30
Building 9, Level 2, Room 2325
We study theoretical problems of fault diagnosis in circuits and switching networks, which are among the most fundamental models for computing Boolean functions.
Prof. Mohamed Abdelfattah, Electrical and Computer Engineering Department at Cornell University
Sunday, December 17, 2023, 14:00 - 15:30
Building 2, Level 5, Room 5209
Deep neural networks (DNNs) are revolutionizing computing, necessitating an integrated approach across the computing stack to optimize efficiency. In this talk, I will explore the frontier of DNN optimization, spanning algorithms, software, and hardware. We'll start with hardware-aware neural architecture search, demonstrating how tailoring DNN architectures to specific hardware can drastically enhance performance.
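As a toy illustration of the hardware-aware idea, the snippet below ranks candidate blocks by accuracy discounted by a latency penalty against a target budget; the candidate names, accuracy numbers, latency table, and penalty form are hypothetical placeholders, not measurements or methods from the talk.

```python
# Toy sketch of hardware-aware architecture selection: candidates are ranked by
# accuracy discounted by a latency penalty. All names and numbers are placeholders.
candidates = {
    "mbconv_k3_e3": {"accuracy": 0.752, "latency_ms": 11.0},
    "mbconv_k5_e6": {"accuracy": 0.768, "latency_ms": 19.5},
    "conv_k3":      {"accuracy": 0.741, "latency_ms": 8.2},
}

def hw_aware_score(accuracy, latency_ms, target_ms=12.0, alpha=0.07):
    """Keep full accuracy when under the latency target, discount it otherwise."""
    penalty = (latency_ms / target_ms) ** alpha if latency_ms > target_ms else 1.0
    return accuracy / penalty

best = max(candidates, key=lambda name: hw_aware_score(**candidates[name]))
print("selected architecture:", best)   # the most accurate block is not always chosen
```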
Prof. Ahmad-Reza Sadeghi, Distinguished Professor of Computer Science, the Technical University of Darmstadt, Germany.
Sunday, December 10, 2023, 12:00 - 13:00
Building 4, Level 5, Room 5220
The rapid growth of Artificial Intelligence (AI) and Deep Learning mirrors an infectious phenomenon. While AI systems promise diverse applications and benefits, they bear substantial security and privacy risks. Indeed, AI represents a goldmine for the security and privacy research domain.
RC3 Advisory Board
Tuesday, December 05, 2023, 08:30 - 12:30
Building 5, Level 5, Room 5220
Machine learning (ML) has witnessed remarkable advancements in recent years, demonstrating its effectiveness in a wide array of applications, including intrusion detection systems (IDS). However, when operating in adversarial environments, ML-based systems are susceptible to a range of attacks.
Prof. Marcus Völp, Head of the CritiX lab, the Interdisciplinary Centre for Security, Reliability and Trust (SnT), the University of Luxembourg.
Thursday, November 30, 2023, 15:30 - 16:30
Building 5, Level 5, Room 5209
Our society keeps entrusting ICT systems with high-value cyber-only assets, such as our most sensitive data, finances, etc. However, when it comes to cyber-physical systems and their ability to act in and with the physical world, lives are at risk, and such systems require rigorous protection against accidental faults and cyberattacks.
Monday, November 27, 2023, 11:30 - 12:30
Building 9, Level 2, Room 2325, Hall 2
We develop a derivative-free global minimization algorithm that is based on a gradient flow of a relaxed functional. We combine relaxation ideas, Monte Carlo methods, and resampling techniques with advanced error estimates. Compared with well-established algorithms, the proposed algorithm has a high success rate in a broad class of functions, including convex, non-convex, and non-smooth functions, while keeping the number of evaluations of the objective function small.
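The following is a generic Monte Carlo sketch in the spirit of relaxation-based, derivative-free minimizers: a population of samples is weighted by a Gibbs factor exp(-beta f), contracted toward the weighted consensus point, perturbed with noise, and occasionally resampled. It only illustrates the ingredients mentioned above (relaxation, Monte Carlo, resampling); the specific update rule, step sizes, and parameters are assumptions, not the proposed algorithm.

```python
# Generic relaxation / Monte Carlo / resampling sketch for derivative-free minimization.
# Illustrative only; not the algorithm presented in the talk.
import numpy as np

def minimize_derivative_free(f, dim, n_samples=200, steps=300, beta=20.0, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, size=(n_samples, dim))           # initial population
    for step in range(steps):
        fx = np.apply_along_axis(f, 1, x)                    # objective values only, no gradients
        w = np.exp(-beta * (fx - fx.min()))                  # stabilized Gibbs weights
        w /= w.sum()
        m = w @ x                                            # weighted consensus point
        x = m + 0.9 * (x - m) + noise * rng.standard_normal(x.shape)  # contract toward m, explore
        if (step + 1) % 50 == 0:                             # occasional resampling step
            idx = rng.choice(n_samples, size=n_samples, p=w)
            x = x[idx] + noise * rng.standard_normal(x.shape)
    return m

# Rastrigin function: non-convex with many local minima; its global minimum is at the origin.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(minimize_derivative_free(rastrigin, dim=2))
```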
Nuno Neves, Professor at the Department of Computer Science, Faculty of Sciences, the University of Lisboa (FCUL), Portugal.
Thursday, November 23, 2023, 15:30 - 16:30
Building 5, Level 5, Room 5209
Federated Learning (FL) is a distributed machine learning approach that allows multiple parties to train a model collaboratively without sharing sensitive data.
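A minimal federated-averaging-style sketch of the basic FL loop appears below: each client updates a local copy of the model on its own data, and the server only aggregates model weights, never raw data. The linear model and synthetic client datasets are toy placeholders, not the systems discussed in the talk.

```python
# Minimal FedAvg-style loop: clients train locally, the server averages their weights.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Four clients, each with its own private synthetic dataset (never shared).
clients = []
for _ in range(4):
    X = rng.standard_normal((50, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few local gradient steps on one client's data (least-squares loss)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):                                      # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)                 # server averages client models
print(w_global)                                          # approaches true_w without sharing data
```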
Monday, November 20, 2023, 11:30 - 12:30
Building 9, Level 2, Room 2325, Hall 2
Attention mechanisms have become a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the outstanding performance they deliver, but also because they appear to offer an innate explanation for the behavior of neural architectures, which is notoriously difficult to analyze. However, recent studies show that attention is unstable under randomness and perturbations during training or testing, such as different random seeds or slight perturbations of input or embedding vectors, which prevents it from serving as a faithful explanation tool. A natural question is therefore whether we can find a substitute for the current attention mechanism that is more stable while preserving its most important characteristics for explanation and prediction.
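For reference, the sketch below computes standard scaled dot-product attention weights, the quantities that are often read as token-level explanations, and shows how a small perturbation of the keys can change them; the tiny embeddings and perturbation scale are illustrative, not an experiment from the talk.

```python
# Scaled dot-product attention weights and their sensitivity to a small perturbation.
import numpy as np

def attention_weights(Q, K):
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # softmax over keys

rng = np.random.default_rng(0)
Q = rng.standard_normal((1, 4))                    # one query token
K = rng.standard_normal((5, 4))                    # five key tokens
w_clean = attention_weights(Q, K)
w_noisy = attention_weights(Q, K + 0.05 * rng.standard_normal(K.shape))
print("clean:    ", np.round(w_clean, 3))
print("perturbed:", np.round(w_noisy, 3))          # small input noise can reshuffle the weights
```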
Tuesday, November 14, 2023, 12:15 - 14:15
Building 1, Level 3, Room 3426
The development of advanced vision-language models requires considerable resources, in terms of both computation and data. There is growing interest in training these models efficiently and effectively and in leveraging them for various downstream tasks. This dissertation presents several contributions aimed at improving both learning and data efficiency in vision-language learning, and at leveraging the resulting models in downstream tasks.
Adrian Perrig, Professor, the Department of Computer Science, ETH Zürich, Switzerland
Monday, November 13, 2023, 11:30 - 12:30
Building 9, Level 2, Room 2325, Hall 2
Imagining a new Internet architecture enables us to explore new networking concepts without the constraints imposed by the current infrastructure. In this presentation, we invite you to join us on our 14-year-long expedition of creating the SCION next-generation secure Internet architecture.
Sunday, November 12, 2023, 15:00 - 16:30
Building 1, Level 4, Room 4214
Sequential modeling algorithms have made significant strides in a variety of domains, facilitating intelligent decision-making and planning in complex scenarios. This dissertation explores the potential and limitations of these algorithms, unveiling novel approaches to enhance their performance across diverse fields, from autonomous driving and trajectory forecasting to reinforcement learning and vision language understanding.
Josep Domingo-Ferrer, Distinguished Professor of Computer Science and ICREA-Acadèmia Research Professor, Universitat Rovira i Virgili, Tarragona, Catalonia
Thursday, November 09, 2023, 15:30 - 16:30
Building 4, Level 5, Room 5209
Machine learning (ML) is vulnerable to security and privacy attacks. Whereas security attacks aim at preventing model convergence or forcing convergence to wrong models, privacy attacks attempt to disclose the data used to train the model.
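To illustrate the kind of privacy attack mentioned above, the toy sketch below performs a loss-threshold membership inference test: a record is guessed to be in the training set if the model's loss on it is suspiciously low. The overparameterized least-squares model and synthetic data are hypothetical placeholders, not an attack from the talk.

```python
# Toy loss-threshold membership inference against a deliberately overfitted model.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 30                                    # fewer samples than features: the model overfits
X_train, X_out = rng.standard_normal((n, d)), rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y_train = X_train @ w_true + 0.1 * rng.standard_normal(n)
y_out = X_out @ w_true + 0.1 * rng.standard_normal(n)

# Overparameterized least squares interpolates its training points (near-zero training loss).
w_hat = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

def per_sample_loss(X, y):
    return (X @ w_hat - y) ** 2

threshold = 0.01                                  # guess "member" if the loss is suspiciously low
print("flagged as members (train):   ", np.mean(per_sample_loss(X_train, y_train) < threshold))
print("flagged as members (held out):", np.mean(per_sample_loss(X_out, y_out) < threshold))
```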
Tuesday, November 07, 2023, 15:00 - 17:00
Building 3, Level 5, Room 5220
Graph Representation Learning has gained substantial attention in recent years within the field of data mining. This interest has been driven by the prevalence of data organized as graphs, such as social networks and academic graphs, which encompass various types of nodes and edges, forming heterogeneous graphs.
Prof. Muhammad Abdul-Mageed
Monday, November 06, 2023, 11:30 - 12:30
Building 9, Level 2, Room 2325, Hall 2
In the evolving landscape of artificial intelligence, generative models are revolutionizing our interface with computational systems and reshaping societal paradigms. For example, foundation models have the potential to transform content creation across languages, offering discovery and productivity pathways for humans to engage with one another and their environment. This talk sketches the core methodologies propelling this groundbreaking progress, charting a grand vision for generative natural language processing.