Hasan is an Electrical and Computer Engineering MS/Ph.D. student in the Image and Video Understanding Lab (IVUL) at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology (KAUST), under the supervision of Professor Bernard Ghanem.

Education and Early Career

Hasan obtained his bachelor's degree in Electrical and Computer Engineering from the American University of Beirut in 2020 and joined KAUST that same year to pursue his MS and PhD degrees.

Research Interests

  • Deep Learning
  • Machine Learning
  • Computer Vision

Publications

Selected Papers

Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?

Authors: Hasan Abed Al Kader Hammoud*, Ameya Prabhu*, Ser-Nam Lim, Philip HS Torr, Adel Bibi, Bernard Ghanem
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric is unreliable, as even vacuous blind classifiers, which do not use input images for prediction, can achieve unrealistically high online accuracy by exploiting spurious label correlations in the data stream. Our study reveals that existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information, suggesting that they unintentionally learn spurious label correlations. To address this issue, we propose a novel metric for measuring adaptation based on the accuracy on the near-future samples, where spurious correlations are removed. We benchmark existing OCL approaches using our proposed metric on large-scale datasets under various computational budgets and find that better generalization can be achieved by retaining and reusing past seen information. We believe that our proposed metric can aid in the development of truly adaptive OCL methods. We provide code to reproduce our results at https://github.com/drimpossible/EvalOCL.
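
To make the failure mode concrete, here is a minimal sketch (an illustrative toy setup, not the paper's exact evaluation protocol): a blind classifier that simply repeats the most recently seen label scores near-perfectly on the immediate next sample of a temporally correlated stream, yet falls to chance once evaluated on near-future samples where the spurious correlation has decayed.

    import random

    random.seed(0)

    # Build a temporally correlated stream: labels arrive in long runs,
    # so consecutive samples usually share the same label.
    stream = []
    for _ in range(200):
        label = random.randrange(10)  # 10 classes
        stream += [label] * 25        # a run of 25 samples with the same label

    def blind_accuracy(labels, delay):
        """Accuracy of a blind classifier that predicts the label seen at
        time t for the sample `delay` steps ahead (delay=1 mimics the
        next-sample 'online accuracy')."""
        hits = sum(labels[t] == labels[t + delay]
                   for t in range(len(labels) - delay))
        return hits / (len(labels) - delay)

    print(blind_accuracy(stream, delay=1))    # ~0.96: looks highly "adaptive"
    print(blind_accuracy(stream, delay=100))  # ~0.10: chance once correlations decay
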
CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society

Authors: Guohao Li*, Hasan Abed Al Kader Hammoud*, Hani Itani*, Dmitrii Khizbullin, Bernard Ghanem
The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.
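
As a rough illustration of the role-playing idea, the sketch below shows how inception prompting pins each agent to a role and the shared task before any dialogue begins, so the two agents can cooperate without a human steering the conversation. The chat stub, prompts, and <TASK_DONE> token are illustrative stand-ins, not the actual API of the open-sourced CAMEL library.

    # Illustrative two-agent role-playing loop with inception prompting.
    def chat(system_prompt: str, history: list) -> str:
        # Hypothetical stand-in for a real chat-model call; echoes for demo.
        return f"[reply to: {history[-1][:40]}...]"

    task = "Develop a trading bot for the stock market."

    # Inception prompts: role and task are fixed up front for both agents.
    assistant_sys = ("Never forget you are a Python Programmer. "
                     f"Help the user complete the task: {task}")
    user_sys = ("Never forget you are a Stock Trader. "
                f"Instruct the assistant to complete the task: {task}. "
                "Give one instruction at a time. Say <TASK_DONE> when finished.")

    history = [f"Task: {task}"]
    for turn in range(5):                        # cap the number of turns
        instruction = chat(user_sys, history)    # user agent instructs...
        history.append(instruction)
        solution = chat(assistant_sys, history)  # ...assistant agent solves
        history.append(solution)
        if "<TASK_DONE>" in instruction:
            break

    print("\n".join(history))
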
Computationally Budgeted Continual Learning: What Does Matter?

Conference: CVPR 2023
Authors: Ameya Prabhu*, Hasan Abed Al Kader Hammoud*, Puneet K. Dokania, Philip H.S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi
Continual Learning (CL) aims to sequentially train models on streams of incoming data that vary in distribution by preserving previous knowledge while adapting to new data. Current CL literature focuses on restricted access to previously seen data, while imposing no constraints on the computational budget for training. This is unreasonable for applications in the wild, where systems are primarily constrained by computational and time budgets, not storage. We revisit this problem with a large-scale benchmark and analyze the performance of traditional CL approaches in a compute-constrained setting, where the effective memory samples used in training can be implicitly restricted as a consequence of limited computation. We conduct experiments evaluating various CL sampling strategies, distillation losses, and partial fine-tuning on two large-scale datasets, namely ImageNet2K and Continual Google Landmarks V2, in data-incremental, class-incremental, and time-incremental settings. Through extensive experiments amounting to over 1500 GPU-hours in total, we find that, under a compute-constrained setting, traditional CL approaches, without exception, fail to outperform a simple minimal baseline that samples uniformly from memory. Our conclusions hold across different numbers of stream time steps, e.g., 20 to 200, and under several computational budgets. This suggests that most existing CL methods are simply too computationally expensive for realistic budgeted deployment. Code for this project is available at: https://github.com/drimpossible/BudgetCL.
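
The minimal baseline mentioned above is easy to state in code. The sketch below is a hypothetical rendering (buffer handling, the budget of 10 steps per stream step, and the sgd_step stub are illustrative assumptions, not the benchmark's exact configuration): the entire compute budget is spent on plain SGD over minibatches drawn uniformly from the replay memory.

    import random

    BATCH = 32
    BUDGET = 10    # gradient steps allowed per stream step (the compute budget)
    memory = []    # replay buffer of (x, y) pairs seen so far

    def sgd_step(model, batch):
        # Hypothetical stand-in for one gradient update on a minibatch.
        pass

    def stream_step(model, incoming):
        """One step of the minimal baseline: store the new data, then spend
        the whole compute budget on minibatches sampled uniformly from memory."""
        memory.extend(incoming)
        for _ in range(BUDGET):
            batch = random.sample(memory, min(BATCH, len(memory)))
            sgd_step(model, batch)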

Under Construction! More to be added.


Education Profile

B.E., Electrical and Computer Engineering, American University of Beirut, Lebanon, 2020


Office

Bldg 1, Level 2, 2106-WS01

Awards and Distinctions

•    Distinguished Graduate Award for Electrical and Computer Engineering (2020)
•    Mohamad Ali Safieddine Endowed Award for Academic Excellence (2020)
•    Ranked first in the Electrical and Computer Engineering Department among the graduating class, with a GPA of 4.0/4.0, American University of Beirut
•    Dean's Honor List for four consecutive years, American University of Beirut
•    Blom Shabeb Scholarship (2016)
•    Top 10 Lebanese Students in General Science Official Exams (2016)