Imagination is one of the key properties of human intelligence that enables us not only to generate creative products like art and music, but also to understand the visual world.

Mohamed Elhoseiny is an Associate Professor in the Computer Science (CS) Program at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology (KAUST).

Education and early career

Dr. Elhoseiny received his Ph.D. from Rutgers University, New Brunswick, in October 2016 under the supervision of Prof. Ahmed Elgammal. His work has been widely recognized. In 2018, he received the Best Paper Award for his work on creative fashion generation at the ECCV Workshop on Art and Fashion, presented by Tamara Berg of UNC Chapel Hill and sponsored by IBM Research and JD AI Research. The work was also featured in New Scientist magazine, and he co-presented it at the Facebook F8 annual conference with Camille Couprie. His earlier work on creative art generation was featured by New Scientist and MIT Technology Review in 2017 and by the HBO series Silicon Valley (season 5, episode 3) in 2018. His Creative AI artwork was featured at the Best of AI meeting held at Disney in 2017 (an audience of more than 6,000), at Facebook's booth at NeurIPS 2017, and in the official FAIR video in June 2018. His work on lifelong learning was covered by MIT Technology Review in 2018. In November 2018, building on his five years of work on zero-shot learning, Dr. Elhoseiny took part in the United Nations Biodiversity Conference (roughly 10,000 attendees from more than 192 countries and dozens of major organizations), presenting how AI can benefit biodiversity, with implications for both disease management and climate change.

Areas of expertise and current scientific interests

Dr. Elhoseiny has collaborated with several researchers at Facebook AI Research, including Marcus Rohrbach, Yann LeCun, Devi Parikh, Dhruv Batra, Manohar Paluri, Marc'Aurelio Ranzato, and Camille Couprie. He has also teamed up with academic institutions including KU Leuven (with Rahaf Aljundi and Tinne Tuytelaars), UC Berkeley (with Sayna Ebrahimi and Trevor Darrell), the University of Oxford (with Arslan Chaudry and Philip Torr), and the Technical University of Munich (with Shadi AlBarqouni and Nassir Navab). His primary research interests are computer vision, the intersection of natural language and vision, and computational creativity.

Honors & Awards

  • IEEE Senior Member; Area Chair at CVPR 2021, ICCV 2021, IJCAI 2022, ECCV 2022, and ICLR 2023; Session Chair at WACV 2022
  • Organizer of the workshop “Closing the Loop Between Vision and Language (CLVL)” at ICCV 2015 (Santiago, Chile), ICCV 2017 (Venice, Italy), ICCV 2019 (Seoul, South Korea), and ICCV 2021 (virtual); featured in the ICCV 2021 daily magazine.
  • Program Chair and co-founder of the SAAI Initiative (Art & AI Symposium and multi-stage Hackathon), https://saai.devpost.com/, in partnership with several international organizations and hubs in Berlin, Bangalore, KAUST, San Francisco, and Zurich.
  • Authored or co-authored more than 50 publications at premier computer vision and machine learning conferences and journals.
  • Media attention for five different projects in MIT Technology Review, New Scientist, and other outlets during 2017-2018.
  • Best Paper Award for “DesIGN: Design Inspiration from Generative Networks” (creative fashion generation) at the ECCV Workshop on Art and Fashion, September 2018; presented by Tamara Berg of UNC Chapel Hill and sponsored by IBM Research and JD AI Research.
  • Doctoral Consortium Award at CVPR 2016 and an NSF Fellowship for the Write-a-Classifier project, 2014.
  • Nominated to represent Facebook at the United Nations Biodiversity Conference by Manohar Paluri, Director of Facebook AI, and Jerome Pesenti, VP of AI at Facebook.
  • Creative AI Artist demo featured in the official Facebook AI Research (FAIR) video, June 2018 to present; see time 1:29 at https://research.fb.com/category/facebook-ai-research/
  • Nominated to represent Facebook AI Research (FAIR) at the “Best of AI” meeting held at the Disney Innovation Group to demonstrate Creative Adversarial Networks (CAN), September 2017.
  • Invited oral presentation at the main Facebook F8 conference, May 2018: https://developers.facebook.com/videos/f8-2018/design-design-inspiration-from-generative-networks/
  • Best Research-Intern Award (silver), Vision and Learning Group, Center of Vision Technologies, SRI International, Summer 2014

Education Profile

  • Ph.D., Computer Science, Rutgers University, United States, 2016
  • M.Sc., Computer Science, Rutgers University, United States, 2014
  • M.Sc., Computer Systems, Ain Shams University (ASU), Egypt, 2010
  • B.Sc., Computer Systems, Ain Shams University (ASU), Egypt, 2006

Areas of Expertise and Research Interests

Imagination-Inspired AI, Computer Vision, Machine Learning, the intersection of Natural Language Processing and Computer Vision with applications to both understanding and generation, and Sequence-to-Sequence Learning.

Patents

  • Adobe Research: Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal, “Structured knowledge modeling, extraction and localization from images”, US Patent, 2019
  • SRI International: Hui Cheng, Jingen Liu, Harpreet Sawhney, Mohamed Elhoseiny, “Zero-shot event detection using semantic embedding”, US Patent, 2021

Selected Publications

Ji Zhang, Yannis Kalantidis, Marcus Rohrbach, Manohar Paluri, Ahmed Elgammal, Mohamed Elhoseiny, “Large-Scale Visual Relationship Understanding”, AAAI, 2019
Ramprasaath Selvaraju, Prithvijit Chattopadhyay, Mohamed Elhoseiny, Tilak Sharma, Dhruv Batra, Devi Parikh, Stefan Lee, “Choose your Neuron: Incorporating Domain Knowledge through Neuron Importance”, ECCV, 2018
Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars, “Memory Aware Synapses: Learning what (not) to forget”, ECCV, 2018
Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Ahmed Elgammal, “Imagine it for me: Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts”, CVPR, 2018
Ahmed Elgammal, Bingchen Liu, Diana Kim, Mohamed Elhoseiny, Marian Mazzone, “The Shape of Art History in the Eyes of the Machine”, AAAI, 2018 (oral)
Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Marcus Rohrbach, Tinne Tuytelaars, “Exploring the Challenges towards Lifelong Fact Learning”, ACCV, 2018
Mohamed Elhoseiny*, Yizhe Zhu*, Han Zhang, Ahmed Elgammal, “Link the Head to the ‘Beak’: Zero-Shot Learning from Noisy Text Description at Part Precision”, CVPR, 2017
Ji Zhang*, Mohamed Elhoseiny*, Walter Chang, Scott Cohen, Ahmed Elgammal, “Relationship Proposal Networks”, CVPR, 2017 (* equal contribution)
Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal, “Sherlock: Scalable Fact Learning in Images”, AAAI, 2017 (acceptance rate 24%)
Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, Marian Mazzone, “Creative Adversarial Networks: Generating ‘Art’ by Learning About Styles and Deviating from Style Norms”, International Conference on Computational Creativity (ICCC), 2017
Youssef Mohamed, Faizan Farooq Khan, Kilichbek Haydarov, Mohamed Elhoseiny, “It is Okay to Not Be Okay: Overcoming Emotional Bias in Affective Image Captioning by Contrastive Data Collection”, CVPR, 2022
Youssef Mohamed, Mohamed Abdelfattah, Shyma Alhuwaider, Feifan Li, Xiangliang Zhang, Kenneth Ward Church, Mohamed Elhoseiny, “ArtELingo: A Million Emotion Annotations of WikiArt with Emphasis on Diversity over Language and Culture”, EMNLP, 2022