Professor Elhoseiny’s research focuses on developing affective artificial intelligence that understands and generates novel visual content. He has contributed to and led numerous seminal works in affective AI art creation.

Biography

Mohamed Elhoseiny is an associate professor in the Computer Science Program at KAUST and the principal investigator of the KAUST Vision-CAIR Research Group. He joined the CEMSE Division at KAUST in 2019, bringing extensive experience from roles including a visiting faculty position at Baidu Research and a postdoctoral research position at Facebook AI Research from 2016 to 2019. He also held research positions at Adobe Research from 2015 to 2016 and at SRI International in 2014.

Elhoseiny earned his Ph.D. in 2016 from Rutgers University, United States, and his B.Sc. and M.Sc. in Computer Systems from Ain Shams University, Egypt, in 2006 and 2010, respectively.

His work has received numerous recognitions, including the Best Paper Award at the 2018 European Conference on Computer Vision (ECCV) Workshop on Fashion, Art, and Design for his research "DesIGN: Design Inspiration from Generative Networks." He also received the Doctoral Consortium Award at the 2016 Conference on Computer Vision and Pattern Recognition (CVPR) and an NSF Fellowship for his "Write-a-Classifier Project" in 2014. His research on creative art generation has been featured in New Scientist Magazine and MIT Technology Review, which also highlighted his work on lifelong learning.

Professor Elhoseiny’s contributions extend to zero-shot learning, which was featured at the United Nations, and his creative AI work was highlighted in HBO’s Silicon Valley. He has served as an area chair at CVPR 2021 and the International Conference on Computer Vision (ICCV) 2021, and has organized workshops at ICCV in 2015, 2017, and 2019, and at CVPR in 2021.

He has been involved in several pioneering works in affective AI art creation and has authored or co-authored numerous award-winning papers.

Research Interests

Elhoseiny’s primary research interests are in computer vision, particularly at the intersection of natural language and vision, computational creativity, and efficient multimodal learning with limited data. He is also interested in affective AI, especially understanding and generating novel visual content, such as art and fashion.

Awards and Distinctions

  • IEEE Senior Member, 2024
  • Program Chair and Co-founder of the SAAI Initiative (Art & AI Symposium and multi-stage Hackathon), 2024
  • Best Paper Award for his work on creative fashion generation at the ECCV workshop co-organized by Tamara Berg of UNC Chapel Hill and sponsored by IBM Research and JD AI Research, 2018
  • Doctoral Consortium Award, Conference on Computer Vision and Pattern Recognition (CVPR), 2016
  • NSF Fellowship for Write-a-Classifier project, 2014
  • Nominated to represent Facebook at the United Nations Biodiversity Conference by Manohar Paluri, Director of Facebook AI, and Jerome Pesenti, VP of AI at Facebook, 2018
  • Best Research-Intern Award (silver), Vision and Learning Group, Center of Vision Technologies, SRI International, 2014
  • Nominated to represent Facebook AI Research (FAIR) at the “Best of AI meeting” that was held at “Disney Innovation Group” to demonstrate Creative Adversarial Networks (CAN), 2017

Education

Doctor of Philosophy (Ph.D.)
Computer Science, Rutgers University, United States, 2016
Master of Science (M.S.)
Computer Science, Rutgers University, United States, 2014
Master of Science (M.S.)
Computer Systems, Ain Shams University, Egypt, 2010
Bachelor of Science (B.S.)
Computer Systems, Ain Shams University, Egypt, 2006

Patents

  • Adobe Research: Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal, “Structured knowledge modeling, extraction and localization from images”, US Patent, 2019.
  • SRI International: Hui Cheng, Jingen Liu, Harpreet Sawhney, Mohamed Elhoseiny, “Zero-shot event detection using semantic embedding”, US Patent, 2021.