About me
Senior research scientist @ Google DeepMind in Zurich.
Welcome to my page! I am a senior research scientist at Google DeepMind, focusing on deep learning and ethical AI. Previously, I founded and led the Advanced Analytics team at Saudi Aramco, managed the company’s Enterprise Analytics program, and served as a technical lead in its digital transformation program. Check out my resume for further details.
Education
- Ph.D. in Computer Science at KAUST (2012 - 2017), GPA: 4.0 / 4.0.
- Thesis Title: Learning via Query Synthesis. Committee: X. Zhang, X. Gao, D. Keyes (KAUST), and W. Wang (UCLA).
- M.S. in Electrical Engineering at Stanford University (2009 - 2011), GPA: 4.15 / 4.0.
- B.S. in Computer Engineering at University of Nebraska-Lincoln (2000 - 2005), GPA: 3.98 / 4.0.
- Highest Distinction, Superior Scholarship Award, Minor in Economics.
Selected Activities
Books
Ibrahim Alabdulmohsin, Summability Calculus: A Comprehensive Theory of Fractional Finite Sums, Springer, 2018.
Recent Preprints
- Xi Chen, Xiao Wang, Lucas Beyer, Alexander Kolesnikov, Jialin Wu, Paul Voigtlaender, Basil Mustafa, Sebastian Goodman, Ibrahim Alabdulmohsin, Piotr Padlewski, et al:
“PaLI-3 Vision Language Models: Smaller, Faster, Stronger,” arXiv:2310.09199, 2023.
Recent Publications
- Ibrahim Alabdulmohsin, Vinh Q. Tran, and Mostafa Dehghani: “Fractal Patterns May Illuminate the Success of Next-Token Prediction.” NeurIPS, 2024.
- Bo Wan, Michael Tschannen, Yongqin Xian, Filip Pavetic, Ibrahim Alabdulmohsin, Xiao Wang, André Susano Pinto, Andreas Steiner, Lucas Beyer, Xiaohua Zhai: “LocCa: Visual Pretraining with Location-aware Captioners,” NeurIPS, 2024.
- Angéline Pouget, Lucas Beyer, Emanuele Bugliarello, Xiao Wang, Andreas Peter Steiner, Xiaohua Zhai, Ibrahim Alabdulmohsin: “No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models,” NeurIPS, 2024.
- Ibrahim Alabdulmohsin, Xiao Wang, Andreas Peter Steiner, Priya Goyal, Alexander D’Amour, Xiaohua Zhai: “CLIP the Bias: How Useful is Balancing Data in Multimodal Learning?” ICLR, 2024.
- Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, A. J. Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, et al:
“PaLI-X: On Scaling up a Multilingual Vision and Language Model.” CVPR, 2024.
- Ibrahim Alabdulmohsin, Xiaohua Zhai, Alexander Kolesnikov, Lucas Beyer:
“Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design.” NeurIPS, 2023.
- Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey A. Gritsenko, Mario Lucic, Neil Houlsby: “Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution.”
NeurIPS, 2023.
- Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, et al:
“Scaling Vision Transformers to 22 Billion Parameters.” ICML, 2023.
- Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim Alabdulmohsin, Filip Pavetic: “FlexiViT: One Model for All Patch Sizes,” CVPR, 2023.
- Ibrahim Alabdulmohsin, Nicole Chiou, Alexander D’Amour, Arthur Gretton, Sanmi Koyejo, Matt J. Kusner, Stephen R. Pfohl, Olawale Salaudeen, Jessica Schrouff, Katherine Tsai: “Adapting to Latent Subgroup Shifts via Concepts and Proxies,” AISTATS, 2023.
- Ibrahim Alabdulmohsin, Behnam Neyshabur, Xiaohua Zhai: “Revisiting Neural Scaling Laws in Language and Vision,” NeurIPS, 2022.
- Ibrahim Alabdulmohsin, Jessica Schrouff, Oluwasanmi Koyejo: “A Reduction to Binary Approach for Debiasing Multiclass Datasets,” NeurIPS, 2022.
- Jessica Schrouff, Natalie Harris, Oluwasanmi Koyejo, Ibrahim Alabdulmohsin, Eva Schnider, Krista Opsahl-Ong, Alexander Brown, Subhrajit Roy, Diana Mincu, Christina Chen, Awa Dieng, Yuan Liu, Vivek Natarajan, Alan Karthikesalingam, Katherine A. Heller, Silvia Chiappa, Alexander D’Amour: “Maintaining fairness across distribution shift: do we have viable solutions for real-world applications?”, NeurIPS, 2022.
- Alexander Soen, Ibrahim Alabdulmohsin, Sanmi Koyejo, Yishay Mansour, Nyalleng Moorosi, Richard Nock, Ke Sun, Lexing Xie:
“Fair Wrapping for Black-box Predictions,” NeurIPS, 2022.