Lexin Zhou
I am a CS alumnus of the University of Cambridge, where I was funded by Open Philanthropy and supervised by Prof. Andreas Vlachos. Before that, I completed my BSc in Data Science at the Universitat Politècnica de València, advised by Prof. Jose Hernandez-Orallo.
I mostly spend my days thinking about (i) designing robust evaluation methods that offer explanatory and predictive power over AI's capabilities, limitations, and risks, and (ii) finding pathways to positively shape the reliability and predictability of AI systems. I am also broadly interested in AI's social implications, psychometrics, cognitive science, and AI safety; I am especially intrigued by general-purpose systems such as LLMs.
At various points, I have held research and consultancy roles in AI evaluation at Meta AI, OpenAI, the Krueger AI Safety Lab, VRAIN, and the European Commission's Joint Research Centre (JRC), among others.
If you are interested in anything related to AI evaluation and want to stay informed, I highly recommend the monthly AI Evaluation Digest, led by a few amazing colleagues I've worked with, to which I also make occasional contributions.
news
| Date | News |
|---|---|
| Jun 20, 2024 | An LLM Feature-based Framework for Dialogue Constructiveness Assessment preprint on arXiv. |
| Mar 01, 2024 | Participated in the red team at Meta AI for their new foundation models, focusing on adversarial testing. |
| Oct 09, 2023 | Predictable Artificial Intelligence preprint on arXiv. |
| Mar 14, 2023 | Participated in the red team for GPT-4, focusing on capability assessment, reliability evaluation, and adversarial testing. |