Introduction
Hello! I’m LuisMi! I’m a physicist who is passionate about technology and science. I believe that everybody has a responsibility to make a positive impact on the world. In my case, I firmly believe that helping advance science and technology is one of the most effective ways I can contribute.
For years I’ve been a technology enthusiast, especially when it comes to artificial intelligence. I’ve been following the AI revolution since Transformers were introduced, and I’ve closely watched the work of frontier model labs like OpenAI and Anthropic. Anthropic’s mission to build AI systems that people can trust, and that are safe, honest, and interpretable, has always resonated deeply with me.
My Professional Background
I’ve worked as a Data Scientist on Finance, Sales, and Analytics teams, collaborating closely with Product teams. Currently, I’m a Data Scientist at Mercado Libre, working in Fraud Prevention, where I develop and continuously monitor models that reduce fraud in Mercado Libre’s Credit products across Latin America.
However, throughout my career there have been moments when I’ve felt that my work wasn’t impacting the world as positively as I truly wanted. At times I’ve felt that I wasn’t doing what I should be doing with my life, and that I could be contributing in a more significant way.
Reflection
Recently I came across a video by Robert Miles. Thanks to that video, I delved into the world of AI Safety research, and I also discovered 80,000 Hours, which in turn led me to the Effective Altruism movement. This has changed how I think about maximizing my positive impact on the world.
Thanks to that video, 80,000 Hours, Anthropic, and what I feel should be my life’s purpose, I’ve decided to change careers and begin this journey toward AI Safety research in interpretability and alignment.
My Research Focus
I’m especially interested in investigating how interpretability can improve alignment techniques and evaluations, so that AI systems can be aligned more effectively with human values.
My goal is that within about two years, I will have successfully changed careers and become an AI researcher working on interpretability and alignment at a frontier AI lab (e.g., Anthropic) or a startup focused on AI Safety.
What This Portfolio Represents
This portfolio is where I will document my plan, progress, projects, and reading along the path toward AI research. Anyone, from any background, can follow my journey closely, and I hope to motivate more people to reflect, seek out what they’re passionate about, and pursue it, ideally helping others in the process.
In my next post, I’ll detail more about the specific plan for becoming an AI Safety Researcher.
Transparency Note
When you see this icon in posts or on images throughout this portfolio, it indicates that artificial intelligence was used to generate or edit the content. I believe in being transparent about AI assistance in content creation, as it aligns with the values of honesty and transparency that I admire in Anthropic.
Get In Touch
You can contact me through my profiles or by email at Me.
This marks the beginning of my journey from physicist and data scientist to AI Safety researcher. Follow along as I navigate this transition, sharing both challenges and discoveries along the way.