Special Offer: Get 50% off your first 2 months
Personalized offer codes will be shared in each session

Debugging LLMs: Best Practices for Better Prompts and Data Quality

About This Webinar

Large language models (LLMs) have raised concerns about incorrectly generated outputs and their potential to spread disinformation and produce poor outcomes in critical domains like healthcare and financial services.

In this panel conversation, participants from BuzzFeed and Galileo will discuss best practices for data science teams to effectively debug, manage, and control unanticipated results from LLMs.

They will explore the importance of implementing better controls around fairness and bias, and share their insights on best practices to ensure the safe and ethical usage of LLMs at scale.

Furthermore, the panel will delve into the pressing need for faster and more robust LLM fine-tuning and prompt evaluations.

Join this discussion to gain valuable insights and practical recommendations on how to improve prompt quality, data integrity, and fairness while building LLM-powered apps with your data.

Who can view: Everyone
Webinar Price: Free
Featured Presenters
Webinar hosting presenter
Senior Director of Machine Learning at BuzzFeed
Archi is the Head of ML at BuzzFeed and leads cross-functional ML teams driving Gen AI and Personalization efforts across all BuzzFeed brands. He combines deep technical experience in Search, RecSys, Computer Vision & MLOps with the ability to build consensus and rally large, disparate groups of people toward delivering compelling internet-scale user products. Prior to BuzzFeed, he led Search & Recs and Computer Vision teams at Wayfair and researched Ethical AI systems at Northeastern University's CCIS lab. When not musing about neural network architectures, he likes to wonder about cosmology, astronomy, and space.
Webinar hosting presenter
Co-Founder at Galileo
Atindriyo is a Co-Founder and CTO at Galileo. Prior to that, he spent 10+ years building large-scale ML platforms at Uber and Apple. Formerly, he was a Staff Software Engineer and Tech Lead on Uber's Michelangelo ML platform and a co-architect of Michelangelo's Feature Store.

His work scaled Uber's Feature Store to serve 20,000+ ML features across all of Uber Machine Learning, and he led ML data quality efforts at the company. The solutions and tooling his team built improved the production performance of the 10,000+ models powering Uber's ML.

Later, his work with the Stanford AI Lab conceptualized Embedding Stores, a feature platform for managing and serving time-sensitive entity embeddings to downstream ML models.
Hosted By
Data Science Salon webinar platform hosts Debugging LLMs: Best Practices for Better Prompts and Data Quality
The DATA SCIENCE SALON is a unique vertical-focused data science conference that has grown into a diverse community of senior data science, machine learning, and other technical specialists. We gather face-to-face and virtually to educate each other, illuminate best practices, and innovate new solutions in a casual atmosphere.
Attended (187)