Large language models (LLMs) have raised concerns about incorrectly generated outputs, which can spread disinformation and yield poor outcomes in critical domains like healthcare and financial services.
In this panel conversation, participants from BuzzFeed and Galileo will discuss best practices that data science teams can use to effectively debug, manage, and control unanticipated results from LLMs.
They will explore the importance of implementing stronger controls around fairness and bias, and share insights on best practices for ensuring the safe and ethical use of LLMs at scale.
The panel will also delve into the pressing need for faster, more robust LLM fine-tuning and prompt evaluation.
Join this discussion to gain valuable insights and practical recommendations on improving prompt quality, data integrity, and fairness while building LLM-powered apps with your data.