ChatGPT and other generative AI tools have made exciting breakthroughs in human-like fluency, but they suffer from a key weakness: they can't always be trusted.
This talk will examine the reliability problems that large language models (LLMs) face with factual accuracy, logical reasoning, and computation. It will also present Wolfram's vision for addressing these issues by combining LLMs with the computational intelligence of Wolfram Language.
The talk will show examples of using plugins to inject facts, private data, and custom algorithms into LLM answers, as well as ways to harness LLM capabilities from within Wolfram Language code to tackle difficult problems involving unstructured data and loosely specified tasks.