Welcome to The Next AI Platform
SCHEDULE FOR SELECT VIRTUAL SESSIONS. JUMP TO THE MINUTE MARKS BELOW TO FIND EACH SESSION.
0:22 – 22:00 – “Why Reconfigurability Will Be Key for Inference at Scale” with Ramine Roane, Xilinx.
22:02 – 51:03 – “Building Vast, Scalable Conversational AI Platforms: Pain Points in Software, Hardware” with Chandra Khatri, Uber.
51:05 – 1:09:50 – “Architectural Considerations for Next Generation Deep Learning” with Bill Dally, Stanford/Nvidia.
1:10:00 – 1:44:00 – VC Panel: AI Market Perspectives (Khosla Ventures, Intel Capital, Applied Ventures, hosted by Karl Freund, Moor Insights and Strategy).
1:44 – 2:14 – “Refining Machine Reasoning and the Hardware/Systems Impact” with Jason Gauci, Facebook.
2:15 – 2:33 – “AI Hardware in Context: Cerebras at Lawrence Livermore National Lab” with Brian Spears (LLNL) and Andy Hock (Cerebras).
2:33 – 2:52 – “Scalability, Efficiency, and Architectural Balance: A Chat with SambaNova” with Rodrigo Liang, CEO/co-founder, SambaNova.
2:52 – 3:07 – “Using AI to Design AI Chips: EDA Next Area for Deep Learning Growth” with Elias Fallon, Cadence.
3:07 – 3:25 – “High Performance, Low Power: The FPGA Way to Consider Datacenter Inference” with Raymond Nijssen, Achronix.
3:25 – 3:46 – “Performance Benchmarking and Device Evaluation” with Peter Mattson, Google; Maxim Naumov, Facebook; and Debo Dutta, Cisco (hosted by David Kanter).
3:46 – 4:10 – “Architectural Differentiation from the Ground Up” with SiMa.ai founder and CEO Krishna Rangasayee.