Join us on April 11 as we continue our conversations on "trust in AI." An important aspect of AI adoption is the degree to which the human user trusts the system and perceives it as competent. Enabling appropriately calibrated trust in AI requires designing systems that promote users' trust while providing the support they need as we look to integrate decision-support AI into intelligence, mission planning, and JADC2 applications.

The team behind this 2023 I/ITSEC Best Paper nominee presents the design of a novel system at the intersection of human factors, cognitive modeling, and recommendation AI, exploring approaches for collaborative human-AI teaming. They conducted a series of usability and system design evaluations that explored (a) the information users consider when making trust judgments, (b) unobtrusive behavioral measures that can be integrated into cognitive models to predict when trust falls, and (c) trust calibration when cognitive model predictions did not match user actions, giving the AI an opportunity to build trust by intervening at the right time in the right way. User behavior, impressions, and self-report responses were examined to understand what behaviors emerge when users perceive a tool to be working collaboratively. The session offers specific guidance on designing recommendation AI that leverages behaviors and cognitive modeling for naturalistic interaction, as well as system calibration techniques to improve a user's perception of system competency.
Duration
1 hour
Price
Free
Language
English
OPEN TO
Everyone
Dial-in Number
Please register for this webinar to view the dial-in information.
NTSA represents the modeling, simulation, and training industries. Through its forums and events, NTSA brings government, academia, and industry together to develop new training and simulation solutions.