A study by three Technion researchers has revealed that an AI system’s competence isn’t enough: for users to choose a system, it also needs to convey warmth.
Spotify or Apple Music? Waze or Google Maps? Amazon Alexa or Google Nest? Consumers choose daily which artificial intelligence (AI) based systems to use. How do they decide? Considering the money and effort spent on enhancing AI performance, one might expect competence and capability to drive users’ choices. However, a recent study by researchers from the Faculty of Industrial Engineering and Management at the Technion shows that a system’s warmth plays a pivotal role in predicting consumers’ choice between AI systems.
New research with over 1,600 participants, recently published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, offers some insight into the psychology of potential users. The researchers, Zohar Gilad, Ofra Amir, and Liat Levontin of the Faculty of Industrial Engineering and Management at the Technion – Israel Institute of Technology, examined how users’ choices are shaped by their perceptions of an AI system’s warmth, that is, the system’s perceived intent (good or ill), and its competence, that is, the system’s perceived ability to act on those intentions.
While most previous research on warmth perceptions of AI-based systems addressed systems with a virtual or physical presence, such as virtual agents and robots, the current research focused on AI systems with little or no social presence, such as recommender systems, search engines, and navigation apps. For these types of AI systems, the researchers defined warmth in terms of the system’s primary beneficiary: whether it is designed mainly to serve the user or some other party. For example, a navigation system can prioritize collecting data about new routes (serving its developers) over presenting the best-known route (serving the user), or vice versa.
The researchers found that a system’s warmth mattered to potential users even more than its competence: participants preferred a highly warm system over a highly competent one. This precedence of warmth persisted even when the highly warm system was overtly deficient in competence. For example, when asked to choose between two AI systems that recommend car insurance plans, most participants favored a low-competence, high-warmth system — one using a traditional decision-tree algorithm trained on data from 1,000 car insurance plans, but developed to help people like them receive better car insurance offers and achieve their goals — over a high-competence, low-warmth system — one using a state-of-the-art artificial neural network trained on data from 1,000,000 car insurance plans, but developed to help insurance agents make better offers and achieve their goals. That is, consumers were willing to sacrifice competence for greater warmth.
These findings are consistent with social rules governing human-human interactions, where warmth considerations likewise outweigh competence considerations when judging fellow humans. In other words, people apply similar basic social rules when evaluating AI systems and people, even when the AI systems lack overt human characteristics. AI system designers should therefore consider the system’s warmth and communicate it to potential users.