With the rapid development of AI, many tasks are becoming automated, making our lives easier. Scepticism is a common response to such advancements: humans have tended to question new concepts and technologies since ancient times (Popkin, 2017).
This article explores how we are sometimes not only sceptical of unfamiliar technologies in the realm of AI but also averse to them. Algorithm aversion, or aversion to artificial intelligence, is the tendency to prefer human forecasters over AI, a preference that persists even in areas where AI has proven superior (Dzindolet et al., 2003).
Our aversion can be a product of several factors: the algorithm’s agency over the final decision, the importance of the decision itself, the algorithm’s performance compared to a human agent’s, or even the amount of information we receive about the technology.
While our preference for human judgement is understandable, artificial intelligence can be superior at certain tasks. This makes it crucial to examine our inherent perceptions: where our bias against artificial intelligence comes from, and how such aversion can be overcome.
Read More: AI deep fake technology can harm the Mental Health of Teenagers
Role of Agency in AI Aversion
The autonomy given to an algorithm plays a major role in our aversion to it. Research on algorithm aversion distinguishes two types of technologies: performative and advisory algorithms. Performative algorithms carry out a task on behalf of humans and therefore hold greater autonomy over the outcome. Advisory algorithms, on the other hand, simply offer a suggestion and leave the final decision, and thus the autonomy, to the user.
Performative algorithms have often been found to provoke more aversion because they leave the human user with less agency (Jussupow et al., 2020). For instance, one study explored the delegation of decision-making responsibility to algorithms. The researchers examined how managerial decisions in the workplace are delegated to AI and discovered that when humans had high perceived situational awareness (SA) of the task at hand and how to conduct it, they relied less on AI, even when the AI was known to be more proficient (Schneider & Leyer, 2019). Beyond managerial decisions, AI’s capabilities extend to other workplace tasks such as resume screening.
Read More: Addressing the Impact of AI Worries in the Workplace
In one study on resume screening, it was found that people are generally more trusting of human agents conducting the screening than of artificial intelligence algorithms (Lacroux & Martin-Lacroux, 2022). Beyond administration, AI has also been applied in medical practice, where the stakes of decisions are even higher. Older research found a common belief that physicians who used AI’s help in diagnosis were less capable.
More recent research has not supported this claim: rather than deeming a physician incapable for using AI, people perceive the use of AI as impersonal and object that the examination was not thorough (Shaffer et al., 2012).
Taken together, these studies suggest that factors such as our perceived situational awareness, or high stakes in contexts like employee recruitment and medical decisions, can make us averse to AI because of the amount of agency it holds over impactful decisions.
Algorithm Performance
While aversion depends heavily on autonomy, the algorithm’s performance compared to a human’s also plays a role in our decision to employ it. In the field of healthcare (echoing Shaffer et al.), patients were found to be less likely to rely on AI, for reasons rooted in a feeling of uniqueness neglect.
Read More: How Emotions Play an Important Role in Decision-Making
Uniqueness neglect refers to our perception of being unique, with characteristics that differ from everyone else’s. Participants in this study commonly believed that AI treats everyone the same, without considering individual attributes. Because of this uniqueness neglect, people also wanted to pay AI providers less than their human counterparts, even when both performed equally well (Longoni et al., 2019).
This perceived uniqueness neglect on the AI’s part is a common justification for our aversion to it. Another factor, apart from uniqueness neglect, is the trust we place in AI’s performance. Research has repeatedly found that humans trust the judgement of other humans differently from that of AI.
When more trust is placed in a human advisor’s expertise and experience, the AI’s credibility is diminished and we become averse to using it (Madhavan & Wiegmann, 2007). Another study, of patients in New Zealand using a screening tool for mental health and physiological risk factors, concluded that patients were harsher in discarding AI tools that proved unreliable than they were with human agents (Goodyear-Smith et al., 2004).
Read More: Artificial Intelligence and Alzheimer’s Disease Early Detection: Study
It is important to be aware of our own biases when evaluating an AI’s performance. We often seek individualised treatment, yet inequalities in healthcare make such personalised care less reliable in practice. Distrusting AI’s credibility and favouring human authority even when it is proven wrong can be harmful in the long run as AI technology becomes more widespread.
How to Overcome AI Aversion?
Understanding AI’s Similarity to Humans:
There is a significant literature on ways to decrease our aversion to AI. For instance, one study explored how joke recommendations can be perceived as less valid when provided by AI than by humans. The researchers found that people believed humans and AI arrive at recommendations through different processes and deliver content in different ways.
Building Trust Dynamics:
Explaining to participants that the AI’s process is in fact similar to a human’s led to greater reliance on AI (Yeomans et al., 2019). Similarly, another study found that understanding why people choose to trust or distrust an algorithm’s results can further help us avoid aversion (Dzindolet et al., 2003).
Leveraging Automation for Productivity:
Since automation can increase our productivity by taking over simpler tasks, it is essential to learn to trust it when it has proven reliable. Other studies also support the notion that bridging the gap in understanding how humans and AI work together can help overcome AI aversion (Madhavan & Wiegmann, 2007).
Bridging Understanding Between Humans and AI:
Our reliance on AI can be increased primarily by explaining why errors may happen with AI tools, bridging the gap of trust between AI and human agents, and overcoming our bias of perceiving recommendations from AI and humans differently.
Addressing Bias and Building Respect for AI:
As we integrate AI into our lives, it is increasingly essential to be aware of our biases and to respect AI’s role. AI is applied across many fields, from human resources to the medical sciences, and it is crucial to consider using such tools as workplaces transform at a rapid pace.
Takeaway
Being sceptical is an understandable emotional response to such a fast-paced world; however, working in cooperation with AI can immensely support us, whether in our careers through small tasks or in our personal lives through joke recommendations. Remembering that AI is simply an extension of our knowledge and a product of human innovation can greatly increase our acceptance of this new technology.
Read More: How psychology plays an important role in the future of technology?
References
- Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/s1071-5819(03)00038-7
- Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the 28th European Conference on Information Systems (ECIS 2020).
- Goodyear-Smith, F., Arroll, B., Sullivan, S., Elley, R., Docherty, B., & Janes, R. (2004). Lifestyle screening: development of an acceptable multi-item general practice tool. The New Zealand Medical Journal, 117(1205), U1146. https://pubmed.ncbi.nlm.nih.gov/15570330/
- Lacroux, A., & Martin-Lacroux, C. (2022). Should I Trust the Artificial Intelligence to Recruit? Recruiters’ Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.895997
- Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
- Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277–301. https://doi.org/10.1080/14639220500337708
- Popkin, R. H. (2017). Skepticism. In Encyclopædia Britannica. https://www.britannica.com/topic/skepticism
- Schneider, S., & Leyer, M. (2019). Me or information technology? Adoption of artificial intelligence in the delegation of personal strategic decisions. Managerial and Decision Economics, 40(3), 223–231. https://doi.org/10.1002/mde.2982
- Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2012). Why do patients derogate physicians who use a computer-based diagnostic support system? Medical Decision Making, 33(1), 108–118. https://doi.org/10.1177/0272989X12453501
- Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making. https://doi.org/10.1002/bdm.2118