Empower Your Voice: Make an Impact Today!
In an age where the potential of AI is set to reshape societies worldwide, it is essential to steer its evolution and application through ethical frameworks and inclusive viewpoints. Global perspectives are instrumental in molding our digital landscape. With this objective in mind, we initiated a study in March aimed at gathering insights directly from people, which will aid in formulating a report that outlines tangible, action-oriented strategies for the responsible development and deployment of AI.
Over a period of two months, this extensive research explored societal attitudes towards AI, revealing critical insights. Gaining an understanding of how different communities view, engage with, and foresee AI is vital; decisions pertaining to its future must take into account the varied contexts of our global community.
Our key findings are summarized below.
Methodology
Our anonymous quantitative survey gathered input from more than 325 participants across 64 countries. Participants were recruited primarily via the AI for Good mailing list and through social media outreach. The mailing list comprises a diverse array of stakeholders, including government officials, industry experts, UN agencies, non-profit organizations, international bodies, and scholars. As members of the AI for Good community, most participants have some level of interest or expertise in AI, while the social media outreach broadened the initiative's reach.
Of the respondents, 39% were women, which is representative of global distributions to within a 5-10 percentage point margin.
Research Findings
Prior Awareness of AI Technologies
A majority are informed about AI.
Over 75% of respondents recognized all 10 categories of AI applications provided, while roughly 90% acknowledged familiarity with at least five. Participants often mentioned large language models (LLMs) for text and video creation, AI focused on climate issues, and personal assistants as well-known applications. This indicates a strong awareness of AI technologies among participants. A separate global study covering 28 countries revealed that about 67% of individuals claimed to possess a solid understanding of AI (Myers, 2022), further validating the widespread recognition of AI.
Confidence in Using AI
AI confidence is growing.
At least seven out of ten participants reported feeling confident in their ability to use AI, suggesting that it is generally perceived as an accessible, user-friendly tool. Confidence plays a significant role in decision-making: studies indicate that self-assurance can affect whether individuals accept or disregard AI outputs (Chong et al., 2022). Users tend to attribute errors to their own limitations and continue to rely on imperfect AI systems despite their flaws. Poor AI performance can erode both self-confidence and trust in the technology, and trust is quickly lost but slow to recover.
When an individual rejects an AI output and succeeds, their confidence increases while their trust in AI diminishes. Conversely, if they accept an AI output and do not perform well, they might blame themselves for not noticing AI’s inaccuracies, thereby eroding their confidence in both their abilities and the technology. Future research should delve into the relationship between confidence, capability, and experience to better understand the sources of reported confidence.
Accountability for Ethical and Responsible AI Use
Responsibility for AI accountability should be collective.
A considerable portion of respondents believe that accountability for the ethical and responsible utilization of AI should primarily reside with companies and governments. However, many also assert that individuals, researchers, international organizations, and other stakeholders should share this responsibility. A recurring theme emerged that all parties involved in the AI lifecycle should partake in accountability, with some emphasizing the role of users in responsible usage.
The private sector, at the forefront of AI innovation, bears the onus of safeguarding human rights (Lane, 2022). International human rights legislation could provide a framework for establishing standards for emerging technologies, enabling a balance of responsibilities among stakeholders. As technological advancements frequently outpace legal frameworks, there is a pressing need for new governance models. Key challenges such as transparency, human oversight, and data management must be tackled (Taeihagh, 2021). Collaborative hybrid models may foster public agreement while aligning global governance strategies (Taeihagh, 2021).
Ranking Benefits of AI
Enhanced productivity is a primary advantage of AI.
The productivity benefits of AI are broadly recognized, with over 75% of respondents acknowledging its capability to manage monotonous and time-intensive tasks. Furthermore, more than 50% recognized its role in ensuring safety in hazardous situations, such as during fires, deep-sea explorations, and Mars missions. However, there is less consensus on AI’s influence on innovation, accuracy, and social equity. Regardless of differing opinions, 60% of participants in a different global study agreed that “products and services using artificial intelligence make my life easier” (Myers, 2022).
Ranking Risks of AI
AI’s main concerns are bias and ethical deficiencies.
More than half of the participants agreed that bias and discrimination are among the most significant risks associated with AI. Approximately 50% also raised concerns about the absence of ethics, morals, emotions, and empathy in AI systems. Developing AI demands substantial time, talent, and resources, which can perpetuate inequalities and widen the digital divide. Respondents also perceived AI as likely to increase unemployment and displace human jobs, while automation risks discouraging critical thinking. Finally, AI's inherent lack of ethics, morals, emotions, and empathy limits its ability to adequately meet human needs.
Future of Society and AI Development
The comfort level with AI is more positive than negative.
Most respondents indicated a greater comfort than discomfort regarding AI advancements, averaging a comfort score of 6.0 out of 10. Nevertheless, additional research is needed to pinpoint the origins of this comfort, including whether it is influenced by specific tools, platforms, individuals, or processes. In another global study across 28 countries, 40% of respondents expressed nervousness about “products and services using artificial intelligence” (Myers, 2022).
Factors That Could Increase Comfort with Emerging AI Technologies
Establishing regulations can enhance trust.
Implementing clear guidelines and regulations was identified as a key method for increasing comfort levels with emerging AI technologies. Transparency and harmonious human-machine interaction were also highlighted as essential. Respondents advocated for open-source AI frameworks and systems that empower users instead of overpowering them, which might involve human interpretation or advancing towards hybrid systems. International collaboration and multilateral agreements were underscored as vital for promoting ethical AI development. Participants emphasized the need for foresight by anticipating negative consequences and future applications of AI.
Meaning of AI to Participants
Perspectives on AI are diverse, illustrating both peril and promise.
Participants underscored the duality of AI, regarding it as both a promising opportunity and a pressing threat, which highlights the necessity of effective regulation and governance. They identified productivity as a significant benefit and saw considerable potential for addressing complex human challenges in innovative ways. At the same time, participants recognized the urgency of understanding AI's rapidly evolving landscape and stressed the need to address both misuse of AI technologies and the opportunities that risk being missed.
One participant remarked: “The relentless enhancement of computers to function more like humans and exceed our capabilities when beneficial, explainable, and responsible.”
Image generated from the key points raised by participants.
Perspectives on the Rate of AI Development
The consensus is that AI is advancing too rapidly.
A slight majority of respondents (53%) feel that the pace of AI development is too fast, while very few feel it is moving too slowly. This perception likely reflects concern about the speed of AI's technological progress and its wider implications.
Views on AI Explainability
Explainable AI is essential.
Some 76% of those surveyed believe that AI-generated outputs and recommendations should be explained, signaling a strong demand for greater transparency. Many current AI systems do not explain their recommendations, making explainability a crucial area for improvement.
Views on AI Governance
Regulations ought to be specific to applications.
A majority (58%) advocate for AI regulations tailored to specific application sectors. Effective legislation should recognize the distinct challenges of each field and offer solutions suited to them.
Perspectives on AI and Job Displacement
Upskilling the workforce is imperative.
The majority (70%) assert that training and reskilling initiatives are crucial for adapting to the shifting labor landscape influenced by AI. Historical patterns of technological innovation have profoundly affected employment and the economy, underscoring the necessity of preparing for workforce transitions proactively.
Views on AI’s Impact on Humanity
Many believe AI is enhancing our intelligence.
A slight majority (51%) think that AI improves human cognitive abilities and capacities. Though this positive viewpoint prevails, further studies are essential to ensure that AI realizes its promise while effectively addressing emerging challenges.
Key Findings
This study examined global perceptions of AI development, use, and governance. The results show broad awareness of AI technologies, with most participants expressing confidence in their ability to use AI. At the same time, the consensus on shared responsibility points to a strong need for collaborative frameworks spanning companies, governments, and individuals to ensure ethical and responsible AI use.
Productivity was highlighted as a notable advantage; however, concerns about biases, discrimination, and the lack of ethics and empathy in AI systems persist. Despite these issues, a majority retain an optimistic outlook, with an average comfort level of 6.0 out of 10. This optimism is balanced by demands for clear regulations and guidelines to promote transparency, explainability, and harmony between humans and machines.
Respondents also concur that AI governance should be application-specific, allowing legislation to adapt to the distinct challenges posed by various sectors. The accelerating pace of AI advancements has raised apprehensions regarding its effects on employment, emphasizing the critical role of upskilling programs in readying the workforce for evolving job conditions.
This research further underscores that while participants acknowledge AI’s potential to enhance productivity and augment human cognition, effective governance is vital for addressing associated risks. Striking a balance between innovation and supervision will be crucial as we work towards a fair and ethical digital future for AI.