Excessive trust in generative AI makes users vulnerable, survey finds

KnowBe4 AFRICA's recent survey found that a significant number of people are willing to share personal information with generative AI tools such as ChatGPT. Spanning 1,300 respondents across ten African and Middle Eastern countries, the survey showed that 63% of users are comfortable sharing personal information with these tools, and that 83% express confidence in their accuracy and reliability.
Anna Collard, SVP of Content Strategy and Evangelist at KnowBe4 AFRICA, pointed to the widespread use of generative AI tools and underscored the need for greater user training and awareness of the risks that come with this powerful technology. While generative AI tools such as ChatGPT have gained popularity for their efficiency across a wide range of applications, the survey highlights concerns about their hasty adoption.
Collard acknowledged the tremendous opportunities that generative AI offers users and organizations but cautioned against overlooking the associated risks. The survey revealed that all respondents use generative AI in their personal and professional lives, with many incorporating it into their daily or weekly routines. The primary uses cited include research and information gathering, email composition, creative content generation, and document drafting.
Despite broader concerns about AI-driven job losses, 80% of respondents did not feel that their job security was threatened. However, 57% believed that generative AI has the potential to replace human creativity. The survey also pointed to a disconnect between perceptions and reality, particularly when it comes to cybersecurity.
One notable finding is how readily users share personal information with generative AI tools. Comfort levels varied across countries, highlighting differences in attitudes toward data privacy. Collard stressed the importance of encouraging critical thinking and of addressing the psychological biases that lead to excessive trust in synthetically generated content.
The survey also flagged a lack of comprehensive organizational policies for addressing the challenges associated with generative AI, with 46% of respondents reporting that their workplace has no generative AI policy. Collard emphasized the need for organizations to establish policies that promote responsible and safe use of the technology.
Deepfakes were identified as one of the most concerning uses of AI in a previous KnowBe4 AFRICA survey. Collard urged a zero-trust mindset to counter the malicious use of generative AI, stressing that companies must provide training and implement comprehensive policies to navigate this evolving technology.