How ethical is today's use of generative AI? While the technology has brought clear benefits to many companies, questions remain about whether its use crosses ethical lines. According to a survey by Deloitte, although many companies are exploring or utilizing Generative AI, 56% of respondents are uncertain whether their organizations have established ethical guidelines for its implementation. The survey also found that 74% of respondents say their companies have begun trials of Generative AI, with 65% already integrating it into their internal operations.
The primary objective of the "State of Ethics and Trust in Technology" report was to examine how organizations adopt ethical standards for emerging technologies. Similarly, Conversica conducted a survey to provide more insight into businesses' preparedness for AI ethics, finding that 86% of respondents recognize the importance of establishing explicit guidelines for the responsible use of AI technology.
"There is an inherent opportunity to apply emerging technologies for societal good while creating financial value for the enterprise. However, the adoption of Generative AI is outpacing the development of ethical principles around the use of the technology, intensifying the potential risks to society and corporate trust if these standards continue to lag," said Kwasi Mitchell, Chief Purpose and DEI Officer at Deloitte.
Data privacy is the main concern
According to the survey, data privacy emerged as the primary ethical concern around Generative AI, with 22% of respondents ranking it as their top worry. Notably, however, the percentage of participants who named data privacy as a crucial ethical principle in their organizations fell from 19% in the previous year's survey to 7% in the current one.
The view that cognitive technologies, including Generative AI, can deliver positive societal benefits has grown: 39% of respondents in this year's survey believe they hold the greatest potential for good among all emerging technologies, up from 33% the previous year. However, the perception of their potential to raise ethical concerns has risen even faster. In the current survey, 57% of participants identified cognitive technologies as the most likely to pose significant ethical risks, compared with 41% in 2022.
Are organizations ready for automation?
In response to the introduction of automation, organizations are adapting by retaining, retraining, and upskilling their employees. About 73% of survey participants reported that their organizations are adjusting the tasks of some workers due to the implementation of new technologies. Among them, 85% retain employees whose roles are affected, and more than two-thirds (67%) provide retraining or upskilling opportunities to help those employees transition into new roles.
When asked to identify the top ethical concerns associated with the broader use of Generative AI in business, only 7% of respondents expressed worries about job displacement caused by Generative AI replacing human jobs.
Despite the heightened attention Generative AI has drawn to emerging technologies, only 27% of survey respondents stated that their companies collaborate with commercial entities on these issues. Likewise, only 23% reported partnering with government organizations to address potential ethical concerns, similar to the 22% reported last year.
Government's role in tech ethics
Interestingly, sentiment is shifting on government participation in setting ethical standards. This year, 71% of respondents believe the government should play a larger role in establishing these standards, up from 61% the previous year.
Survey participants expressed strong support for government involvement in technology regulation. They are particularly in favor of the government facilitating collaboration between businesses to establish standards (69%), creating regulations (59%), offering incentives for the adoption of these standards (50%), and implementing financial penalties for non-compliance (37%).