Four years after the call for chief AI ethics officers, a significant majority of executives see AI ethics as crucial to their AI strategies, yet outside the major technology companies the role has not gained traction. Many enterprises prefer to talk about 'responsible AI' rather than 'ethics', because ethical interpretations vary across cultures. Companies are still settling on a governance approach, whether through a dedicated role or a shared team effort, with the focus on managing AI risks and accountability. The responsibilities continue to evolve, shaped by regulation and industry insight.
Ethics can connote a certain morality, a certain set of norms, and multinational companies are often dealing with many different cultures. Ethics can become a fraught term even within the same country, where you have polarised views on what is right and what is fair.
Some companies are creating a role for an AI governance lead; others are rightfully looking at it as a team effort, a shared responsibility across everyone who touches the AI value chain.
Organisations want a person or a team in charge of managing AI risks, making sure employees and vendors are held accountable for AI solutions they're buying, using or building.
The role is steeped in the latest regulations, insights and trends: the people in it are going to industry discussions and keeping their finger on the pulse of developments globally.