Navigating the AI Explosion: Understanding Implications and Ensuring Responsible Governance


There has been an explosion in the use of AI within organisations, including in governance functions. The benefits and opportunities of AI technologies are well documented: they enable data to be synthesised rapidly and reduce time spent on repetitive tasks. For governance functions, this offers the opportunity to: 

  • Automate time-consuming tasks; 
  • Enhance decision-making through analysing large volumes of data; 
  • Provide greater coverage; 
  • Improve communication and reporting; 
  • Generate greater insights; and 
  • Embed quality assurance and compliance with standards. 

AI holds immense potential. However, commentators, including technology leaders, have urged caution, and the implications of using AI models must be examined. Governance professionals must explore the risks, including the legal and regulatory challenges, to identify potential pitfalls: 

Risk: Data and privacy 
Why should you be concerned? AI systems rely on large quantities of personal data. Unauthorised access, use, or disclosure of personal information can have legal consequences. 
Responding to this risk: The issue is one of scale. AI pulls in content that is copyrighted and contains personal data, and the sheer volume of data being processed makes it difficult to extract individual data. 

Risk: Bias and discrimination 
Why should you be concerned? AI algorithms can inadvertently perpetuate biases present in training data, leading to discriminatory outcomes. 
Responding to this risk: Legislation is being mobilised to fight algorithmic discrimination. Providers will have to carry out a prior conformity assessment before placing a high-risk AI system into use. Internally generated AI will not be covered by this legislation. 

Risk: Intellectual property 
Why should you be concerned? Challenges exist around the ownership and protection of intellectual property related to AI, including algorithms, datasets, and models. 
Responding to this risk: Protecting AI inventions and avoiding infringement of others’ intellectual property rights is crucial. The emerging default position is that the customer will own the output (where it does not violate the rights of another individual). 

Risk: Liability and accountability 
Why should you be concerned? Determining liability for AI-related accidents or harm can be complex. Questions may arise about who is responsible: the developer, the operator, or the AI itself. 
Responding to this risk: There is a need to establish accountability under existing legal frameworks. Risk is generally assumed to have passed to the customer, so no liability can be placed on the provider in respect of its outputs. 

Risk: Reputation 
Why should you be concerned? AI systems may produce unintended or harmful outcomes that damage an organisation’s reputation. 
Responding to this risk: Employers must establish clear policies on the use of AI within the workplace and communicate these to all employees. They must be aware of the risks and potential consequences and have appropriate policies in place. 

Risk: Risk of error or misrepresentation 
Why should you be concerned? AI systems may produce incorrect, unexpected, or unintended outcomes resulting from poor source data, inappropriate logic, or flawed assumptions. 
Responding to this risk: Use diverse and reliable data sources, ensure data integrity, and address any biases or inconsistencies in the training data. Consider what level of assurance is appropriate over the data and the AI system. 

Risk: Security 
Why should you be concerned? AI systems can be vulnerable to cyberattacks, data breaches, unauthorised access, and potential manipulation of AI algorithms. 
Responding to this risk: Implement strong cybersecurity practices to protect AI systems from unauthorised access, data breaches, and malicious attacks. Regular security assessments, encryption, access controls, and ongoing monitoring can help mitigate cybersecurity risks. 

Risk: Employee attrition and employment law 
Why should you be concerned? AI and automation may impact employment and raise questions about job displacement, worker rights, and the legal obligations of employers in adapting to these changes. 
Responding to this risk: Conduct impact assessments and analyse employment law implications before implementing AI technologies. Engage in dialogue and consultation with workers and their representatives to address concerns and promote fair practices. 

Risk: Ethical considerations 
Why should you be concerned? AI can pose threats to the transparency, authenticity, and fairness of messaging, with the potential for systematic harm. 
Responding to this risk: Establish clear ethical guidelines and principles for the development, deployment, and use of AI systems. Regularly review and update policies and practices to align with evolving legal requirements and ethical standards. 


AI is a hugely valuable resource, but emotional intelligence and human engagement with the data remain essential. Humans behave in ways that are not always rational, and AI cannot replicate this judgement, so human intervention will remain critical. 

We need collaborative efforts between regulators, policymakers, industry experts, and legal and governance professionals to establish robust frameworks, appropriate regulations and guidelines that protect individuals’ rights, encourage innovation, and ensure AI’s safe and responsible integration into society. 

At BRAVE we believe it is our responsibility as governance advisors, risk facilitators and assurance providers to highlight these issues and ensure the right safeguards are in place. 

Carrie Stephenson and Carolyn Clarke

September 2023 

