As featured in the latest edition of QHA Update
AI tools such as ChatGPT are widely used by management and staff in many organisations.
The use of these tools comes with risks that raise governance issues, and those risks should be properly considered by Boards and Management before the technology is implemented.
A recent Deloitte survey found that 61% of respondents had no internal guidelines on the use of AI. In many of those organisations, AI implementation is not controlled by management; instead, AI is largely adopted by employees themselves, and in 26% of cases this occurs without management's knowledge.
Most respondents also confirmed that they had used AI tools for work-related purposes and had entered confidential information in non-secure environments, such as on personal computers and mobile phones.
Failure to address those risks could leave the organisation (and the Board/Management) exposed to claims.
WHAT SHOULD ORGANISATIONS DO TO ADDRESS THESE RISKS?
Here are our top 6 tips:
- Compliance Framework: Organisations should create clear policies on the use of AI in the workplace and ensure that staff comply with them. Seeking professional help to draft those policies is advisable, as they need to reflect the regulations and legal standards that currently apply to AI use. Because the legal landscape surrounding AI is dynamic, policies should be communicated to all employees and reviewed regularly.
- Data Protection and Privacy: Clear and informed consent should be obtained from users regarding the collection, processing and storage of data, particularly where sensitive personal information is involved. Updating privacy policies and collection notices, and implementing strong data privacy measures, should go hand-in-hand with an AI policy.
- Confidentiality: The risk of disclosing confidential business information should be addressed by clearly identifying what can (and cannot) be shared with AI tools, implementing robust security measures and establishing clear policies.
- Cybersecurity: Protecting data means prioritising robust cybersecurity. Securing AI systems and the data they hold helps mitigate the risk of legal consequences in the event of a cyber-attack or data breach.
- Transparency: Organisations should be open and clear about their use of AI so that stakeholders understand how it might affect them, including where AI is used to produce work on their behalf, or even to make decisions that could affect them. Essentially, this is a full-disclosure, ‘truth in advertising’ style of test.
- Intellectual Property Rights: The ownership and protection of AI-generated content are complex issues. Organisations should consider who will own work produced with the assistance of AI (where its use is permitted) and ensure that ownership is clearly documented.
Navigating the legal minefield of AI in the workplace can be challenging, and we are available to discuss how you are addressing these matters within your organisation. Please don’t hesitate to contact me on 07 3224 0261 if I can assist you with this.