Managing AI Risks: Key Business Considerations

Andrew Nicholson, Partner - Mullins Lawyers

Generative AI tools such as ChatGPT are already widely used throughout most organisations. However, the majority of businesses have not properly considered the risks involved or implemented protocols governing the use of AI.

A recent Deloitte survey found:

Most respondents are now frequent users of AI programs in the workplace, with many stating that these tools help them to:

  •  work more efficiently (63%)
  •  be more creative (54%)
  •  improve the quality of their work (45%)

However, the broad use of Generative AI poses major risks for companies: 61% of respondents stated that their company did not have any internal guidelines on the use of AI. In many of those businesses, AI implementation is not controlled by management and there are no clear guidelines for its use. Instead, AI is largely being implemented by employees themselves, and in 26% of cases this occurs without management being aware of it.

Most respondents also confirmed that they had used AI tools for work-related purposes in non-secure environments, such as on personal computers and mobile phones.

What should businesses be considering to address these risks?

Here are our top 6 tips:

1. Compliance Framework:

Businesses must remain aware of evolving regulations related to AI use within their industry and ensure that their use of AI applications aligns with current legal standards and compliance protocols. The legal landscape surrounding AI is dynamic, so it is imperative to establish clear policies and make all employees aware of them.

2. Data Protection and Privacy:

Businesses should obtain clear and informed consent from users for the collection, processing and storage of data, particularly where sensitive personal information is involved. Updating privacy policies and collection notices, and implementing strong data privacy measures, is a must.

3. Confidentiality:

The risk of disclosure of confidential business information should be addressed by establishing a framework for what can be shared with AI tools, implementing robust security measures and setting clear policies.

4. Cybersecurity:

Protecting data means prioritising robust cybersecurity. Ensuring the security of AI systems and the data they contain helps mitigate the risk of legal consequences in the event of a cyber-attack or data breach.

5. Transparency:

Businesses should be open and clear about their use of AI so that stakeholders understand how it might affect them, including where AI is used to produce work on their behalf or to make decisions that could impact them. Essentially, this is a full-disclosure, 'truth in advertising' style test.

6. Intellectual Property Rights:

The ownership and protection of AI-generated content is complex. Businesses should consider how to control or own AI-produced work and ensure that the protection of proprietary assets is clearly documented, including in contracts and policies.

Navigating the legal minefield of AI in the workplace can be challenging, and we are here to chat about how you are addressing these points within your own organisation.

Should you require information, please contact us: w: www.mullinslawyers.com.au | Ph: 3224 0222
