AI at Work: What Every Business Needs to Know About Risks and Compliance

Dec 11, 2025

AI is increasingly being used by businesses to improve productivity, allowing repetitive and time-consuming tasks to be completed more efficiently.

Despite the potential for immense productivity benefits, there are also significant risks that businesses must manage before ‘diving in’ to the AI craze. In this article, Senior Associate Michaela Jenkin outlines some practical ways to mitigate these risks for your business.

MANAGING EMPLOYEES' USE OF AI

Often, employees use AI without their employer’s express permission or knowledge. There is a risk that employees may inadvertently disclose a business’ confidential information by entering it into a publicly available AI tool (such as ChatGPT). This presents a significant commercial and reputational risk. We recommend that employers take a proactive approach to managing employees' use of AI, including by implementing AI policies and procedures that regulate how employees use AI, the AI tools available for use and the types of information that can be entered into an AI tool. Additionally, we recommend that employers deliver training to employees on the risks and opportunities presented by AI and how it can be used safely. We can assist with drafting policies and procedures or by delivering training.

THE IMPORTANCE OF CHECKING AI OUTPUTS

AI is not infallible, and there is a risk that businesses over-rely on its capabilities. The reliability of AI depends upon the data inputted by the user. If there are errors in the data, AI may amplify those errors and deliver misleading results. For example, if a user relies upon AI to draft a proposal to deliver services to a client but incorrectly describes some of the services to be provided, AI may amplify these errors, resulting in a misleading customer proposal. If the user does not check the AI output for errors and publishes the proposal or sends it to a customer, there is a risk that the business has engaged in misleading and deceptive conduct in breach of the Competition and Consumer Act 2010 (Cth). This may also amount to a breach of directors’ duties under the Corporations Act 2001 (Cth). We recommend that all AI outputs are thoroughly checked to ensure that businesses are not presenting any information that could be construed as misleading or deceptive.

Recently, consulting firm Deloitte admitted to the federal government that it used AI to produce a $440,000 report that contained several errors. It was reported that the report contained “hallucinations”, where AI models fill in gaps, misinterpret data or guess answers[1]. Deloitte provided a partial refund to the federal government, but the damage to its reputation has been immense, with its failure to thoroughly check the AI output reported on across the world. This highlights the importance of thoroughly reviewing any AI output to ensure that any errors are rectified.

CYBER SECURITY

AI is both a threat and a defence when it comes to cyber security for your business. On the one hand, AI is accelerating the speed of cyberattacks, with hackers using AI tools to create deepfake videos, phishing campaigns and websites rapidly and on a large scale. These AI tools are often able to evade a business’ existing cyber security systems. This highlights the importance of ensuring that your employees undertake regular training on cyber risks so they can accurately identify threats and protect your business from attack.

On the other hand, AI is being used by cyber security businesses to strengthen cyber security defence systems by automating tasks such as routine compliance and system checks. As above, if a business can manage the risks associated with AI, the benefits can be immense.

NEED ASSISTANCE?

AI has the potential to transform business productivity when applied in a considered manner. However, when used badly, the risks associated with AI are significant. At Enterprise Legal, we can assist you in navigating the complexity surrounding the use of AI in the workplace. When you book in for one of our “Free Business Health Checks”, our expert Business Law team can help ensure that your business is set up for success by having the correct legal frameworks in place, including the policies and procedures needed to maximise the opportunities associated with AI while managing the risks. Please book an appointment for a free business health check [here] or contact Michaela to discuss how our team can assist your business further.

[1] https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report