While many are initially impressed by AI tools such as ChatGPT and their ability to complete in minutes tasks that would take a person several hours, closer inspection often reveals that AI-generated work can contain factual errors. ChatGPT is vulnerable to ‘hallucinations’, where it fabricates facts and sources, at a rate cited to be between 15% and 21%. When asked to locate research material or generate an answer on a particular topic, ChatGPT is known to present sources confidently as accurate and reliable when they are not. Even when asked directly whether a source is real, ChatGPT will not indicate that it has fabricated information and will assure you the source is credible – when in fact, the source is non-existent.
Recently, a Victorian mayor, Brian Hood, announced that he was contemplating legal action against OpenAI, the developer of ChatGPT, after the chatbot falsely stated that he had been convicted of foreign bribery offences. If Mr Hood proceeds, his claim is likely to be the world’s first defamation suit over ChatGPT-generated content.
How does this affect you?
Automated technology is becoming common practice in the workplace, with tools such as ChatGPT offering improved productivity and efficiency, enhanced communication and access to a broad range of expertise. As the use of AI technology in the workplace increases, cases such as Mr Hood’s may become more prevalent. Wherever your organisation is in exploring the many possibilities of AI technology, there are critical questions to consider beforehand to ensure its safe and legal use. So, how can you and your organisation avoid unintentionally defaming someone when using such technology?
What are defamatory statements?
To pursue an action for defamation under the Defamation Act 2005 (Qld), the Court will consider both the “publication” of the material and the “imputations” it carries.
A publication is material communicated to a third party by the publisher. The High Court’s decision in Fairfax Media Publications Pty Ltd v Voller confirms that, generally, entities which provide the infrastructure for defamatory comments can be held liable as publishers.
A publication can amount to defamation if the publication carried imputations which would cause a reasonable person to:
- think less of the aggrieved person;
- ridicule the aggrieved person; or
- otherwise avoid the aggrieved person.
How to mitigate the risk of defaming someone using AI
If your organisation uses AI to generate content, particularly for social media, bear in mind that while tools such as ChatGPT can be a useful starting point, AI-generated work – just like any work produced by a human – must be carefully scrutinised before it is publicly released. Reviewing the output for errors, especially statements that could amount to defamation, helps ensure your organisation publishes only factually accurate and credible information.
Disclaimer: This publication is intended for general and informative use only and is not to be relied upon as professional financial or legal advice.
Contact our team today for a free quote by clicking the link here or calling us on 1300 277 699.