ChatGPT Creates Perils of Plagiarism and Other Litigation

As technology grows more sophisticated, so do the dangers of cyber-attacks and threats to the privacy of your information. One of the biggest concerns in recent weeks is the new AI program ChatGPT. Such programs raise the concern that they may be worse than hackers for your data and your reputation, because companies are voluntarily providing confidential information to unsecured software.

ChatGPT is the latest development in AI computing, similar to AI-generated art. But instead of entering a prompt to create art on any topic or brand, a user gives ChatGPT a topic and a set of conditions. The program then automatically generates an article matching the requested outline. Once it is finished, the article is ready for review.

Several issues with this approach are now coming to light. One of the biggest to surface recently is that ChatGPT-written works do not indicate whether their sources are properly cited, copyrighted, or trademarked. In almost everything related to research or news, sources must be cited to prevent plagiarism and, more importantly, to verify that the information is real. Not even lawyers are immune. In a recent incident, an attorney was forced to apologize for a brief filed in a lawsuit against an airline. When questioned, the attorney admitted he had relied on research a colleague had performed using ChatGPT. Further review revealed that none of the seven cases cited in the brief were real, even though the lawyer had asked the service whether they were.(1) The scandal led to both lawyers being placed on leave and facing a hearing on whether they would be disbarred.

Perhaps the biggest worry is that ChatGPT is as vulnerable to hacking as any other technology. In March 2023, a security flaw allowed many users to see what others were chatting about. Although a patch was quickly implemented, some European countries were already calling for a ban on the service because they felt the company could not guarantee the privacy needed to comply with GDPR regulations.(2)

Other issues that stem from ChatGPT include:

  1. Vulgar language: ChatGPT and OpenAI learn from the internet, drawing on material posted through 2021. To the chagrin of everyone, the internet does not have a profanity filter by default. The program can pick up language from chatrooms and other places considered Not Safe for Work and, if allowed, use it in its output.
  2. False information: ChatGPT will pull information from anywhere. As the case noted above shows, it is just as prone to misinformation as any human-run website.
  3. Bias: ChatGPT pulls its information from many sources. Unfortunately, the biases in some of those sources carry over into its output, which can also lead to misinformation.

According to Mike Smith, President of Axis Insurance Services, LLC: “Some of our publicly traded clients are already establishing policies prohibiting the use of ChatGPT and other forms of open-platform AI due to security concerns. Companies using such software can disclose to the AI program confidential information about businesses, transactions, and employees that is protected by current state laws or can be detrimental to the company if disclosed. Given the lack of standardization of controls over these platforms, we see significant risk for our clients.”

ChatGPT is considered by many to be the next evolution in AI-generated work, much like the AI art that came before it. But it still has many problems, including bias, copyright issues, false or misleading information, and vulgarity, to name a few. Privacy advocates rightly worry about information being used without permission or protection. The software is a useful tool, but there are worries about what it may evolve into.