Outsmarting ChatGPT-Generated Content Detection Systems
Researchers at Stanford have found that it is not very difficult to trick and bypass text recognition systems designed to flag ChatGPT-generated content.
The text you’re reading was written by a person, but you may well find content online that wasn’t created by a human.
AI programs such as ChatGPT and Midjourney, which can generate text and images, are now freely available to the public. These tools let anyone produce text and other content, which makes it increasingly difficult to tell whether something you encounter online was made by a human or by a computer program.
Detecting AI Misuse
Artificial intelligence, algorithms, and machine learning have been used for years in areas such as social media, science, advertising, agriculture, and industry. With ChatGPT, however, the technology is now in everyone’s hands: some students use it to cheat, and some people use it to write scientific articles. In response, systems have been developed to determine whether a piece of content was made with artificial intelligence. The goal is to stop misuse before it causes harm: these detection systems are meant to prevent cheating and to ensure that AI-generated content is used responsibly.
Scientists at Stanford University published a paper in the journal Patterns examining how good AI-powered detection programs really are at recognizing machine-generated content.
Researchers from Stanford University looked at 91 essays written for the TOEFL (Test of English as a Foreign Language) by students from China, and 88 essays written by American eighth-grade students.
They ran the essays through several detection tools, including one called GPTZero, to check whether each essay was written by a computer or a human.
They found that only a small share of the American students’ essays, 5.1%, were flagged as ChatGPT-generated. Surprisingly, when the detectors checked the human-written TOEFL essays, they wrongly classified them as computer-written 61% of the time. One detector even labeled 97.9% of the TOEFL essays as computer-written.
Misguided Essay Attribution
Digging deeper, the researchers discovered that these essays were being mislabeled as ChatGPT-generated because of how the text was written. People who don’t speak English as their first language tend to use a narrower range of words in their English writing, since their vocabulary is smaller than that of native speakers. Content detectors mistake this kind of writing for the output of artificial intelligence; conversely, text that uses more advanced and varied language is far less likely to be flagged as computer-written.
ChatGPT and literary language
In a follow-up experiment, the Stanford team used ChatGPT to write answers to essay questions from applications to American colleges, then tested whether detection software could tell these machine-written texts from human ones.
They ran the ChatGPT-generated answers through several detection tools, which recognized roughly 70% of them as machine-written. However, when the answers were rewritten to resemble human writing more closely, the tools could no longer distinguish the program’s output from text produced by a human.
After this rewording, the detection software correctly recognized the ChatGPT-generated text only 3.3 percent of the time. Similar results can be obtained from these programs using scientific abstracts.
James Zou, a scientist at Stanford University and a co-author of the paper, said, “We were surprised because these detectors didn’t work well and could be tricked easily by people who don’t speak English.”
The research raises an important question: if tools that detect AI-made content are this easy to trick, what is the point of having them?
How to achieve better results
Whether these detection systems are being actively outsmarted or simply failing on their own, the bottom line is the same: the tools have real problems. One promising idea for software that recognizes ChatGPT-generated content is to compare multiple pieces of writing on the same subject, some by humans and some by AI, and test whether they can be sorted into the right categories. Zou suggests this approach might be both more accurate and faster.
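That comparative idea can be sketched as follows, under heavy assumptions: the `ai_likeness` scorer is a purely hypothetical stand-in for a real detector, and the pairing scheme is an illustration, not the method described in the paper. Instead of thresholding one document in isolation, the sketch asks which of two documents on the same subject looks more machine-generated.

```python
from typing import Callable, List, Tuple

def classify_pair(human_text: str, ai_text: str,
                  ai_likeness: Callable[[str], float]) -> bool:
    """True if the detector ranks the AI text as more AI-like than the
    human text on the same subject (comparative, not absolute, scoring)."""
    return ai_likeness(ai_text) > ai_likeness(human_text)

def pairwise_accuracy(pairs: List[Tuple[str, str]],
                      ai_likeness: Callable[[str], float]) -> float:
    """Fraction of (human, AI) pairs the detector orders correctly."""
    correct = sum(classify_pair(h, a, ai_likeness) for h, a in pairs)
    return correct / len(pairs)

# Hypothetical stand-in detector: treats repetitive vocabulary as AI-like.
def toy_score(text: str) -> float:
    words = text.lower().split()
    return 1 - len(set(words)) / len(words)  # 0.0 = all unique words

pairs = [("bright ideas spark lively debate", "the the the test test")]
print(pairwise_accuracy(pairs, toy_score))  # → 1.0
```

Because each document is judged relative to another text on the same topic, a writer’s overall vocabulary level matters less than it does under a fixed threshold, which is the intuition behind the suggested improvement.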
However, some experts think that current GPT-content detectors are simply not very robust. Ironically, the race to evade them may push writing toward more creative and distinctive language while continuing to challenge ChatGPT-content recognition systems.
For additional information, see the TechCrunch article on outsmarting ChatGPT-generated content detection systems.