GLTR
Forensically detects auto-generated text.
GLTR is a tool developed by the MIT-IBM Watson AI Lab and HarvardNLP that detects automatically generated text through forensic analysis. It works by running a text through OpenAI's GPT-2 117M language model and ranking each word by how likely the model was to produce it in that position; words are then color-coded by rank, so passages where nearly every word falls among the model's top predictions stand out visually as likely machine-generated. Three histograms summarize these rank statistics across the whole text, providing further evidence of artificial origin. The tool can be used to spot fake reviews, comments, or news articles that might otherwise pass as human-written without expert scrutiny. GLTR is available as a live demo, and its source code is on GitHub, along with an accompanying ACL 2019 demo track paper that was nominated for best demo.
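The core idea of the rank analysis can be sketched in a few lines. This is a minimal illustration, not GLTR's actual code: it assumes we already have next-token probability distributions (in the real tool these come from GPT-2), and it uses GLTR's rank buckets of top-10, top-100, and top-1000, which correspond to its green/yellow/red/purple highlighting.

```python
def token_rank(probs, token):
    """Rank of `token` when the vocabulary is sorted by descending
    model probability (1 = the model's most likely next word)."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked.index(token) + 1

def gltr_bucket(rank):
    """Map a rank to GLTR's color buckets (thresholds from the tool)."""
    if rank <= 10:
        return "green"    # among the model's top-10 predictions
    elif rank <= 100:
        return "yellow"
    elif rank <= 1000:
        return "red"
    return "purple"       # a word the model found very unlikely

# Toy distribution standing in for one GPT-2 prediction step:
probs = {"the": 0.4, "a": 0.3, "cat": 0.2, "rare": 0.1}
print(token_rank(probs, "cat"), gltr_bucket(token_rank(probs, "cat")))
```

In machine-generated text, most words land in the green bucket, because sampling tends to pick high-probability tokens; human writing produces far more yellow, red, and purple words, and that contrast is what the visualization and histograms expose.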
Overall, GLTR is a highly useful tool for detecting computer-generated texts that would otherwise go unnoticed. By providing a direct visual indication of how likely each word in a text is to have been generated by a model, along with supporting evidence from three histograms, GLTR lets researchers easily recognize when texts were produced by machines rather than humans.