OpenAI proposes GPT-4 as an impartial content moderator

OpenAI says its technology could relieve one of the most difficult and thankless technical tasks on the web: content moderation. According to the company, the large language model GPT-4 can take over the work of tens of thousands of human moderators, matching their accuracy while judging more consistently.

Aug 16, 2023 - 13:47

OpenAI announced on its corporate blog that it is already using GPT-4 to refine its own content policies, to label content, and to make individual moderation decisions. The company also cited several advantages of artificial intelligence over traditional approaches to content moderation.

First, humans interpret policies differently, whereas a machine is consistent in its judgments. Moderation guidelines can be voluminous and change constantly; humans need a long time to learn them and adapt to revisions, while a large language model can apply a new policy immediately. Second, GPT-4 can help develop a new policy within hours, whereas the traditional cycle of labeling, gathering feedback, and refining usually takes weeks or even months. Third, there is the psychological well-being of the employees who are exposed to harmful content day after day as part of their jobs.
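OpenAI's post describes the labeling step only in prose, but it maps naturally onto the company's public chat API. Below is a minimal sketch of what such a policy-as-prompt classifier might look like, assuming the openai Python client (v1+); the POLICY text, the two-label scheme, and the moderate() helper are illustrative assumptions, not OpenAI's actual internal tooling.

```python
# Hypothetical sketch of policy-as-prompt moderation with GPT-4.
# The policy text and labels are invented for illustration; OpenAI's
# real policies and prompts are not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label the user content with exactly one of these labels:
- ALLOW: content is harmless under the policy
- FLAG: content may violate the policy (e.g. harassment, self-harm, violence)
Reply with the label only.
"""

def moderate(content: str) -> str:
    """Ask GPT-4 to classify `content` against the policy above."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep judgments as consistent as possible
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("You people are all worthless."))  # expected: FLAG
```

The sketch also illustrates the first advantage above: revising the policy means editing the POLICY string, so an updated rule takes effect on the very next call, with no retraining and no re-briefing of human moderators.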

Popular social networks have existed for almost two decades, not to mention older online communities, yet content moderation remains one of the hardest problems online platforms face. Meta, Google, and TikTok employ whole armies of moderators who must review horrific and often traumatizing content. Most of them work for outsourcing firms in developing countries, earn low wages, and are left to cope with the psychological toll largely on their own.

OpenAI itself still relies heavily on human labor: it hires workers in African countries to annotate and label content, material that can itself be disturbing, which makes for stressful work at low wages. And since Meta, Google, and TikTok already run in-house AI models alongside their human moderators, OpenAI's proposed solution is most likely to benefit smaller companies that lack the resources to build systems of their own.

Every platform acknowledges that there are no perfect moderation mechanisms at scale: both humans and machines make mistakes. Even low error rates mean millions of potentially harmful posts slipping into public view, and probably just as many harmless posts being hidden or removed. There also remains a "gray area" of misleading or offensive content that formally complies with moderation policies yet baffles automated systems. Such content is hard even for human moderators to evaluate, so machines misjudge it regularly; the same goes for satire and for material documenting crimes or abuses of power by law enforcement. Finally, there are the failure modes inherent to AI systems themselves, "hallucinations" and "drift", which can further complicate the work of AI moderators.
