ChatGPT and Beyond: How Artificial Intelligence Disrupts Public Debate
The increasing polarization of public debate has become a concern for many in recent years. Politics in the United States is a prime example, but similar breakdowns in decorum have been occurring globally. Some have attributed this development to the rise of social media, but there is a new player on the scene that demands attention: artificial intelligence (AI). In particular, the use of AI-driven bots and sock puppets on social media platforms has raised significant alarm, contributing to discord and divisiveness.
Distinguishing genuine human voices from digital propaganda tools has become increasingly difficult in online interactions. This difficulty is set to grow with the deployment of general-purpose language models like ChatGPT. We are still only scratching the surface of a significant challenge to the traditional workings of public debate. To ensure that constructive and inclusive public debate endures and flourishes in the years to come, we must navigate these challenges and find a viable way forward.
How Artificial Intelligence Is Polarizing Public Discourse
It is important to understand that AI isn’t doing anything by itself. Someday a nebulous artificial entity may attempt to sow discord among humans to further its own dark designs, but we are not there yet. In fact, we are not even close. Current forms of AI are tools for humans to use, more like a hammer or a typewriter than a self-conscious entity with internal drives and ambitions. ChatGPT and similar technologies are inherently unable to do anything meaningful without direct human input to guide them. It is therefore not artificial intelligence itself that polarizes public discourse, but rather the individuals who exploit this technology to advance divisive agendas.
Bad actors who deliberately attempt to derail public discourse are nothing new. Hecklers, agents provocateurs, and bullies have been around far longer than social media, the internet, or even the printed word. What has changed is the tools they have access to. Effective tools act as force multipliers, allowing people to achieve far greater results with the same effort and skill. In the context of public debate, access to AI-powered tools allows a small group of bad actors to have an outsized negative impact on any discourse they choose to target. In particular, AI allows even fringe messages to be amplified and reshared until they seem almost mainstream, whether through large volumes of artificial engagement with certain messages or through fake profiles that use autogenerated content to appear authentic. Emerging technologies such as AI-generated imagery pose further risks to public debate, particularly through the creation of deepfakes, blurring the line between reality and deception even more.
Beyond ChatGPT: The Next Phase of Artificial Intelligence
With the next generation of artificially intelligent tools now arriving in the form of ChatGPT and other chatbots, public debate is set to face even greater challenges. In many cases, it is already virtually impossible to know whether you are communicating with a machine or a human via text, unless the AI explicitly discloses its non-human nature or reveals telltale signs under careful probing. Because this technology has only recently been rolled out, we do not yet fully know what its wider impact will be. What we do know is that the development will continue. The next phase of AI will bring even more sophisticated and powerful tools that will almost certainly affect public debate more profoundly. It is highly plausible that meaningful debate with strangers will soon become impossible in the digital realm, as bad actors can easily flood any channel of debate with noise.
We may also see AI tools harnessed to protect the debate. Artificially intelligent systems are already better than most humans at identifying bots and convincing forgeries. One possibility is that public debate will be restricted to offline settings or will have to be closely monitored by AI facilitators. This, of course, poses new challenges, as free speech would then essentially be limited to what those AI facilitators allow. It is possible, though, that future AI models with a stronger grasp of human interaction will be able to foster debates marked by greater empathy and mutual understanding.
Distinguishing Human Voices from AI
If human-centered debate is to remain at the heart of future discourse, some significant changes will have to happen. Perhaps the most important is establishing a reliable way to distinguish artificial voices from human ones. The primary means of achieving this will likely be AI-powered fact-checking and verification tools, as they are the only realistic way to keep pace with the spread of artificial voices. It is even possible that general misinformation and misrepresentation can be kept in check with such tools as live fact-checking becomes more viable.
Another important feature of future public debate should be transparency. Participants should be required to disclose more about their identities and the technologies they are using. In a world where you must doubt even the evidence of your own eyes, a very high level of disclosure will be necessary for public debate to remain meaningful. This could also be achieved by moving more of the debate back offline and into the real world, but it is highly unlikely that debate will stop taking place on social media anytime soon.
Charting a Path Forward
Some major changes should start happening sooner rather than later. Given that a significant portion of public debate has moved online, and that the forums where it takes place are privately owned, the owners of those spaces will have to take on far more responsibility for hosting constructive discourse. If they refuse, the government will have to step in to regulate them or even strip them of ownership of some forums. A credible threat of increased government involvement will probably be enough to make most companies step up and make real changes. These companies are ultimately the ones best positioned to roll out reliable systems for detecting AI-generated content and other forms of manipulation, but many will act only if pressured to do so.
Another crucial change is the widespread promotion of critical thinking and digital literacy across society. These skills should be taught anyway, as they are increasingly important in the modern world, but they will be essential to maintaining functional debate in the coming years.
Lastly, individuals must embrace and actively engage with these new technologies. Technology is not inherently good or bad; it is merely used in good or bad ways. As stronger ethical guidelines and standards for the use of AI are developed and rolled out, people will need to become accustomed to coexisting with artificial entities. AI is already present in many online interactions but still tends to be hidden from view. In the future, AI needs to step out of hiding and become something people actively and consciously engage with and use in everyday life. Only then will people be able to distinguish between technology used for good and technology used for ill.
Conclusion
Modern digital and telecommunication technologies have already had a huge impact on public debate, and that impact is set to intensify with the emergence of AI. As a society, we need to face these changes head-on and adapt to them. Otherwise, we stand to lose the very idea of reaching mutual understanding and consensus through debate. The technologies that pose this threat also offer solutions and exciting opportunities for improving the debate if properly embraced and harnessed for good. Ultimately, this is a matter of choice. If we don’t put in the effort, things may well become much worse than they already are; but if we fight to defend and improve public debate, using AI as a tool in that fight, we can improve it significantly.