With the recent leap of innovation in the world of AI, it was only a matter of time before we saw the emergence of applications derived from generative language models. One of them recently caught my attention: Boring Report, an application that relies on language models to strip any form of sensationalism from news articles. The idea is to present information more soberly than traditional media by decoupling superfluous emotional language from the facts. While the concept may seem like a groundbreaking way of getting informed, it's nonetheless quite worrisome. For the first time, there may be a real disconnect between the information produced by writers and the end consumer, as AI acts as a proxy that reframes information according to supposedly more objective standards. Some journalists will likely produce articles using services like ChatGPT, but at least the final version is reviewed by a publisher. Beyond the questionable reliability of generative tools, suppressing sensationalism inexorably erases the emotional context interwoven with textual information, which raises the question of whether preserving the integrity of human-written content is what guides us toward better understanding.
One of the distinguishing characteristics of human beings is that they tell themselves stories. If you're asked to introduce yourself, you're more likely to tell a few anecdotes that define you than to list your biological makeup or Big Five percentiles – I do this myself sometimes. Similarly, human history is not explained solely through numerical evidence, but through a recollection of historical testimony that fits together into an overall narrative, one that is regularly reassessed as new evidence emerges. The events of 9/11 were initially perceived as nonsensical and shocking because of their completely unexpected nature, but in retrospect you may come up with a more meaningful explanation of the chain of events. Today, most of us share a common interpretation of what happened during major historical events, but others hold radically different views. If you listen to conspiracy theorists, you'll generally find that many of them have a decent knowledge of the facts but draw radically different conclusions from them. This illustrates the difference between knowledge and understanding: the former alone is not enough for the pursuit of truth, while the latter is what makes the facts cohere into a historical perspective.
“The world is neither meaningful nor absurd. It quite simply is, and that, in any case, is what is so remarkable about it.” ― Alain Robbe-Grillet.
The Age of Enlightenment introduced a revolutionary framework for understanding by emphasizing reason and empirical evidence over subjective impressions, while the establishment of rigorous scientific methodology has provided consistent results over time. Although modern science remains far removed from the storytelling explanations of the ancient world, the two are more intertwined than we might think. In cosmology, the “Big Bang” is usually introduced in a narrative format; in biology, genes are often described as pursuing a self-replicating goal, and living organisms are commonly reduced to striving to survive. And although science provides a set of methods for explaining the universe on empirical grounds, those methods can be of limited relevance in the context of human interactions. A physician, for example, must understand both the scientific body of knowledge and the historical and social background of their patients. To prescribe treatment effectively, the choice of medication must rest on statistically significant evidence as well as on the patient's belief system. A person skeptical of the pharmaceutical industry may be more inclined to follow the same prescription if it's plant-based – even though a natural substitute for the same molecule is probably no healthier, given that extracting molecules from natural ingredients usually involves strong chemicals.
Certainly, narration has played a pervasive role in the way people communicate ideas, and in the same way, newspapers tell a story when they report facts. This undeniably prevents them from being completely objective, as they spread their own biases – sometimes exaggerated to the point of intellectual dishonesty. When this occurs, people become more prone to distrust, assessing more selectively which newspapers to trust or, in more radical cases, withdrawing from them entirely. This growing distrust of media institutions is rarely without consequence, as it drives people to consume information in other ways. The advent of the Internet enabled the decentralization of communication channels, and the emergence of social media provided a centralized way to consume a vast number of fragmented pieces of information from multiple sources. Through their quantitative approach, popular platforms such as Facebook or Twitter have fundamentally changed patterns of information consumption by driving attention toward short bursts. More recently, applications derived from language models, such as Boring Report, extend this principle by agnostically synthesizing information from mainstream media. Their supposedly neutral and quantitative approach has an inherently seductive aspect, such as freeing the reader from the alienation of mainstream rhetoric, which can understandably be perceived as superior and empowering.
However, one of the limitations of relying on massive information channels like social platforms is the lack of control over how information is selected, which can produce a misleading and incomplete framing while fooling readers into believing they have developed a comprehensive point of view. Imagine someone whose main source of information is scrolling through a feed of headlines, who keeps coming across studies denying the effects of global warming and, as a result, updates their belief system toward skepticism. A well-informed climate scientist, on the other hand, would likely internalize the same stream of information differently and not shift their beliefs as much. Similarly, many people have taken to following a range of experts or political pundits on Twitter, which may seem like a savvy strategy for staying at the forefront, but often yields content that lacks substance and is rarely representative of the consensus. Because social platforms' algorithms tend to prioritize provocative content at the expense of balanced content, it's not uncommon for controversial figures to get the most attention. The infamous and controversial author Bjørn Lomborg, who has written subversive books on climate change built on a counterfactual narrative, has a disproportionately large online influence compared to the rest of the scientific community, even though most of his work has been thoroughly debunked.
As generative AI tools gain popularity for a variety of educational purposes, their ability to produce a meaningful analysis of recent events remains unproven. The techno-utopian idealization of artificial intelligence as a replacement for traditional media institutions is a dangerous proposition: their unpredictable nature, stemming from an arbitrary selection of semantics, inevitably perpetuates a distorted overview and makes them untrustworthy candidates. Admittedly, a large share of the media is overwhelmingly uninteresting and repetitive, yet classical news feeds probably still do a better job of informing the reader than algorithmic summaries. It's also legitimate to question the holistic perspective of newspapers like The New York Times when they have repeatedly demonstrated bias and misrepresented social issues by dismissing parts of the scientific evidence that don't fit a particular agenda. That said, I think it's a misconception to expect newspapers to be rational actors at the cutting edge of objective standards. Nor are they terrible: their main value lies in developing understanding through a narrative format. Rather than focusing solely on evidence, their main asset is presenting a worldview from which you can take building blocks to construct your own. Sadly, I see more and more people who no longer see the value in this, preferring a fact-only diet, erratically collecting random pieces of information without the glue to tie them together. It's difficult to see how this approach can be a valuable conduit for constructing a meaningful interpretation.
I enjoy using social media, especially Twitter, because it makes me laugh at memes, the latest scandals, or the juxtaposition of white supremacist and antifa threads. But I can't say with confidence what I've learned from using them. I've probably discovered a number of interesting articles through social media, but that's not enough to justify any significant beneficial outcome. For the amount of time I've spent on these platforms, I should have accumulated a tremendous amount of knowledge; unfortunately, most of the understanding I've built from them is nearly vacuous. This makes me doubt that the algorithmic synthesis of articles into a fact-based format could lead to a better way of being informed. Similarly, summarizing books seems more useful for cramming for exams than for developing in-depth understanding. In the end, using AI to synthesize newspapers seems to combine the worst of both worlds: the reductionism of summaries and the unpredictability of language models. Over the past decade, we've kept popularizing ever more efficient ways of ingesting information, from speed-reading apps to 140-character threads, but this innovative trend may well have been the main driver behind the age of misinformation.