Results: Exploring slants in a directed inquiry-based conversation
To further illustrate these biases, consider a recent chat with ChatGPT regarding the FDA approval of an oral contraceptive pill containing progestin. While major media outlets covered the news in a predictable way, curiosity prompted us to investigate the possible health risks associated with hormone-based pills. Given the hormonal nature of such contraceptives, it was reasonable to anticipate that risks exist and to seek comprehensive information on the topic. To be fair, oral contraceptives have been in use for decades and have proved valuable in supporting women and their reproductive health.
However, as the conversation unfolded, we quickly discovered that the information presented was far from comprehensive or unbiased. The chat grew increasingly favorable toward the approved pill even as we specifically sought information about the risks associated with its use. Even pointed requests for links to PubMed failed to yield satisfactory results. When we countered by supplying the abstract of a strong recent review paper (itself built on numerous meta-analyses and primary studies), the chatbot evaluated the paper favorably while remaining defensive about its earlier statements. Only then did it concede that there are indeed different sides to the issue.
The societal implications, politics, and ensuing interests surrounding birth control contribute to an inherent imbalance in the available literature. This imbalance can result in a lack of representation of all sides of the issue, hindering individuals’ ability to access a diverse range of viewpoints and evidence.
In this particular case, the push to promote the use of oral contraceptive pills, driven by factors such as gender equality, reproductive rights, and public health initiatives, can influence the information that surfaces in search results. As a consequence, the chat algorithms may prioritize sources that align with the prevailing narrative, emphasizing the benefits and downplaying potential risks associated with hormonal contraceptives. This can inadvertently lead to an incomplete and skewed understanding of the topic, as critical perspectives and studies highlighting the risks may be overshadowed or marginalized.
These biases can be further compounded by political and allied interests that seek to shape the discourse surrounding birth control. Various stakeholders could attempt to manipulate search results, directly or indirectly, to steer users toward their agendas. As a result, individuals who rely increasingly on chatbot conversations may struggle to access well-rounded, unbiased information about the potential health risks associated with oral contraceptive pills.
This imbalance in the available literature underscores the importance of critically evaluating information obtained through these chatbots. It highlights the need for individuals to be aware of the biases that can be inherent in search results and to actively seek out diverse sources of information. By consulting reputable scientific journals, academic research databases, and trusted healthcare resources, individuals can obtain a more comprehensive understanding of the risks and benefits associated with oral contraceptive pills.
Moreover, this example demonstrates the limitations of relying solely on internet searches and chatbot conversations for nuanced information on public health topics. It underscores the value of seeking guidance from healthcare professionals who possess the expertise to navigate and interpret scientific literature objectively. Engaging in open and informed discussions with healthcare providers allows individuals to receive personalized advice, address specific concerns, and obtain a more holistic view of the risks and benefits of oral contraceptive pills.
Discussion
Chatbots operate on sophisticated algorithms that analyze user queries and generate responses. However, these algorithms are not immune to bias, as they are likely designed to prioritize certain information sources and viewpoints. In the case of oral contraceptives, for example, the algorithm may favor sources that emphasize the benefits while downplaying or omitting potential risks, especially when wider use is encouraged by public health agencies for the betterment of women and reproductive health. One such risk is the increased chance of developing certain cancers with long-term use, particularly in vulnerable demographics such as certain ethnic groups. This observation is based on a recent user experience, and individual experiences may certainly vary. Consequently, the chatbot may provide incomplete or skewed information, hindering individuals' ability to obtain a comprehensive understanding of the topic.