
In my book "Searches", I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It's a dynamic that makes us complicit in big tech's accumulation of wealth and power: we're both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues.

People often describe chatbots' textual output as "bland" or "generic" - the linguistic equivalent of a beige office building. OpenAI's products are built to "sound like a colleague", as OpenAI puts it, using language that, coming from a person, would sound "polite", "empathetic", "kind", "rationally optimistic" and "engaging", among other qualities. OpenAI describes these strategies as helping its products seem "professional" and "approachable". This appears to be bound up with making us feel safe.

Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft's Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren't flukes. Research suggests that both tendencies are widespread.

In my own ChatGPT dialogues, I wanted to enact how the product's veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech - including editing my description of OpenAI's CEO, Sam Altman, to call him "a visionary and a pragmatist". I'm not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation.

OpenAI explicitly states that its products shouldn't attempt to influence users' thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data, though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: "The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading."

OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that "benefits all of humanity". But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people to use products such as ChatGPT even more than they already are - a goal that is easier to accomplish if people see those products as trustworthy collaborators.

Question 1.

The author compares AI-generated texts with "a beige office building" for all of the following reasons EXCEPT:

A. AI tends to blame its training data when scrutinised for its biases.
B. AI generates generalised responses that lack specificity and nuance.
C. AI aims to foster a feeling of trust and credibility among its users.
D. AI-generated texts often exhibit a warm, polite, and collegial tone.