Thoughts on "The Stupidity of Computers"

It takes a lot of work to instruct a computer to do anything. In "The Stupidity of Computers" (2012), David Auerbach describes the process of programming as being "laboriously precise, yet somehow not precise enough." In discussing early search engines such as Northern Light, AltaVista, Lycos, and Yahoo, Auerbach notes that these sites returned poorly ordered results, and people had to sift through multiple pages to find any relevant information (Auerbach, 2012). There was progress with programs like Ask Jeeves, SHRDLU, and ELIZA, which could narrowly attempt to answer users' questions and perform tasks. MGonz took a stab at mimicking human language (Auerbach, 2012). It did this somewhat successfully through the use of slang, profanity, and persistent prodding of the chat user; however, it had no understanding of what the user was actually saying (Auerbach, 2012). Auerbach (2012) explains that though these advancements are impressive, they are narrow in nature. To properly understand human language, a program must be able to make sense of the "ambiguity inherent in a sentence's syntax and semantics" (Auerbach, 2012) and analyze a sentence's meaning. Google found a way to bypass the issues that other search engines were having (Auerbach, 2012). Instead of trying to understand human language, its program analyzed the topology of websites and optimized search by surfacing what seemed to be the most interesting and relevant results (Auerbach, 2012).

Auerbach (2012) explores the concept of ontology online. Though an ontology is "explicit" and "formal," which could help to better categorize information, any ambiguities or restrictions can cause huge issues (Auerbach, 2012). Examples of this include Chinese citizens being forced to change their names because computers did not recognize certain characters in the names they had chosen, and Amazon disproportionately deranking queer books because they were lumped into the "erotic" and "sexuality" sections (Auerbach, 2012). Though Amazon had this hiccup, it is a good example of using an ontology well (Auerbach, 2012). Products carry classifications that are easily identifiable for computers and relatively self-explanatory for users (Auerbach, 2012). By keeping a purchasing history for each user, Amazon is able to recommend products that its program believes will interest that user (Auerbach, 2012).

Auerbach (2012) goes on to discuss social media's part in collecting data from its users. Users choose to disclose their interests on these platforms, allowing apps like Twitter (X), with its hashtags, and Facebook, with its profile questions, to better advertise to them (Auerbach, 2012). Social media platforms are not the only entities interested in collecting the public's data (Auerbach, 2012). Intelligence agencies, effectively or not, are also routinely sifting through data; the NSA, for example, intercepts 1.7 billion calls and emails daily (Auerbach, 2012). This is a cause for concern privacy-wise, without even the assurance of national security in return (Auerbach, 2012).

There are quite a few implications that can be gleaned from this article. First is that our privacy is continually compromised by social media platforms and governmental agencies (Auerbach, 2012). An article from the Electronic Privacy Information Center confirms this, stating that privacy concerns are heightened by the consolidation of platforms, which allows companies to hold a monopoly and to access several types of your data. This data is also vulnerable to access by "third parties, including law enforcement agencies" (Electronic Privacy Information Center, n.d.).

Another implication is the growing capacity of technology. In "The Stupidity of Computers," Auerbach (2012) mentions Moore's law and explains that computers are "literally a million times more powerful than they were forty years ago," but concludes that computers are not anywhere close to taking over the world. Though this seems to be the consensus, issues continue to arise as technology advances.

An example of this is the impressive developments in artificial intelligence. Hannah Devlin (2024), a science correspondent for The Guardian, explains that artificial intelligence has become significantly more deceptive. Research has shown that Meta's program can manipulate, bluff, and even pretend to be human, as seen in its gameplay in online tournaments (Devlin, 2024). In addition, programs are able to trick safety tests, bypassing safety measures (Devlin, 2024). Devlin (2024) further writes that "risks from dishonest AI systems include fraud, tampering with elections and 'sandbagging' where different users are given different responses."

Vinay Kumar Sankarapu (2023), a member of the Forbes Technology Council, also writes that the reliability of AI deployed in real-world applications is a cause for concern. In addition, fairness and bias can become an issue if the training data itself contains any implicit biases (Sankarapu, 2023). The more technology develops and the more dependent on it we become, the more we need to be aware of its inadequacies.

 

References 

Auerbach, D. (2012). The stupidity of computers. n+1. https://www.nplusonemag.com/issue-13/essays/stupidity-of-computers/

Devlin, H. (2024, May 10). Is AI lying to me? Scientists warn of growing capacity for deception. The Guardian. https://www.theguardian.com/technology/article/2024/may/10/is-ai-lying-to-me-scientists-warn-of-growing-capacity-for-deception

Sankarapu, V. K. (2023, May 26). The result of unchecked AI: Balancing the benefits and the risks. Forbes. https://www.forbes.com/councils/forbestechcouncil/2023/05/26/the-result-of-unchecked-ai-balancing-the-benefits-and-the-risks/

 

 
