UMBC CMSC 471 Intro to AI, Spring 2022


HW1: AI Considered Harmful?

out: Tue 2022-02-01; due: Thu 2022-02-10 23:59 EST

click here to get hw1 repo


Advances in AI can improve our lives in many ways, but it is possible that they could make things worse for some or even all people. Concerns about the potential negative effects of technology are not new; the Luddites of the early 19th century are a famous example. Many plays, books, and films of the past 100 years are built around the theme that intelligent robots are a great danger. Examples include the robots in Karel Čapek's play R.U.R., the robot Maria in Fritz Lang's film Metropolis, the Nexus-6 model androids in P.K. Dick's novel Do Androids Dream of Electric Sheep?, and the time-traveling Terminators.

Recent advances in computing and AI have led some technologists to worry that they will lead to the creation of superintelligences whose cognitive abilities surpass humans' in almost all areas, and that this could pose an existential risk for humanity. The idea is sometimes tied to an anticipated technological singularity, which Wikipedia describes as

The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradeable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Another concern is that AI systems will be trained on available data that leads them to learn some of the same gender and racial biases that exist in current human societies. Take a look at this short article about a recent expert panel discussing the problem, and a paper by UMBC's Cynthia Matuszek and colleagues on "Unequal Representation and Gender Stereotypes in Image Search Results for Occupations". One more worry is that machine learning and AI systems will be used by governments and businesses to create a new dystopia in which we are tracked and monitored for information that can be used to "control and to manipulate us in novel, sometimes hidden, subtle and unexpected ways". Check out some TED talks, such as We're building a dystopia just to make people click on ads by former UMBC professor Zeynep Tufekci and The danger of AI is weirder than you think by Janelle Shane.

Some worry that progress in AI could someday result in human extinction or some other unrecoverable global catastrophe. Wikipedia has several pages relevant to the topic, including Existential risk from artificial general intelligence.

While many high-profile figures, including Elon Musk, Bill Gates, and Stephen Hawking, have expressed concern, many AI researchers are not very worried.

What to do

Start by reading the following.

Pretend that you are writing a short opinion piece for The Retriever on the topic, explaining why we should or should not be worried about one or more dangers unleashed by AI. Your piece should be at least 300 words long and be written so it can be understood by the Retriever's audience. It should explain some aspect of the issue and why many people are concerned about it today, mention arguments on both sides of the controversy, and present and argue for your own opinion. Oh, and think up a catchy headline.

The assignment is due before 23:59:59 EST on Thursday, February 10.

  1. If you do not have an account on GitHub, create one. Log into your GitHub account.
     
  2. You should get a message inviting you to accept the HW1 assignment on GitHub Classroom. If not, you can click on this link. Use the URL you get by email to accept the assignment, which will create a private repository for this assignment in your GitHub account. You can then clone that repository on a computer of your choice. When asked, please link your name in the roster on our GitHub Classroom site to your GitHub account. If you are new to GitHub, you can find some help here.
     
  3. Write your op-ed piece in Word, LaTeX, Google Docs, or any other system that can produce a PDF file. Your piece should have a title and a byline that identifies you as the author, giving your name and UMBC email address. Put the PDF file in your local repository on your computer with the name oped.pdf.
     
  4. Summarize your op-ed piece in a tweet-length (i.e., at most 280 characters) string of text in the file tweet.txt, and then post that same text on our Discord server in the hw1-tweets channel.

  5. Edit the README.md file in your cloned repo to add your name and UMBC user name.
     
  6. Use git to commit the files oped.pdf, tweet.txt and README.md and push the changes back to your GitHub repository. See here for more detailed instructions.
     
  7. Verify that your changes are now in your GitHub repo by visiting it on the web using your browser.
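If you are new to git, the whole workflow above can be sketched as the shell session below. The repository URL and commit message are placeholders, not the actual values for your assignment; substitute the URL GitHub Classroom gives you. The character-count check is just a convenient way to confirm your tweet.txt fits the 280-character limit before you post it.

```shell
# Clone your private HW1 repo (placeholder URL -- use the one
# GitHub Classroom created for you):
#   git clone https://github.com/<classroom-org>/hw1-<your-username>.git
#   cd hw1-<your-username>

# Sanity-check that tweet.txt fits in a tweet (at most 280 characters).
# The text here is only a stand-in for your real summary.
printf 'AI: existential threat or overhyped worry? My take inside.' > tweet.txt
chars=$(wc -m < tweet.txt)
echo "tweet length: $chars"
if [ "$chars" -le 280 ]; then
  echo "OK: fits in a tweet"
else
  echo "Too long: trim tweet.txt before posting"
fi

# Stage, commit, and push the three required files:
#   git add oped.pdf tweet.txt README.md
#   git commit -m "HW1: op-ed, tweet, and README"
#   git push origin main
```

After the push, reload your repository page in a browser; the three files and your commit message should appear there.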