Homework
Homework 6
- Assigned: December 3rd (Tuesday)
- Due: December 15th (Sunday) by 11:59pm
Here's what you need to do:
- Get the glass dataset. Click on the "data folder" link and download glass.data.
- Write code to remove the first attribute (which is a unique ID), get the remaining attributes as reals, and extract the class label (which is the last element in each line of glass.data).
- Use scikit's SVC class with the default parameters to train a classifier and report the accuracy of that classifier on the training set.
- Use scikit's PCA class to reduce the dimensionality of the dataset to 2 dimensions and do the following: (1) plot the data colored by class label in the reduced space; (2) report the percent of variance accounted for in that reduced space; (3) train a classifier (SVM) on the reduced data and report the training set accuracy.
- Increase the dimensionality of the reduced space by 1 and repeat the step above (reporting all outputs) until the training set accuracy in the reduced space is within 4 points of the accuracy in the full space.
- When you've found that dimensionality, inspect the components and write a few sentences about what features each of them seems to be emphasizing. Explain how you arrived at that conclusion.
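A sketch of how the SVC/PCA loop in the steps above might be structured, shown on a synthetic stand-in dataset since glass.data must be downloaded separately (the plotting step is omitted, and all names here are illustrative):

```python
# Sketch of the PCA/SVC loop on synthetic stand-in data for glass.data.
# Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Stand-in for the parsed glass data: X holds the real-valued attributes
# (ID column already dropped), y holds the class labels.
X, y = make_classification(n_samples=214, n_features=9, n_informative=6,
                           n_classes=3, random_state=0)

full_acc = SVC().fit(X, y).score(X, y)  # training-set accuracy, full space

for n_components in range(2, X.shape[1] + 1):
    pca = PCA(n_components=n_components)
    X_reduced = pca.fit_transform(X)
    variance_pct = 100 * pca.explained_variance_ratio_.sum()
    reduced_acc = SVC().fit(X_reduced, y).score(X_reduced, y)
    print(f"{n_components} dims: {variance_pct:.1f}% variance, "
          f"accuracy {reduced_acc:.3f} (full: {full_acc:.3f})")
    if full_acc - reduced_acc <= 0.04:  # within 4 points of full-space accuracy
        break
```

When the loop stops, `pca.components_` holds one weight vector per retained dimension; the attributes with the largest-magnitude weights are the ones each component emphasizes.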
Homework 5
- Assigned: November 19th (Tuesday)
- Due: December 1st (Sunday) by 11:59pm
Here's what you need to do:
- Get graph.tsv, which contains a file in which each line is a triple of the form: SOURCE_NODE <TAB> DESTINATION_NODE <TAB> WEIGHT. Note that source and destination nodes are integers, as are weights.
- Write one python function per item in the list below that uses Spark to compute the desired information. Each function should accept two arguments: a path to the graph.tsv file and a path to an output directory. Use Spark's saveAsTextFile() to save the final RDD to the specified output directory. Note that, by default, if the target output directory already exists when you attempt to save to it, you'll get an error. One solution is to remove the target directory between runs.
- For each node, compute the outdegree (number of outgoing edges) and output the (node, count) pairs in sorted order by node. The code should be in a single function named outdegree().
- For each node, compute the sum of weights of incoming edges and output the (node, weight_sum) pairs in order sorted by node. The code should be in a single function named weight().
- For each node X, find a list of all other nodes Y such that there is an (X, Y) edge in the graph and a (Y, X) edge in the graph, and output the (X, [Y1, Y2, ..., Yn]) pairs in order sorted by X. Hint: I solved this by building two RDDs, one in which edge source nodes are keys and destination nodes are values, and one in which edge destination nodes are keys and source nodes are values. The code should be in a single function named pairs().
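To make the expected outputs concrete, here is a plain-Python sketch of the three computations on a made-up four-edge graph; the real solutions would express the same logic as Spark transformations (e.g., map, reduceByKey, sortByKey) followed by saveAsTextFile():

```python
# Pure-Python sketch of the three computations on a tiny in-memory graph,
# to show the expected (key, value) outputs. The edge list is made up.
from collections import defaultdict

edges = [(1, 2, 5), (2, 1, 3), (1, 3, 2), (3, 2, 4)]  # (src, dst, weight)

# outdegree(): (node, number of outgoing edges), sorted by node
out_counts = defaultdict(int)
for src, dst, w in edges:
    out_counts[src] += 1
outdegree = sorted(out_counts.items())

# weight(): (node, sum of weights of incoming edges), sorted by node
in_weights = defaultdict(int)
for src, dst, w in edges:
    in_weights[dst] += w
weight = sorted(in_weights.items())

# pairs(): (X, [Y...]) where both (X, Y) and (Y, X) edges exist, sorted by X
forward = {(src, dst) for src, dst, w in edges}
mutual = defaultdict(list)
for src, dst in forward:
    if (dst, src) in forward:
        mutual[src].append(dst)
pairs = sorted((x, sorted(ys)) for x, ys in mutual.items())

print(outdegree)  # [(1, 2), (2, 1), (3, 1)]
print(weight)     # [(1, 3), (2, 9), (3, 2)]
print(pairs)      # [(1, [2]), (2, [1])]
```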
Homework 4
- Assigned: November 4th (Monday)
- Due: November 14th (Thursday) by 11:59pm
- 10.2 (k-means)
- 10.10 (BIRCH). Note that you'll have to read about the OPTICS algorithm in the text.
- 10.12 (density-based clustering)
- 10.18 (constraints and clustering) For this problem I'm just looking for you to propose some ideas. In addition to what's asked for in the book, say something about how your proposed modifications impact the computational complexity of the clustering algorithm.
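For the k-means problem, a tiny pure-Python iteration like this (on made-up 1-D points and starting centers) can be used to double-check hand computations:

```python
# A minimal 1-D k-means iteration, handy for checking hand computations.
# The points and starting centers below are made up.
def kmeans_step(points, centers):
    """Assign each point to its nearest center, then recompute centers."""
    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    new_centers = [sum(c) / len(c) for c in clusters if c]
    return clusters, new_centers

points = [2, 4, 10, 12, 3, 20, 30, 11, 25]
centers = [2, 4]
for _ in range(5):  # iterate toward convergence
    clusters, centers = kmeans_step(points, centers)
print(clusters, centers)  # [[2, 4, 10, 12, 3, 11], [20, 30, 25]] [7.0, 25.0]
```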
Homework 3
- Assigned: October 22nd (Tuesday)
- Due: October 31st (Thursday) by 11:59pm to the TA via Slack.
Load the breast cancer dataset using https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html.
Here is what you need to do:
- Split the data into train and test sets, each with half of the data.
- Run LogisticRegression and Support Vector Machines with linear and RBF kernels. Use the default parameters for each of them. Report test set accuracy for each model.
- Use scikit's StandardScaler to standardize the data and re-run the experiment above. Report the results and, if there is any appreciable difference, explain why you think it occurred.
- Using the unscaled data, tune the parameters of each model using GridSearchCV. For the Logistic Regression and SVC models, tune the C parameter. Also tune the gamma parameter for the RBF kernel. Do the results improve? Visualize the accuracy as a function of the parameters for all three models. Do that in whatever way is the most informative.
- Look at the coefficients of the LogisticRegression and Linear Support Vector Machine models and explain what they say about the features, which ones are most important, and what role they play.
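A sketch of how the first two steps might look (assuming scikit-learn is installed; the variable and model names here are just illustrative):

```python
# Sketch: train/test split, default models, then the same models on
# standardized data. Assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

models = {
    # default LogisticRegression may emit a convergence warning on the
    # unscaled data, which is itself a hint for the scaling question
    "logreg": LogisticRegression(),
    "svc-linear": SVC(kernel="linear"),
    "svc-rbf": SVC(kernel="rbf"),
}

scaler = StandardScaler().fit(X_train)
results = {}
for name, model in models.items():
    raw = model.fit(X_train, y_train).score(X_test, y_test)
    scaled = model.fit(scaler.transform(X_train), y_train).score(
        scaler.transform(X_test), y_test)
    results[name] = (raw, scaled)
    print(f"{name}: unscaled {raw:.3f}, scaled {scaled:.3f}")
```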
Homework 2
- Assigned: September 24th (Tuesday)
- Due: October 8th (Tuesday) by 11:59pm
In this homework you'll gain experience installing and using SQL databases. Here are your tasks.
Install MySQL: Click here for directions.
Create a MySQL user for yourself:
- Run mysql --user root --password at the command line, enter the root password for MySQL that you created when installing MySQL, and run the following queries, replacing username and password as appropriate.
- CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
- GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' WITH GRANT OPTION;
- QUIT;
- Then run mysql -p and enter the password you just specified.
- Try SHOW DATABASES; and you should see a few system DBs listed.
Install the Retailer sample database: Go here for instructions on getting the sample database. The result is a .zip file that, when extracted, gives you a .sql file named mysqlsampledatabase.sql, which is nothing more than a series of SQL commands/queries. To run it, do this: mysql -u username < mysqlsampledatabase.sql. As usual, you'll have to enter your password.
To check that everything worked, run SHOW DATABASES; in mysql and if you see one named classicmodels, then all is well. The web page that contains the link for the sample database has information on the tables and their fields.
Write each of the following queries: For each query, turn in the query and the result of running it on the Retail database that you just created.
- Count the number of employees whose last name or first name starts with the letter 'P'.
- For how many letters of the alphabet does more than one employee have a last name starting with that letter? Hint: The substr function will be useful here in a GROUP BY clause.
- How many orders have not yet shipped?
- How many orders were shipped less than 2 days before they were required?
- For each distinct product line, what is the total dollar value of orders placed?
- For the first three customers in alphabetical order by name, what is the name of every product they have ordered?
Install the python connector for MySQL: In this part of the homework you'll get experience running queries from python code and write a simple program to extract the structure of a MySQL database.
- Go here to download the python connector.
- You can also 'pip install mysql-connector'.
- You can test the installation by running python and trying import mysql.connector. If that does not produce an error, you're good to go.
Your task is to write a python program that takes a single command line argument, which is the name of a database, and prints the names of all of the tables in that database along with the number of rows in each table. Read through the documentation on the python connector here to see how to create a connection, issue a query, and walk over the results. For this exercise you'll submit your python code, which should all be in one file, along with the output of running your program on the sample database you installed earlier. Hint: The SHOW TABLES query will be useful here.
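Since the MySQL version of this program can't run without a live server, here is a sketch of the same program shape using Python's built-in sqlite3 module on a made-up in-memory database; with mysql.connector you would instead build the connection with mysql.connector.connect(...) and use SHOW TABLES in place of the sqlite_master query:

```python
# Sketch of "list every table and its row count," using sqlite3 as a
# stand-in for mysql.connector. The sample table below is made up.
import sqlite3

def table_counts(conn):
    cursor = conn.cursor()
    # sqlite equivalent of MySQL's SHOW TABLES
    cursor.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    tables = [row[0] for row in cursor.fetchall()]
    counts = {}
    for table in tables:
        # table names come from the catalog query above, so interpolation
        # is safe here
        cursor.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cursor.fetchone()[0]
    return counts

# Tiny in-memory database standing in for classicmodels.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offices (city TEXT)")
conn.execute("INSERT INTO offices VALUES ('Boston'), ('Paris')")
counts = table_counts(conn)
for table, n in counts.items():
    print(table, n)  # prints: offices 2
```

In the real program, the database name would come from sys.argv[1] and be passed to the connection call.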
Put all elements of the homework into a single file and submit it via Slack to the TA by the due date/time.
Homework 1
- Assigned: September 10th (Tuesday)
- Due: September 24th (Tuesday) by 11:59pm
In this homework you'll gain experience with Open Baltimore data, Jupyter notebooks, and pandas. Jupyter auto-saves notebooks with some regularity, but I also tend to "Save and Checkpoint" periodically on the File menu because you can always revert to a checkpoint.
You will submit your homework as a notebook by uploading it as a file into the Slack channel for the TA (Abbasi Koohpayegani) by 11:59pm the day the assignment is due. To do that, click on the + icon beside "Direct Messages" and start typing his name. At some point you'll see his name in a list of users below where you are typing. Click on his name. Once you're in a chat with Abbasi, click the big + next to the space where you enter a message, click on "Upload File" and then choose the notebook you want to submit. The system will then allow you to add a message, which you should make "Homework 1 submission for NAME".
To add comments in your notebook, which you're asked to do to explain your thinking in a few places, you'll use markdown syntax in the cell. Look at the Basics tab on the main markdown page and it will tell you everything you need to know. Type your comments in a notebook cell and then either do "Cell" - "Cell Type" - "Markdown", or type CTRL-M M in the cell.
Choose any dataset from the Open Baltimore collection except for variations of the Victim Based Crime Data that I explored in my DataExploration notebook. Choose a dataset that allows you to perform the following tasks:
- Load the data into a Jupyter notebook. Explain briefly (using markdown) what the Open Baltimore website says about the dataset. Do a head(50) and tail(50) on the data frame after loading the data. Explain any observations you can make about the dataset and its quality from just that output.
- Explore the data to understand what's in each of the columns. If the dataset has a very large number of columns (more than 10) you can choose a smaller subset of columns with which to work, but justify why you selected those columns. For each column (no more than 5 columns total):
- Describe what the column contains (e.g., the time at which a crime was committed, or the last sale price of a house) in prose
- Determine whether the column contains missing data, make a decision about how to handle them, and implement that decision
- Do the same for outliers or other unusual values. Determine if they exist and, if so, implement an approach to dealing with them
- Explain anything else interesting or unusual about the data in the column that you observed
- Create scatter plots of pairs of variables that you think might be related, and for two such plots do the following:
- Explain why you think the two variables might be related
- Show the scatter plot
- Explain what the plot says, if anything, about the relationship between the variables. The explanation should be semantic. That is, don't say "x gets bigger when y gets bigger", say, for example, "it looks like crime increases later in the week, presumably because people are out later in the week and on the weekends".
- Pick one variable to be a dependent variable, and two others to be predictor variables. These choices should be based on your exploration above. Generate a 2-D or 3-D plot that shows whether the predictor variables actually convey information about the value of the dependent variable. Explain clearly why you think they do or do not by referring to the plot.
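To make the column-cleaning steps concrete, here is a sketch on a small made-up DataFrame (a real Open Baltimore dataset would be loaded with pd.read_csv; pandas is assumed to be installed, and the column names are illustrative):

```python
# Sketch of the missing-data and outlier steps on made-up data.
import pandas as pd

df = pd.DataFrame({
    "sale_price": [250000, 310000, None, 9_900_000, 275000],
    "year_built": [1920, 1955, 1955, 2001, None],
})
print(df.head(50))  # on real data, also inspect df.tail(50)

# Missing data: one option is to drop rows missing the key variable
# and fill other gaps with a summary statistic. Justify the choice.
df = df.dropna(subset=["sale_price"])
df["year_built"] = df["year_built"].fillna(df["year_built"].median())

# Outliers: one option is to flag values far above the interquartile range.
q1, q3 = df["sale_price"].quantile([0.25, 0.75])
cutoff = q3 + 1.5 * (q3 - q1)
df = df[df["sale_price"] <= cutoff]

print(len(df))  # rows remaining after cleaning
```

Scatter plots can then come straight from the cleaned frame, e.g. df.plot.scatter(x="year_built", y="sale_price").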