Building a Data Science Portfolio: Storytelling with Data

Categories: Data Science Guest

The following post by Vik Paruchuri, founder of the data science learning platform Dataquest, offers detailed and instructive insight into the data science workflow (applicable regardless of the tech stack involved, though the examples here use Python). We republish it here for your convenience.

Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by.

The first step in making a high-quality portfolio is to know what skills to demonstrate. The primary skills that companies want in data scientists, and thus the primary skills they want a portfolio to demonstrate, are:

  • Ability to communicate
  • Ability to collaborate with others
  • Technical competence
  • Ability to reason about data
  • Motivation and ability to take initiative

Any good portfolio will be composed of multiple projects, each of which may demonstrate 1-2 of the above points. This is the first post in a series that will cover how to make a well-rounded data science portfolio. In this post, we’ll cover how to make your first project for a data science portfolio, and how to tell an effective story using data. At the end, you’ll have a project that will help demonstrate your ability to communicate, and your ability to reason about data.

Storytelling with Data

Data science is fundamentally about communication. You’ll discover some insight in the data, then figure out an effective way to communicate that insight to others, then sell them on the course of action you propose. One of the most critical skills in data science is being able to tell an effective story using data. An effective story can make your insights much more compelling, and help others understand your ideas.

A story in the data science context is a narrative around what you found, how you found it, and what it means. An example might be the discovery that your company’s revenue has dropped 20% in the last year. It’s not enough to just state that fact – you’ll have to communicate why revenue dropped, and how to potentially fix it.

The main components of storytelling with data are:

  • Understanding and setting the context
  • Exploring multiple angles
  • Using compelling visualizations
  • Using varied data sources
  • Having a consistent narrative

The best tool to effectively tell a story with data is Jupyter notebook. If you’re unfamiliar, here’s a good tutorial. Jupyter notebook allows you to interactively explore data, then share your results on various sites, including Github. Sharing your results is helpful both for collaboration, and so others can extend your analysis.

We’ll use Jupyter notebook, along with Python libraries like Pandas and matplotlib in this post.

Choosing a Topic for Your Project

The first step in creating a project is to decide on your topic. You want the topic to be something you’re interested in, and are motivated to explore. It’s very obvious when people are making projects just to make them, and when people are making projects because they’re genuinely interested in exploring the data. It’s worth spending extra time on this step to ensure that you find something you’re actually interested in.

A good way to find a topic is to browse different datasets and see what looks interesting. Here are some good sites to start with:

  • Data.gov – contains government data.
  • /r/datasets – a subreddit that has hundreds of interesting datasets.
  • Awesome datasets – a list of datasets, hosted on Github.
  • rs.io – a great blog post with hundreds of interesting datasets.

In real-world data science, you often won’t find a nice single dataset that you can browse. You might have to aggregate disparate data sources, or do a good amount of data cleaning. If a topic is very interesting to you, it’s worth doing the same here, so you can show off your skills better.

For the purposes of this post, we’ll be using data about New York City public schools, which can be found here.

Pick a Topic

It’s important to be able to take the project from start to finish. In order to do this, it can be helpful to restrict the scope of the project, and make it something we know we can finish. It’s easier to add to a finished project than to complete one you can never seem to find enough motivation to finish.

In this case, we’ll look at the SAT scores of high schoolers, along with various demographic and other information about them. The SAT, or Scholastic Aptitude Test, is a test that high schoolers in the US take before applying to college. Colleges take the test scores into account when making admissions decisions, so it’s fairly important to do well on. The test is divided into three sections, each of which is scored out of 800 points. The total score is out of 2,400 (the scoring scale has changed back and forth a few times, but the scores in this dataset are out of 2,400). High schools are often ranked by their average SAT scores, and high SAT scores are considered a sign of how good a school district is.

There have been allegations about the SAT being unfair to certain racial groups in the US, so doing this analysis on New York City data will help shed some light on the fairness of the SAT.

We have a dataset of SAT scores here, and a dataset that contains information on each high school here. These will form the base of our project, but we’ll need to add more information to create compelling analysis.

Supplementing the Data

Once you have a good topic, it’s good to scope out other datasets that can enhance the topic or give you more depth to explore. It’s good to do this upfront, so you have as much data as possible to explore as you’re building your project. Having too little data might mean that you give up on your project too early.

In this case, there are several related datasets on the same website that cover demographic information and test scores.

Here are the links to all of the datasets we’ll be using:

  • SAT scores by school – SAT scores for each high school in New York City.
  • School attendance – attendance information on every school in NYC.
  • Math test results – math test results for every school in NYC.
  • Class size – class size information for each school in NYC.
  • AP test results – Advanced Placement exam results for each high school. Passing AP exams can get you college credit in the US.
  • Graduation outcomes – percentage of students who graduated, and other outcome information.
  • Demographics – demographic information for each school.
  • School survey – surveys of parents, teachers, and students at each school.
  • School district maps – contains information on the layout of the school districts, so that we can map them out.

All of these datasets are interrelated, and we’ll be able to combine them before we do any analysis.

Getting Background Information

Before diving into analyzing the data, it’s useful to research some background information. In this case, we know a few facts that will be useful:

  • New York City is divided into 5 boroughs, which are essentially distinct regions.
  • Schools in New York City are divided into several school districts, each of which can contain dozens of schools.
  • Not all the schools in all of the datasets are high schools, so we’ll need to do some data cleaning.
  • Each school in New York City has a unique code called a DBN, or District Borough Number.
  • By aggregating data by district, we can use the district mapping data to plot district-by-district differences.

Understanding the Data

In order to really understand the context of the data, you’ll want to spend time exploring and reading about the data. In this case, each link above has a description of the data, along with the relevant columns. It looks like we have data on the SAT scores of high schoolers, along with other datasets that contain demographic and other information.

We can run some code to read in the data. We’ll be using Jupyter notebook to explore the data. The below code will:

  • Loop through each data file we downloaded.
  • Read the file into a Pandas DataFrame.
  • Put each DataFrame into a Python dictionary.
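
A minimal sketch of that loop might look like this. The filenames and the schools/ directory are assumptions; substitute whatever names you gave the files you downloaded:

```python
import pandas as pd

# Assumed filenames for the downloaded CSVs; adjust to match your own files.
files = ["ap_2010.csv", "class_size.csv", "demographics.csv", "graduation.csv",
         "hs_directory.csv", "math_test_results.csv", "sat_results.csv"]

data = {}
for f in files:
    # Read each file into a DataFrame and store it under a short key like "sat_results".
    d = pd.read_csv("schools/{0}".format(f))
    data[f.replace(".csv", "")] = d
```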

Once we’ve read the data in, we can use the head method on DataFrames to print the first 5 lines of each DataFrame:
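
For example, continuing the sketch above:

```python
# Print the name and the first 5 rows of each DataFrame we read in.
for name, df in data.items():
    print(name)
    print(df.head())
```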

We can start to see some useful patterns in the datasets:

  • Most of the datasets contain a DBN column.
  • Some fields look interesting for mapping, particularly Location 1, which contains coordinates inside a larger string.
  • Some of the datasets appear to contain multiple rows for each school (repeated DBN values), which means we’ll have to do some preprocessing.

Unifying the Data

In order to work with the data more easily, we’ll need to unify all the individual datasets into a single one. This will enable us to quickly compare columns across datasets. In order to do this, we’ll first need to find a common column to unify them on. Looking at the output above, it appears that DBN might be that common column, as it appears in multiple datasets.

If we google “DBN New York City Schools”, we end up here, which explains that the DBN is a unique code for each school. When exploring datasets, particularly government ones, it’s often necessary to do some detective work to figure out what each column means, or even what each dataset is.

The problem now is that two of the datasets, class_size and hs_directory, don’t have a DBN field. In the hs_directory data, it’s just named dbn, so we can just rename the column, or copy it over into a new column called DBN. In the class_size data, we’ll need to try a different approach.

The DBN column looks like this:
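
One way to peek at it (a sketch that assumes the demographics key from the loading step above; any dataset with the column works):

```python
# DBN values look like "01M015": a two-digit district, a borough letter, and a school code.
print(data["demographics"]["DBN"].head())
```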

If we look at the class_size data, here’s what we’d see in the first 5 rows:

CSD BOROUGH SCHOOL CODE SCHOOL NAME GRADE PROGRAM TYPE CORE SUBJECT (MS CORE and 9-12 ONLY) CORE COURSE (MS CORE and 9-12 ONLY) SERVICE CATEGORY(K-9* ONLY) NUMBER OF STUDENTS / SEATS FILLED NUMBER OF SECTIONS AVERAGE CLASS SIZE SIZE OF SMALLEST CLASS SIZE OF LARGEST CLASS DATA SOURCE SCHOOLWIDE PUPIL-TEACHER RATIO
0 1 M M015 P.S. 015 Roberto Clemente 0K GEN ED 19.0 1.0 19.0 19.0 19.0 ATS NaN
1 1 M M015 P.S. 015 Roberto Clemente 0K CTT 21.0 1.0 21.0 21.0 21.0 ATS NaN
2 1 M M015 P.S. 015 Roberto Clemente 01 GEN ED 17.0 1.0 17.0 17.0 17.0 ATS NaN
3 1 M M015 P.S. 015 Roberto Clemente 01 CTT 17.0 1.0 17.0 17.0 17.0 ATS NaN
4 1 M M015 P.S. 015 Roberto Clemente 02 GEN ED 15.0 1.0 15.0 15.0 15.0 ATS NaN

As you can see above, it looks like the DBN is actually a combination of CSD, BOROUGH, and SCHOOL CODE. For those unfamiliar with New York City, it is composed of 5 boroughs. Each borough is an organizational unit, and is about the same size as a fairly large US city. DBN, as you remember, stands for District Borough Number. It looks like CSD is the District, BOROUGH is the borough, and when combined with the SCHOOL CODE, forms the DBN. There’s no systematic way to find insights like this in data; it requires some exploration and playing around to figure out.

Now that we know how to construct the DBN, we can add it into the class_size and hs_directory datasets:
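
A sketch of that step, padding CSD to two digits and appending SCHOOL CODE (which already starts with the borough letter), and copying the lowercase dbn column in hs_directory:

```python
# class_size: build DBN from the zero-padded CSD plus the SCHOOL CODE (e.g. 1 + "M015" -> "01M015").
data["class_size"]["DBN"] = data["class_size"].apply(
    lambda x: "{0:02d}{1}".format(int(x["CSD"]), x["SCHOOL CODE"]), axis=1)

# hs_directory: the column already exists, just under a lowercase name.
data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
```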

Adding in the Surveys

One of the most potentially interesting datasets to look at is the dataset on student, parent, and teacher surveys about the quality of schools. These surveys include information about the perceived safety of each school, academic standards, and more. Before we combine our datasets, let’s add in the survey data. In real-world data science projects, you’ll often come across interesting data when you’re midway through your analysis, and will want to incorporate it. Working with a flexible tool like Jupyter notebook will allow you to quickly add some additional code, and re-run your analysis.

In this case, we’ll add the survey data into our data dictionary, and then combine all the datasets afterwards. The survey data consists of 2 files, one for all schools, and one for school district 75. We’ll need to write some code to combine them. In the below code, we’ll:

  • Read in the surveys for all schools using the windows-1252 file encoding.
  • Read in the surveys for district 75 schools using the windows-1252 file encoding.
  • Add a flag that indicates which school district each dataset is for.
  • Combine the datasets into one using the concat method on DataFrames.
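
A sketch of those steps (the survey filenames and the tab delimiter are assumptions about how the files are distributed):

```python
# Read the two survey files; the windows-1252 encoding is needed to parse them correctly.
survey_all = pd.read_csv("schools/survey_all.txt", delimiter="\t", encoding="windows-1252")
survey_d75 = pd.read_csv("schools/survey_d75.txt", delimiter="\t", encoding="windows-1252")

# Flag which file (and thus which group of schools) each row came from.
survey_all["d75"] = False
survey_d75["d75"] = True

# Stack the two surveys into a single DataFrame.
survey = pd.concat([survey_all, survey_d75], axis=0)
```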

Once we have the surveys combined, there’s an additional complication. We want to minimize the number of columns in our combined dataset so we can easily compare columns and figure out correlations. Unfortunately, the survey data has many columns that aren’t very useful to us:

N_p N_s N_t aca_p_11 aca_s_11 aca_t_11 aca_tot_11 bn com_p_11 com_s_11 t_q8c_1 t_q8c_2 t_q8c_3 t_q8c_4 t_q9 t_q9_1 t_q9_2 t_q9_3 t_q9_4 t_q9_5
0 90.0 NaN 22.0 7.8 NaN 7.9 7.9 M015 7.6 NaN 29.0 67.0 5.0 0.0 NaN 5.0 14.0 52.0 24.0 5.0
1 161.0 NaN 34.0 7.8 NaN 9.1 8.4 M019 7.6 NaN 74.0 21.0 6.0 0.0 NaN 3.0 6.0 3.0 78.0 9.0
2 367.0 NaN 42.0 8.6 NaN 7.5 8.0 M020 8.3 NaN 33.0 35.0 20.0 13.0 NaN 3.0 5.0 16.0 70.0 5.0
3 151.0 145.0 29.0 8.5 7.4 7.8 7.9 M034 8.2 5.9 21.0 45.0 28.0 7.0 NaN 0.0 18.0 32.0 39.0 11.0
4 90.0 NaN 23.0 7.9 NaN 8.1 8.0 M063 7.9 NaN 59.0 36.0 5.0 0.0 NaN 10.0 5.0 10.0 60.0 15.0

We can resolve this issue by looking at the data dictionary file that we downloaded along with the survey data. The file tells us the important fields in the data:

(Screenshot of the survey data dictionary, listing the important fields to keep.)

We can then remove any extraneous columns in survey:
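
A sketch of that cleanup. The exact list of fields to keep depends on what the data dictionary says, and the lowercase dbn column name is also an assumption about the raw survey files; the columns below are illustrative, based on the aggregate score and response-count columns visible in the sample output above:

```python
# Assuming the raw survey files name the school code column dbn in lowercase, copy it to DBN to match the other datasets.
survey["DBN"] = survey["dbn"]

# Illustrative list of fields to keep; replace it with the fields flagged in the data dictionary.
survey_fields = ["DBN", "N_p", "N_s", "N_t", "aca_p_11", "aca_s_11", "aca_t_11",
                 "aca_tot_11", "com_p_11", "com_s_11"]
survey = survey.loc[:, survey_fields]
data["survey"] = survey
```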

Making sure you understand what each dataset contains, and what the relevant columns are can save you lots of time and effort later on.

Condensing Datasets

If we take a look at some of the datasets, including class_size, we’ll immediately see a problem:

CSD BOROUGH SCHOOL CODE SCHOOL NAME GRADE PROGRAM TYPE CORE SUBJECT (MS CORE and 9-12 ONLY) CORE COURSE (MS CORE and 9-12 ONLY) SERVICE CATEGORY(K-9* ONLY) NUMBER OF STUDENTS / SEATS FILLED NUMBER OF SECTIONS AVERAGE CLASS SIZE SIZE OF SMALLEST CLASS SIZE OF LARGEST CLASS DATA SOURCE SCHOOLWIDE PUPIL-TEACHER RATIO DBN
0 1 M M015 P.S. 015 Roberto Clemente 0K GEN ED 19.0 1.0 19.0 19.0 19.0 ATS NaN 01M015
1 1 M M015 P.S. 015 Roberto Clemente 0K CTT 21.0 1.0 21.0 21.0 21.0 ATS NaN 01M015
2 1 M M015 P.S. 015 Roberto Clemente 01 GEN ED 17.0 1.0 17.0 17.0 17.0 ATS NaN 01M015
3 1 M M015 P.S. 015 Roberto Clemente 01 CTT 17.0 1.0 17.0 17.0 17.0 ATS NaN 01M015
4 1 M M015 P.S. 015 Roberto Clemente 02 GEN ED 15.0 1.0 15.0 15.0 15.0 ATS NaN 01M015

There are several rows for each high school (as you can see by the repeated DBN and SCHOOL NAME fields). However, if we take a look at the sat_results dataset, it only has one row per high school:

DBN SCHOOL NAME Num of SAT Test Takers SAT Critical Reading Avg. Score SAT Math Avg. Score SAT Writing Avg. Score
0 01M292 HENRY STREET SCHOOL FOR INTERNATIONAL STUDIES 29 355 404 363
1 01M448 UNIVERSITY NEIGHBORHOOD HIGH SCHOOL 91 383 423 366
2 01M450 EAST SIDE COMMUNITY SCHOOL 70 377 402 370
3 01M458 FORSYTH SATELLITE ACADEMY 7 414 401 359
4 01M509 MARTA VALLE HIGH SCHOOL 44 390 433 384

In order to combine these datasets, we’ll need to find a way to condense datasets like class_size to the point where there’s only a single row per high school. If not, there won’t be a way to compare SAT scores to class size. We can accomplish this by first understanding the data better, then by doing some aggregation. With the class_size dataset, it looks like GRADE and PROGRAM TYPE have multiple values for each school. By restricting each field to a single value, we can filter most of the duplicate rows. In the below code, we:

  • Only select values from class_size where the GRADE field is 09-12.
  • Only select values from class_size where the PROGRAM TYPE field is GEN ED.
  • Group the class_size dataset by DBN, and take the average of each column. Essentially, we’ll find the average class_size values for each school.
  • Reset the index, so DBN is added back in as a column.
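
A sketch of that filtering and aggregation (the column names follow the header shown above; the raw file may spell them slightly differently, for example with extra whitespace):

```python
class_size = data["class_size"]

# Keep only high school, general education rows.
class_size = class_size[class_size["GRADE"] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]

# Average the numeric columns for each school, then restore DBN as a regular column.
class_size = class_size.groupby("DBN").mean(numeric_only=True)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
```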

Condensing Other Datasets

Next, we’ll need to condense the demographics dataset. The data was collected for multiple years for the same schools, so there are duplicate rows for each school. We’ll only pick rows where the schoolyear field is the most recent available:
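
A sketch, keeping only the latest schoolyear value in the file:

```python
demographics = data["demographics"]

# Keep only the most recent school year, so each school appears once.
latest_year = demographics["schoolyear"].max()
data["demographics"] = demographics[demographics["schoolyear"] == latest_year]
```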

We’ll need to condense the math_test_results dataset. This dataset is segmented by Grade and by Year. We can select only a single grade from a single year:
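
For example (which grade and year you pick is a judgment call; the latest year and grade 8 are used here purely as an illustration):

```python
math_results = data["math_test_results"]

# Keep a single year and a single grade so there is one row per school.
math_results = math_results[math_results["Year"] == math_results["Year"].max()]
math_results = math_results[math_results["Grade"] == "8"]  # Grade may be stored as a string in the raw file.
data["math_test_results"] = math_results
```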

Finally, graduation needs to be condensed:
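
A sketch along the same lines. The Cohort and Demographic column names, and the specific values picked, are assumptions about how the graduation file is laid out; the idea is simply to keep one row per school:

```python
graduation = data["graduation"]

# Keep the overall demographic group and a single cohort (illustrative column names and values).
graduation = graduation[graduation["Demographic"] == "Total Cohort"]
graduation = graduation[graduation["Cohort"] == "2006"]
data["graduation"] = graduation
```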

Data cleaning and exploration is critical before working on the meat of the project. Having a good, consistent dataset will help you do your analysis more quickly.

Computing Variables

Computing new variables can help speed up our analysis by enabling us to make comparisons more quickly, and can enable comparisons that we otherwise wouldn’t be able to make. The first thing we can do is compute a total SAT score from the individual columns SAT Math Avg. Score, SAT Critical Reading Avg. Score, and SAT Writing Avg. Score. In the below code, we:

  • Convert each of the SAT score columns from a string to a number.
  • Add together all of the columns to get the sat_score column, which is the total SAT score.
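
A sketch of that computation (the column names come straight from the sat_results header shown earlier):

```python
cols = ["SAT Math Avg. Score", "SAT Critical Reading Avg. Score", "SAT Writing Avg. Score"]

# The raw columns are read in as strings; coerce anything non-numeric to NaN.
for c in cols:
    data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")

# The total SAT score is just the sum of the three section averages.
data["sat_results"]["sat_score"] = (
    data["sat_results"][cols[0]] + data["sat_results"][cols[1]] + data["sat_results"][cols[2]]
)
```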

Next, we’ll need to parse out the coordinate locations of each school, so we can make maps and plot the location of each school. In the below code, we:

  • Parse latitude and longitude columns from the Location 1 column.
  • Convert lat and lon to be numeric.
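
A sketch, assuming the coordinates sit in a trailing "(latitude, longitude)" pair at the end of the Location 1 string:

```python
def parse_coord(location, index):
    # Pull the "(lat, lon)" pair off the end of the Location 1 string and return one of the two parts.
    coords = str(location).split("(")[-1].replace(")", "")
    parts = coords.split(",")
    return parts[index].strip() if len(parts) == 2 else None

hs = data["hs_directory"]
hs["lat"] = hs["Location 1"].apply(lambda loc: parse_coord(loc, 0))
hs["lon"] = hs["Location 1"].apply(lambda loc: parse_coord(loc, 1))

# Convert the parsed strings to numbers; anything unparseable becomes NaN.
for c in ["lat", "lon"]:
    hs[c] = pd.to_numeric(hs[c], errors="coerce")
```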

Now, we can print out each dataset to see what we have:
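
Continuing the sketch, a quick loop over the dictionary does the trick:

```python
# Show the name, shape, and first few rows of each condensed dataset.
for name, df in data.items():
    print(name, df.shape)
    print(df.head())
```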

Combining the Datasets

Now that we’ve done all the preliminaries, we can combine the datasets together using the DBN column. At the end, we’ll have a dataset with hundreds of columns, drawn from each of the original datasets. When we join them, it’s important to note that some of the datasets are missing high schools that exist in the sat_results dataset. To resolve this, we’ll need to merge the datasets that have missing rows using the outer join strategy, so we don’t lose data. In real-world data analysis, it’s common to have missing data. Being able to demonstrate that you can reason about and handle missing data is an important part of building a portfolio.

You can read about different types of joins here.

In the below code, we’ll:

  • Loop through each of the items in the data dictionary.
  • Print the number of non-unique DBNs in the item.
  • Decide on a join strategy – inner or outer.
  • Join the item to the DataFrame full using the column DBN.
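
A sketch of that loop. Exactly which datasets need the outer join is a judgment call based on the duplicate counts you see; the dataset keys follow the ones assumed earlier:

```python
# Start from the SAT results, since that's the dataset we ultimately care about.
full = data["sat_results"]

for name, df in data.items():
    if name == "sat_results":
        continue

    # How many DBN values repeat in this dataset? Ideally 0 after the condensing above.
    print(name, len(df["DBN"]) - len(df["DBN"].unique()))

    # Use an outer join for datasets that may be missing some high schools, so we don't lose rows.
    join_type = "outer" if name in ["ap_2010", "graduation"] else "inner"
    full = full.merge(df, on="DBN", how=join_type)

print(full.shape)
```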

Adding in Values

Now that we have our full DataFrame, we have almost all the information we’ll need to do our analysis. There are a few missing pieces, though. We may want to correlate the Advanced Placement exam results with SAT scores, but we’ll need to first convert those columns to numbers, then fill in any missing values:
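
A sketch of that conversion. The AP column names below are illustrative; check the AP results file for the exact headers:

```python
# Illustrative AP column names; adjust to match the actual headers in the AP results file.
ap_cols = ["AP Test Takers", "Total Exams Taken", "Number of Exams with scores 3 4 or 5"]

for col in ap_cols:
    full[col] = pd.to_numeric(full[col], errors="coerce")

# Filling the gaps with 0 is one reasonable choice, since missing AP data usually means no test takers.
full[ap_cols] = full[ap_cols].fillna(value=0)
```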

Then, we’ll need to calculate a school_dist column that indicates the school district of the school. This will enable us to match up school districts and plot out district-level statistics using the district maps we downloaded earlier:
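
Since the DBN starts with the zero-padded district number, this is a one-liner:

```python
# The first two characters of the DBN are the school district.
full["school_dist"] = full["DBN"].apply(lambda x: x[:2])
```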

Finally, we’ll need to fill in any missing values in full with the mean of the column, so we can compute correlations:
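
A sketch of that step:

```python
# Replace remaining missing values in numeric columns with the column mean, so correlations can be computed.
full = full.fillna(full.mean(numeric_only=True))
```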

Computing Correlations

A good way to explore a dataset and see what columns are related to the one you care about is to compute correlations. This will tell you which columns are closely related to the column you’re interested in. We can do this via the corr method on Pandas DataFrames. The closer to 0 the correlation, the weaker the connection. The closer to 1, the stronger the positive correlation, and the closer to -1, the stronger the negative correlation:
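
For example, correlating every numeric column against the total SAT score:

```python
# Compute correlations between sat_score and every other numeric column, strongest first.
correlations = full.corr(numeric_only=True)["sat_score"]
print(correlations.sort_values(ascending=False))
```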

This gives us quite a few insights that we’ll need to explore:

  • Total enrollment correlates strongly with sat_score, which is surprising, because you’d expect smaller schools, which focus more on individual students, to have higher scores.
  • The percentage of females at a school (female_per) correlates positively with SAT score, whereas the percentage of males (male_per) correlates negatively.
  • None of the survey responses correlate highly with SAT scores.
  • There is significant racial inequality in SAT scores (white_per, asian_per, black_per, hispanic_per).
  • ell_percent (the percentage of English language learners) correlates strongly negatively with SAT scores.

Each of these items is a potential angle to explore and tell a story about using the data.

In Part 2, we’ll cover data exploration.

Vik Paruchuri is the founder of Dataquest, a platform that teaches data science interactively in your browser. Dataquest’s unique approach to learning blends theory and practice, then helps you build your portfolio with projects.

