Data Visualization



Product Design, Data Visualization

A collection of data visualization projects for various clients.




Oil & Gas Dashboard

Enigma, 2016
The mocks below are prototypes for an Oil & Gas dashboard. The tool lets users understand and transform raw data without needing the sophisticated analysis techniques usually performed in third-party applications.

Using the dashboard, users can explore selected oil and gas data, gain high-level insights, visualize trends, and create data alerts. The dashboard highlights trends in the industry and the performance of individual companies.



Enigma outlined four different user scenarios for this project. Based on the storyboard and the available data, we decided to create two separate views: a dataset view and a map view.



Video Dashboard

SimpleReach, 2016
People consume video content across multiple platforms (Facebook, Instagram, YouTube, etc.), which means that measuring the performance of a single video or campaign is fairly manual and fragmented. SimpleReach wanted a dashboard that consolidated video metrics across these platforms for its users.

This was a mock prototype that incorporated each platform's key video metrics and their trends over time.






The Meanest Pitchforker

Kevin Munger, 2015
Pitchfork.com is a popular online publisher of independent music reviews. This project was a collaboration with Kevin Munger, Assistant Professor of Political Science. The graphic illustrates the relative harshness of various Pitchfork reviewers, based on the site's 0-10 album scores.

“I started by using Kimono to crawl the 500 most recent pages of the Pitchfork Reviews website for the names and artists of the 10,000 most recent albums. I then modified an API I found so that I could use Python to scrape the text, score, and author of these 10,000 reviews.”
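As a rough illustration of that scraping step (not Munger's actual Kimono workflow or the modified API; the example URL and the CSS selectors below are placeholders, since Pitchfork's real markup isn't documented here), a Python loop over review pages could look something like this:

```python
# Illustrative sketch only: the URL and CSS selectors are placeholders,
# not Pitchfork's actual markup.
import requests
from bs4 import BeautifulSoup

def scrape_review(url):
    """Fetch one review page and pull out its text, score, and author."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "text": soup.select_one(".review-body").get_text(" ", strip=True),  # placeholder selector
        "score": float(soup.select_one(".score").get_text(strip=True)),     # placeholder selector
        "author": soup.select_one(".author").get_text(strip=True),          # placeholder selector
    }

# The list of review URLs would come from the crawl of the review index pages.
review_urls = ["https://pitchfork.com/reviews/albums/some-album/"]  # placeholder URL
reviews = [scrape_review(u) for u in review_urls]
```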

“I used those data to train (in R) a machine learning algorithm called Elastic Net to learn which words were associated with more positive reviews, and which words were associated with more negative reviews. In general, the algorithm performed fairly well, but it couldn't have been perfect. First, each person has an idiosyncratic writing style, so there's going to be some error introduced by the way that different people use the same word differently. If that were the only problem, the errors would be randomly distributed across reviewers.”
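Munger fit the model in R; as a loose sketch of the same idea in Python, a bag-of-words elastic net could be fit like this (the vectorizer settings and penalty strengths are illustrative assumptions, and `reviews` is the list of scraped records from the sketch above):

```python
# Sketch: regress review scores on word counts with an elastic net.
# Hyperparameters are illustrative, not tuned; `reviews` is assumed to be a
# list of dicts with "text" and "score" keys (e.g. from the sketch above).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline

texts = [r["text"] for r in reviews]
scores = [r["score"] for r in reviews]

model = make_pipeline(
    CountVectorizer(stop_words="english", min_df=5),
    ElasticNet(alpha=0.1, l1_ratio=0.5),  # blend of L1 and L2 penalties
)
model.fit(texts, scores)

# Large positive coefficients mark words associated with higher scores,
# large negative coefficients mark words associated with lower scores.
vocab = model.named_steps["countvectorizer"].get_feature_names_out()
coefs = model.named_steps["elasticnet"].coef_
```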

“Crucially, though, some reviewers are harsher than others: given that an album gets a score of 6.9, someone might write a review that is predicted to be a 6.5, and someone else might write a review that is predicted to be a 7.5. And if a particular author tends to be especially harsh, averaged over their entire critical oeuvre, we can detect that trend.”
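One way to read that comparison is as averaging, for each author, the gap between the score they actually gave and the score their text predicts. A small pandas sketch of that aggregation (the column names and numbers are made up for illustration):

```python
# Toy example: per-author "harshness" as the mean gap between the score the
# author actually gave and the score the model predicts from their text.
import pandas as pd

df = pd.DataFrame({
    "author":    ["A", "A", "B", "B"],
    "score":     [6.9, 7.0, 6.9, 7.0],   # scores the authors actually gave
    "predicted": [6.5, 6.6, 7.4, 7.5],   # scores a model predicts from the text
})

# Positive values mean the author's words read harsher than the scores they
# hand out, so larger values indicate a meaner reviewer on average.
harshness = (df["score"] - df["predicted"]).groupby(df["author"]).mean()
print(harshness.sort_values(ascending=False))
```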

