Big Data Design Served With Actionable Feedback
TrendPo believes the future of politics lies in big data, so they want to give politicians the tools to monitor and analyze their social impact through social media, the news, and public sentiment. TrendPo’s first offering was a simple 24-hour report showing the change in the user’s (the politician’s) social media and news "buzz". The report compared the individual’s stats to a calculated benchmark so they could see how they measured up against their peers.
Discovery and Definition
We held a kick-off meeting to define the scope of this specific project and to discuss the content we wanted to include in the report. Why would the user want certain pieces and not others? Which were most useful? Which could we represent in a meaningful way? How could we package the data to separate the signal from the noise?
To answer these questions, we needed to dig into the data to find out exactly what statistics TrendPo could generate. We worked through the statistics, which included the change in Twitter followers, the most-liked Facebook posts, and the most-viewed article that mentioned the politician. We took notes on our trusty whiteboard, linking groups of statistics with the defined objectives.
We discussed the purpose of showing each piece of data and agreed that if a statistic met a predetermined threshold, we would give the politician an actionable item to help them react. For example, if a Facebook post was liked a certain number of times, we would suggest they purchase a Facebook ad to promote the specific topic mentioned in the post.
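As a rough sketch, the threshold rule might look like the following. The stat name, threshold value, and suggestion wording are all made-up placeholders; the actual rules aren't documented here.

```python
# Hypothetical thresholds mapping a statistic to a suggested action.
# Values and wording are placeholders, not TrendPo's real rules.
RULES = {
    "facebook_post_likes": (
        500,  # assumed threshold
        "Consider a Facebook ad promoting this post's topic.",
    ),
}

def suggest_action(stat_name, value):
    """Return a suggestion if the stat crosses its threshold, else None."""
    rule = RULES.get(stat_name)
    if rule and value >= rule[0]:
        return rule[1]
    return None
```

A table of rules like this keeps the thresholds easy to tune as the team learns which suggestions customers actually act on.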
There was an incredible amount of data, and we only had a small window (email) to leave an impression. We decided to generate charts and visual aids to make the data easier to digest. It took a bit of iteration to find the sweet spot between raw data, charts, and visual aids, but I’ll get to that in a moment.
Research and Data-collection
After the kick-off meeting, I held individual meetings with key stakeholders to discuss the importance of each piece of data and to help categorize the content. It was important to see whether any underlying patterns could be pulled to the top and highlighted within the design. Two pieces stood out: the benchmark, and the daily change-rate statistics for each platform (Facebook, Twitter, News, and YouTube).
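To make those two stats concrete, here's a minimal sketch of how they might be computed. The function names, the sample counts, and the benchmark-as-peer-average assumption are all illustrative, not TrendPo's actual implementation.

```python
def daily_change(yesterday, today):
    """Percent change in a count (e.g. Twitter followers) over 24 hours."""
    if yesterday == 0:
        return 0.0
    return (today - yesterday) / yesterday * 100

def benchmark(peer_values):
    """Assumed benchmark: the mean of the peer group's values."""
    return sum(peer_values) / len(peer_values)

# Per-platform daily change for one politician (illustrative counts).
counts = {
    "Twitter":  (10_000, 10_250),
    "Facebook": (8_000, 7_920),
}
changes = {platform: daily_change(a, b)
           for platform, (a, b) in counts.items()}

# Compare the politician's Twitter change to a peer benchmark
# (peer change rates are made-up numbers).
vs_peers = changes["Twitter"] - benchmark([1.2, 0.8, 2.0])
```

Surfacing the stat as "your change vs. the peer benchmark" is what lets one number answer the reader's real question: am I doing better or worse than my peers today?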
Design and Feedback
With the content decided, I began sketching concepts to work out the visual hierarchy and overall layout. The team had decided to confine all of the content to a single 8.5x11” page so it would be easy for the user to print out (they love their hard copies).
Sketching really helped me figure out how to present each piece of content. I tried multiple designs, testing them with others to see whether the content’s main point came across. After playing around for a bit, I started to find a rhythm in how the pieces fit together.
I then hopped into the Sketch app to visualize the wireframes digitally. It’s always better to design with real content (it brings a sense of clarity to the design), so I pulled in raw data and continued refining the design.
We went through this process over and over again; after the fifth iteration we all agreed that the design met every objective. It could now be implemented and tested with current clients.
Once the engineering team built out our test, we found edge cases where the design needed tweaking. I worked quickly with the engineers to resolve these issues, and we were back on track for testing.
Test and Iterate
We tested with a small group of volunteers from our customer base to get their feedback on the usefulness of the report. The feedback was positive and provided further insight into which statistics our customers wanted to see. Using this data, we’ll revise the design in the near future.