One of the craziest things happened this past weekend and it couldn’t have been better timed (I love how that happens)! I’ll share that in a bit, but first, let me share a few thoughts about why this event was ever so fortuitous…
As you know, we’ve been obsessing over the creation of a developer-centric newsfeed and we’ve been iterating on it for the past few months, which you can see through the archive of posts on this blog.
One of the problems we are trying to solve is the sheer volume of data that a technical person encounters and has to make decisions about.
This obviously includes technical data related to their direct line of work (e.g. codebase, pull-requests, bugs, issues, comments, etc.) but it also includes conversations around the product, marketing, the business as a whole, and global messaging from leadership.
And all the above is just related to their current place of employment; let’s not even start with anything external to the organization (blogs, Twitter, news, Facebook, Hacker News, email newsletters, and more)!
As you already know, we are drowning in a sea of data, and the perennial challenge is how to filter out the noise and focus on the signal.
To do this we build our own systems, processes, and mechanisms. We install and experiment with new tools, applications, and technology. We even “roll our own” and create local pieces of software to manage the unmanageable and we are in a constant battle between memory usage and browser tabs!
Clearly, this is a problem that needs solving, as our very work depends on our ability to focus, concentrate, and engage with the right pieces of data while filtering out everything else that provides little tactical or immediate value.
A technical newsfeed, if done well, will enable us to do just that. But, it has to serve the user through intelligent organization so as not to ultimately become just-another-tool-to-check, am I right?
Intelligent grouping is where we will start, and Jeff has already built a lot of this into our current data model. I was trying to come up with a great example of this when, as luck would have it, I encountered one via Twitter this past weekend.
The context is not important and I’ll skip the much longer story, but, essentially I tweeted at a YouTube celebrity (which is very unlike me) and I found myself immediately drowning in data.
More specifically, my personal Twitter feed became completely unmanageable as the only thing that I could see and read on my feed for the first 24 hours were reactions to my tweet:
The reactions to my tweet were so fast and so copious that I could no longer read anyone else’s tweets or content. I would have to scroll for days to get something other than the reactions to the tweet.
But… what Twitter has so wisely done is combine similar events into a single “card” update with the intention of simplifying the stream for the end-user. This intelligent grouping literally saved my stream before I could #RAGEQUIT.
The result of their feed intelligence can be seen here:
You can see that they grouped the “hearts” into one card after certain periods of time, and I could begin to see other reactions and updates from other users (like the one just below the grouping!).
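This kind of grouping can be sketched as a simple time-windowed aggregation: collapse consecutive events of the same kind into a single card. Here is a minimal illustration in Python; the event and card shapes, the window size, and the field names are all hypothetical, not Twitter’s (or our) actual implementation:

```python
from dataclasses import dataclass
from typing import List

WINDOW_SECONDS = 3600  # hypothetical grouping window: one hour


@dataclass
class Event:
    kind: str        # e.g. "like", "retweet", "reply" (assumed event kinds)
    actor: str       # who triggered the event
    timestamp: float # seconds since some epoch


@dataclass
class Card:
    kind: str
    actors: List[str]
    start: float     # timestamp of the first event in the card


def group_events(events: List[Event]) -> List[Card]:
    """Collapse consecutive same-kind events within a time window into one card."""
    cards: List[Card] = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        last = cards[-1] if cards else None
        if last and last.kind == ev.kind and ev.timestamp - last.start <= WINDOW_SECONDS:
            # Same kind of event, close enough in time: fold it into the open card.
            last.actors.append(ev.actor)
        else:
            # Different kind (or too much time has passed): start a new card.
            cards.append(Card(ev.kind, [ev.actor], ev.timestamp))
    return cards
```

With this sketch, a burst of a thousand “like” events renders as one card (“1,000 people liked your tweet”) instead of a thousand feed items, while an interleaved reply still surfaces as its own entry.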
Whew. Crisis averted. Feed saved. User happy and, guess what? I won’t quit the service.
Obviously super-positive feelings for me (the user), and Twitter gets to keep a fairly active user happy, engaged, and still a member of the service! These are all good things that every product designer, product developer, and software team wants their users to feel and experience.
Now, to be sure, the tweet is still getting slammed and it’s still growing by the hour, but Twitter’s intelligence has caught up and my feed is manageable and usable. Twitter has done a fine job of grouping interactions while providing the signal data that I want to see without destroying the very thing that made the product magical.
This is the kind of intelligence we’re building into our technical newsfeed for technically-minded folks and teams, so that real engineering operations can scale.
Twitter (and other such networks and tools) has set the bar high for interaction and user engagement. But it’s not perfect, and we hope to iterate on what we know to be true and refine our approach to signal data as we get more live-fire feedback from our early users.
But the point is this: We must consistently serve our users through intelligent organization so that they can not only use the product well, but love it too. Anything short of this is not worth their time (and ours).
Also published on Medium.