Discover more from Metric by Michael Schofield
The "Sensemakers" Team
Michael here. A lot of you stuck around after I announced last month that I'm taking my newsletter in a more personal, less library-specific direction. So, I'm super happy to see you all.
In October, my team of superfriends and I started a new long-term library design project with an exaggerated discovery phase intended not just to collect a bunch of new user data, but to teach the library to run with the baton when everything's said and done. User experience design doesn't stop, right?
User-centric organizations require systems that constantly ask for, collect, and make sense of feedback. "System" comes off a little formal, but we might be talking about a suggestion box at the reference desk and what we do with those suggestions. It's a system if the process is ongoing and -- except maybe at the end of the fiscal year -- not tied to some specific reason for performing that research.
We have to start somewhere, but we should aspire to a point where we're not performing user research around the lifecycle of a project -- we just do research.
If not for a specific project, though, what the heck do we do with that?
This week, I met for the first time with this library's brand-spanking-new "sensemaking team," a group of about six positioned at the tail end of the data-collection lifecycle, where the raw feedback -- from Facebook comments to user interview transcripts, survey data, audio logs, eyetracking studies, or videos of scenario tests -- is transformed into the atomic unit of a research insight: the nugget.
We adapted the nugget system originally developed at WeWork as our tool for logging and handling user research. (It's the third time I've used this system on a project, and I'm a craaazy evangelist!) But it's the creation of the team itself, and their role, that I think will have the greatest impact, by forcing questions that ultimately optimize the existing data-collection system in the first place.
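If it helps to picture it, here's a rough sketch of a nugget as a structured record. The field names are my own illustration of the general idea -- a discrete observation, a pointer to its raw evidence, and tags for later sensemaking -- not WeWork's exact schema:

```python
from dataclasses import dataclass, field

# Illustrative only: field names are my guess at what a "nugget"
# captures, not the actual WeWork schema.
@dataclass
class Nugget:
    observation: str   # one discrete thing we saw or heard
    evidence: str      # pointer back to the raw source (transcript, timestamp, etc.)
    tags: list[str] = field(default_factory=list)  # themes for sensemaking

nugget = Nugget(
    observation="Patron hesitated at the self-checkout touchscreen",
    evidence="usability-test-video-03, 02:14",
    tags=["wayfinding", "self-checkout"],
)
```

The point of the atomic format is that every insight stays traceable to its evidence, and the tags let a sensemaking team cluster hundreds of these into themes.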
How do we solicit feedback? Which biases are baked into how we ask? Do we do anything with user comments made in passing (e.g., "your bathrooms are really vintage and, if it weren't weird, I think I'd hang out there")? Where does the feedback go? How do we follow up? Is our process transparent?
-- and so on.
Anyway, I just wanted to reach out and tell you a little about what's on my mind. It's not ready for primetime, but I shared slides introducing this sensemaking system in the #laboratory channel of my LibUX Slack. If Slack is your thing, it would be fun to have you there.
I haven't recorded an episode of Metric recently, but my other comedy web design news podcast (that's right!) -- W3 Radio -- is creeping up on 10 episodes. This week's is about how the U.S.S. John McCain crashed because of a UI gaffe, but I really like last week's Halloween Spooktacular episode. We're silly.
Lastly, if anything up there got your gears turning, I'd love to hear your thoughts. If you have Q's, I may have A's.