Last weekend I shook off my Berkeley inertia and took a trip into the city to attend WordCamp.
I particularly enjoyed a talk by Rashmi Sinha about social networks and popularity. There are some problems inherent in basing a site’s navigation on popularity. Sites like Flickr and Digg emphasize browsing by “most viewed”, “most downloaded”, or “most popular” tags. Undiscovered posts, images, and contributions (the Long Tail) cannot rise to the top in this structure, and ultimately become less findable. The hierarchy reinforces itself: early adopters of a social network become overly dominant, and their popularity is difficult to dismantle.
Rashmi presented a few ways to override this self-reinforcing popularity mechanism. On her project SlideShare, they set up other navigation panels, such as “most recently added”, and they restrict the popularity measures to a specific period of time (“most viewed in the past week”).
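The time-window trick is simple to express in code. Here is a minimal sketch of that idea (the function name, the event log format, and all the numbers are hypothetical, not SlideShare’s actual implementation): ranking only counts views inside a recent window, so an item that was popular months ago no longer crowds out newer work.

```python
from datetime import datetime, timedelta

def top_viewed(events, now, window_days=7, limit=3):
    """Rank items by view count within a recent time window only,
    so long-standing favorites don't dominate the list forever."""
    cutoff = now - timedelta(days=window_days)
    counts = {}
    for item_id, ts in events:
        if ts >= cutoff:  # ignore views outside the window
            counts[item_id] = counts.get(item_id, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:limit]

# Hypothetical view log: an early hit vs. two newer uploads.
now = datetime(2007, 5, 1)
events = [
    ("old-hit", datetime(2007, 1, 10)),   # popular long ago
    ("old-hit", datetime(2007, 1, 11)),
    ("new-deck", datetime(2007, 4, 28)),
    ("new-deck", datetime(2007, 4, 29)),
    ("fresh", datetime(2007, 4, 30)),
]
print(top_viewed(events, now))  # ['new-deck', 'fresh'] — old-hit drops out
```

With an all-time “most viewed” list, old-hit would sit at the top indefinitely; with the seven-day window, the newer contributions get a chance to surface.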
Her presentation from the talk is here (on SlideShare, of course).
Bret Victor makes some salient points in this piece — in essence, rather than designing good interactive experiences, we should present information in such a way that the user doesn’t have to interact with it to find things. He proposes redesigns of Amazon and Yahoo! Movies that are very information-dense but also quite usable (and not “interactive”).
The ubiquity of frustrating, unhelpful software interfaces has motivated decades of research into “Human-Computer Interaction.” In this paper, I suggest that the long-standing focus on “interaction” may be misguided. For a majority subset of software, called “information software,” I argue that interactivity is actually a curse for users and a crutch for designers, and users’ goals can be better satisfied through other means.
Information software design can be seen as the design of context-sensitive information graphics. I demonstrate the crucial role of information graphic design, and present three approaches to context-sensitivity, of which interactivity is the last resort. After discussing the cultural changes necessary for these design ideas to take root, I address their implementation. I outline a tool which may allow designers to create data-dependent graphics with no engineering assistance, and also outline a platform which may allow an unprecedented level of implicit context-sharing between independent programs. I conclude by asserting that the principles of information software design will become critical as technology improves.
Although this paper presents a number of concrete design and engineering ideas, the larger intent is to introduce a “unified theory” of information software design, and provide inspiration and direction for progressive designers who suspect that the world of software isn’t as flat as they’ve been told.
I find personas virtually useless when it comes to design, and I very rarely reference them in making design decisions. For me, personas aren’t about design, but that doesn’t mean they’re not incredibly powerful in other ways.
Rather than using them to drive design, she advocates using personas to communicate the user-centered design process to key stakeholders:
Having your clients view user research and testing is incredibly powerful in helping them realise that there is a problem in the way they’ve been approaching things to date (if you’re not encouraging stakeholders to actively participate in observing research and testing you’re missing out on a lot). But to get them to actually understand what user centred design is about – you need personas. … Personas should always be developed collaboratively with key stakeholders – as many as possible.
Personas can be useful in determining the edge cases, but guiding design around that is dangerous:
Personas should define the boundaries for which you will design. It’s a common misconception that personas are about creating a set of ‘typical’ or ‘stereotypical’ users. Much more useful is to use personas which incorporate edge case behaviour or requirements.
…Creating ‘edge case inclusive’ personas and then prioritising personas and their goals is much more useful in helping decide what functionality goes in and what doesn’t.
… If you use the personas to closely guide your design you will end up supporting a series of edge cases. This will invariably mean that your CORE functionality is compromised. That’s bad design.
Personas help to remind us that there is more than one “user” in “user-centered design”. Instead of saying “the user wants to …” we can use personas to explain that “Susan wants to do X, whereas Scott wants to do Y.”
When the Russian chemist Dmitri Mendeleev published the first version of his Periodic Table of the Elements in 1869, he could not have imagined that it would in due time become one of the most outstanding information visualisations, or that many fields would still be using it as a visual metaphor more than a century later.
Despite the good work in classifying more than a hundred different visualisation methods, using the scheme of the periodic table, and its exact shape, to display those methods is more than disputable: the paradigm the periodic table embodies (atomic number, chemical properties, orbitals, etc.) has no parallel in the case of visualisation methods, which invalidates the visual metaphor it intends to be. Stephen Few discusses this point very cleverly on his blog, Visual Business Intelligence, so I will not dwell on it here.
The fact is that mimicking existing paradigms just because they provide a familiar layout adds no insight into what we are looking for: regularities in the methods of visualisation. Trying to map the regularities of the chemical elements onto those of desserts, or of visualisations, is misleading because it hampers finding true regularities. It may serve to transmit knowledge, but it contributes nothing to pattern detection, and even less to knowledge discovery, the outstanding outcomes of Mendeleev’s work.
Building a taxonomy of visualisation methods is not a simple issue, and having an equivalent in Information Visualisation of the in-depth work Mendeleev did for chemistry would be a major advance. In my opinion, we should pursue it by finding the main features of each method, building a new paradigm, and representing the methods in original and meaningful ways in accordance with that paradigm.
Jakob Nielsen writes in his latest column that eBay’s recent earnings increased because of better usability.
eBay reported record profits for Q1. This despite the fact that the number of auction LISTINGS only increased by 2%. However, merchandise SOLD increased by 14%.
In presenting the record numbers, the CEO, Meg Whitman, said that “the company had been benefiting from changes in the user experience that had increased the number of auctions leading to sales” (as quoted in The New York Times, April 19).
This is a great example of the benefits of usability for e-commerce: income comes from multiplying the amount of use with the conversion rate. The more you improve the user experience of finding products, researching them, and buying them, the higher your conversion rate.
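Nielsen’s claim is simple arithmetic: income is roughly traffic times conversion rate (times order size), so lifting conversion raises revenue without buying any extra traffic. A quick illustration, with entirely hypothetical numbers:

```python
def revenue(visitors, conversion_rate, avg_order_value):
    """Income = amount of use * conversion rate * order size."""
    return visitors * conversion_rate * avg_order_value

# Hypothetical figures: same traffic before and after a usability fix.
before = revenue(100_000, 0.02, 50.0)  # 2% of visitors buy
after = revenue(100_000, 0.03, 50.0)   # usability lifts conversion to 3%
print(before, after)  # 100000.0 150000.0
```

A one-point gain in conversion, from 2% to 3%, means 50% more revenue from the very same visitors, which is why a modest usability effort can out-earn a much larger advertising spend.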
Advertising is important, but it only brings customers to the site. A good user experience is what convinces them to stay there. Careful usability testing and a minimum of user annoyances is at least equally important. Usability affects whether the customer makes a purchase or clicks off to another site.
In eBay’s case, user experience is so important for profits that it’s one of the main things the CEO mentions to the financial press in presenting the quarterly results. eBay has a particularly competent user experience department, but smaller companies usually find that a smaller usability effort can increase their financial performance materially. Your first usability test will uncover a gold mine of low-hanging fruit, to mix metaphors.
Good design isn’t just pretty, it’s also good for business.
First, the enthusiasts: following is a video panel discussion among representatives of Second Life, MySpace, Facebook, and LinkedIn (a Commonwealth Club event, via ForaTV). According to them, the value of online social networks is immeasurable. No one blinked when Robin Harper of Second Life admitted that their power users average 84 hours a week online and have absolutely no “First Life”.
And now, the detractors. Since the Kathy Sierra scandal, the Web 2.0 community has started talking about its dark side, albeit hesitantly. Is anonymity really that great? Are we abusing the democratic promise of the Internet by engaging in cowardly flame wars under pseudonyms? Can you really trust information that is unedited and unattributed? Are people spending too much time creating online personas and online friendships, thereby forgetting how to connect as a real person, face to face?
The infinite desire for personal attention is driving the hottest new part of the Internet economy–social networking sites like MySpace, Facebook, MTV Flux, and Bebo. As shrines for the cult of self broadcasting, these sites have become tabula rasas of our individual desires and identities. They claim to be all about “social networking” with others, but in reality they exist so that we can advertise ourselves: everything from our favorite books and movies, to photos from our summer vacations, to “testimonials” praising our more winsome qualities or recapping our latest drunken exploits.
It’s important to remember: just because you have 524 online friends (or colleagues) doesn’t mean they really know you or can vouch for you (or would show up with jumper cables if your car stalled at midnight in the rain).
I got a sneak peek at the Web 2.0 Expo this week by signing up for the free Expo-only pass. It’s fun to see a web event in person, since so much of what’s going on in web culture happens with one person sitting in front of a computer screen. In fact, it was a nice relief to just sit and listen to people talk without looking at a computer screen for a whole day! In part this was forced on me by the very low-fi wireless connection, but it’s still good training to just listen instead of popping open a new Firefox tab every three minutes.
The expo-only ID — the yellow badge announcing that “I didn’t pay to be here” — only allowed access to the “Products and Services” track, which was basically a bunch of pitches that were not interesting to me. So, I opted for the “un-conference” going on in the hallways outside the conference rooms — the Web 2.Open. These sessions were quite interesting and much more intimate than the larger audience-of-hundreds format. In particular I enjoyed Nicole Simon’s talk on a European perspective on Web 2.0, and a roundtable discussion on usability issues led by Chris Cole of Human Factors.
As far as the rest of the conference goes, I heard from other attendees that some of it was quite good. If anyone is interested in what was presented, LukeW has posted a very generous collection of notes from some of the sessions.