Early this month I attended the 2013 IA Summit in Baltimore, MD. Then I came home and promptly became ill (apologies to anyone who I might have infected, though they *say* sinus infections aren’t contagious). It wasn’t a send-me-to-the-hospital type of illness, but it was a drain on my ability to think. I’m glad to have my brain back: hello brain, nice to see you!
I went over my notes today to see what stood out, and my most thorough notes were for Brad Nunnally's presentation, Users in the Mist: Stories from Field Studies. Brad described user studies he did for Fidelity in people's own homes, watching them use products in their natural habitat.
Some key pieces of advice in the talk:
- Send your information ahead with a photo attached so people will recognize you when you appear on their doorstep.
- Take the water offered (observe hospitality customs).
- Watch out for guns (and for your own personal safety).
Field studies are a type of user research I would love to do more of in the future. When the MTA installed its new MetroCard machines, I had a class assignment to observe people using them at the stations. It was much less nerve-wracking than I feared: as long as I had my pen and notebook and explained what I was doing, people were very accepting. So accepting, in fact, that they started asking me for assistance!
When I was little, I drew constantly on everything. Scrap paper, inside the covers of coloring books, notebooks, you name it. At some point my level of drawing activity dwindled to doodling in the margins of class notes and there it stayed for a while.
What really got me drawing again was taking a Graphic Design class at the School of Visual Arts. One of the exercises we learned was creating thumbnails: tiny sketches showing multiple layouts side by side for a project. I found it difficult at first because, out of all the layouts produced, some were bound to be bad, some mediocre, and hopefully a few could be developed further. My perfectionist tendencies were really challenged. At that stage, though, there was no need to be a perfectionist, since the stakes were so low.
Since then, I’ve read a lot of articles about sketching for User Experience and have been doing it more and more. My current favorite notebook is by Muji and has dot grid paper (I hate ruled paper for some reason). I also like using printed storyboard worksheets, especially the Storyboard with Notes paper on Konigi.com.
A few months ago, I got an assignment to make wireframes for an online contest application that had previously been run as a print campaign. The client had made some assumptions about how the application should work that could cause real problems for users. I drew a cartoon-style storyboard to show the account manager how the process would play out under those assumptions, with possible “pain points” at each step. Though not all of my recommendations were followed, the storyboard let me communicate where changes were needed. A sample panel is below:
So I won’t be creating the next great graphic novel, but it was a useful tool anyway. Plus, well, it was fun to do!
While taking Music Technology classes at NYU back in the late ’90s, I became curious about the use of audio in the computer user interface. Why wasn’t it used for more than alert beeps? At the time, the GUI (Graphical User Interface) was the big thing in computing, but I remembered my suitemate in college and wondered how blind people could use a GUI.
I found some research from Xerox PARC on audio interfaces. My slightly fuzzy memory is that they came up with the concept of rooms – audio rooms – for navigating a computer interface spatially. I also found the book Auditory User Interfaces by T. V. Raman, a blind computer scientist who had created his own auditory user interface using Emacs. The description of his interface sounded nothing like the theoretical ones I had been reading about. Raman needed to be able to read technical documents, including complex mathematical equations, and screen-reading software couldn’t handle them. So first he wrote a program called AsTeR, and then, when he wanted the same functions for email, web surfing, and other computer tasks, he created Emacspeak. With Emacspeak, audio cues and formatting gave context to words he heard read at a much higher speed than a sighted person would be comfortable processing. I realized there was an important difference between designing for a hypothetical blind person (a blind version of me?) and designing for a real live person who navigated the world without sight.
My reading (and acronym) list grew: HCI (Human-Computer Interaction), Alan Cooper, Computers as Theatre, SIGCHI (Special Interest Group on Computer-Human Interaction), Don Norman’s The Design of Everyday Things, UX, User and Task Analysis, so many ideas.
Then I went to work coding webpages and found that most of the people around me were unaware of these ideas and of how they could make websites better. User Experience wasn’t the buzzword it is today, and many agencies were just starting to create positions for Information Architects. After the dot-com crash, I was able to use my knowledge to improve processes at the Red Cross September 11 Program, but it would be several more years before I would move back to working on the front-end/design side of the Internet.