8B. Interface Theory and Practice

(JDrucker 9/2013)

An interface is a set of cognitive cues; it is not a set of pictures of things inside the computer, nor direct access to computation. Interface, by definition, is an in-between space, a space of communication and exchange, a place where two worlds, entities, or systems meet. Because interface is so familiar to us, we forget that the way it functions is built on metaphors. Take the basic metaphors of “windows” and “desktop” and think about their implications. One suggests transparency, a “looking through” the screen to the “contents” of the computer. The other suggests a workspace, an environment that replicates the analogue world of tasks. But of course, interfaces have many other functions as well that fit neither metaphor, such as entertainment, viewing, painting and designing, playing games, exploring virtual worlds, and editing film and/or music.

Interface conventions have solidified very quickly. As with all conventions, these hide assumptions within their format and structure and make it hard to defamiliarize the ways our thinking is constrained by the interfaces we use. When Doug Engelbart was first working on the design of the mouse, he was also considering foot pedals, helmets, and other embodied aspects of experience as potential elements of the interface design. Why didn’t these catch on? Or will they? Google Glass is a new innovation in interface, as are various augmented reality applications for handheld devices. What happens to interface when it moves off the screen and becomes a layer of perceived reality? How will digital interfaces differ from those of the analogue world, such as dashboards and control panels?

Exercise: What are the major milestones in the development of interface design? Examine flight simulators, the switch panels on mainframe computers, punch cards, and early keyboards. Which features have been preserved and extended, and which have become obsolete? These are merely the physical/tactile features of the interface.

Compare the approach here: http://en.wikipedia.org/wiki/History_of_the_graphical_user_interface

with the approach here: http://www.catb.org/esr/writings/taouu/html/ch02.html

In the second case, the division of one period of interface from another has to do with machine functions as well as user experience. How else do interfaces get organized and distinguished from each other?

Exercise: What are the basic features of a browser interface? How do these relate to those of a desktop environment? What essential connections and continuities exist to link these spaces?

To reiterate, an interface is NOT a picture of what is “inside” the computer. Nor is it an image of the way the computer works or processes information or data. In fact, it is a screen and surface that often makes such processing invisible, difficult to find or understand. It is an obfuscating environment as much as it is a facilitating one. Can you think of examples of the way this assertion holds true? As the GUI developed, the challenge became clear: making icons that provide cognitive cues, targets on which to perform actions that in turn produce responses within the information architecture. If you were posed the challenge of creating a set of icons for a software project in a specialized domain, what would these be and what would they embody? The idea that images of objects allow us to perform activities in the digital environment that mimic those in the analogue environment requires engineering and imagination. Onscreen, we “empty” a trashcan by clicking on it, an action that would have no effect in the analogue world, though we follow this logic without difficulty, extending what we have been trained to do on the computer. Dragging and dropping are standard moves in an interface, but not really in an analogue world. If we pursue this line of reasoning, we find that in fact the relation between the interface and the physical world is not one of alignment, but of shifted expectations that train us to behave according to protocols that are relatively efficient, cognitively as well as computationally.

Exercise: The infamous failure of “Bob,” the Windows character, and its living-room interface provides a useful study in how too literal an imitation of physical-world actions and environments does not work in certain digital environments, while first-person games are arguments on the other side of this observation. Why?

Exercise: Matthew Kirschenbaum makes the point that the interface is not a computational engine BUT a space of representation. Steven Johnson, the science writer, is quoted in the following paragraph. Use his observations to discuss the NY Times front page and the Google search engine:

“By ‘information-space,’ Johnson means the abrupt transformation of the screen from a simple and subordinate output device to a bounded representational system possessed of its own ontological integrity and legitimacy, a transformation that depends partly on the heightened visual acuity a graphical interface demands, but ultimately on the combined concepts of interactivity and direct manipulation.”

From the point of view of digital humanities projects, one of the challenges is neatly summarized in the graphic put together by Jesse James Garrett titled “Elements of the User Experience.” Garrett’s argument is that one may use an interface to show the design of knowledge/information in a project or site, or to organize the user experience around a set of actions to be taken with or on the site, but not both. So when you start thinking about your own projects, and the elaborate organization that is involved in their structure and design from the point of view of modelling intellectual content, you know that the investment you have made in that structure is something you want to show in the interface (e.g., the information and files in your history of African Americans in baseball project are organized by players, teams, periods, and legal landmarks). But when you want to offer a user a way into the materials, you have to decide whether you are giving them a list and an index, or a way to search, browse, view, read, listen, etc. The first approach shows the knowledge model. The second models the user experience. We tend to combine the two, mixing information and activities. See https://wiki.bath.ac.uk/display/webservices/Shearing+layers

Exercise: Analyze Garrett’s diagram, then relate it to examples across a number of digital humanities projects such as Perseus, Whitman, Orbis, Old Bailey, Mapping the Republic of Letters, Animal City, Codex Sinaiticus, Digital Karnak, Digital Roman Forum, Civil War Washington, and the Encyclopedia of Chicago.

Exercise: Ben Shneiderman is one of the major figures in the history of interface and information design. He has Eight Golden Rules of interface design.

  • What are the rules? What assumptions do they embody?
  • For what kinds of information do they work or not work?


An interface can be a model of intellectual contents or a set of instructions for use. Interface is always an argument, and combines presentation (form/format), representation (contents), navigation (wayfinding), orientation (location/breadcrumbs), and connections to the network (links and social media).

Interfaces are often built on metaphors of windows or desktops, but they also contain assumptions about users. The difference between a consumer and a participant is modeled in the interface design.

Required reading for 9A

Study questions for 9A

  1. How are Omeka and/or WordPress set up to address issues of accessibility? What modifications to your project design would you make based on the recommendations in Burgstahler’s or GNOME’s presentations of fundamental considerations?
  2. How are cross-cultural issues accounted for in your designs?
  3. What is the “narrative” aspect of an interface? Where is it embedded in the design?

Copyright © 2014 - All Rights Reserved