We had a day-long discussion of the research factors surrounding distributed display environments. We also talked about what we need to do collectively to advance the field and bring it to the attention of the broader HCI research community.
Next Steps
The following were suggested as possible next steps for advancing research on distributed display environments.
- Develop benchmark tasks - If we develop as a group a set of tasks that can be used in the evaluation of distributed display environments and interfaces developed for them, then we might be able to more easily compare findings.
- Build common terminology - If researchers in this area use a common set of terms for different aspects of distributed display environments, we can make our findings more accessible to those not familiar with the area but who take an interest. This step is complementary to the step above as it will further help make findings among different systems more comparable.
- Special Journal Issue or Paper - By publishing a paper about our work and our workshop, or by developing a special issue of a journal devoted to the workshop topic, we might be able to more easily make others aware of the important research being conducted in this area.
- Making tools and interfaces available - There are two aspects to this step. The first is to encourage people to build toolkits that can help expedite the process of developing interfaces and applications for DDEs. The other is to make the developed interfaces themselves easily available so that others can iterate and expand on the ideas more quickly.
- Build a comprehensive reference list - Certainly the people involved in this workshop are not the only ones conducting research in this area. Furthermore, findings from disciplines other than computer science may be very beneficial to system and interface development. A solid reference list thus acts as a valuable community resource.
Workshop Participants and Papers
We have listed here (in alphabetical order by first author) the papers involved in the workshop. Authors who were unable to attend the workshop are italicized. Workshop organizers were Duke Hutchings, John Stasko, and Mary Czerwinski.
[ pdf ] Mark Ashdown, Yoichi Sato. Attentive Interfaces for Multiple Monitors.
[ pdf ] Brian P. Bailey. A Distributed Display System for Interactive Sketching.
[ pdf ] Blaine Bell, Steven Feiner. View Management for Distributed Display Environments.
[ pdf ] Jacob T. Biehl, Brian P. Bailey. Interfaces for Managing Information in Distributed Display Environments.
[ pdf ] Kori M. Inkpen, Regan L. Mandryk. Multi-Display Environments for Co-located Collaboration.
[ pdf ] Gerd Kortuem, Christian Kray. HCI issues of dispersed public displays.
[ pdf ] Benoit Mansoux, Laurence Nigay. Distributed Display Environments in Computer-Assisted Surgery systems.
[ pdf ] Chia Shen, Kathy Ryall, Katherine Everitt. Facets of Distributed Display Environments.
[ pdf ] Wolfgang Stuerzlinger. MULTI: Multi-User Laser Table Interface.
Research Factors
During the workshop we discussed a very large number of factors that affect research in the area of distributed display environments. They are listed below, somewhat organized into higher-level categories, though the discussion was not necessarily this organized. Requests for changes (additions, deletions, expansions, etc.) should be made to Duke Hutchings: hutch@cc.gatech.edu.
Physical & logical set-up
To what degree are the following display types used?
- Virtually Contiguous or Separated?
- Focus & context?
- Alternate presentations
- Duplication - i.e. multiple displays show the same information
- Augmentation (AR) and point-of-view
Spatial Arrangement
- Proximity
- Angle
- # of Displays
- Orientation
- Spatial contiguity
- Focus v. periphery
- Miniature views of other displays for personal use
- Alternative semantic views of other displays for personal use
- Replication and duplication
- Independence of displays
- Observability of the relationship of the displays (physical and logical)
- Distance of displays from one another
- Dynamicity - how easy is it to change physical or logical setup?
Input
- How well can we leverage existing taxonomies? That stated...
- Laser
- TUIs (how much interaction is embedded in the device)
- Direct
- Remote
- Number
- Absolute v. relative
- Attentional direction required
- 1 v. 2-handed
- Multimodality
- Visibility of current focus/cursor
Other considerations
- Presence and degree of occlusion by multiple users
- Are displays situated in public spaces or in private/owned spaces?
- Display sizes
- Setting
- environmental constraints - kitchen v. office v. surgery
- role - primary v. secondary... how does paper fit in?
- Are the displays input and output devices? Do they allow input indirectly? Or do they only show information?
Aspects of Collaboration
- Familiarity with each other
- Cohesion
- Roles
- Shared goals
- Co-location (physical, temporal, mobile or not)
- Simultaneous use v. sequential tasks
- Point of view w.r.t. others in group
- Each individual's mental model
Aspects of evaluation
High-level questions and observations
- To what degree does event logging reflect the nature of interaction in DDEs?
- How well can low-fidelity prototypes be leveraged to inform the design of higher-fidelity prototypes?
- Do users vary widely in their experience with DDEs, and will this variability have a harmful effect on interpreting experimental results?
- It is crucial to differentiate between "pixel space" (the number of pixels available for interaction) and "physical space" (the physical extent of the DDE) when talking about "large displays."
Evaluation Metrics
- Completion time
- Error rates
- Satisfaction and Perception
- Learnability
- Retention
- "Quality" of work other than error rate
- Head turns & footsteps (are more movements necessarily bad?)
- Degree of mobility
- Does the interface or system expand task domain?
- Is attention divided? If so, is it handled properly?
- Usability in face of logical configuration changes
- How well is privacy supported (esp. for more public displays)?
- Enhances shared understanding in collaborative task
Task aspects
- Scale
- Length of time
- Frequency of occurrence
- Complexity
- Number of people involved in the task and interaction between them
- Domain-specificity
- Type/Style: Conceptual v. detail; Content creation v. sharing, etc.
- Outcome: joint/group or personal
- Multiple tasks occurring simultaneously
- Play v. productivity
- Amount of system or personal interaction
- Visualization v. heavy manipulation
- Real v. digital representation
- Focus v. peripheral/ambient
- Goal orientation (how concretely stated can the goal be?)
Other considerations
- "What theory from other communities can help inform evaluation?"
- Compatibility of "my system" to "yours"
- Mechanics of collaboration
- Who's looking where when?
- Frequency and type of transitions among displays
- Degree of adaptation
- Degree of personalization
- Degree of spatial arrangement control (How necessary?, What's the effect?)
- Degree of coupling
- "Is the effect worth the cost?"
- Creativity theory as it relates to design (more pixels gives more room for ideas)
Pictures of Posters
[ jpeg ] Evaluation
[ jpeg ] More evaluation
[ jpeg ] Physical and Logical Setup
[ jpeg ] More Physical and Logical Setup
[ jpeg ] People
[ jpeg ] Tasks