Thursday, June 19, 2008

PDAs and Shared Public Displays: Making Personal Information Public, and Public Information Personal

Saul Greenberg, Michael Boyle and Jason Laberge
Department of Computer Science and Department of Psychology, University of Calgary, Calgary, Canada

This research is mainly interested in how people publicize their personal information and how they take it back again.

It considers two new technologies (1999) that help with CSCW:
  • Personal Digital Assistants (PDAs)
  • Single Display Groupware (SDG)
The whole idea is to use these devices as appliances, based on Norman's notion of Information Appliances:
...specialising in information: knowledge, facts, graphics, images, video or sound. An information appliance is designed to perform a specific activity. A distinguishing feature of information appliances is the ability to share information among themselves.

It distinguishes between personal and public artefacts: things created and manipulated by one individual versus things created and maintained as a result of group work.

There are two issues with personal notes that become public:
- the content itself
- the mobility of the content

Mobile computing can be used for the following purposes
  1. Mobile devices can serve as a means for people to augment real-time personal communications
  2. The mobile device lets people download information, modify it, and upload it
  3. The mobile device lets people gather personal information in the field which is then uploaded into a commercial database
  4. The mobile person can be given access to one's workstation environment
  5. Some mobile systems synchronize personal information across devices
  6. More powerful are synchronization systems that let people synchronize both personal and public information across devices
  7. Techniques for information sharing across devices delegate the notion of public versus private into low-level interaction techniques.
They developed a system called SharedNotes to support interaction for personal work, moving from personal to public, public arena, and between meetings.

Conclusions:
  • The PDA should be considered an equal rather than subordinate partner
  • Each device should be treated as a different entity enabling different but complementary acts. (This is in contrast to what Gostner et al. tried to achieve in their study; their study got this badly wrong, as they wanted people to consider the combination of the screen and the device as a single entity.)
  • Based on their observations and findings, it was easier to use the keyboard to interact directly with the public screen than to deal with the PDA, as entering text on the PDA is hard. This again goes against the initial assumption by Gostner et al.!
  • They used the PDA as the main place to create notes and the display as the main place to show them, thus assigning separate personalities to the different entities in their domain.
  • They don't allow people to take their notes back once posted on the public display, thus taking away the original author's right to edit or remove a post. However, as their participants also mentioned, your thoughts remain your thoughts even after you discuss them with other people: you should keep control over them, and everybody should know where they came from and who owns them. This is easy to manage if you bind a username to each note, as you can then handle authentication and synchronization of the notes that appear to others.
  • Automatic publication denies users the opportunity to express personal relevancy, and discourages them from using the tool (VERY IMPORTANT)
  • Each device should be designed to maximize its strengths and capabilities. Again, what Gostner et al. have done is a complete rejection of this claim.
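The ownership point above can be sketched as a small note store that binds a username to each note at creation time, so only the original author can retract a published note. This is a minimal illustrative sketch, not the SharedNotes implementation; all names are assumptions:

```python
# Sketch: binding a username to each note so the author keeps control
# over publishing and retracting it after it becomes public.
# Illustrative only -- not the SharedNotes implementation.

from dataclasses import dataclass


@dataclass
class Note:
    author: str           # username bound to the note at creation time
    text: str
    public: bool = False  # whether the note is shown on the shared display


class NoteStore:
    def __init__(self):
        self._notes = {}
        self._next_id = 0

    def create(self, author, text):
        note_id = self._next_id
        self._notes[note_id] = Note(author, text)
        self._next_id += 1
        return note_id

    def publish(self, note_id):
        self._notes[note_id].public = True

    def retract(self, note_id, user):
        # Only the bound author may take a public note back.
        note = self._notes[note_id]
        if note.author != user:
            return False
        note.public = False
        return True

    def public_notes(self):
        # What the shared display would render: author plus text.
        return [(n.author, n.text) for n in self._notes.values() if n.public]
```

With a store like this, a note posted by one user cannot be removed or altered by another, which addresses the participants' concern that published thoughts stay attributable to, and controlled by, their author.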

Wednesday, June 18, 2008

Reading with mobile phone and large display

Roswitha Gostner, Hans Gellersen, Chris Kray, Corina Sas

The paper tries to identify whether interaction with two devices (mobile phone and screen) is preferred over interaction with the phone alone or the screen alone. The authors introduce three main hypotheses to investigate:
  1. completion time for SD+LD is faster than for SD only
  2. completion time for SD+LD is faster than for LD only
  3. SD+LD will be considered the easiest one for use by users
Dependent variables
  • various user responses to the technology
Independent variables
  • device combinations
I think some of the design issues affect what they want to measure, so the device combination is not the only thing changing in their methodology. The method of interaction with the system changes, the keyboard size and shape change, and the way content is presented to users changes as well; these things definitely affect users' perception while interacting with the large screen. Considering device combination as the only independent variable seems wrong to me!

The authors describe the design and the apparatus (the physical specification of their environment), describe the procedure and the participants, and finally present the results.

Typing using the touch screen was not more difficult than using the phone! The mobile phone was ranked second to the large screen in terms of ease of use. This is interesting: it seems participants typed faster using the screen, but then considered the mobile phone easier to use than the touch screen. Still, the combination was considered faster.

The participants found working between two screens confusing, looking at them as two separate entities rather than one. I think in reality they are two separate entities: we don't treat the mobile phone as part of what we do on one computer, but rather as an assistive secondary tool for the task of typing!

Small meets Large

Alan Dix, Jan 2005, Lancaster University. Internal Document

Some potential use cases for combining SD/LD:
  • SD to perform large display selections
  • SD to scroll, spin, or move the objects in the large display
  • navigate menus on SD to control LD content
  • use LD selection to control LD content
  • move content locus ... fluid exchange of information between SD and LD
  • move interaction focus
We should separate collaborative work from large displays, as an LD is a combination of individual, group, and community interactions all competing for the same screen real estate. SDs may offer opportunities to resolve these conflicts.
  • The location of the screen
  • The size of the screen
  • The angle: affects the readability
Cases where small devices help with controlling the content
  • controlling pace of input: navigation is performed on the small device, changing content on the large display without visible navigation feedback. This undermines bystanders, who need that feedback to realize an interaction is happening, and prevents them from joining in.
  • monitoring pace of delivery: The information about when items of interest are coming. This is a very interesting way of approaching users. Indicating when they should expect to receive information on the large public display.
  • escaping pace of output: changes on the display can be recorded and watched later, by replaying or through a web portal or the like. How is this useful?
Integration of SDs and LDs
  • shared services and information: services used by SD that affect the content of LD but indirectly ... (like voting systems).
  • incidental links: doing because of seeing
  • shared reference: a code to be taken from the public display and used later on by the phone so that it can be used for some purpose.
  • SD as input device: to use the phone as a mouse or something similar
  • uniform interaction environment: the same screen appears on both devices
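The "shared reference" idea above can be sketched as a tiny service that issues short codes for content shown on the large display, which a phone can redeem later to reach the same content. The class, code format, and names are illustrative assumptions, not from the document:

```python
# Sketch of a shared-reference service: the large display shows a short
# code next to a piece of content; a phone submits that code later to
# retrieve the same content. All names here are illustrative assumptions.

import secrets
import string
from typing import Optional

class SharedReferenceService:
    CODE_ALPHABET = string.ascii_uppercase + string.digits

    def __init__(self):
        self._refs = {}  # code -> content identifier

    def issue_code(self, content_id: str, length: int = 6) -> str:
        # The display calls this and renders the returned code on screen.
        code = "".join(secrets.choice(self.CODE_ALPHABET) for _ in range(length))
        self._refs[code] = content_id
        return code

    def redeem(self, code: str) -> Optional[str]:
        # The phone submits the code it saw; None means unknown code.
        return self._refs.get(code)
```

A real deployment would presumably add expiry and collision checks; the point is only that a short human-readable token lets the public display and the private device refer to the same content without any direct pairing.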
Interaction mechanisms
  1. Interaction method and binding challenges
  2. Knowing what you can do, when and how
  3. building suitable interaction based on the capabilities of the device
    1. self description based on the capability ontology of the device
    2. plug n play connection of the SD to the LD
    3. content negotiation (Need to know the possibility of interaction)
      1. flashing cameras
      2. combined device capabilities
      3. where the information is supported by both sides
      4. How this can be brought into the attention of the users
Situated displays
  1. multiple interaction with the same data
    1. where is the locus of interaction and feedback?
    2. Fixed bindings
      1. input devices
      2. screens
      3. people
    3. rendering for different devices (not a good topic to investigate)
    4. How to dynamically change content between the small and the large display?
Authentication, Security, and Privacy
  1. mutual need for trust between the (user+SD+service) & (public display)
    1. it is the user's job to establish trust in the device
    2. Which displays are we connected to?
    3. Allowing access only to the trusted services
    4. Defining policies about what content should be displayed on the displays
A series of defined and determined scenarios:
  • Use of glyphs for attachment to display, selection, and movement, resizing, etc.
    • The glyphs should be different in shape but the size of the glyph doesn't really matter
    • glyph drag uses its location within the camera field rather than sweep line
  • Use of sweep or joystick to deliver abstract interaction
    • Using the joystick to draw things using a brush spray on a bill board
  • Use of sweep to naturally control virtual navigation
    • Using the cellphone to move in the sky and interact with the stars
  • Use of SDs to give local interfaces combined with large screen
    • The individual heads up can be shown on the screen of the mobile device
    • The large display can show a bird's-eye view of the whole status of the game
  • Use of SD to give local navigation for shared large screen
    • To select something to see on the large display
    • Showing a queuing timer on the SD to show when the content is going to be provided
      • to smooth the flow of information
      • Hiding who has chosen to see particular content, preventing social embarrassment.
  • Snap it later (use of SD to escape from fixed temporal flow of LD)
    • He can control the flow of information shown on the display using his cellphone too
    • He can take the information with him and reuse it later to access the same information over the internet.
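The queuing-timer scenario above can be sketched as a content queue where each requester gets a personal wait estimate on their own small device, while the large display dequeues only the content and never reveals who asked for it. The fixed slot length and all names are assumptions for illustration:

```python
# Sketch of the queued-selection scenario: requests from small devices
# enter a queue, each requester sees a personal countdown on their own
# device, and the large display shows only the content, never the
# requester's identity. Slot length and names are assumed for illustration.

from collections import deque

SLOT_SECONDS = 10  # assumed fixed time each item is shown on the LD

class ContentQueue:
    def __init__(self):
        self._queue = deque()  # (requester, content); requester stays private

    def request(self, requester: str, content: str) -> int:
        # Returns seconds until this item reaches the large display,
        # to be shown as a timer on the requester's small device only.
        self._queue.append((requester, content))
        return (len(self._queue) - 1) * SLOT_SECONDS

    def next_for_large_display(self):
        # The large display pops only the content; the requester's
        # identity never leaves the queue, avoiding social embarrassment.
        if not self._queue:
            return None
        _requester, content = self._queue.popleft()
        return content
```

The design choice here mirrors the scenario: pacing information is personal (it goes to the small display), while the shared output stream stays anonymous.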

Enticing People to Interact with Large Public Displays in Public Spaces

Brignull, H., & Rogers, Y. (2003). Enticing people to interact with large public displays in public spaces. Proceedings of INTERACT'03 (pp. 17–24). Zurich, Switzerland, September 1-5, 2003.

How to attract users?
  • novelty and ambiguity
People around public displays carry out other activities.

Goals of the research
  1. the flow of people around public displays
  2. the level and type of interaction
  3. the transition that occurs between types of interaction
  4. factors that cause social awkwardness and embarrassment
Honey pot effect: people standing near the system attract others to interact with it through their visible facial and behavioural expressions.

The bottlenecks in the system
  • belief in how interesting the system is
  • Perception of what it is, how to use it, how long it takes to use it
  • Understanding its social standing and required etiquette
  • Knowledge of the social system
  • The difficulty of picking the system up and using it
Broadcasting the results to the public may result in public shame for the people involved in the interaction. What about Bomberman? Isn't it going to result in public shame?

Different classifications
  1. Peripheral awareness activities
  2. Focal awareness activities
  3. Direct interaction activities
Especially for moving from the first category to the second, people have to be motivated, and the intentions and benefits of the public display should become clear to them.

motivation -> stimulation
Placing the display in a proper vicinity->

Public Displays and Private Devices: a design space analysis

Alan Dix and Corina Sas, Public displays and private devices: A design space analysis, Proceedings of the SIGCHI conference on Human factors in computing systems (CHI 2008).

Interaction device uses
  • selection or pointing
  • text input
  • personal memory/storage
  • personal identification
  • display identification
  • content identification
  • bespoke sensing
  • display/interaction surface
People in an urban theatre
  • performers
  • participants (witting & unwitting)
  • bystanders (witting & unwitting)
  • passers-by
The uninteresting parts of interaction can be offloaded to the mobile device during the period of interaction.

Conflicts
  • content
    • conflict between the use of the screen for content or interactive feedback
    • conflict between different users wanting specific content
    • conflict between the particular requirements of an individual and maintaining a content stream that is appealing for the bystanders
  • pace: The pace of the user is different from that of the system as s/he is not the only one controlling the system.
    • Not having the things right when they want it
    • the flow of information cannot be altered by any particular user
Spatial context
  • fully public
  • semi-public
  • semi-private
  • bound to the context
    • no coupling
    • weak coupling
    • close coupling
    • dynamic coupling
Device interactions
  • alternative interface: Content is shared across the displays
  • secondary interface: Using SMS to update the display (e.g.)
  • coherent interface: single interaction that involves both screens
    • The displays are used simultaneously