Friday 6 January 2017

HCI QUIZ AND ASSIGNMENT 4 DEADLINE 9/1/2017


Assignment # 4 
SOLUTION
ANS 1:

There are many usability principles that can be brought to bear on an examination of
scrolling mechanisms. For example:

Observability Scrolling is used in the first place because there is too much
information to present all at once. Providing a means of viewing document contents
without changing those contents increases the observability of the system. Scrollbars
also increase observability because they indicate the wider context of the
information currently visible, typically by showing where the window of
information fits within the whole document. However, observability does not
address the particular design options put forth here.


Predictability The value of a scrolling mechanism lies in the user being able to know
where a particular scrolling action will lead in the document. The arrows on
the scrollbar help the user predict the effect of the scrolling operation. If an
arrow points up, the question is whether that indicates the direction the window is
being moved (the first case) or the direction the actual text would have to move (the
second case). The empirical question here is: with which object do users associate the
arrow – the text or the text window? The arrow of the scrollbar is more closely
connected to the boundary of the text window, so the more usual interpretation
would be to have it indicate the direction of the window's movement.
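The two interpretations can be sketched in code. This is a minimal illustration (not from the original exercise); the function name, the `offset` representation (index of the first visible line) and the `arrow_moves` parameter are assumptions made for the example.

```python
def scroll_up(offset, step=1, arrow_moves="window"):
    """Return the new offset after pressing the scrollbar's up arrow.

    arrow_moves="window": the arrow shows where the viewing window moves,
    so pressing up moves the window toward the start of the document.
    arrow_moves="text": the arrow shows where the text itself moves, so
    pressing up slides the text upward, revealing later content.
    """
    if arrow_moves == "window":
        return max(0, offset - step)   # window moves up -> earlier lines shown
    else:
        return offset + step           # text moves up -> later lines shown

print(scroll_up(10, arrow_moves="window"))  # 9
print(scroll_up(10, arrow_moves="text"))    # 11
```

The same arrow press yields opposite movements of the viewport, which is exactly the ambiguity the empirical question above is asking about.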

Synthesizability You might think that it does not matter which object the user associates
to the arrow. He will just have to learn the mapping and live with it. In this
case, how easy is it to learn the mapping, that is can the user synthesize the meaning
of the scrolling actions from changes made at the display? Usually, the movement
of a box within the scrollbar itself will indicate the result of a scrolling operation.

Familiarity/guessability It would be an interesting experiment to see whether
there was a difference in the performance of new users for the different scrolling
mechanisms. This might be the subject of a more extended exercise.

Task conformance There are some implementation limitations for these scrolling
mechanisms (see below). In light of these limitations, does the particular scrolling
task prefer one over the other? In considering this principle, we need to know what
kinds of scrolling activity will be necessary. Is the document a long text that will be
browsed from end to end, or is it possibly a map or a picture which is only slightly
larger than the actual screen so scrolling will only be done in small increments?

Some implementation considerations:
- What scroll mechanisms does a toolkit provide? Is it easy to access the two options
discussed above within the same toolkit?
- In the case of the second scrolling option, are there enough buttons on the mouse to
allow this operation without interfering with other important mouse operations,
such as arbitrarily moving the insertion point, selecting a portion of text or selecting
a graphical item?
- In the second option, the user places the mouse on a specific location within the
window, and gestures to dictate the movement of the underlying document.
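The gesture-based option in the last point can be sketched as follows. This is a hypothetical illustration: the function name, the pixels-per-line constant and the clamping behaviour are all assumptions, not part of the original solution.

```python
LINE_HEIGHT = 12  # assumed pixels per text line

def drag_scroll(offset, start_y, current_y, doc_lines, visible_lines):
    """Move the document with the pointer: dragging downward pulls the
    text down with the mouse, revealing earlier lines of the document."""
    delta_lines = (current_y - start_y) // LINE_HEIGHT
    new_offset = offset - delta_lines            # text follows the pointer
    # keep the viewport inside the document
    return max(0, min(new_offset, doc_lines - visible_lines))

# Dragging 36 px downward from offset 50 reveals 3 earlier lines:
print(drag_scroll(50, start_y=100, current_y=136,
                  doc_lines=200, visible_lines=40))  # 47
```

Note that this gesture consumes a mouse button for the duration of the drag, which is precisely why the second bullet asks whether enough buttons remain for text selection and moving the insertion point.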

ANS 2:
The user control guideline states that, 'The user, not the computer, initiates and controls all actions.' In the case of dialogue boxes, this guideline is clearly contradicted. A dialogue box can be used to indicate when an error occurs in the system. Once this error has been detected and presented to the user in the dialogue box, the only action the system allows the user is to acknowledge the error and dismiss the dialogue box. The system preempts the user dialogue, with good reason: the preemptive nature of the dialogue box ensures that the user actually notices that there was an error. Presumably, the only errors produced in such an intrusive manner are ones which the user must know about before proceeding, so the preemption is warranted.

But sometimes dialogue boxes are not used to indicate errors, and they still prevent the user from performing actions they might otherwise wish to perform. The dialogue box might be asking the user to fill in some information to specify parameters for a command. If the user does not know what to provide, then they are stuck. Often the user can find out the information by browsing through some other part of the system, but in order to do that they must exit the dialogue box (and forfeit any settings they have already entered), find the missing information and begin again. This kind of preemption is not desirable, and it is probably the kind the user control guideline is intended to prevent, but the guideline is not always applied.
It is possible to use notification-based code to produce a pre-emptive interface dialog such as a modal dialog
box, but it is much more difficult than with an event-loop-based system. Similarly, it is possible to write
event-loop-based code which is not pre-emptive, but again this is difficult. If you are not careful,
systems built using notification-based code will have lots of non-modal dialog boxes, and vice versa.
Each programming paradigm has a grain, a tendency to push you towards certain kinds of interface.
If you know that the interface you require fits more closely to one paradigm or another then it is worth
selecting the programming paradigm to make your life easier! Often, however, you do not have a
choice. In this case you have to be very careful to decide what kind of interface dialog you want before
you (or someone else) start coding. Where the desired interface fits the grain of the paradigm you
don’t have to worry. Where the desired behavior runs against the grain you must be careful, both in
coding and testing as these are the areas where things will go wrong.
Of course, if you don’t explicitly decide what behavior you want or you specify it unclearly, then it
is likely that the resulting system will simply run with the grain, whether or not that makes a good
interface.
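The "grain" of the two paradigms can be illustrated with a toy sketch. No real GUI toolkit is used here; the event names, the `deque`-based event queue and the handler table are all assumptions invented for the example.

```python
from collections import deque

# Event-loop style: the program reads events itself, so a pre-emptive
# (modal) dialog is with the grain -- just loop until acknowledgement.
def run_modal(events):
    swallowed = []
    while events:
        ev = events.popleft()
        if ev == "click:ok":
            return swallowed        # dialog dismissed
        swallowed.append(ev)        # all other input is pre-empted

events = deque(["key:a", "click:save", "click:ok", "key:b"])
print(run_modal(events))  # ['key:a', 'click:save'] -- input was swallowed
print(list(events))       # ['key:b'] -- interaction resumes afterwards

# Notification style: callbacks are registered and control stays with
# the toolkit, so the natural (with-the-grain) result is non-modal:
# every event is dispatched and nothing blocks.
log = []
handlers = {"click:save": lambda: log.append("saved"),
            "key:a": lambda: log.append("typed a")}

def dispatch(events):
    while events:
        ev = events.popleft()
        handlers.get(ev, lambda: None)()   # unhandled events just pass by

dispatch(deque(["key:a", "click:save", "click:ok"]))
print(log)  # ['typed a', 'saved']
```

Writing the modal loop inside a notification-based system would mean fighting the dispatcher for control, which is exactly the "against the grain" difficulty described above.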
ANS 3:

Effective applications are both consistent within themselves and consistent with one
another.
One of the advantages of programming with toolkits is that
they can enforce consistency in both input form and output form by providing
similar behavior to a collection of widgets. For example, every button interaction
object, within the same application program or between different ones, by default
could have a behavior like the one described in Figure 8.8. All that is required is that
the developers for the different applications use the same toolkit. This consistency
of behavior for interaction objects is referred to as the look and feel of the toolkit.
Style guides, which were described in the discussion on guidelines in Chapter 7, give
additional hints to a programmer on how to preserve the look and feel of a given
toolkit beyond that which is enforced by the default definition of the interaction
objects.
Two features of interaction objects and toolkits make them amenable to an object-oriented
approach to programming. First, they depend on being able to define a class
of interaction objects which can then be invoked (or instantiated) many times within
one application with only minor modifications to each instance. Secondly, building
complex interaction objects is made easier by building up their definition based on
existing simpler interaction objects. These notions of instantiation and inheritance
are cornerstones of object-oriented programming. Classes are defined as templates
for interaction objects. When an interaction object is created, it is declared as an
instance of some predefined class. So, in the example quit.c program, frame is
declared as an instance of the class FRAME (line 17), panel is declared as an instance
of the class PANEL (line 22) and the button (no name) is declared as an instance of
the class PANEL_BUTTON (line 23). Typically, a class template will provide default
values for various attributes. Some of those attributes can be altered in any one
instance; they are sometimes distinguished as instance attributes.
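The idea of a class template with default values, some of which an instance overrides, can be shown with a short Python analogy. This is not XView's actual C API (XView uses `xv_create` with attribute lists); the class name and attribute names below are illustrative assumptions.

```python
class PanelButton:
    # the class template supplies default attribute values
    defaults = {"label": "Button", "width": 80, "visible": True}

    def __init__(self, **instance_attrs):
        # instance attributes override the template's defaults
        attrs = {**self.defaults, **instance_attrs}
        for name, value in attrs.items():
            setattr(self, name, value)

quit_button = PanelButton(label="Quit")   # analogous to the button in quit.c
print(quit_button.label)   # 'Quit' -- overridden in this instance
print(quit_button.width)   # 80     -- taken from the class template
```

Each call to `PanelButton(...)` instantiates the same template with only minor per-instance modifications, which is the first of the two object-oriented features described above.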
In defining the classes of interaction objects themselves, new classes can be built
which inherit features of one or other classes. In the simplest case, there is a strict
class hierarchy in which each class inherits features of only one other class, its parent
class. This simple form of inheritance is called single inheritance and is exhibited in
the XView toolkit standard hierarchy for the window class in Figure 8.9. A more
complicated class hierarchy would permit defining new classes which inherit from
more than one parent class – called multiple inheritance.
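Single and multiple inheritance can be sketched in a few lines. The class names below only loosely mirror a windowing hierarchy; they are assumptions for the example, not XView's real class tree.

```python
class Window:
    def show(self):
        return "shown"

class Frame(Window):                        # single inheritance: one parent
    pass

class Scrollable:
    def scroll(self, n):
        return f"scrolled {n}"

class ScrollableFrame(Frame, Scrollable):   # multiple inheritance: two parents
    pass

w = ScrollableFrame()
print(w.show())      # inherited from Window via Frame
print(w.scroll(3))   # inherited from Scrollable
```

`Frame` sits in a strict hierarchy with exactly one parent class, while `ScrollableFrame` combines features from two parents, illustrating why a multiple-inheritance hierarchy is the more complicated case.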




QUIZ 3 SOLUTION

QUESTION 1 SOLUTION
QUESTION 2 SOLUTION
QUESTION 3 SOLUTION
QUESTION 4 SOLUTION

HCI LAB TASK



Download the Lab Task from the above-mentioned hyperlink.

Lab copies up to Lab 14 must be completed by Thursday.