When programming modern graphical user interfaces, we have to deal with an important new variety of programming, normally called event driven programming. The operating system captures external ``events'' such as keystrokes and mouse movements and passes them on to the program we write, which must then react sensibly. For instance, if the mouse is positioned over a button and then clicked, we must activate the function indicated by the button.
At a low level, the minimum support that we need is the ability to run a main routine in parallel with a secondary routine that responds to these events. For instance, if the program is a web browser, then the main routine will display web pages. Whenever a hyperlink is selected, the act of displaying the current page is interrupted and the browser initiates a connection to a new page.
The next question is the form in which the events are passed. At the most basic level, the events of interest are of the form ``key 'a' was pressed'', ``the mouse moved to position (x, y)'' or ``the left button of the mouse was pressed''. If the program only gets such information from the underlying system, then the programmer has to do a lot of work to keep track of the current position of all the graphical objects being displayed so that the events can be correlated to the position of these objects.
Consider, for instance, how the program would figure out that a button labelled OK has been selected by the user. First, the program has to remember the current location of the button on the screen. Next, the program has to track all ``mouse move'' events to know where the mouse is at any point. Suppose the program now gets an event of the form ``mouse click'', and the currently recorded position of the mouse is (x, y). If (x, y) lies within the boundary defined by the button labelled OK, then the user has actually clicked on the button OK and appropriate action must be taken. On the other hand, if (x, y) does not lie within the current boundaries of the OK button, then some other button or graphical component has been selected and we then have to figure out which component this is. Or, yet again, it may be that this mouse click is outside the scope of all the windows being displayed by the current program and can hence be ignored.
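The bookkeeping described above boils down to a point-in-rectangle test. Here is a sketch of what the programmer would have to write by hand at this low level (all names, fields and coordinates are illustrative):

```java
public class HitTest {
    // Remembered geometry of the OK button, as placed on the screen
    static int bx = 100, by = 200, bw = 80, bh = 30;

    // Last recorded mouse position, updated on every "mouse move" event
    static int mouseX, mouseY;

    // Does the point (x, y) lie within the boundary of the OK button?
    static boolean insideOkButton(int x, int y) {
        return x >= bx && x < bx + bw && y >= by && y < by + bh;
    }

    public static void main(String[] args) {
        mouseX = 120; mouseY = 210;   // mouse has moved here
        // A "mouse click" event arrives: correlate it by hand
        if (insideOkButton(mouseX, mouseY)) {
            System.out.println("OK button selected");
        } else {
            System.out.println("Click is elsewhere; check the other components");
        }
    }
}
```

Every component on the screen needs such a check, and every check must be kept consistent as windows move and resize, which is why this style is so error-prone.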
Needless to say, programming graphical displays at this low level is very tedious and error-prone. We need a higher level of support from the run-time environment of the programming language. The run-time environment should interact with the operating system, receive low level events such as keystrokes and mouse movements and automatically resolve these into high level events indicating when a component such as a button has been pressed. Thus, the programmer does not have to keep track of where a graphical component is or which component is affected by a particular low level event: implicitly, whenever a graphical component is selected, it gets a signal and can invoke a function that takes appropriate action.
This support is available in Java through the Swing package, which in turn relies on a lower level part of Java called AWT (Abstract Window Toolkit). In Swing, we can directly define graphic objects such as buttons, checkboxes, dialogue boxes and pulldown menus, and specify the size, colour, label and other attributes of these components. Each of these components is defined as a built-in class in Swing. In addition, each component can generate certain high level events. For instance, a button can be pressed, or an item can be selected in a pulldown menu. When an event occurs, it is passed to a prespecified function.
How do we correlate the objects that generate events to those which contain the functions that respond to these events? Each component that generates events is associated with a unique collection of functions that its events invoke. This collection is specified as an interface. Any class that implements this interface is qualified to be a listener for the events generated by this type of component. The component is then passed a reference to the listener object so that it knows which object is to be notified when it generates an event.
For instance, suppose we have a class Button that generates a single type of event, corresponding to the button being pushed. When the button is pushed, a function called buttonpush(...) is invoked in the object listening to the button push. We handle this as follows:
  interface ButtonListener{
    public abstract void buttonpush(...);
  }

  class MyClass implements ButtonListener{
    ...
    public void buttonpush(...){
      ... // what to do when a button is pushed
    }
    ...
  }

  ...
  Button b = new Button();
  MyClass m = new MyClass();
  b.add_listener(m);  // Tell b to notify m when it is pushed
The important point is that we need not do anything beyond this. Having set up the association between the Button b and the ButtonListener m, whenever b is pressed (wherever it might be on the screen), the function m.buttonpush(...) is automatically invoked by the run-time system.
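The classes Button and ButtonListener and the method add_listener above are schematic. In the actual Swing library the same pattern appears with JButton, the ActionListener interface (whose single method is actionPerformed) and addActionListener. A minimal working sketch, using doClick() to simulate a push without a display:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

// Plays the role of MyClass: it listens for button pushes
class ClickCounter implements ActionListener {
    int clicks = 0;

    // Invoked automatically by the run-time system when the button fires
    public void actionPerformed(ActionEvent e) {
        clicks++;
        System.out.println("Button pushed " + clicks + " time(s)");
    }
}

public class ButtonDemo {
    public static void main(String[] args) {
        JButton b = new JButton("OK");
        ClickCounter m = new ClickCounter();
        b.addActionListener(m);   // tell b to notify m when it is pushed

        b.doClick();              // simulate a user click; invokes m.actionPerformed(...)
        b.doClick();
    }
}
```

Once addActionListener has set up the association, the run-time system does the rest, exactly as described above.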
Why does buttonpush(...) need arguments? The event generated by a Button has structure, just like, say, an exception or any other signal. This structure includes information about the source of the event (a ButtonListener may listen to multiple buttons) and other useful data.
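In Swing's case, for instance, a button push is delivered as an ActionEvent, whose methods getSource() and getActionCommand() expose this structure. A small sketch (the Reporter class is illustrative; doClick() again simulates a push):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

// Records the information carried inside each event it receives
class Reporter implements ActionListener {
    Object lastSource = null;
    String lastCommand = "";

    public void actionPerformed(ActionEvent e) {
        lastSource = e.getSource();          // which component generated the event
        lastCommand = e.getActionCommand();  // for a button, defaults to its label
    }
}

public class EventInfo {
    public static void main(String[] args) {
        JButton b = new JButton("OK");
        Reporter r = new Reporter();
        b.addActionListener(r);
        b.doClick();
        System.out.println("source is b: " + (r.lastSource == b));
        System.out.println("command: " + r.lastCommand);
    }
}
```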
In a sense, the Timer example that we saw in Chapter 5.1 fits in this paradigm. Recall that we wanted to start off a Timer object in parallel with the current object and have it report back when it finished. The act of finishing is an event and the function it triggers is the one we called notify(). In the example, the class that created the Timer passed a reference to itself so that the Timer object could notify it. We could instead have passed a reference to any object that implements the interface Timerowner. Thus, the object we pass to Timer is a listener that listens to the ``event'' generated by the Timer reaching the end of the function f().
The relationship between event generators and event listeners in Java is very flexible. Multiple generators can report to the same listener. This is quite natural--for instance, if we display a window with three buttons, each of which describes some function to be performed on the content of the window, it makes sense for the window to listen to all three buttons and take appropriate action depending on which button is pressed.
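This many-to-one arrangement can be sketched directly in Swing: one listener object registers itself with several JButtons and uses the event's getSource() to decide which one fired. The labels and actions here are illustrative:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

public class ThreeButtons implements ActionListener {
    JButton save = new JButton("Save");
    JButton load = new JButton("Load");
    JButton quit = new JButton("Quit");
    String lastAction = "";

    ThreeButtons() {
        // The same listener object is registered with all three generators
        save.addActionListener(this);
        load.addActionListener(this);
        quit.addActionListener(this);
    }

    public void actionPerformed(ActionEvent e) {
        // getSource() tells us which component generated the event
        if (e.getSource() == save)      { lastAction = "saving"; }
        else if (e.getSource() == load) { lastAction = "loading"; }
        else if (e.getSource() == quit) { lastAction = "quitting"; }
    }

    public static void main(String[] args) {
        ThreeButtons w = new ThreeButtons();
        w.load.doClick();                 // simulate pressing the Load button
        System.out.println(w.lastAction); // prints "loading"
    }
}
```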
More interestingly, the same event can be reported to multiple listeners. This is called multicasting. A typical example is when you want to close multiple windows with a single mouse click. For instance, suppose we have opened multiple windows in a browser. When we exit the browser (one click) all these windows are closed.
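In Swing, multicasting needs no special machinery: addActionListener can simply be called once per listener, and every registered listener is notified of each event. A minimal sketch, with an illustrative CloseWindow class standing in for the browser windows:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

// Illustrative listener: pretends to be a window that closes itself on notification
class CloseWindow implements ActionListener {
    String name;
    boolean closed = false;

    CloseWindow(String name) { this.name = name; }

    public void actionPerformed(ActionEvent e) {
        closed = true;
        System.out.println("Closing window: " + name);
    }
}

public class MulticastDemo {
    public static void main(String[] args) {
        JButton exit = new JButton("Exit browser");
        CloseWindow w1 = new CloseWindow("news");
        CloseWindow w2 = new CloseWindow("mail");

        // The same event generator reports to multiple listeners
        exit.addActionListener(w1);
        exit.addActionListener(w2);

        exit.doClick();   // one click; both windows are notified and close
    }
}
```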
This flexibility also means that the connection between event generators and event listeners has to be set up explicitly. If no listener is registered with the event generator, the events that it generates are ``lost''. In some languages, each component (such as a Button) is automatically associated with a fixed listener object. The advantage is that all events are then automatically listened to and none is lost. However, the disadvantage is a loss of flexibility in the ways we can interconnect event generators and event listeners.