Java Event Handling

From JavaProgramming

Managing graphical components (buttons, text fields, etc.) and handling the events they generate is arguably cleaner in Java than in the MS Windows API.

Event handling in Java is quite different from that in MicrosoftWindows.

In Windows, graphical components send messages to each other. Every graphical component is in fact a window; the application runs a message loop that retrieves incoming messages and dispatches them to each window's window procedure.

In Java, event handling is an implementation of the GangOfFour ObserverPattern; see JavaAwtToolkit.

All Java graphical components that send events (event sources) must:

* provide registration methods for the relevant listener type, e.g. addActionListener() and removeActionListener(), and
* call the appropriate method of every registered listener (e.g. actionPerformed()) when the event occurs.

All objects that receive events (listeners) must:

* implement the corresponding listener interface (e.g. ActionListener), and
* register themselves with the event source via its addXxxListener() method.

The Applet below demonstrates two ways to register an event listener for a button: implementing ActionListener in the Applet itself, and using an anonymous inner class.

  /* TestEvents.java */
  import java.applet.Applet;
  import java.awt.*;
  import java.awt.event.*;

  public class TestEvents extends Applet implements ActionListener {
     private int count = 0;
     TextField text;
     Button buttonInc, buttonDec;

     public void init() {
        // Add 3 components to the Applet: a text field
        // and two buttons for incrementing and decrementing
        // the value in the text field
        text = new TextField("" + count);
        add(text);
        buttonInc = new Button("Increment");
        add(buttonInc);
        buttonDec = new Button("Decrement");
        add(buttonDec);

        // Register the Applet as listener for buttonInc
        buttonInc.addActionListener(this);

        // Here is an alternative technique:
        // Use an anonymous inner class as listener for buttonDec
        buttonDec.addActionListener(new ActionListener() {
           public void actionPerformed(ActionEvent ae) {
              text.setText("" + --count);
           }
        });
     }

     // The Applet must implement the method actionPerformed()
     // in order to implement interface ActionListener
     public void actionPerformed(ActionEvent ae) {
        if (ae.getSource() == buttonInc)
           text.setText("" + ++count);
     }
  }


Why Not On-X Event Methods?

I always found Java's "add listener" approach annoying. If we are going to have OOP GUIs (not my favorite, but they're probably here for a while, for good or bad), then widgets having "onX" event methods is far more natural in my opinion. I don't know why the industry went away from those. Java has a habit of spreading concepts across as many different classes as possible. It's almost as if they are providing a low-level service, a GUI assembly language for systems programmers, without caring about the end-user (app developers). --top

{From my own experience in both GUI development and process-control, there often arise situations where it is convenient for more than one component to react to an incoming event. The 'onX' approach to this problem can be made to work - the 'onX' method can take explicit action to dispatch the event to all other components that might be listening to it. But this has a disadvantage: the 'onX' method must be explicitly aware of all components or objects that are interested in the event. Now, for small projects by a single team, that isn't really a problem... but quite often the writer of the component doesn't know in advance (and, frankly, doesn't want to know) who else is listening to the event. In this case, the listener approach is a proven pattern that works (at least in the majority of cases) where one object wishes to listen to another. In my own experience, the cases where it doesn't much help are where one requires pattern-matching over a sequence of minor signals or state that together form something meaningful (e.g. mouse gestures, context-sensitive macros, hitting a key-sequence within a 500 millisecond period, reading voice or body-language, etc.).}
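A minimal, self-contained sketch of that point (the Frame, the "Save" button and the println bodies are invented for illustration): two independent listeners register with the same AWT button, and neither the button nor the first listener needs to know the second exists.

  import java.awt.Button;
  import java.awt.FlowLayout;
  import java.awt.Frame;
  import java.awt.event.ActionEvent;
  import java.awt.event.ActionListener;

  public class MultiListenerDemo {
     public static void main(String[] args) {
        Frame frame = new Frame("Multiple listeners");
        frame.setLayout(new FlowLayout());
        Button save = new Button("Save");
        frame.add(save);

        // The component that actually performs the save
        save.addActionListener(new ActionListener() {
           public void actionPerformed(ActionEvent ae) {
              System.out.println("document: saved");
           }
        });

        // An unrelated observer (say, a status bar); the button and the
        // first listener never need to know this one exists
        save.addActionListener(new ActionListener() {
           public void actionPerformed(ActionEvent ae) {
              System.out.println("status bar: flash 'Saved'");
           }
        });

        // (window-close handling omitted for brevity)
        frame.setSize(200, 100);
        frame.setVisible(true);
     }
  }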

{That isn't to say the 'onX' approach is bad for -syntax-. It would be neatly convenient for application writers who don't care about the extra capabilities if one could simply write an 'onX' method and have it essentially transform into an appropriate 'add listener' upon self. Your complaints about Java being too low level for the tasks oft requested of it are well founded - in a sense, any language is too low-level if you need to specify that which you consider irrelevant. SyntaxMatters. This sort of thing can be solved with meta-programming libraries, but the Java designers have their own reasons for rejecting metaprogramming (in the form of macros or templates) - not reasons with which I agree, but decent reasons with respect to their own design goals.}
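Lacking such syntax support, the transformation can still be approximated in library code. A rough sketch, using a hypothetical OnClickButton wrapper that is not part of AWT: the 'onX' method is layered on top of addActionListener, so application code only overrides onClick().

  import java.awt.Button;
  import java.awt.event.ActionEvent;
  import java.awt.event.ActionListener;

  // Hypothetical wrapper (not part of AWT): exposes an overridable onClick()
  // and wires it to the listener machinery internally.
  public class OnClickButton extends Button {
     public OnClickButton(String label) {
        super(label);
        // The onX capability is layered on top of the listener capability
        addActionListener(new ActionListener() {
           public void actionPerformed(ActionEvent ae) {
              onClick(ae);
           }
        });
     }

     // Applications subclass (often anonymously) and override this
     protected void onClick(ActionEvent ae) { }
  }

An application would then subclass (often anonymously) and override onClick() instead of registering a listener - still more ceremony than built-in onX syntax, which is the point about metaprogramming above.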

{You, being a promoter of Relational, would probably be interested to hear that I believe (after more than a little analysis) that the most general solution is to centralize ALL signals and ALL subscribable state (e.g. into a micro-database) and then allow subscribers to receive 'events' based on pattern-matching over these signals and state. This simplifies many problems associated with the 'Add Listener' approach - especially those regarding the creation and deletion of objects to which you wish to listen (which anyone will tell you can be a major hassle, much more so when multi-threaded). As a benefit, one gains an event-log (very nice for debugging and for replaying sessions when performing event-oriented unit-tests). Of course, this concept trips over the state of the art in programming languages, which lack the pattern-matching utilities and pattern-expression syntax required for such subscriptions to be performed in a manner even vaguely resembling 'efficient' or 'convenient'. Long term, though, I expect this react-on-pattern-match approach to become a common programming approach (possibly even an 'orientation'), followed or intermixed with goal-oriented AI programming styles (where one writes how to act, not how to react, and which ALSO requires a centralized database of state and events - a condition that would allow it to evolve more readily after the state and events have been centralized for other reasons). At the moment, you'll only find this sort of thing in the toy languages and AI languages of research labs (though I've seen some preliminary examples of these approaches fielded (in real projects) in the form of DomainSpecificLanguages).}
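For concreteness, a toy sketch of the 'centralize all signals' idea (the EventLog and Subscriber names, and the use of plain Strings for events, are assumptions; real pattern-matching over the log is left out): every event is appended to one log, and each subscriber is handed the whole history so it can match whatever sequence it cares about.

  import java.util.ArrayList;
  import java.util.List;

  // Toy sketch: every signal goes into one central log; subscribers see the
  // full history on each new event and match whatever pattern they care about.
  public class EventLog {
     public interface Subscriber {
        // Called after each new event; 'history' has the newest event last
        void onEvent(List<String> history);
     }

     private final List<String> history = new ArrayList<String>();
     private final List<Subscriber> subscribers = new ArrayList<Subscriber>();

     public void subscribe(Subscriber s) {
        subscribers.add(s);
     }

     public void publish(String event) {
        history.add(event);
        for (Subscriber s : subscribers) {
           s.onEvent(history);
        }
     }
  }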

Since the introduction of annotations in Java, it is now possible to mix the two approaches: annotate the "target" on-X methods and have the event source call them directly. EJB3 uses this model, and http://neoevents.googlecode.com provides a simple framework for doing it with all the standard J2SE events.

--AnonymousDonor
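Neither EJB3's model nor the neoevents API is reproduced here, but a rough, hypothetical sketch of what such annotation-driven wiring can look like underneath (the @OnAction annotation and the wire() helper are made up) is:

  import java.awt.Button;
  import java.awt.event.ActionEvent;
  import java.awt.event.ActionListener;
  import java.lang.annotation.ElementType;
  import java.lang.annotation.Retention;
  import java.lang.annotation.RetentionPolicy;
  import java.lang.annotation.Target;
  import java.lang.reflect.Method;

  // Made-up annotation naming the button a handler method is for
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface OnAction {
     String value();
  }

  class AnnotationWiring {
     // Scan 'target' for public @OnAction methods matching 'name' and register
     // a listener on 'source' that invokes them reflectively.
     static void wire(final Object target, String name, Button source) {
        for (final Method m : target.getClass().getMethods()) {
           OnAction a = m.getAnnotation(OnAction.class);
           if (a != null && a.value().equals(name)) {
              source.addActionListener(new ActionListener() {
                 public void actionPerformed(ActionEvent ae) {
                    try {
                       m.invoke(target, ae);  // handler takes one ActionEvent parameter
                    } catch (Exception e) {
                       throw new RuntimeException(e);
                    }
                 }
              });
           }
        }
     }
  }

With something like this, a handler in the Applet above could be written as a public method annotated with @OnAction("buttonInc") taking an ActionEvent, and init() would call AnnotationWiring.wire(this, "buttonInc", buttonInc) instead of implementing ActionListener.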

Perhaps this all belongs in NonOopGuiMethodologies, but what you talk about with regard to pattern matching is indeed very similar to a GUI event table I proposed several years ago at:

http://www.geocities.com/tablizer/prpats.htm#guievents

(It's a table with the following columns: EventID, Application, Form, Widget, Event, Implementation, Priority.) {I'd add timing info.}

I used asterisks for wild-carding forms and events. (User/role-level wild-carding and widget type could also be added.) I've actually used very similar wild-card tables for data translation/conversion. Of course I see GreencoddsTenthRuleOfProgramming playing out with regard to managing the sheer volume of widgets, forms, event handlers, etc. :-)

Anyhow, as for On-X being too specific in Java and OOP frameworks, I would assume there is a GUI engine in Java that has access to all events. If there is a special need to monitor/log/capture all events, or at least to monitor them in bulk, then a general "interceptor" method could be provided that would execute before any widget-specific handler. The interface could look something like this:

  method masterEventMonitor(application, form, widget, widgetType, event) {
     if (widgetType=="button") {
        doSomethingSpecial(...)
     }
  }

It is basically wild-carding via IF statements instead of pattern-matching tables/structures. The point is that such a thing would not be mutually exclusive with the On-X style; I don't see why they would have to be. --top
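As it happens, AWT does expose a hook roughly along these lines: Toolkit.addAWTEventListener delivers events globally, filtered by an event mask, independently of any widget-specific listeners (in a sandboxed applet the call requires a security permission). A sketch, with the class name and the println body invented for illustration:

  import java.awt.AWTEvent;
  import java.awt.Button;
  import java.awt.Toolkit;
  import java.awt.event.AWTEventListener;

  public class MasterEventMonitor {
     public static void install() {
        // One global interceptor, registered once, that is told about AWT events
        // regardless of which widget they target; the mask limits which event
        // classes are delivered to it.
        Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
           public void eventDispatched(AWTEvent event) {
              if (event.getSource() instanceof Button) {
                 // crude "wild-carding via IF statements", as in the sketch above
                 System.out.println("button event: " + event);
              }
           }
        }, AWTEvent.MOUSE_EVENT_MASK | AWTEvent.KEY_EVENT_MASK);
     }
  }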

{The 'add listener' approach is also not mutually exclusive with the 'onX' style, since an 'onX' method can be transformed into an 'add-listener' in a very mechanical manner. Java simply lacks support for the meta-programming and syntax extension that could make such things automatic (and, therefore, convenient).}

{As to a reason for mutual-exclusion, there is a fairly general rule well proven in programming: OnceAndOnlyOnce. If the awt engine has listener-capability, there isn't much reason to have a redundant and independent 'onX' capability - simply have the 'onX' capability leverage the listener capability.}

{If one has the pattern-matching capability over event-logs, there is no reason for the listener capability - one gains the same by simply subscribing for simplistic signal-patterns over a single widget. You have argued before that adding more (non-orthogonal) features has limited returns and a high cost on the language implementors. (You even had some name for it... feature overload or something.) An independent 'onX' mechanism is completely non-orthogonal - its capability is completely overlapped by listeners or pattern-matching mechanisms. Given that you have made such arguments, I'd expect you to agree that the 'onX' approach would be best implemented as pure SyntacticSugar atop the most general mechanism the language and library provides for the same task.}

{Anyhow, it is likely that a GUI engine somewhere has access to all the events. What ultimately matters is whatever is exposed to the user of the engine (the app developer) - e.g. whether the user is capable of extending the table of 'if' conditions.}

{I would note that 'if' conditions themselves make for some rather inconvenient and inefficient approaches to multi-signal pattern-matching. For keyboard macros (e.g. "hits 'CTRL, CTRL, CTRL' in under 500 milliseconds") you need to keep state of recent events associated with the method. To disambiguate (e.g. keyboard macros 'CTRL+x,CTRL+c' vs. 'CTRL+z,CTRL+x,CTRL+c'), the code becomes rather arbitrarily complicated. Also, it would be painful to extend dynamically (so the end-user of the application can add or modify macros and gestures) without essentially rewriting these utilities per-application. I think some sort of pattern-expression sub-language is really needed, where one can describe relevant patterns - mouse clicks, (imprecise) mouse-gestures, timings, keyboard sequences, character-entry (for text-boxes), etc. Even if it was expressed primarily in strings, it would work better than if-wildcarding as the tool exposed to the app-developer.}
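To illustrate the kind of hand-rolled state that paragraph describes (the class name is made up; the three-CTRL-presses-in-500-ms rule is taken from the example above), even this simple macro already needs its own little event history:

  import java.awt.event.KeyAdapter;
  import java.awt.event.KeyEvent;
  import java.util.LinkedList;

  // Hand-rolled detection of "CTRL pressed three times within 500 ms";
  // the listener has to keep its own history of recent events.
  public class TripleCtrlDetector extends KeyAdapter {
     private final LinkedList<Long> ctrlPresses = new LinkedList<Long>();

     public void keyPressed(KeyEvent e) {
        if (e.getKeyCode() != KeyEvent.VK_CONTROL) {
           ctrlPresses.clear();  // any other key breaks the sequence
           return;
        }
        long now = e.getWhen();
        ctrlPresses.add(now);
        // drop presses that fell out of the 500 ms window
        while (now - ctrlPresses.getFirst() > 500) {
           ctrlPresses.removeFirst();
        }
        if (ctrlPresses.size() >= 3) {
           ctrlPresses.clear();
           System.out.println("CTRL,CTRL,CTRL macro triggered");
           // (auto-repeat while CTRL is held is ignored here, which is
           // exactly the kind of detail that makes this approach messy)
        }
     }
  }

It would be attached with component.addKeyListener(new TripleCtrlDetector()), and it still ignores complications such as key auto-repeat and disambiguation against longer sequences - exactly the sort of thing a pattern-expression sub-language would absorb.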

That's true. In practice the GUI engine should probably not pass certain keyboard and mouse activity to the master handler unless explicitly switched on. Mouse movements without a button press and keystrokes without a meta key (ctrl, alt) could probably be excluded from a master listener/handler by default. And/or a GUI engine could define a mouse widget and a keyboard widget, and one would simply add an OnX handler to those two objects when finer control is needed.


CategoryGui

