One of the most important parts of creating an effective and intuitive user interface on touch-enabled smartphones has nothing to do with visual appearance—instead, it has to do with creating an interface that properly responds to user input based on touch. For Web applications, this means replacing mouse events with touch events. In Dojo 1.7, new touch APIs help make this process easy.
This is an updated version of the post Touching and Gesturing on the iPhone, published in 2008.
In the beginning…
Before we discuss the new features in Dojo 1.7 that make touch interfaces easier to create, it helps to understand some of the underlying technology and concepts. With iPhone, Apple introduced two new event concepts: touches and gestures. Touches are important for keeping track of how many fingers are on the screen, where they are, and what they’re doing. Gestures are important for determining what the user is actually doing when they are interacting with the device at a higher level: pinching, rotating, swiping, double-tapping, and so on.
While touch events are available on most platforms (the touch event model originally established on iOS has been standardized in the W3C Touch Events specification and is supported by iOS, Android, and BlackBerry), native gesture events are not available everywhere, and the gesture event API in iOS is limited in the sorts of gestures it supports. The dojox/gesture package steps in to fill these gaps in functionality; we’ll discuss it shortly.
Touches
When you put a finger down on the screen, it kicks off the lifecycle of touch events. Each time a new finger touches the screen, a new touchstart event happens. As each finger lifts up, a touchend event happens. If, after touching the screen, you move any of your fingers around, touchmove events happen. If too many fingers are on the screen, or another action (such as a push notification from the phone’s OS) interferes with the touch, a touchcancel event happens.
The following touch events exist:
- touchstart: Occurs when a finger is placed on the screen
- touchend: Occurs when a finger is removed from the screen
- touchmove: Occurs when a finger already placed on the screen is moved across the screen
- touchcancel: Occurs when a touch is cancelled before the finger is actually removed from the screen
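All four of these events can be observed with ordinary DOM event listeners. The following is a minimal sketch, assuming an element with the hypothetical ID myElement:

var node = document.getElementById("myElement"); // hypothetical element

// Log each stage of the touch lifecycle
node.addEventListener("touchstart", function(event){
    console.log("touchstart: " + event.touches.length + " finger(s) on the screen");
}, false);

node.addEventListener("touchmove", function(event){
    console.log("touchmove");
}, false);

node.addEventListener("touchend", function(event){
    console.log("touchend: " + event.touches.length + " finger(s) remaining");
}, false);

node.addEventListener("touchcancel", function(event){
    console.log("touchcancel");
}, false);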
While it might seem that there should be a 1:1 mapping between a touch event and a mouse event—after all, your finger works much like a cursor—it turns out that TouchEvent objects do not include properties that you might expect to see. For example, pageX and pageY properties are not populated. This is because, with a mouse, you really only have one point of contact: the cursor. With a multi-touch device, though, you could (for example) keep two fingers held down on the left of the screen while you tap the right side of the screen, and all three points are registered.
In order to provide information about all touch points at once, every TouchEvent object has a property containing information about every finger that’s currently touching the screen. It also has two other properties: one which contains a list of information for fingers that originated from the current target node, and one which contains only the information for fingers that are associated with the current event. These properties are:
- touches: A list of information for every finger currently touching the screen
- targetTouches: Like touches, but filtered to only the information for finger touches that started out within the same node
- changedTouches: A list of information for every finger that has changed state due to the event (see below)
To better understand what might be in these lists, let’s go over some examples quickly.
- When you put one finger down, all three lists will provide the same information.
- When you put a second finger down, touches will contain two items, one for each finger. targetTouches will have two items only if the second finger was placed in the same node as the first finger (otherwise it will only contain the second finger). changedTouches will only have information related to the second finger, because it’s what triggered the event.
- If you put two fingers down at exactly the same time, you will get two items in changedTouches, one for each finger that triggered the event.
- If you move your fingers, the only list that will change is changedTouches. It will contain information about the finger or fingers that moved.
- When you lift a finger, it will be removed from touches and targetTouches, and will appear in changedTouches, since it’s what caused the event.
- Removing your last finger will leave touches and targetTouches empty, and changedTouches will contain information about the last finger.
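A quick way to see how these lists differ is to log their lengths from a single handler. This is a minimal sketch, assuming an element with the hypothetical ID myElement:

var node = document.getElementById("myElement"); // hypothetical element

function logTouchLists(event){
    // The three lists diverge as fingers are added, moved, and removed
    console.log(event.type +
        ": touches=" + event.touches.length +
        " targetTouches=" + event.targetTouches.length +
        " changedTouches=" + event.changedTouches.length);
}

node.addEventListener("touchstart", logTouchLists, false);
node.addEventListener("touchmove", logTouchLists, false);
node.addEventListener("touchend", logTouchLists, false);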
Using these lists, it is possible to keep very close tabs on what the user is doing. Imagine creating a(nother) Super Mario clone in JavaScript—you’d be able to tell which direction pad the user currently has a thumb on, while also watching for touches on other virtual buttons that make the character jump or shoot a fireball.
So far, we’ve been discussing lists of information about fingers on the screen, but we haven’t talked about what this information looks like. The objects contained in the touches lists have properties similar to what you’d see on a MouseEvent object. The following is the full list of properties for these objects:
- clientX: X coordinate of the touch relative to the viewport (excludes scroll offset)
- clientY: Y coordinate of the touch relative to the viewport (excludes scroll offset)
- screenX: X coordinate of the touch relative to the screen
- screenY: Y coordinate of the touch relative to the screen
- pageX: X coordinate of the touch relative to the full page (includes scrolling)
- pageY: Y coordinate of the touch relative to the full page (includes scrolling)
- identifier: An identifying number, unique to each touch point (finger) currently active on the screen
- target: The DOM node the touch started in
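As an illustration, each touch point that changed during an event can be read from changedTouches. A minimal sketch, again assuming a hypothetical myElement node:

var node = document.getElementById("myElement"); // hypothetical element

node.addEventListener("touchmove", function(event){
    // changedTouches holds only the touch points that moved during this event
    for(var i = 0; i < event.changedTouches.length; i++){
        var touch = event.changedTouches[i];
        console.log("finger " + touch.identifier +
            " page: " + touch.pageX + "," + touch.pageY +
            " client: " + touch.clientX + "," + touch.clientY);
    }
}, false);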
One of the annoyances of writing Web applications for smartphones has been that even if you set a viewport for your application, dragging your finger around will move the page. Fortunately, the touchmove event object has a preventDefault method that can be used to keep the page still.
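Keeping the page still can be as simple as the following sketch, assuming a hypothetical myElement node whose touches should not scroll the page:

var node = document.getElementById("myElement"); // hypothetical element

node.addEventListener("touchmove", function(event){
    // Stop the browser from panning/scrolling the page in response to the drag
    event.preventDefault();
}, false);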
Drag and drop with the Touch API
Creating drag and drop functionality on touchscreen devices is made easier due to the fact that touchmove events only fire when a finger is already touching the screen’s surface. This means we don’t need to track button states like we would with a mousemove event. A basic drag and drop implementation, then, can look as simple as this:
node.addEventListener("touchmove", function(event){
    // Only deal with one finger
    if(event.touches.length == 1){
        // Get the information for finger #1
        var touch = event.touches[0],
            // Find the style object for the node the drag started from
            style = touch.target.style;
        // Position the element under the touch point
        style.position = "absolute";
        style.left = touch.pageX + "px";
        style.top = touch.pageY + "px";
    }
}, false);
Better touching in Dojo 1.7
One of the problems with using low-level touch events is that, if you are creating an application that you want to function on both touch-enabled and mouse-enabled devices, you end up needing to set up two sets of event listeners. The new dojo/touch module in Dojo 1.7 normalizes these two types of events, using touch events where available and falling back to mouse events on other platforms in order to provide device-neutral events. Using it is just as easy as listening for a regular event, except that you pass a function from dojo/touch in place of the event name string:
require(["dojo", "dojo/touch"], function(dojo, touch){
    dojo.connect(dojo.byId("myElement"), touch.press, function(event){
        // handle a mousedown/touchstart event
    });
    dojo.connect(dojo.byId("myElement"), touch.release, function(event){
        // handle a mouseup/touchend event
    });
});
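As a sketch of how this helps, the earlier drag example could be written once for both mouse and touch using touch.press, touch.move, and touch.release. The sketch assumes a draggable node with the hypothetical ID myElement, and reads coordinates from the touches list when present, since raw touch events carry them there rather than on the event itself:

require(["dojo", "dojo/touch"], function(dojo, touch){
    var node = dojo.byId("myElement"), // hypothetical element
        dragging = false;

    dojo.connect(node, touch.press, function(event){
        dragging = true;
    });
    dojo.connect(node, touch.move, function(event){
        if(dragging){
            // Touch events expose coordinates on their touches list;
            // mouse events expose them directly on the event
            var point = event.touches ? event.touches[0] : event;
            node.style.position = "absolute";
            node.style.left = point.pageX + "px";
            node.style.top = point.pageY + "px";
        }
    });
    dojo.connect(node, touch.release, function(event){
        dragging = false;
    });
});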
Gestures
On iOS devices, a gesture event occurs any time two or more fingers are touching the screen. If any finger lands in a node you are listening for gesture events on (gesturestart, gesturechange, gestureend), you’ll receive the corresponding gesture events.
Gesture events provide a GestureEvent object with these properties:
- rotation: The amount the user has rotated their fingers, in degrees.
- scale: A multiplier indicating the amount the user has pinched or pushed their fingers, where numbers larger than 1 indicate a push (fingers spreading apart), and numbers smaller than 1 indicate a pinch.
When listening for both gesture events and touch events, the event pattern looks like this (a sketch for observing this ordering follows the list):
1. touchstart for finger 1.
2. gesturestart when the second finger touches the surface.
3. touchstart for finger 2.
4. gesturechange sent every time both fingers move while still touching the surface.
5. gestureend when the second finger leaves the surface.
6. touchend for finger 2.
7. touchend for finger 1.
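One way to see this interleaving is to log both families of events from the same node. A minimal sketch, assuming a hypothetical myElement node on an iOS device:

var node = document.getElementById("myElement"); // hypothetical element

// Log each touch and gesture event to watch the ordering described above
["touchstart", "touchend", "gesturestart", "gesturechange", "gestureend"].forEach(function(type){
    node.addEventListener(type, function(event){
        console.log(type);
    }, false);
});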
Resizing and rotating with the Gestures API
Using the CSS transform, width, and height properties, we can easily rotate and scale any element in response to these gestures.
var width = 100,
    height = 200,
    rotation = 0;

node.addEventListener("gesturechange", function(event){
    var style = event.target.style;
    // scale and rotation are relative values,
    // so we wait to change our variables until the gesture ends
    style.width = (width * event.scale) + "px";
    style.height = (height * event.scale) + "px";
    style.webkitTransform = "rotate(" + ((rotation + event.rotation) % 360) + "deg)";
}, false);

node.addEventListener("gestureend", function(event){
    // Update the values for the next time a gesture happens
    width *= event.scale;
    height *= event.scale;
    rotation = (rotation + event.rotation) % 360;
}, false);
Better gestures with Dojo 1.7
Dojo 1.7 includes a new package, dojox/gesture, that provides functionality for handling more complex gestures on touch-sensitive devices. In addition to defining a basic framework for creating your own custom gestures through extension of the dojox/gesture/Base module, it comes with built-in support for several common gestures, including tap, tap and hold, double tap, and swipe.
Using dojox/gesture couldn’t be much simpler. Just like dojo/touch, in order to listen for a gesture, you simply connect to a gesture event using dojo.connect, passing the gesture function in place of the event name:
require(["dojo", "dojox/gesture/swipe", "dojox/gesture/tap"], function(dojo, swipe, tap){
    dojo.connect(dojo.byId("myElement"), swipe, function(event){
        // handle swipe event
    });
    dojo.connect(dojo.byId("myElement"), tap.doubletap, function(event){
        // handle double tap event
    });
});
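The other built-in gestures follow the same pattern. For example, tap and hold and the end of a swipe can be handled as sketched below, assuming the tap.hold and swipe.end events exposed by the dojox/gesture/tap and dojox/gesture/swipe modules, and the dx/dy deltas reported by swipe events:

require(["dojo", "dojox/gesture/swipe", "dojox/gesture/tap"], function(dojo, swipe, tap){
    dojo.connect(dojo.byId("myElement"), tap.hold, function(event){
        // handle a tap-and-hold event
    });
    dojo.connect(dojo.byId("myElement"), swipe.end, function(event){
        // dx/dy describe how far the finger travelled during the swipe
        console.log("swipe ended: dx=" + event.dx + ", dy=" + event.dy);
    });
});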
In time, dojox/gesture will be extended to include more complex event types and behaviors, such as pinching and zooming. For now, it provides several new events that were difficult to handle before, as well as an excellent framework that can be used to create complex gesture events across all platforms.
Source: http://www.sitepen.com/blog/2011/12/07/touching-and-gesturing-on-iphone-android-and-more/