POINTER UNIFICATION
Embodiments relate to a computing device having storage, a processor, a display, a first human input device, and a second human input device, where the first human input device is in a first category of human input devices and the second human input device is in a second category of human input devices. The computing device may perform a process involving executing a windowing environment that manages windows of applications executing on the computing device. The windowing environment may receive raw inputs from the first and second human input devices and in turn generate input pointers for the raw inputs, respectively. The input pointers may be or include instances of an input pointer class implemented by the windowing environment, the pointer class used by the windowing environment for arbitrary different types of human input pointer devices including the first human input device and the second human input device.
The Human Interface Device (HID) standard is a protocol that allows human-operated input devices to send data to a host computer. HID is flexible regarding what information devices are allowed to report. Pointer devices (e.g., mice, pens, touch digitizers, etc.) all report x-y coordinates, but depending on the device, data such as pressure, tilt and contact geometry may or may not be reported. In addition, HID allows devices to report custom properties, effectively allowing them to include any arbitrary information. The fact that certain data may or may not be present makes it challenging for developers to write software that supports these devices. Often, a developer must write different sections of code to support different types of devices.
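The optional nature of HID report fields described above can be illustrated with a minimal sketch (Python is used purely for illustration; the field names `pressure`, `tilt_x`, and `contact_width`, and the `HidReport` type itself, are hypothetical stand-ins, not part of the HID specification or of any particular driver API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HidReport:
    """A simplified pointer report: x-y coordinates are always present,
    but pressure, tilt, and contact geometry may or may not be reported."""
    x: int
    y: int
    pressure: Optional[float] = None     # pens typically report this
    tilt_x: Optional[float] = None       # pens may report tilt
    contact_width: Optional[int] = None  # touch digitizers may report geometry

def describe(report: HidReport) -> str:
    # Application code must branch on which optional fields are present,
    # which is the per-device burden discussed in the text.
    parts = [f"({report.x}, {report.y})"]
    if report.pressure is not None:
        parts.append(f"pressure={report.pressure}")
    if report.contact_width is not None:
        parts.append(f"width={report.contact_width}")
    return " ".join(parts)

mouse_report = HidReport(x=10, y=20)
pen_report = HidReport(x=10, y=20, pressure=0.5)
```

The same `describe` function must handle both reports, even though one carries pressure data and the other does not.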
In addition, developers must contend with different user interface frameworks that use different coordinate systems which may be relative to the physical screen, the application window, UI elements within the window, or other reference frames. Keeping track of a variety of coordinate systems is tedious, and forces developers to write code differently based on the UI framework being used.
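The coordinate-system bookkeeping described above can be sketched as a pair of translations (a minimal illustration assuming simple axis-aligned offsets between reference frames; real UI frameworks may also involve scaling and rotation):

```python
def screen_to_window(pt, window_origin):
    """Convert a screen-relative point to window-relative coordinates."""
    return (pt[0] - window_origin[0], pt[1] - window_origin[1])

def window_to_element(pt, element_origin):
    """Convert a window-relative point to element-relative coordinates,
    where the element origin is given relative to the window."""
    return (pt[0] - element_origin[0], pt[1] - element_origin[1])

# A screen point at (500, 400), a window whose top-left corner is at
# (100, 50) on the screen, and a UI element at (20, 30) within the window:
p_window = screen_to_window((500, 400), (100, 50))
p_element = window_to_element(p_window, (20, 30))
```

A developer must know which of these frames a given framework uses before interpreting any coordinate, which is the tedium the text refers to.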
In addition, when performing gesture recognition, most gesture recognizers work on a full set of inputs. It has not been possible for a developer to code for basic gesture detection at the contact level without sacrificing the use of system-provided gesture recognition.
Finally, legacy applications may expect traditional mouse messages. Such applications may not expect device-neutral pointer messages containing mouse data; there has not been any way to support these applications while at the same time providing a mechanism that transforms generic pointer data into legacy mouse data in an efficient way.
Discussed below are techniques related to providing unified access to inputs (pointers) from pointer devices such as mice, touch surfaces, pens, or other input devices that allow a user to “point” in two or three dimensions.
SUMMARY
The following summary is included only to introduce some concepts discussed in the Detailed Description below. This summary is not comprehensive and is not intended to delineate the scope of the claimed subject matter, which is set forth by the claims presented at the end.
Embodiments relate to a computing device having storage, a processor, a display, a first human input device, and a second human input device, where the first human input device is in a first category of human input devices and the second human input device is in a second category of human input devices. The computing device may perform a process involving executing a windowing environment that manages windows of applications executing on the computing device. The windowing environment may receive raw inputs from the first and second human input devices and in turn generate input pointers for the raw inputs, respectively. The input pointers may be or include instances of an input pointer class implemented by the windowing environment, the pointer class used by the windowing environment for arbitrary different types of human input pointer devices including the first human input device and the second human input device.
Many of the attendant features will be explained below with reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein like reference numerals are used to designate like parts in the accompanying description.
Embodiments discussed below relate to unified handling of pointer devices in a windowing environment. Discussion will begin with an overview of an example prior approach and limitations thereof. A windowing environment that allows applications to handle pointing devices and their inputs in a unified manner is then described, followed by discussion of example implementations.
The windowing environment 100 also may handle inputs from human input devices such as a keyboard 104, a mouse 106, a pen 108, a touch device 110, or any other human-operated input device. Of particular note are pointer-type input devices by which a user can specify arbitrary two or three-dimensional input points and other inputs. Input devices 104, 106, 108, 110 may pass raw input device data 112 up through respective driver stacks 114 to a kernel module 116, which may in turn pass input data to the windowing environment 100 which decides which applications to notify about which inputs or input-activated events.
The windowing environment 100 treats each input device as a separate and distinct data type. That is, the windowing environment 100 may have a different set of data types and interfaces for each type of input device. For example, the windowing environment 100 may have a mouse-specific API, a mouse object class, and may pass mouse-specific messages 118 to applications 120, 122, and 124 (the windowing environment 100 may also pass other window messages 119, e.g., refresh, minimize, move, resize, close, etc.). The windowing environment 100 may also pass touch-specific messages to applications 120, 122, and 124. In short, applications 120, 122, and 124, if they are to operate for any of the input devices 104, 106, 108, 110, must have different code to handle the different types of inputs, even in cases where the different types of input devices have semantic similarities such as with pointer devices.
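The per-device-type burden on applications described above might look as follows in application code (a sketch only; the message names `MOUSE_MOVE`, `TOUCH_UPDATE`, and `PEN_UPDATE` are hypothetical stand-ins, not the actual messages of any particular windowing environment):

```python
def handle_message(msg_type, payload):
    # Prior approach: a separate code path per device type, even though
    # mouse, pen, and touch are all semantically "pointer" inputs.
    if msg_type == "MOUSE_MOVE":
        return on_mouse(payload["x"], payload["y"], payload["buttons"])
    elif msg_type == "TOUCH_UPDATE":
        return on_touch(payload["x"], payload["y"], payload["contact_id"])
    elif msg_type == "PEN_UPDATE":
        return on_pen(payload["x"], payload["y"], payload["pressure"])
    return None

def on_mouse(x, y, buttons):
    return ("mouse", x, y)

def on_touch(x, y, contact_id):
    return ("touch", x, y)

def on_pen(x, y, pressure):
    return ("pen", x, y)
```

Each handler largely duplicates the others' positional logic; the unified pointer model discussed below collapses these paths into one.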
The windowing environment 100 may also include gesture recognition functionality, possibly passing gesture recognition messages 128 to applications 120, 122, 124. However, if applications 120, 122, 124 are to deal with raw input 126 they may need additional code for handling raw input messages 126 received from driver stacks 114. Such raw input 126 may be unusable with native gesture-recognition functionality provided by the windowing environment 100, thus possibly requiring an application to forego using gesture recognition functions provided by windowing environment 100 and instead, disadvantageously, include custom-written gesture recognition code.
In addition, if the application 140 uses gestures, the pointer message 148 can be passed to an instance of a gesture processor 150 which may use information about the pointer message 148 (i.e., pointer input) to determine whether a gesture or manipulation has occurred. When a gesture or manipulation (e.g., an affine transformation) is identified, the gesture processor 150 in turn may signal the application 140 with an event or callback 151 indicating the identified gesture or manipulation.
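The pointer-to-gesture flow described above can be sketched as follows (a minimal illustration assuming a simple drag manipulation with a hypothetical distance threshold; the class name `GestureProcessor` and the message fields are illustrative, not an actual API):

```python
class GestureProcessor:
    """Accumulates pointer positions and fires a callback when a drag
    manipulation exceeds a distance threshold, mirroring the event or
    callback signaled to the application in the text."""
    def __init__(self, on_gesture, threshold=10):
        self.on_gesture = on_gesture
        self.threshold = threshold
        self.start = None

    def feed(self, pointer_msg):
        # pointer_msg stands in for the device-neutral pointer message;
        # the same code runs whether it originated from mouse, pen, or touch.
        pos = (pointer_msg["x"], pointer_msg["y"])
        if pointer_msg["state"] == "down":
            self.start = pos
        elif pointer_msg["state"] == "move" and self.start is not None:
            dx, dy = pos[0] - self.start[0], pos[1] - self.start[1]
            if abs(dx) >= self.threshold or abs(dy) >= self.threshold:
                self.on_gesture({"type": "drag", "dx": dx, "dy": dy})
                self.start = pos  # restart measurement from here

events = []
gp = GestureProcessor(events.append)
gp.feed({"x": 0, "y": 0, "state": "down"})
gp.feed({"x": 15, "y": 0, "state": "move"})
```

Because the processor consumes generic pointer messages rather than device-specific ones, the application never branches on device type before invoking it.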
While this description refers to classes, objects, event handlers, and other object-oriented constructs, these are only non-limiting examples used for convenience of explanation. Such examples are not to be taken as implying that any of the components or embodiments described herein are object-oriented or require an object-oriented environment. To the contrary, a windowing environment may well have no classes or other object-oriented constructs. Those skilled in the art of computer programming will appreciate that data structures, data types, messages, functions, and other non-object-oriented C-style constructs and APIs can be used with equal effect. Embodiments may also be implemented in simplified object-oriented environments such as JavaScript™. Any use of “object” and “class” and related features such as methods and members will be understood, as used herein, as also describing implementations that use data structures and data types, functions, flat APIs, and the like, as the case may be. For example, the Win32™ module uses no classes per se. The companion User32™ module sends, for example, window messages (e.g., pointer messages) to applications, which in turn may process those messages using various associated functions. In sum, the embodiments described herein can be implemented in any type of programming environment using any type of programming language.
Regardless of the type of pointer input device used (e.g., pen, mouse, touch surface, or other), the same gesture handling code may be used by the application. In addition, if necessary, the application may access information about the device associated with a pointer input to handle the input in a device-specific manner.
As mentioned above, it may be desirable to provide the pointer class 142 with a property or field indicating whether a pointer is deemed to be a primary pointer. Various heuristics or rules may be used to determine whether a pointer device's input is primary. For example, a mouse device's inputs may always be considered primary to the exclusion of other device inputs. When multiple concurrent touches are presented by a touch input device, inputs from a first contact point may be given primary status. If a primary touch input is lifted and a second touch input remains down, even though a state is reached where there is not a current primary input, it is not until all touch contacts are determined to have ended (e.g., the second touch is lifted) that another new initial contact can become a new primary input. Note that the concept of a primary pointer can be tied to which input gets promoted to a legacy mouse construct, as described below.
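The primary-pointer rules just described can be sketched as follows (Python used purely for illustration; the class and method names are hypothetical):

```python
class PrimaryPointerTracker:
    """Sketch of the rules above: mouse inputs are always primary; the
    first touch contact of a sequence is primary; once the primary
    contact lifts, no new primary is assigned until all contacts end."""
    def __init__(self):
        self.contacts = set()   # ids of touch contacts currently down
        self.primary = None     # id of the current primary contact, if any

    def mouse_input_is_primary(self):
        return True  # a mouse device's inputs are always considered primary

    def touch_down(self, contact_id):
        # Only the first contact of a fresh sequence becomes primary.
        became_primary = not self.contacts
        self.contacts.add(contact_id)
        if became_primary:
            self.primary = contact_id
        return became_primary

    def touch_up(self, contact_id):
        self.contacts.discard(contact_id)
        if contact_id == self.primary:
            self.primary = None
        # No new primary can be assigned until self.contacts is empty.
```

Note how a contact arriving while others are still down never becomes primary, even if the original primary has already lifted, matching the rule in the text.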
In some cases, so-called legacy applications that are not coded to work with the above-described pointer input model may nonetheless execute with the unified-input windowing environment 110A. In one embodiment, the unified-input windowing environment 110A probes a new application to determine whether the application will recognize messages for the pointer input model, for instance by sending unified-device messages. If an error occurs or there is no response, the unified-input windowing environment 110A may translate the pointer inputs into traditional mouse messages as described with reference to
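The promotion of pointer inputs to legacy mouse messages can be sketched as follows (a minimal illustration; the message names and the rule that only primary pointers are promoted follow the description above, while the field names themselves are hypothetical):

```python
def promote_to_mouse(pointer_msg):
    """Translate a device-neutral pointer message into a legacy-style
    mouse message for applications that only understand mouse input.
    Only primary pointers are promoted; others are dropped (None)."""
    if not pointer_msg.get("is_primary"):
        return None
    state_map = {
        "down": "MOUSE_BUTTON_DOWN",
        "move": "MOUSE_MOVE",
        "up": "MOUSE_BUTTON_UP",
    }
    return {
        "type": state_map[pointer_msg["state"]],
        "x": pointer_msg["x"],
        "y": pointer_msg["y"],
    }

legacy_msg = promote_to_mouse(
    {"is_primary": True, "state": "down", "x": 5, "y": 6})
dropped = promote_to_mouse(
    {"is_primary": False, "state": "move", "x": 1, "y": 1})
```

Tying promotion to the primary pointer prevents a legacy application from receiving a confusing stream of interleaved mouse messages when multiple touch contacts are down at once.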
As mentioned, embodiments and features discussed above can be realized in the form of information stored in volatile or non-volatile computer or device readable media. This is deemed to include at least media such as optical storage (e.g., compact-disk read-only memory (CD-ROM)), magnetic media, flash read-only memory (ROM), or any other means of storing digital information in a way that is convenient for use by a computer, but excluding signals and energy per se. The stored information can be in the form of machine executable instructions (e.g., compiled executable binary code), source code, bytecode, or any other information that can be used to enable or configure computing devices to perform the various embodiments discussed above. This is also deemed to include at least volatile memory such as random-access memory (RAM) and/or virtual memory storing information such as central processing unit (CPU) instructions during execution of a program carrying out an embodiment, as well as non-volatile media storing information that allows a program or executable to be loaded and executed.
Claims
1. A method of managing human input devices for a windowing environment, the method comprising:
- receiving raw inputs from a human input device connected with a computer on which the windowing environment is executing, the windowing environment managing windows of arbitrary applications executing on the computer, the windows displayed on a display of the computer, the human input device being either a first type of human input device or a second type of human input device, wherein the first type comprises a first pointer type and the second type comprises a second pointer type; and
- any given one of the applications receiving from the windowing environment pointer messages corresponding to raw inputs from the human input device, the pointer messages comprising respective input points mapped to the display by the windowing environment, the raw inputs comprising respective points of at least two dimensions, wherein the raw inputs are mapped to the pointer messages by the windowing environment regardless of whether the human input device is the first or second type of human input device.
2. A method according to claim 1, wherein the windowing environment maps raw inputs of any arbitrary type of input pointer device to pointer messages such that an application coded to handle pointer messages is able to interact with any arbitrary type of pointer input device via such pointer messages.
3. A method according to claim 1, wherein the windowing environment provides an application programming interface (API) through which the applications uniformly access different types of human input devices including at least two of: a touch type of input device, a pen type of input device, and a mouse input device.
4. A method according to claim 3, wherein the pointer messages comprise a same message type that is used for any arbitrary type of input pointer device connected with the computer.
5. A method according to claim 1, further comprising passing, by the applications, some of the pointer messages or data therefrom to gesture processing objects executing on the computer and receiving, by the applications, from the gesture processing objects, indicia of gestures recognized by the gesture processing objects according to the pointer messages or the data therefrom.
6. A method according to claim 1, wherein the pointer messages comprise an identifier, and the applications use the identifier to obtain a device identifier of the human input device.
7. A method according to claim 6, wherein the device identifiers are passed by the applications to the windowing environment which responds with properties of or objects representing the first and second types of the human input devices.
8. A method according to claim 1, wherein the windowing environment implements a pointer class and a pointer device class, the method further comprising the given one of the applications receiving third pointer messages, wherein the first, second, and third pointer messages all comprise object instances of the pointer class, the pointer class comprising a position property corresponding to a physical position or location of the corresponding human input device, and a device property comprising an instance of the pointer device class, the instance of the pointer device class having values that are specific to the corresponding human input device.
9. One or more computer-readable storage medium storing information to enable a computing device to perform a process, the computing device comprising storage, a processor, a display, and a first human input device, where the first human input device is in a first category of human input devices, the process comprising:
- executing a windowing environment that manages windows of applications executing on the computing device;
- receiving, by the windowing environment, raw inputs from the first human input device;
- generating, by the windowing environment, input pointers for the raw inputs, respectively, the input pointers comprising instances of an input pointer data type or message type implemented by the windowing environment, the data type or message type used by the windowing environment for arbitrary different types of human input pointer devices; and
- passing the input pointers to the applications.
10. One or more computer-readable storage medium according to claim 9, wherein some of the input pointers correspond to a first contact of the first input device and some of the input pointers correspond to a second contact of the first input device.
11. One or more computer-readable storage medium according to claim 10, the process further comprising a first application receiving some of the input pointers and a second application receiving some of the input pointers, and wherein the first application uses a first received input pointer to obtain a first pointer device object representing the first human input device and uses a second received input pointer to obtain a second pointer device object representing a second human input device.
12. One or more computer-readable storage medium according to claim 9, wherein the first human input device comprises a mouse and a second human input device comprises a touch surface, the process further comprising a first application passing a pointer input corresponding to the mouse to a first gesture detection module and passing a pointer input corresponding to the touch surface to a second gesture detection module, the gesture detection modules recognizing corresponding gestures and providing indicia thereof to the first application.
13. One or more computer-readable storage medium according to claim 9, wherein the applications use a same application programming interface of the windowing environment to obtain the pointer inputs, the process further comprising one of the applications that receives a pointer input: passing the pointer input or information obtained therefrom to the application programming interface and receiving in return information about a corresponding one of the human input devices.
14. One or more computer-readable storage medium according to claim 9, wherein the pointer data type or message type comprises button states, and the method comprises mapping, by the windowing environment, a button of the first human input device to one of the button states and mapping a contact of the second human input device to one of the button states.
15. One or more computer-readable storage medium according to claim 14, wherein pointer inputs derived from contacts with a touch human input device comprise information indicating shapes of the contacts.
16. One or more computer-readable storage medium according to claim 15, wherein pointer inputs derived from a mouse human input device have values in their respective contact properties that indicate whether a button of the mouse was pressed, and wherein pointer inputs derived from a touch human input device have values in their respective contact properties that indicate whether a contact with the touch human input device has a pressed-down state.
17. A computing device comprising:
- a processor, storage, display, a first pointer input device and a second pointer input device of a pointer device type that is different than a pointer device type of the first pointer input device;
- the storage storing a windowing system that manages windows for arbitrary applications on the computing device; and
- the windowing system executing by the processor and, according to inputs generated by user physical manipulations of the first pointer input device and the second pointer input device, generating respective pointer messages, the pointer messages having identical properties having corresponding values that vary according to the corresponding physical manipulations of the first and second pointer input devices, wherein an application receives a first pointer message corresponding to the first pointer input device and a second pointer message corresponding to the second pointer input device.
18. A computing device according to claim 17, wherein the pointer messages have a primary property indicating whether the respective pointer messages are primary pointer messages.
19. A computing device according to claim 18, wherein the primary properties are used to determine which pointer messages are promoted to mouse messages.
20. A computing device according to claim 18, wherein the windowing system automatically determines which pointer messages will be designated as primary pointer messages.
Type: Application
Filed: Sep 13, 2012
Publication Date: Mar 13, 2014
Patent Grant number: 9483155
Inventors: Joyce Wu (Redmond, WA), Krishnan Menon (Redmond, WA), Mariel Young (Seattle, WA), Olumuyiwa Durojaiye (Bothell, WA), Reed Townsend (Kirkland, WA), Todd Torset (Woodinville, WA), Uros Batricevic (Redmond, WA), Vipul Aneja (Redmond, WA)
Application Number: 13/615,272
International Classification: G06F 3/0481 (20060101);