USER INPUT DEVICE AND METHOD
In accordance with one implementation, a method is illustrated that allows a computing device to determine a user input. The method includes detecting one or more user input objects in a 3-dimensional field relative to a 2-dimensional surface. The method also includes determining coordinates for the one or more user input objects relative to the 2-dimensional surface. And, the method further includes determining a user input based on the coordinates.
The present application claims the benefit under 35 U.S.C. §119 of U.S. Provisional Patent Application No. 61/751,958, entitled “Computer Keyboard That Senses Hovering and Multitouch Gestures Through a Matrix of Proximity Sensors,” and filed on Jan. 14, 2013, which is incorporated by reference herein in its entirety for all that it discloses or teaches and for all purposes. The present application also claims the benefit under 35 U.S.C. §119 of U.S. Provisional Patent Application No. 61/812,824, entitled “Method of Distinguishing Events of Touch and Type Input,” and filed on Apr. 17, 2013, which is also incorporated by reference herein in its entirety for all that it discloses or teaches and for all purposes. The present application also claims the benefit under 35 U.S.C. §119 of U.S. Provisional Patent Application No. 61/814,176, entitled “Interface That Computes Two-Dimensional Coordinates From Three-Dimensional Input,” and filed on Apr. 19, 2013, which is also incorporated by reference herein in its entirety for all that it discloses or teaches and for all purposes. The present application also claims the benefit under 35 U.S.C. §119 of U.S. Provisional Patent Application No. 61/828,181, entitled “Interface That Computed Two-Dimensional Coordinates From Three-Dimensional Input,” and filed on May 29, 2013, which is also incorporated by reference herein in its entirety for all that it discloses or teaches and for all purposes. The present application is also a continuation of and claims the benefit of U.S. Non-Provisional patent application Ser. No. 14/153,793, entitled “User Input Determination,” and filed on Jan. 13, 2014, which is also incorporated by reference herein in its entirety for all that it discloses or teaches and for all purposes.
BACKGROUND

Over the years, people have developed different ways of communicating user input commands to computing devices, such as personal computers. Examples of some devices that have evolved over the years are keyboards, mouse pads, and touch pads, as well as software that converts spoken commands into input commands. Nevertheless, there still remains room for improvement in the way that users communicate with and efficiently utilize computing devices.
SUMMARY

In accordance with one implementation, a method is illustrated that allows a computing device to determine a user input. The method includes detecting one or more user input objects in a 3-dimensional field relative to a 2-dimensional surface. The method also includes determining coordinates for the one or more user input objects relative to the 2-dimensional surface. And, the method further includes determining a user input based on the coordinates.
Another implementation discloses an apparatus that determines a user input. The apparatus includes a 2-dimensional surface and an object detection circuit configured to detect one or more user input objects in a 3-dimensional field relative to the 2-dimensional surface. In addition, the object detection circuit is configured to determine coordinates for the one or more user input objects relative to the 2-dimensional surface. Also included is a user input detector configured to determine a user input based on the coordinates.
In another implementation, one or more computer readable media are provided. The computer readable media encode computer-executable instructions for executing on a computer system a computer process. The computer process can include: detecting one or more user input objects in a 3-dimensional field relative to a 2-dimensional surface; determining coordinates for the one or more user input objects relative to the 2-dimensional surface; and determining a user input based on the coordinates.
In one implementation, a device is provided that includes one or more keys; one or more capacitive sensors disposed in the one or more keys; and an object detector configured to detect one or more user input objects in a 3-dimensional field above the one or more keys.
In another implementation, a method is provided that includes receiving sensor data indicative of a touch event on a keyboard; waiting for a predetermined period of time to determine if key press data is received; and signaling a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time. The method can be used, for example, to discriminate between a touch event and a type event on a key surface of a keyed device.
In still another implementation, an apparatus is provided that includes a device that includes one or more keys; a plurality of sensors disposed in the one or more keys; and a user input detector configured to receive sensor data indicative of a touch event on the keyed device. The user input detector is further configured to wait for a predetermined period of time to determine if key press data is received. The user input detector signals a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time.
In yet another implementation, one or more computer-readable storage media are provided that encode computer-executable instructions for executing on a computer system a computer process. The process includes receiving sensor data indicative of a touch event on a key; waiting for a predetermined period of time to determine if key press data is received; and signaling a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. Other features, details, and utilities of the claimed subject matter will be apparent from the following Detailed Description of various implementations as further illustrated in the accompanying drawings.
Much of the interaction that a computer user has with his or her computer is by typing or keying in information and commands via a keyboard. As a result, the user's hands are often positioned in a typing position. It can be inefficient for the user to have to move a hand to a mouse or touch pad or even move a finger to a touch sensitive surface of a display in order to interface with a graphical user interface on a display.
In accordance with one implementation, a user can efficiently interface with a graphical user interface by using the space above a surface, such as the space above a keyboard surface or above another surface, to signal an input to the graphical user interface. One or more sensors in proximity to the surface detect the position of a user interface object(s) (e.g., a user's finger(s), a stylus, a pen, or some other pointing device) relative to the surface. The positions of the user's fingertip(s), for example, are then utilized to determine an input to the computer. Moreover, the position of the fingertip(s) can be translated from their position(s) relative to the surface to position(s) on the display. Thus, for example, a fingertip detected above a keyboard surface can be shown as a circle on a display. As the user moves his or her fingertip above the surface, the movement is detected and the circle is displayed to move in a corresponding manner on the display surface. As the user moves the finger down to the surface, an actual input similar to a mouse click can be generated as an input.
The 2-dimensional surface corresponds with the 3-dimensional field. For example, the 2-dimensional surface can form a reference surface in an x-y plane at the bottom of the 3-dimensional field.
A variety of object detection circuits can be used to detect the presence of an input object. An object detection circuit can include one or more sensors, an object detector, a coordinate calculator, a processor, and a memory, for example.
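For illustration only, the components named above might be composed as in the following Python sketch; the class and method names are hypothetical and are not drawn from the description itself.

```python
# Illustrative sketch only: the class and method names are hypothetical and
# simply show how sensors, an object detector, and a coordinate calculator
# might be wired together into an object detection circuit.

class ObjectDetectionCircuit:
    def __init__(self, sensors, object_detector, coordinate_calculator):
        self.sensors = sensors                                # e.g., capacitive or camera sensors
        self.object_detector = object_detector                # finds candidate user input objects
        self.coordinate_calculator = coordinate_calculator    # converts detections to x, y, z

    def read(self):
        """Return (x, y, z) coordinates for each detected user input object."""
        raw = [sensor.sample() for sensor in self.sensors]    # gather one frame of sensor data
        detections = self.object_detector.detect(raw)         # locate input objects in the data
        return [self.coordinate_calculator.to_xyz(d) for d in detections]
```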
A computing device 120 receives the coordinate data determined for the user input objects; a mapper in the computing device translates those coordinates to corresponding positions on a display.
For example, display elements can be rendered by scaling the width and height of a rectangular plane of the 3-dimensional field to have the same width-to-height ratio as a display. In this manner, the coordinates of user input objects can be translated to display coordinates for a display element.
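As a hedged example of this scaling, the following Python sketch maps a position over an assumed 320 mm x 180 mm reference plane (the same 16:9 width-to-height ratio as the assumed 1920 x 1080 display) to display coordinates; the dimensions are illustrative, not values taken from the description.

```python
# A minimal sketch of translating a user input object's position over the
# reference surface into display coordinates, preserving the display's
# width-to-height ratio as described above.

def map_to_display(x, y, field_width, field_height, display_width, display_height):
    """Map (x, y) on the reference surface to pixel coordinates on the display."""
    # Normalize the position within the reference plane to the range [0, 1].
    u = x / field_width
    v = y / field_height
    # Scale the normalized position to the display resolution.
    return u * display_width, v * display_height

# Example: a fingertip at (160 mm, 90 mm) over a 320 mm x 180 mm reference plane
# maps to (960, 540) on a 1920 x 1080 display.
print(map_to_display(160, 90, 320, 180, 1920, 1080))
```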
A user input detector 126 determines what to do with the data produced by the mapper.
The user input detector can also be used to detect, for example, user commands, gestures, touch events, and type events, as well as multi-finger or single-finger modes. For example, the user input detector can analyze a sequence of fingertip coordinate data and determine a gesture that has been performed by the user. If the gesture matches a predefined gesture, the user input detector can signal a command that corresponds with that predefined gesture. As another example, the user input detector can monitor the coordinate data associated with a fingertip. If the fingertip is placed at the reference surface position for a predefined period of time, the user input detector can signal that a touch event has occurred. Moreover, the user input detector can also signal a change in modes of operation (e.g., from fingertip input to keyboard input) when a fingertip is placed against the reference surface for an extended period of time.
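One simple way such gesture matching might work is sketched below; the thresholds and gesture names are assumptions made for illustration and are not specified by the description.

```python
# Hedged sketch: one way a user input detector might match a sequence of
# fingertip coordinates against predefined gestures (here, four swipe gestures).

def classify_gesture(points, min_travel=50.0):
    """Classify a fingertip path (list of (x, y) samples) as a left/right/up/down swipe."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                      # too little movement to count as a gesture
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_gesture([(0, 0), (20, 5), (80, 10)]))   # swipe_right
```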
Such a touch event can be used by a user to interact with a graphical user interface. For example, if a graphical user interface is displaying a list of songs for selection, the user can move his or her fingertip relative to a reference surface, e.g., a keyboard, while watching the corresponding display element on the display. As the display element moves over the desired song, the user can touch the reference surface, e.g., the keyboard, to perform a touch event that selects the song.
A touch event is triggered when a user's fingertip(s) (or other input object(s)) are placed in an interactive area. The interactive area could be the surface of a computer keyboard or even just a predetermined level of a 3-dimensional field.
A key-press event is triggered when a key is pressed down. If a key-press event is triggered within a short period of time after the triggering of a touch event, type mode can be initiated. Otherwise, touch mode is maintained.
While in touch mode, pressing down of keys by a user need not trigger a key-press event. Thus, accidental pressing down of keys during touch mode for a short period of time will not terminate touch mode and initiate type mode. Rather, a key-press event must last for a sufficient pre-determined duration and involve a sufficient depression distance to make it clear that a key press is intended.
In accordance with one implementation, a method of discriminating between a touch event and a type event can be utilized. The method is initiated by operation 502 and includes reading an input, as shown by operation 504. A decision operation 506 determines whether a touch input has been detected. If the input that has been read does not match a touch input, then a new input reading is made. If the input reading is determined to be a touch input, however, then a timer can be initiated in operation 508. The timer can run for a few milliseconds. Decision operation 510 queries a routine to determine whether a key press event is detected. Decision operation 510 repeats this routine until an interval count is reached. In operation 516, keyboard input(s) are read. In decision operation 518, if no key press is detected by the keyboard input, then the process can be repeated. If a key press is detected, then the interval timer can be stopped as shown by operation 520 and a keyboard event can be signaled as shown by operation 522. Thus, the keyboard input overrides the touch event. If decision operation 510 expires due to elapsed time or an elapsed number of allocated iterations, the timer is stopped as shown by operation 512 and a touch event is signaled as shown by operation 514. After a touch event or type event is signaled, the process can begin again from operation 504. This implementation allows the same keypad, keyboard, or other device with key(s) to serve as both a touch and a type interface, because the system allows a type input to supersede a touch input when a type input is detected.
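A minimal Python sketch of this discrimination logic is shown below. The polling helpers read_touch_input and read_key_press and the timing constants are assumptions; only the overall structure (start a timer on a touch reading, let a key press override it, otherwise signal a touch) follows the operations described above.

```python
# Hedged sketch of the touch/type discrimination described above; helper
# functions and timing values are illustrative assumptions.

import time

TOUCH_WINDOW_S = 0.005    # assumed interval of a few milliseconds, per the description
POLL_INTERVAL_S = 0.001

def discriminate(read_touch_input, read_key_press):
    """Return 'key_press' or 'touch' once an input event has been resolved."""
    while True:
        if not read_touch_input():                      # operations 504/506: read input, check for a touch
            time.sleep(POLL_INTERVAL_S)
            continue
        deadline = time.monotonic() + TOUCH_WINDOW_S    # operation 508: start the interval timer
        while time.monotonic() < deadline:              # operation 510: repeat until the interval expires
            if read_key_press():                        # operations 516/518: read keyboard input
                return "key_press"                      # operations 520/522: key press overrides the touch
            time.sleep(POLL_INTERVAL_S)
        return "touch"                                  # operations 512/514: no key press, signal a touch
```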
It should also be noted that a change in mode can be implemented by a keyboard shortcut or by a gesture that is either defined by the user or pre-defined by the system.
Another change in mode that can be implemented is a change from single-finger mode to multi-finger mode. Performing a touch operation with one finger allows a system to determine single-touch gestures, e.g., click, swipe, and drag, that are to be utilized. Performing a touch operation with multiple fingers allows a user to perform multi-touch gestures, such as pinch, multiple-fingers-swipe, and pivotal rotation. Thus, in one implementation, the system provides both a multi-finger mode and a single-finger mode. Multi-finger mode is initiated when coordinates of more than one fingertip (or other user input object) are detected. Single-finger mode is initiated when coordinates of just a single fingertip (or other user input object) are detected. While in multi-finger mode, the hover feature can be disabled. The hover feature can be maintained when the system operates in single-finger mode. Moreover, while in multi-finger mode, a single touch event can be dispatched when a single finger is placed in contact with a touch surface, while a multi-finger touch event is dispatched when multiple fingers are placed in contact with a touch surface.
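The mode selection described above might be expressed, for illustration, as the following sketch, where the hover feature is enabled only in single-finger mode; the function and mode names are assumptions.

```python
# Illustrative sketch of selecting between single-finger and multi-finger mode
# based on how many fingertip coordinates were detected.

def select_mode(fingertips):
    """Return (mode, hover_enabled) for a list of detected fingertip coordinates."""
    if len(fingertips) > 1:
        return "multi_finger", False     # hover disabled in multi-finger mode
    if len(fingertips) == 1:
        return "single_finger", True     # hover maintained in single-finger mode
    return "idle", False

print(select_mode([(10, 20, 5), (40, 22, 6)]))   # ('multi_finger', False)
```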
In order to track the fingertip positions in three dimensions, a variety of techniques can be used with a camera sensor. For example, stereoscopic disparity mapping, time-of-flight depth mapping, and structured-light depth mapping may be used.
For example, with stereoscopic cameras, a determination of the 3D structure of a scene or 3D coordinates of objects in the scene can be made using two or more images of the 3D scene, each acquired from a different viewpoint in space. The images are simultaneously analyzed to calculate disparity (distance between corresponding points when the two images are superimposed) either for every point in an image (a disparity map) or for specific points (e.g., fingertips). In addition to the x and y coordinates, which are readily available from the images, z (or depth) can be calculated by using disparity as a measure of distance away from the cameras (the further an object, the smaller the disparity).
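For illustration, the depth calculation can be sketched with the standard pinhole-camera relationship z = f * B / d, where f is the focal length, B is the camera baseline, and d is the disparity; the numeric values below are assumptions, not parameters from the description.

```python
# A hedged sketch of recovering depth from stereo disparity: the farther the
# fingertip, the smaller the disparity and the larger the computed depth.

def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.06):
    """Estimate distance (in meters) of a point from its disparity in pixels."""
    if disparity_px <= 0:
        return float("inf")              # zero disparity corresponds to a point at infinity
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(84.0))        # ~0.5 m for the assumed camera parameters
```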
It is not necessary that a keyboard be used as a reference surface.
Other types of sensors besides cameras can be used as well to detect user input objects.
Capacitive sensors can also be used. In one implementation, capacitive sensors can be used within keys of a keyboard. The capacitive sensors can be placed in all keys or a selected group of the keys. As user input objects move above the capacitive sensors, sensor data can be collected. This sensor data may then be analyzed, for example, by a coordinate calculator that determines the coordinates of a user input object(s) relative to the keys.
In one implementation, capacitive sensors can be disposed only in keys that form a layout that is proportional to a display surface. In another implementation, the capacitive sensors can be disposed in a layout that is not proportional to the display surface. The sensors can be used to gather data that is used to compute coordinates of user input items present in proximity to the keyboard.
In another implementation, a capacitive sensor grid layout can be disposed underneath a touch pad. The touch pad can be separate from or integrated with a computer or keyboard. Sensors underneath the touch pad can be used to sense proximity of a user input object relative to the sensor grid. The data may then be used to calculate coordinate data for the user input object(s). Capacitive sensors are available, for example, from Cypress Semiconductor of San Jose, Calif.
A processor, memory, sensor, object detector, and coordinate calculator may be integrated with a keyboard. In such an implementation, the keyboard itself can generate coordinate data for a user input object(s).
Other sensors may be used as well. For example, ultrasonic sensors can be employed. Moreover, micro-electro-mechanical system (MEMS) devices can be used to fabricate sensors that can be disposed in keys or other elements. Still other types of sensors can also be used.
When a keyboard is utilized as a sensor grid, not all of the keys need to contain sensors. However, the resolution of a sensor grid matrix implemented via sensors disposed on a keyboard can be increased by increasing the number of sensors of the sensor grid matrix. Since keys on a standard QWERTY keyboard are not arranged in strict columns so as to correspond with the shape of a display device, the system can be configured to interpolate the interactive area delineated by the sensors to the dimensions of the display device. As one example, the interactive area can be defined to be the largest rectangular region that fits within the bounds of the block of sensor keys—assuming that the display screen is also rectangular.
The dataset captured by sensor(s) can be presented in a graphical format that resembles a traditional heat map. This heat map allows detection of multiple inputs simultaneously. For example, a heat map shows an image of a top-down view of proximity sensor locations within an interactive area defined by a sensor grid. Locations closest to the user input objects, e.g., the user's fingertips, show the reddest hues, for example. Other locations show hues that fade into bluer colors where the sensor data is less pronounced. The positions of multiple fingers can be computed either directly or from raw data using simple statistical techniques. The positions can also be computed from the heat map using computer vision techniques, such as blob detection. These techniques can be implemented by the coordinate calculator of the system.
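As a sketch of one such simple statistical technique, the following example computes a fingertip position as the intensity-weighted centroid of a small heat map; the grid values are made-up sample data.

```python
# Illustrative sketch: locating a fingertip from proximity-sensor readings with
# a simple intensity-weighted centroid over a 2-D heat map.

def weighted_centroid(heat_map):
    """Return the (x, y) centroid of a 2-D grid of sensor intensities."""
    total = sum(sum(row) for row in heat_map)
    if total == 0:
        return None
    cx = sum(x * v for row in heat_map for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(heat_map) for v in row) / total
    return cx, cy

sample = [
    [0, 1, 2, 1, 0],
    [1, 3, 8, 3, 1],    # strongest readings near column 2, row 1
    [0, 1, 2, 1, 0],
]
print(weighted_centroid(sample))   # (2.0, 1.0)
```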
In accordance with one implementation, a system is calibrated by a particular user. In this example implementation, the user places his or her fingertips or other objects on a specified number of points in a 3-dimensional field. For example, the user can touch four points on a reference surface. According to the user's preference, parameters including but not limited to sizes, shapes, colors, and transparency levels of display elements, such as touch cursors, can be selected. During use of the system, these parameters are used to indicate proximities and/or positions of fingertips or objects relative to a reference surface.
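A minimal sketch of such a calibration, assuming the four touched points mark the corners of a rectangular interactive area, is shown below; a real implementation might instead fit a full perspective transform.

```python
# Hedged sketch: derive a mapping from raw sensor coordinates to a normalized
# 0..1 range from four corner touches supplied by the user during calibration.

def calibrate(corner_points):
    """corner_points: four (x, y) touches at the corners of the interactive area."""
    xs = [p[0] for p in corner_points]
    ys = [p[1] for p in corner_points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)

    def to_normalized(x, y):
        # Clamp to [0, 1] so points slightly outside the calibrated area stay on screen.
        u = min(max((x - x0) / (x1 - x0), 0.0), 1.0)
        v = min(max((y - y0) / (y1 - y0), 0.0), 1.0)
        return u, v

    return to_normalized

to_norm = calibrate([(12, 8), (212, 9), (13, 108), (210, 110)])
print(to_norm(112, 58))   # roughly the middle of the calibrated area
```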
While a diameter of a circle is used as the varying parameter of the display element in the examples described here, other parameters of the display element, such as its shape, color, or transparency level, can be varied instead.
Another implementation of determining a user input is illustrated by the following sequence of operations.
In operation 1406, the coordinates corresponding to the position of at least one of the user input objects are mapped to a display surface. For example, x, y, and z coordinates of a user input object can be mapped to a position for display on a display screen. Moreover, the z coordinate can be used to select the size of a display element to use at the calculated position on the display.
Once coordinates are determined for a user input object, various modes of operation can be determined. For example, operation 1408 shows that the coordinates can be used to determine whether a hover event is taking place. A hover event would be determined if the user interface object is located above a reference surface but not touching the reference surface.
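For illustration, hover detection from the z coordinate might look like the following sketch; the height thresholds are assumptions chosen only to make the example concrete.

```python
# Hedged sketch of classifying a hover event: the object is above the reference
# surface but not touching it. Threshold values are illustrative assumptions.

TOUCH_Z_MM = 2.0      # at or below this height, treat the object as touching
HOVER_Z_MM = 80.0     # above this height, the object is ignored

def classify_height(z_mm):
    """Return 'touch', 'hover', or None for an input object at height z_mm."""
    if z_mm <= TOUCH_Z_MM:
        return "touch"
    if z_mm <= HOVER_Z_MM:
        return "hover"
    return None          # too far from the reference surface to be an input

print(classify_height(25.0))   # hover
```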
When one or more user interface objects are detected, the system can focus on a particular user input object. For example, when multiple fingertips are detected, the system can disregard some of the user input objects and focus on just one. This is illustrated by operation 1410.
Operation 1412 shows that a touch event can also be detected. A touch event can be detected when a user input object is detected to be at a touch surface for a predetermined period of time. The touch surface can coincide with a physical surface, such as a touchpad or keypad. Alternatively, if no physical input device is available, the touch event can be determined by the user input object being detected at an inert surface, such as a table top. The touch event can even be determined by the user input object being present at a predetermined position in space for a predetermined amount of time.
When a touch event is detected, the system can turn off hover mode and input a command indicated by the touch event. This feature is illustrated by operation 1414.
The coordinates of a user input object can also be used to signal a user input. For example, if the system is in hover mode, a display element corresponding to a user input object can be displayed on the display surface. This is illustrated by operation 1418.
In operation 1420, a characteristic of a display element can be varied based on the proximity of a user input object to the 2-dimensional surface. For example, operation 1422 shows that the diameter of a circle used as the display element can be varied depending on how proximate the user input object is to the 2-dimensional surface.
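A hedged sketch of operation 1422 follows; the specific diameters and height range are assumptions, and the circle here grows as the object moves away from the surface, though the opposite convention could equally be used.

```python
# Illustrative sketch: vary the diameter of the displayed circle with the user
# input object's height above the 2-dimensional surface.

def circle_diameter(z_mm, max_height_mm=80.0, min_diameter_px=12, max_diameter_px=60):
    """Return a circle diameter that grows as the object moves away from the surface."""
    ratio = min(max(z_mm / max_height_mm, 0.0), 1.0)
    return round(min_diameter_px + ratio * (max_diameter_px - min_diameter_px))

print(circle_diameter(0.0))    # 12 px when touching the surface
print(circle_diameter(80.0))   # 60 px at the top of the assumed hover range
```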
One aspect of the user input determination system is that user inputs can be communicated without a user having to touch a device. Operation 1424 illustrates that a command can be determined from a user without the user touching a keyboard, a mouse, a touchpad, a display, or other physical device in order to issue a command. For example, 3-dimensional gestures can be determined from the coordinates that a user input object moves through during a predetermined period of time. This is illustrated by operation 1426.
Many other devices or subsystems (not shown) can be connected in a similar manner. Also, it is not necessary for all of the devices shown to be present in order to practice the implementations described herein.
In the above description, for the purpose of explanation, numerous specific details are set forth in order to provide a thorough understanding of the implementations described. It will be apparent, however, to one skilled in the art that these implementations can be practiced without some of these specific details. For example, while various features are ascribed to particular implementations, it should be appreciated that the features described with respect to one implementation can be incorporated with other implementations as well. By the same token, however, no single feature or features of any described implementation should be considered essential, as other implementations can omit such features.
In the interest of clarity, not all of the routine functions of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that those specific goals will vary from one implementation to another and from one developer to another.
According to one implementation, the components, process steps, and/or data structures disclosed herein can be implemented using various types of operating systems (OS), computing platforms, firmware, computer programs, computer languages, and/or general-purpose machines. The method can be run as a programmed process running on processing circuitry. The processing circuitry can take the form of numerous combinations of processors and operating systems, connections and networks, data stores, or a stand-alone device. The process can be implemented as instructions executed by such hardware, hardware alone, or a combination thereof. The software can be stored on a program storage device readable by a machine.
According to one implementation, the components, processes and/or data structures can be implemented using machine language, assembler, C or C++, Java and/or other high level language programs running on a data processing computer such as a personal computer, workstation computer, mainframe computer, or high performance server running an OS such as Solaris® available from Sun Microsystems, Inc. of Santa Clara, Calif., Windows 8, Windows 7, Windows Vista™, Windows NT®, Windows XP PRO, and Windows® 2000, available from Microsoft Corporation of Redmond, Wash., Apple OS X-based systems, available from Apple Inc. of Cupertino, Calif., BlackBerry OS, available from Blackberry Inc. of Waterloo, Ontario, Android, available from Google Inc. of Mountain View, Calif. or various versions of the Unix operating system such as Linux available from a number of vendors. The method can also be implemented on a multiple-processor system, or in a computing environment including various peripherals such as input devices, output devices, displays, pointing devices, memories, storage devices, media interfaces for transferring data to and from the processor(s), and the like. In addition, such a computer system or computing environment can be networked locally, or over the Internet or other networks. Different implementations can be used and can include other types of operating systems, computing platforms, computer programs, firmware, computer languages and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, can also be used without departing from the scope and spirit of the inventive concepts disclosed herein.
The above specification, examples, and data provide a complete description of the structure and use of exemplary implementations. Since many implementations can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended. Furthermore, structural features of the different implementations can be combined in yet another implementation without departing from the recited claims.
Claims
1. An apparatus comprising:
- a device comprising one or more keys;
- one or more capacitive sensors disposed in the one or more keys;
- an object detector configured to detect one or more user input objects in a 3-dimensional field above the one or more keys.
2. The apparatus of claim 1 wherein the object detector is further configured to determine coordinates of one or more input objects relative to the one or more keys.
3. The apparatus of claim 2 and further comprising:
- a user input detector configured to determine a user input based on the coordinates.
4. The apparatus of claim 3 and further comprising:
- a processor.
5. The apparatus of claim 2 wherein the device comprising one or more keys, the one or more capacitive sensors, the object detector, and the processor are integrated as a single device.
6. The apparatus of claim 1 wherein the object detector is configured to track the positions of one or more user input objects in the 3-dimensional field.
7. A method of differentiating between a touch event and a type event, the method comprising:
- receiving sensor data indicative of a touch event on a key;
- waiting for a predetermined period of time to determine if key press data is received;
- signaling a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time.
8. The method of claim 7 and further comprising:
- placing a device in type mode in response to signaling of the key press event.
9. The method of claim 7 and further comprising:
- placing a device in touch mode in response to signaling of the touch event.
10. An apparatus comprising:
- a device comprising one or more keys;
- one or more sensors disposed in the one or more keys;
- a user input detector configured to receive sensor data indicative of a touch event on the keyed device;
- wherein the user input detector is further configured to wait for a predetermined period of time to determine if key press data is received; and
- wherein the user input detector is configured to signal a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time.
11. The apparatus of claim 10 and further comprising a user input detector configured to place a device in type mode in response to signaling of the key press event.
12. The apparatus of claim 10 and further comprising a user input detector configured to place a device in touch mode in response to signaling of the touch event.
13. One or more computer-readable storage media encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
- receiving sensor data indicative of a touch event on a key;
- waiting for a predetermined period of time to determine if key press data is received;
- signaling a touch event if no key press data is received during the predetermined period of time while alternatively signaling a key press event if key press data is received during the predetermined period of time.
14. The one or more computer-readable storage media of claim 13 wherein the computer process further comprises:
- placing a device in type mode in response to signaling of the key press event.
15. The one or more computer-readable storage media of claim 13 wherein the computer process further comprises:
- placing a device in touch mode in response to signaling of the touch event.
Type: Application
Filed: Mar 14, 2014
Publication Date: Oct 23, 2014
Inventors: Lai Xue (Shanghai), Darren Lim (Singapore)
Application Number: 14/213,796
International Classification: G06F 3/0481 (20060101); G06F 3/0488 (20060101);