Simple Motion Based Input System
In one embodiment, a programmable device embodies a program of executable instructions to perform steps including: assigning multiple tasks or symbols to each of a number of motion groups; segmenting motion data from one or more sensors; matching the segments to motion groups; and composing and then selecting tasks or symbol sequences from the tasks and/or symbols assigned to the matched motion groups.
This application claims the benefit of provisional patent application Ser. No. 60/920,525, filed 2007 Mar. 28 by the present inventor.
FEDERALLY SPONSORED RESEARCH
Not Applicable
SEQUENCE LISTING FOR PROGRAM
Not Applicable
BACKGROUND

1. Field of Invention
This invention relates generally to the field of human interfaces for instruments and devices. More particularly, certain embodiments consistent with this invention relate to systems and methods for entering text, command and other messages.
2. Prior Art
A desktop computing system usually offers two basic input devices: the keyboard and the mouse. Text and command input is provided through the keyboard, while pointing (moving the pointer, selecting) as well as managing UI components (resizing windows, scrolling, menu selection, etc.) is executed with the mouse. There is also some redundancy, as the keyboard can control navigation with arrow keys and UI components with shortcut keys. However, due to space limitations and mobility requirements, the desktop input method and user experience are difficult to duplicate on off-desktop devices.
Handheld devices such as PDAs and two-way pagers primarily use on-screen “soft” keyboards, handwriting recognition, tiny physical keyboards used with the thumbs, or special gestural alphabets such as Graffiti from Palm, Inc. or Jot from Communication Intelligence Corporation (CIC). Mobile phones primarily use multiple taps on the standard 12-key number pad, possibly combined with a prediction technique such as T9. Game controllers primarily use a joystick to iterate through characters, or other methods to select letters from a keyboard displayed on the television screen.
On-screen “soft” keyboards are generally small and the keys can be difficult to hit. Even at reduced size, they consume precious screen space. Tapping on a flat screen gives very little tactile feedback. Some on-screen keyboards, such as Messagease and T-Cube, let users use sliding motions instead of taps for letters. A sliding motion gives the user more tactile feedback on a touch-sensitive surface. However, as with other on-screen keyboards, users are bound by the precise layout of the on-screen keys: a finger or stylus must be placed accurately in a fairly small starting cell before sliding. This type of on-screen keyboard, like the more conventional tap-only on-screen keyboards, requires the user to focus attention on the keyboard rather than on the output, resulting in errors and slow-downs. That is particularly problematic in ‘heads-up’ writing situations, such as transcribing text or taking notes while visually observing events. For such situations, it is important to achieve as much scale and location independence as possible for ease and speed of input.
Some PDA devices use alphabet-character-based handwriting recognition, such as Graffiti and Jot. The alphabet used can be either natural or artificially modified for reliable recognition [Goldberg, D., & Richardson, C. (1993). Touch-typing with a stylus. Proc. INTERCHI, ACM Conference on Human Factors in Computing Systems, 80-87.]. EdgeWrite defines an alphabet around the edge of a fixture to help users with motor impairments [Wobbrock, J. O., Myers, B. A., & Kembel, J. (2003). A High-Accuracy Stylus Text Entry Method. Proc. ACM Symposium on User Interface Software and Technology, UIST'03 (CHI Letters), 61-70.]. Such systems take a small amount of space. The fundamental weakness of the handwriting-based approach, however, is its limited speed, typically estimated at around 15 wpm [Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, N.J.: Lawrence Erlbaum Associates Publishers.]. For Graffiti and Jot, tests have shown 4.3-7.7 wpm for new users and 14-18 wpm for more advanced users [Sears, A., & Arora, R. (2002). Data entry for mobile devices: An empirical comparison of novice performance with Jot and Graffiti. Interacting with Computers, 14(5), 413-433.]. Also, these writing systems generally take a lot of practice to achieve sufficient accuracy.
In contrast to Unistrokes, continuous-gesture techniques do not require separation between characters, which can improve the speed of input. One example is described in U.S. Pat. No. 6,031,525, February 2000, Perlin. A more recent development is described in U.S. Pat. No. 7,251,367 B2, July 2007. These methods use as much screen space as on-screen keyboards and require either constant visual attention or extensive training.
SUMMARY

In accordance with one embodiment, a programmable device embodies a program of executable instructions to perform steps including: assigning multiple tasks or symbols to each of a number of motion groups; segmenting motion data from one or more sensors; matching the segments to motion groups; and composing and selecting tasks or symbol sequences from the tasks and/or symbols assigned to the matched motion groups.
The various features of the present invention and the manner of attaining them will be described in greater detail with reference to the following description, claims, and drawings, wherein reference numerals are reused, where appropriate, to indicate a correspondence between the referenced items.
The table 201 in
The slide movement can be detected with various types of switches and/or sensors.
Such a linear touch sensor can detect the position of the contact point (by finger, stylus, or other object) on the sensor line. Such sensors are generally thin and almost flat. Also, since such a sensor does not use a knob, it is no longer necessary to reset a knob position. That enables such a sensor to produce one signal for movement toward one end and a different signal for movement toward the other end.
Similar to device in
The symbol tables 502 inform the user of the symbol mapping, which is the same as the first four rows in
In a regular virtual keyboard, each symbol cell has to be large enough to allow a finger or stylus to tap it accurately. In this device, such constraints become unnecessary, since the input area is independent of the symbol tables: the user can use the entire screen for slide/stroke input. Of course, for convenience, especially for novice users, the cells of the symbol tables can be made tappable the same way as in a regular virtual keyboard. Thus, tapping on cell ‘a’ would input the letter ‘a’. Tapping on the center cell would shift the input mode, and the symbol tables would update accordingly.
The center mark 503, a ‘+’ shaped sign at the center of the display, marks the boundaries of the 4 sections. A user can use it as a guide to place slides/strokes in the intended sections. Both the symbol tables 502 and the center mark 503 can be displayed unobtrusively as a semi-transparent overlay or underlay. Moreover, both can be optional. For experienced users who have memorized the symbol tables, it is no longer necessary to display the tables 502, thus freeing up more space for other content. Since the tables have fewer cells than a multiplication table, it is reasonable to expect that a sizable portion of users can do regular text input without the symbol tables. The user can also shrink, expand, or minimize the symbol tables 502 by moving their borders the same way one resizes windows in a graphical user interface such as Windows XP.
In the input systems presented so far, each letter is mapped to a slide movement, graphically a straight line segment. With such a mapping for its letters, a word can then be mapped to an ordered list of slides or line segments. By joining the mapped line segments end to end, a word can be mapped to a polyline and then to a continuous stroke. This mapping scheme leads naturally to shorthands for words and word fragments.
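The letter-to-polyline construction described above can be sketched as follows. The letter-to-direction assignments here are hypothetical placeholders, not the actual assignments of table 201.

```python
import math

# Hypothetical letter-to-direction table: each letter maps to a vector
# pointing in one of the eight compass directions (placeholder values,
# not the actual assignments of table 201).
LETTER_DIRECTIONS = {
    "c": (0.0, 1.0),   # north
    "a": (1.0, 0.0),   # east
    "t": (1.0, 1.0),   # northeast
}

def word_to_polyline(word, start=(0.0, 0.0), step=1.0):
    """Join the per-letter slide segments end to end into a polyline."""
    points = [start]
    x, y = start
    for letter in word:
        dx, dy = LETTER_DIRECTIONS[letter]
        # Normalize so every segment has the same length.
        norm = math.hypot(dx, dy)
        x, y = x + step * dx / norm, y + step * dy / norm
        points.append((x, y))
    return points

print(word_to_polyline("cat"))
```

The resulting vertex list is the polyline for the word; smoothing it into a continuous stroke is a rendering detail left out of this sketch.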
Without section or other constraints, different letters can be mapped to the same slide or line segment. Therefore, different words may match the same stroke. To resolve such ambiguity, the system lists matching (sometimes near-matching) words, allowing the user to select the intended word by tapping on it or selecting it through other means. The example stroke trace 504 in
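The ambiguity-resolution step can be sketched as a dictionary lookup keyed by a stroke's direction sequence. The letter-to-direction table, the toy dictionary, and the frequency values below are all hypothetical placeholders.

```python
# Hypothetical mapping of letters to compass directions; several letters
# may share one direction, so distinct words can produce the same stroke.
LETTER_TO_DIRECTION = {"a": "E", "o": "E", "t": "NE", "n": "NE", "d": "S"}

# Toy dictionary with relative frequencies (placeholder values).
WORD_FREQUENCY = {"at": 500, "on": 300, "ad": 200}

def stroke_signature(word):
    """Direction sequence a word's stroke would produce."""
    return tuple(LETTER_TO_DIRECTION[c] for c in word)

def candidate_words(signature):
    """List dictionary words whose stroke matches, most frequent first."""
    matches = [w for w in WORD_FREQUENCY if stroke_signature(w) == signature]
    return sorted(matches, key=WORD_FREQUENCY.get, reverse=True)

# "at" and "on" share the signature ('E', 'NE'); both are offered
# and the user picks the intended one.
print(candidate_words(("E", "NE")))
```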
On multi-touch capable touchpads and touchscreens, which can track multiple contact points, a user can move with different numbers of fingers (or styli) to unambiguously select intended symbols, thus achieving location independence. A user can also differentiate intent by moving fingers (or styli) in different formations.
The device 501 illustrated in
When the points in a stroke fit a straight line statistically, the stroke is classified as a Slide at 802. At 803, the location of the center and the direction of the slide are determined. Such properties can be calculated with standard statistical methods, such as linear regression. Optionally, those properties can be calculated at 802 when the data is tested for straightness. Based on the nearest cardinal or ordinal direction, the slide can then be classified into one of the eight directional groups, namely northwest, north, northeast, west, east, southwest, south, and southeast. The input area can be divided into four sections, and the slide is associated with one of the sections based on the location of its center. In 804, a letter/symbol or command/task is selected based on the classification and properties of the slide. As indicated by the columns in the table 201 in
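The direction and section classification at 802-804 can be sketched as below. For simplicity, this sketch snaps the endpoint-to-endpoint heading to the nearest compass direction rather than fitting a regression line, and it assumes a y-up coordinate system spanning the input area.

```python
import math

# Eight compass groups, indexed counterclockwise from east in 45-degree steps.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_slide(points):
    """Classify a straight stroke by its nearest compass direction and
    return its center, which selects one of the four sections.

    points: list of (x, y) samples, y growing upward (an assumption of
    this sketch; y-down screen coordinates would mirror N and S).
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    angle = math.atan2(y1 - y0, x1 - x0)  # -pi..pi, 0 = east
    # Snap to the nearest of the eight 45-degree compass directions.
    index = round(angle / (math.pi / 4)) % 8
    # Center of the slide, used to associate it with a section.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return DIRECTIONS[index], (cx, cy)

def section_of(center, width, height):
    """Map the slide center to one of four sections (quadrants 0-3)."""
    cx, cy = center
    col = 0 if cx < width / 2 else 1
    row = 0 if cy < height / 2 else 1
    return row * 2 + col

direction, center = classify_slide([(0, 0), (1, 1), (2, 2)])
print(direction, center, section_of(center, 4, 4))
```

The direction group and section number together index a selection table analogous to table 201.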
For devices capable of distinguishing fingers, finger (or object) identity can be used in place of section association. In that case, the symbol(s) or task(s) can be selected using a table similar to the table in
If a stroke contains direction changes, with each segment fitting a straight line, the stroke is classified as a Polyline at 806. The test can be done using statistical methods such as segmented linear regression. In 807, the stroke is divided into segments. The direction of each segment can be calculated using a standard statistical method such as linear regression; it is possible to perform such calculations in the test at 806. The process also checks the length of each segment and drops segments that are too short. Each segment is then classified into one of the eight directional groups based on its direction. In 808, following the column-wise mapping shown by table 201 in
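The segmentation at 807 can be sketched with a simple turning-angle heuristic, a stand-in for the segmented linear regression mentioned above; the threshold values are illustrative assumptions.

```python
import math

def split_at_direction_changes(points, angle_threshold=math.pi / 6,
                               min_length=0.5):
    """Split a stroke into straight segments wherever the local heading
    turns by more than angle_threshold; drop segments shorter than
    min_length, as the process drops segments that are too short.
    """
    segments = []
    start = 0
    prev_heading = None
    for i in range(1, len(points)):
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        heading = math.atan2(y1 - y0, x1 - x0)
        if prev_heading is not None:
            # Wrap-safe absolute turning angle between successive headings.
            turn = abs(math.atan2(math.sin(heading - prev_heading),
                                  math.cos(heading - prev_heading)))
            if turn > angle_threshold:
                segments.append(points[start:i])
                start = i - 1
        prev_heading = heading
    segments.append(points[start:])

    def length(seg):
        return math.hypot(seg[-1][0] - seg[0][0], seg[-1][1] - seg[0][1])

    return [seg for seg in segments if length(seg) >= min_length]

# An L-shaped stroke splits into an eastward and a northward segment.
print(split_at_direction_changes([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
```

Each resulting segment would then be classified by direction as in the Slide case and mapped column-wise to a symbol.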
If a stroke is matched to a circle at 810, the process then moves on to determine the center and direction (clockwise or counterclockwise) of the circle at 811. The circle is then associated with a section based on the location of its center. Using the table in
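The center and rotation sense at 811 can be sketched via the signed area of the sampled path (shoelace formula); this assumes a y-up coordinate system, where positive signed area means counterclockwise traversal.

```python
def circle_properties(points):
    """Return the center (centroid of samples) and rotation sense of a
    roughly circular stroke.

    Uses the shoelace formula: the signed area is positive for
    counterclockwise traversal in a y-up coordinate system (an
    assumption of this sketch; y-down screens flip the sign).
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    signed_area = 0.0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]  # close the loop
        signed_area += x0 * y1 - x1 * y0
    sense = "counterclockwise" if signed_area > 0 else "clockwise"
    return (cx, cy), sense

# A square traversed counterclockwise stands in for a sampled circle.
print(circle_properties([(0, 0), (1, 0), (1, 1), (0, 1)]))
```

The center picks the section, and the rotation sense selects between the two circular motion groups.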
If a stroke cannot be classified as a slide, polyline, or circle, the process can try to match it against other gestures at 813, or optionally send it to another process. The process returns to check for a new stroke at 814.
CONCLUSION, RAMIFICATIONS AND SCOPE

Accordingly, the reader will see that the systems and methods described in the various embodiments offer many advantages. They are easy to learn, as consistent muscle movements are utilized. Users are much more likely to move up from letter-level input to word-level shorthands. The approach provides a smoother path for users to achieve reliable eyes-free operation. Still further objects and advantages will become apparent from a consideration of the detailed description and drawings.
Although the description above contains many specificities, these should not be construed as limiting the scope of the embodiments but as merely providing illustrations of some of the presently preferred embodiments. For example, the motion or movement data can be collected with other types of sensors, such as joysticks or motion sensors attached to fingers or styli. A video camera can also be employed to collect motion data; movement can then be detected through image analysis.
Claims
1. A method for selecting tasks or symbols, comprising:
- classifying motions into groups;
- assigning a plurality of tasks or symbols to each motion group;
- receiving motion data from a sensor or a group of sensors;
- matching the received motion data to one of the motion groups;
- selecting task(s) or symbol(s) from the tasks or symbols assigned to the motion group.
2. The method of claim 1, wherein the motion groups include linear motions grouped by direction, and optionally circular motions grouped by direction of rotation.
3. The method of claim 1, further comprising:
- dividing the space into a plurality of sections;
- assigning a plurality of tasks or symbols to each of the sections;
- mapping the movement to one of the sections using the position of either the start point, the end point, or the center of the movement;
- selecting the task(s) or symbol(s) assigned to the motion group matched as well as the section mapped to the movement.
4. The method of claim 2, further comprising:
- dividing the space into a plurality of sections;
- assigning a plurality of tasks or symbols to each of the sections;
- mapping the movement to one of the sections using the position of either the start point, the end point, or the center of the movement;
- selecting the task(s) or symbol(s) assigned to the motion group matched as well as the section mapped to the movement.
5. The method of claim 1, further comprising:
- assigning a plurality of tasks to each of a plurality of movable objects;
- receiving physical feature(s) for the object in motion from a sensor or a group of sensors;
- identifying the object involved in motion based on the received feature(s);
- selecting the tasks assigned to the motion group matched as well as the object identified.
6. The method of claim 5, wherein the movable objects are identified by fingerprints and/or surface features.
7. The method of claim 2, further comprising:
- assigning a plurality of tasks or symbols to each of a plurality of movable objects;
- receiving physical feature(s) for the object in motion from a sensor or a group of sensors;
- identifying the object involved in motion based on the received feature(s);
- selecting the tasks or symbols assigned to the motion group matched as well as the object identified.
8. The method of claim 7, wherein the movable objects are identified by fingerprints and/or surface features.
9. The method of claim 1, further comprising:
- tracking the motion data of a plurality of objects;
- further selecting the task(s) or symbol(s) according to the relative positions of the objects during the movement.
10. The method of claim 2, further comprising:
- tracking the motion data of a plurality of objects;
- further selecting the task(s) or symbol(s) according to the relative positions of the objects during the movement.
11. A programmable device tangibly embodying a program of executable instructions to perform method steps for selecting tasks or symbols, comprising:
- classifying motions into groups;
- assigning a plurality of tasks or symbols to each motion group;
- receiving motion data from a sensor or a group of sensors;
- matching the received motion data to one of the motion groups;
- selecting task(s) or symbol(s) from the tasks or symbols assigned to the motion group.
12. The device of claim 11, wherein the motion groups include linear motions grouped by direction, and optionally circular motions grouped by direction of rotation.
13. The device of claim 12, wherein the embodied program further comprises executable instructions to perform method steps comprising:
- dividing the space into a plurality of sections;
- assigning a plurality of tasks or symbols to each of the sections;
- mapping the movement to one of the sections using the position of either the start point, the end point, or the center of the movement;
- selecting the task(s) or symbol(s) assigned to the motion group matched as well as the section mapped to the movement.
14. The device of claim 12, wherein the embodied program further comprises executable instructions to perform method steps comprising:
- assigning a plurality of tasks or symbols to each of a plurality of movable objects;
- receiving physical feature(s) for the object in motion from a sensor or a group of sensors;
- identifying the object involved in motion based on the received feature(s);
- selecting the task(s) or symbol(s) assigned to the motion group matched as well as the object identified.
15. The device of claim 14, wherein the movable objects are identified by fingerprints and/or surface features.
16. The device of claim 12, wherein the embodied program further comprises executable instructions to perform method steps comprising:
- tracking the motion data of a plurality of objects;
- further selecting the task(s) or symbol(s) according to the relative positions of the objects during the movement.
17. A method for selecting words or symbol sequences, comprising:
- classifying motions into groups by direction;
- assigning a plurality of symbols to each motion group;
- receiving motion data from a sensor or a group of sensors;
- dividing the motion data into segments at the points of direction changes;
- matching each segment to one of the motion groups;
- composing symbol sequences using the symbols assigned to the matched motion groups and the order of the segments;
- selecting words or symbol sequences from the composed symbol sequences.
18. The method of claim 17, wherein the composed symbol sequences are ordered by frequency of occurrence.
19. A programmable device tangibly embodying a program of executable instructions to perform method steps for selecting words or symbol sequences, comprising:
- classifying motions into groups by direction;
- assigning a plurality of symbols to each motion group;
- receiving motion data from a sensor or a group of sensors;
- dividing the motion data into segments at the points of direction changes;
- matching each segment to one of the motion groups;
- composing symbol sequences using the symbols assigned to the matched motion groups and the order of the segments;
- selecting words or sequences from the composed symbol sequences.
20. The device of claim 19, wherein the embodied program further comprises executable instructions to order the composed symbol sequences according to frequency of occurrence.
Type: Application
Filed: Mar 29, 2008
Publication Date: Oct 1, 2009
Inventor: Thomas Zhiwei Tang (El Cerrito, CA)
Application Number: 12/058,665