Method and system for providing multiple input connecting user interface
A plurality of elements is identified by a user for input to the computing system by at least one of sequential and simultaneous selection of elements presented on a display of a computing system. Based on the elements thus identified, which may include at least one operator and at least one operand, the computing system recognizes a function to be performed by the computing system and causes a visual indication to be presented on the display. The computing system thus may present a visual indication showing a connection between, for example, the operator and an operand, or an operand and a functional area of the display associated with the function to be performed as a result of the elements identified. The visual indication thus confirms to the user the elements that the user has identified, and the resulting function that will be initiated.
The present invention generally pertains to a user interface for a computing device, and more specifically, to providing feedback to a user confirming the result of user actions.
BACKGROUND OF THE INVENTION
Because of the widespread popularity of computers, most people have become comfortable with conventional computer input devices such as keyboards and pointing devices. The keystrokes on a keyboard, and the cursor movement and control provided by mice, trackballs, and joysticks, are sufficiently intuitive to provide satisfactory interfaces for most computer-related tasks. Nonetheless, as computers become increasingly more indispensable, the limits of a human-machine interface that depends upon pressing buttons and dragging a pointer with a mouse or other device tend to restrict how quickly and naturally computers can be used.
In seeking to further enhance the human-machine interface, ever-improving hardware capabilities have made possible systems that obviate the need to enter text with a keyboard. Personal digital assistants and tablet PCs can now recognize handwriting. Speech recognition software also enables users to operate computers and enter text by simply speaking into a microphone. Such systems can thus provide a more efficient and satisfying experience for users who prefer not to type on a keyboard, who are less proficient at typing, or who do not want to manipulate a pointing device.
As computers become even more commonplace throughout our environment, the desire to make computers and their interfaces even more user-friendly continues to promote development in this area. For example, the MIT Media Lab, as reported by Brygg Ullmer and Hiroshi Ishii in "The metaDESK: Models and Prototypes for Tangible User Interfaces," Proceedings of UIST, October 1997, pp. 14-17, has developed another form of "keyboardless" human-machine interface. The metaDESK includes a generally planar graphical surface that not only displays computing system text and graphic output, but also receives user input by responding to an object placed against the graphical surface. The combined object responsive and display capability of the graphical surface of the metaDESK is facilitated using infrared (IR) lamps, an IR camera, a video camera, a video projector, and mirrors disposed beneath the surface of the metaDESK. The mirrors reflect the graphical image projected by the projector onto the underside of the graphical display surface to provide images that are visible to a user from above the graphical display surface. The IR camera can detect IR reflections from the undersurface of an object placed on the graphical surface, to detect the object and its disposition.
Others have been developing similar keyboardless interfaces. For example, papers published by Jun Rekimoto of the Sony Computer Science Laboratory, Inc. and associates describe a “HoloWall” and a “HoloTable” that display images on a surface and use IR light to detect objects positioned adjacent to the surface.
By detecting a specially configured object or by detecting IR reflected light from an object disposed on a graphical display surface, the metaDESK can respond to the contemporaneous placement and movement of the object on the display surface to carry out a predefined function, such as displaying and moving a map of the MIT campus. Thus, computing systems such as the HoloWall and metaDESK may provide a more natural degree of human-machine interaction by providing the means for a computer to respond to specific objects.
By facilitating a more natural input arising from the person's interaction with a graphical display, such technologies not only broaden the manner in which a user might provide tactile input to a computing system, but also generally extend the ability of a user to provide input to a computing system. In conventional computing systems, a user provides only a single input to the computing system at one time with a pointing device or a keyboard. Using a graphical user interface, for example, a user might be able to select a number of documents to move into a folder or attach to an e-mail at a single time. But first, the user must select those items, and then associate them with the folder or e-mail by dragging or performing one or more mouse clicks. Single commands are entered sequentially, mirroring the interaction of the human operator: with one pointing device, a user can perform only one action at a time. Modes of computing system input are therefore designed to receive only one input at a time.
There is a degree of comfort with serial, single-input systems. When input is provided serially, a user is compelled by the serial input paradigm to perform actions in a logical, or at least a mandated sequence. In the example described above, to add a document as an attachment to an e-mail, the user first would identify the document to be attached, and then perform a second input to actually attach it to the e-mail. The serial input structure's limit of eliciting one input at a time inherently simplifies the process for the user so that the user does not become confused as to the nature or consequences of the actions being performed.
By contrast, an interactive display surface potentially enables a user—or multiple users—to simultaneously enter multiple inputs to the computing system. For example, a user might perform input by using both hands at the same time, or by positioning multiple physical objects adjacent to the interactive display surface. Examples of other methods and systems for detecting and responding to multiple objects placed on or near a graphical surface at one time are described in connection with other inventions owned by the assignee of the present application. These inventions are described in co-pending U.S. patent application Ser. No. 10/879,872, entitled “Using Size And Shape Of A Physical Object To Manipulate Output In An Interactive Display Application,” filed on Jun. 28, 2004, and application Ser. No. 10/883,515, entitled “Using Physical Objects To Adjust Attributes Of An Interactive Display Application,” filed on Jun. 30, 2004.
A possible concern with enabling one or more users to enter simultaneous multiple inputs to a computing system is the need to provide an indication to a user of choices made and the functions thus initiated. In other contexts where a person operates a device by providing multiple simultaneous inputs, it is common to receive visual, audible, or tactile feedback that indicates the simultaneous inputs are correct. For example, when a person playing a piano presses the intended keys at the same time, the resulting sound will indicate that the actions were correct. Similarly, a user operating an automobile with a manual transmission who fails to depress the clutch fully when shifting gears will receive multiple forms of audible and tactile feedback indicating that the user has failed to properly shift the transmission.
The ability to provide multiple simultaneous user inputs to a computing system does not necessarily provide such an unambiguous confirmation and feedback. For example, if a user were permitted to delete files by simultaneously pressing a delete key and a key corresponding to the first letter of a file name, the user might unknowingly and/or unintentionally delete a number of files by failing to change one of the inputs in time to carry out the intended action or by providing the wrong input. A user might be able to check that the desired files were indeed deleted, by opening a “recycle bin” or “deleted items” folder to see that the intended files were deleted. The user may choose not to bother with the additional step of having to check the contents of such a folder, and thus inadvertently destroy files the user intended to keep. Alternatively, the user may slow the pace for providing inputs to the system to make sure that such errors do not occur. In any case, being able to make multiple inputs at one time may result in only a slight time savings compared to entering inputs one at a time, or if a mistake is made, making multiple incorrect simultaneous inputs may result in more time being consumed than carrying out the action with more traditional sequential inputs.
It would therefore be desirable to provide confirmation to a user of actions that initiate specific functions. In particular, it would be desirable to provide such confirmation with an interactive display system or other computing system configured to permit multiple, simultaneous inputs, to enable a user to interact with the system more quickly and naturally. Ideally, the user should be provided such indicia without performing extra steps or having to slow the interaction to avoid mistakes that might otherwise arise when using a system able to simultaneously process multiple user inputs.
SUMMARY OF THE INVENTION
One of the advantages of the present invention is that it facilitates an intuitive, user-friendly method for initiating a function on a computing system. Using an embodiment of the present invention, a user specifies at least two objects presented on a display screen that will cause a desired function to be initiated, and the computing system will present a visual indication confirming the user's selection. The visual indication is presented as a connection joining one or more of the elements to another object visually displayed on the screen associated with the selected function. For example, if the application is a music player application, a user may select an album cover or other visual element representing a collection of songs and, simultaneously with, just before, or just after indicating the album cover, select an operator such as an "add songs to playlist" button. A visual indication in the form of a connector, such as a line, lightning bolt, or some other graphic indicia or animation, visually connects the album cover with the add button or the playlist to confirm the user's selection. As a result, when using an interactive display surface that can be engaged with the user's hands or other physical objects, an interface that may permit multiple simultaneous inputs, or another interface, the user receives visual confirmation of the user's actions.
One aspect of the present invention is thus directed to a method for providing visual feedback of selection of a plurality of elements presented on a display associated with a computing system. Identifications for the plurality of elements presented on the display are received. A function performable by the computing system as a result of a combination of the elements identified is recognized, and a visual indication is presented showing a connection between at least one of the elements identified and a portion of the display associated with the function that is recognized.
A visual option indicator is presented on the display upon detecting a first identification indicating an option for a next identification. The visual option indicator may include, for example, visually highlighting a choice of available operands when an operator is selected with the first identification. Alternatively, the visual option indicator may include visually highlighting a choice of available operators when at least one operand is selected with the first identification.
A visual indication may be presented indicating a connection either among the plurality of elements identified, or between at least one of the elements identified and a portion of the display associated with the function that was recognized. The visual indication presented may be varied according to the function recognized. The identifications for the plurality of elements presented on the display may be received substantially simultaneously. At least one of the identifications for one of the plurality of elements is performed by detecting a physical object disposed adjacent the display approximately at a location where one of the plurality of elements is displayed. The physical object may include at least one of an inanimate object and a part of a user's body. When a plurality of identifications are made, the identifications may be made by a combination of physical objects including a part of a first user's body, a part of a second user's body, an inanimate object manipulated by a first user, and an inanimate object manipulated by a second user.
While a first identification of one of the combination of elements is detected as being maintained, a second identification of another of the combination of elements may be made. The function recognized may vary, based on a time interval during which the physical object is disposed adjacent to the display at the location where one of the plurality of elements is displayed. Of the combination of elements, the element for which the identification is first received may cause at least one of the function recognized and the visual indication presented to be changed.
The combination of elements may include at least one operator and at least one operand, such that identifying the operator and the at least one operand causes a predefined function represented by the operator to be performed on the at least one operand. A plurality of operands may be identified when a first operand is identified and, while an action causing the first operand to be identified continues, an additional operand is identified. The action causing the first operand to be identified includes pointing to the first operand and the action continues while the pointing action is dragged to include the additional operand.
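The drag-based accumulation of operands described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the class and method names are invented for this sketch and do not appear in the patent.

```python
# Hypothetical sketch: accumulating operands while a single pointing action
# is maintained and dragged across additional elements.

class DragSelection:
    """Collects operands touched during one continuous pointing action."""

    def __init__(self):
        self.operands = []
        self.active = False

    def press(self, element):
        # The first operand is identified when the pointing action begins.
        self.active = True
        self.operands = [element]

    def drag_over(self, element):
        # Additional operands are identified while the action continues.
        if self.active and element not in self.operands:
            self.operands.append(element)

    def release(self):
        # The completed selection is returned when the action ends.
        self.active = False
        return list(self.operands)

sel = DragSelection()
sel.press("album1")
sel.drag_over("album2")
sel.drag_over("album2")   # repeated contact does not duplicate an operand
sel.drag_over("album3")
print(sel.release())      # -> ['album1', 'album2', 'album3']
```

A real implementation would receive these events from the interactive display surface's object-detection pipeline rather than from direct method calls.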
Embodiments of the present invention may be applied to a media player application wherein the operator represents a playback function and the at least one operand includes a representation of a media recording. The playback function may include at least one of an add selection to playlist function, a delete selection from playlist function, and a play function. In the media player application, the portion of the display associated with the function recognized may include one of a displayed playlist and the operator representing the playback function.
Alternatively, embodiments of the present invention may be applied to a document processing application, wherein the operator represents a document processing function and the at least one operand represents a document. The document processing function may include at least one of an attach to e-mail function, a delete function, an open function, a save function, a copy function, and a move function. As a further alternative, the application may include a gaming application wherein the at least one operand represents a game element and the operator represents a game function. Further, an embodiment of the present invention may present a help facility wherein the function recognized is providing instruction to a user regarding operation of the computing system, and the visual indication illustrates consequences of a particular user action.
According to embodiments of the present invention, the visual indication presented on the display indicating the connection may include at least one of a linear connection and an animation. Embodiments of the present invention also may include a memory medium on which are stored machine instructions for carrying out the steps of this method.
BRIEF DESCRIPTION OF THE DRAWING FIGURES
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Exemplary Computing System for Implementing Present Invention
With reference to
A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information in PC 20 and provide control input through input devices, such as a keyboard 40 and a pointing device 42. Pointing device 42 may include a mouse, stylus, wireless remote control, or other pointer, but in connection with the present invention, such conventional pointing devices may be omitted, since the user can employ the interactive display for input and control. As used hereinafter, the term "mouse" is intended to encompass virtually any pointing device that is useful for controlling the position of a cursor on the screen. Other input devices (not shown) may include a microphone, joystick, haptic joystick, yoke, foot pedals, game pad, satellite dish, scanner, or the like. Also, PC 20 may include a Bluetooth radio or other wireless interface for communication with various types of interface devices, such as printers, or the interactive display table of the present invention. These and other input/output (I/O) devices are often connected to processing unit 21 through an I/O device interface 46 that is coupled to the system bus 23. The term I/O interface is intended to encompass each interface specifically used for a serial port, a parallel port, a game port, a keyboard port, and/or a universal serial bus (USB). System bus 23 is also connected to a camera interface 59, which is coupled to an interactive display 60 to receive signals from a digital video camera that is included therein, as discussed below. The digital video camera may instead be coupled to an appropriate serial I/O port, such as a USB version 2.0 port.
An optional monitor 47 can be connected to system bus 23 via an appropriate interface, such as a video adapter 48; however, the interactive display table of the present invention can provide a much richer display and interact with the user for input of information and control of software applications and is therefore preferably coupled to the video adaptor. It will be appreciated that PCs are often coupled to other peripheral output devices (not shown), such as speakers (through a sound card or other audio interface—not shown) and printers.
The present invention may be practiced on a single machine, or on multiple computing devices. For example, PC 20 can also operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. Remote computer 49 may be another PC, a server (which is typically generally configured much like PC 20), a router, a network PC, a peer device, or a satellite or other common network node, and typically includes many or all of the elements described above in connection with PC 20, although only an external memory storage device 50 has been illustrated in
When used in a LAN networking environment, PC 20 is connected to LAN 51 through a network interface or adapter 53. When used in a WAN networking environment, PC 20 typically includes a modem 54, or other means such as a cable modem, Digital Subscriber Line (DSL) interface, or an Integrated Service Digital Network (ISDN) interface for establishing communications over WAN 52, such as the Internet. Modem 54, which may be internal or external, is connected to the system bus 23 or coupled to the bus via I/O device interface 46, i.e., through a serial port. In a networked environment, program modules, or portions thereof, used by PC 20 may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used, such as wireless communication and wide band network links.
Exemplary Interactive Surface
In
IR light sources 66 preferably comprise a plurality of IR light emitting diodes (LEDs) and are mounted on the interior side of frame 62 in this exemplary embodiment, but can alternatively be mounted at other locations on the interactive display table. The IR light that is produced by IR light sources 66 is directed upwardly toward the underside of display surface 64a, as indicated by dashed lines 78a, 78b, and 78c. The IR light from IR light sources 66 is reflected from any objects that are atop or proximate to the display surface after passing through a translucent layer 64b of the table, comprising a sheet of vellum or other suitable translucent material with light diffusing properties. As used herein and in the claims that follow in connection with objects positioned on or proximate to the interactive display surface, the term "adjacent to" is used with the intention that this term encompass both an object that is actually touching the interactive display surface as well as one that is just above the interactive display surface. Although only one IR light source 66 is shown, it will be appreciated that a plurality of such IR light sources may be mounted at spaced-apart locations around the interior sides of frame 62 to provide an even illumination of display surface 64a. The IR light produced by the IR light sources may:
- exit through the table surface without illuminating any objects, as indicated by dash line 78a;
- illuminate objects on the table surface, as indicated by dash line 78b; or
- illuminate objects a short distance above the table surface but not touching the table surface, as indicated by dash line 78c.
Objects above display surface 64a include a “touch” object 76a that rests atop the display surface and a “hover” object 76b that is close to but not in actual contact with the display surface. Thus, both touch and hover objects are “adjacent to” the display surface, as that term is used herein. As a result of using translucent layer 64b under the display surface to diffuse the IR light passing through the display surface, as an object approaches the top of display surface 64a, the amount of IR light that is reflected by the object increases to a maximum level that is achieved when the object is actually in contact with the display surface.
A digital video camera 68 is mounted to frame 62 below display surface 64a, in a position appropriate to receive IR light that is reflected from any touch object or hover object disposed above display surface 64a. Digital video camera 68 is equipped with an IR pass filter 86a that transmits only IR light and blocks ambient visible light traveling through display surface 64a along dashed line 84a. A baffle 79 is disposed between IR light source 66 and the digital video camera to prevent IR light that is directly emitted from IR light source 66 from entering the digital video camera, since it is preferable that the digital video camera produce an output signal responsive only to IR light reflected from objects that are a short distance above, or in contact with, display surface 64a, so that the signal corresponds to an image of the IR light reflected from those objects. It will be apparent, however, that digital video camera 68 will also respond to any IR light included in the ambient light that passes through display surface 64a from above and into the interior of the interactive display, including ambient IR light that also travels along the path indicated by dashed line 84a.
IR light reflected from objects on or above the table surface may be:
- reflected back through translucent layer 64b, through IR pass filter 86a, and into the lens of digital video camera 68, as indicated by dashed lines 80a and 80b; or
- reflected or absorbed by other interior surfaces within the interactive display without entering the lens of digital video camera 68, as indicated by dashed line 80c.
Translucent layer 64b diffuses both incident and reflected IR light. Thus, as explained above, “hover” objects such as hover object 76b that are closer to display surface 64a will reflect more IR light back to digital video camera 68 than objects of the same reflectivity that are farther away from the display surface. The digital video camera 68 senses the IR light reflected from “touch” and “hover” objects within its imaging field and produces a digital signal corresponding to images of the reflected IR light that is input to the PC 20 for processing to determine a location of each such object, and optionally, the size, orientation, and shape of the object. It should be noted that a portion of an object, such as a user's forearm, may be above the table while another portion, such as the user's finger, is in contact with the display surface. In addition, an object may include an IR light reflective pattern or coded identifier, such as a bar code, on its bottom surface that is specific to that object or to a class of related objects of which that object is a member. Accordingly, the imaging signal from the digital video camera 68 can also be used for detecting each such specific object, as well as determining its orientation, based on the IR light reflected from its reflective pattern, in accord with the present invention.
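The relationship just described, in which reflected IR intensity increases as an object approaches the surface, suggests a simple way to classify each detected object as touching or hovering. The following Python sketch is purely illustrative; the threshold values and names are assumptions for this example and do not come from the patent.

```python
# Illustrative sketch: classifying a detected blob as "touch" or "hover"
# from its peak reflected-IR intensity. Diffusion through the translucent
# layer makes reflected intensity rise as an object nears the surface.
# The thresholds below are made-up 8-bit values, not values from the patent.

TOUCH_THRESHOLD = 200   # assumed intensity at which contact is inferred
HOVER_THRESHOLD = 60    # assumed minimum intensity to register a hover

def classify_blob(peak_intensity):
    """Map a blob's peak IR intensity to a contact state."""
    if peak_intensity >= TOUCH_THRESHOLD:
        return "touch"
    if peak_intensity >= HOVER_THRESHOLD:
        return "hover"
    return "ignore"   # too faint: ambient IR or a distant object

print(classify_blob(230))  # -> touch
print(classify_blob(120))  # -> hover
print(classify_blob(15))   # -> ignore
```

In practice, the thresholds would be calibrated per device, since ambient IR and the reflectivity of different objects vary considerably.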
Embodiments of the present invention are thus operable in connection with recognizing an object and/or its position relative to the interactive display surface 64a by detecting its identifying characteristics using the IR light reflected from the object. The logical steps implemented to thus detect and identify an object and its orientation are explained in the commonly-assigned patent applications, including application Ser. No. 10/814,577, entitled “Identification Of Object On Interactive Display Surface By Identifying Coded Pattern,” and application Ser. No. 10/814,761, entitled “Determining Connectedness And Offset Of 3D Objects Relative To An Interactive Surface,” both of which were filed on Mar. 31, 2004. The disclosure and drawings of these two patent applications are hereby specifically incorporated herein by reference.
PC 20 may be integral to interactive display table 60 as shown in
If an interactive display table 60′ is connected to an external PC 20 (as in
An important and powerful feature of interactive display table 60 or 60′ (i.e., of either of the embodiments of the interactive display table discussed above) is its ability to display graphic images or a virtual environment for games or other software applications, and to enable interaction between the graphic image or virtual environment visible on display surface 64a and objects that are resting atop the display surface, such as object 76a, or are hovering just above it, such as object 76b.
Again referring to
Alignment devices 74a and 74b are provided and include threaded rods and rotatable adjustment nuts 74c for adjusting the angles of the first and second mirror assemblies to ensure that the image projected onto the display surface is aligned with the display surface. In addition to directing the projected image in a desired direction, the use of these two mirror assemblies provides a longer path between projector 70 and translucent layer 64b to enable a longer focal length (and lower cost) projector lens to be used with the projector.
The foregoing and following discussions describe an interactive display device in the form of interactive display table 60 or 60′. Nevertheless, it is understood that the interactive display surface need not be in the form of a generally horizontal table top. The principles described herein also apply to display surfaces of different shapes and curvatures, and to display surfaces mounted in orientations other than horizontal. Thus, although the following description refers to placing physical objects "on" the interactive display surface, physical objects may be placed adjacent to the interactive display surface by placing the physical objects in contact with the display surface, or otherwise adjacent the display surface.
Exemplary Music Player Application for Using the Connecting User Interface
More specifically,
Because a limited number of album covers 400 are viewable at one time on interactive display surface 64a, a user may wish to scroll upward or downward through a list of available albums using scroll up button 404 and scroll down button 406. If a user is at a first or last screen of a list of album covers 400 including available songs, scroll up button 404, as shown by the dotted outline of scroll up button 404 in
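The graying out of the scroll buttons at the ends of the album list might be computed as follows. This is a minimal sketch with invented function and parameter names, offered only to make the behavior concrete.

```python
# Minimal sketch (invented names): deciding whether the scroll up and
# scroll down buttons should be enabled, given the portion of the album
# list currently visible on the interactive display surface.

def scroll_button_states(first_visible, visible_count, total_albums):
    """Return (scroll_up_enabled, scroll_down_enabled) for the current view."""
    can_scroll_up = first_visible > 0
    can_scroll_down = first_visible + visible_count < total_albums
    return can_scroll_up, can_scroll_down

print(scroll_button_states(0, 8, 20))   # -> (False, True)  at top of list
print(scroll_button_states(12, 8, 20))  # -> (True, False)  at bottom of list
```

A disabled state would then be rendered as the dotted or grayed outline described above.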
In the music player application, interactive display surface 64a also presents standard playback controls 410. Playback controls 410 include a stop button 412, a play button 414, a pause button 416 (which may be combined with play button 414), a skip backward button 418, and a skip forward button 420. Playback controls 410 also include a volume control 422 that fills or otherwise changes the appearance of a plurality of bars 424, to indicate the relative current volume level.
As is typical of conventional music players, the music player application also includes a playlist 430. Playlist 430 appears in a window and shows a list of songs that have been selected to be played. Playlist 430 shows the order in which the songs are queued for playback. The order is useful in the event that the user wishes to use the skip backward button 418 or skip forward button 420 (unless the music player is set to a random or shuffle play mode), or wishes to add or delete songs to playlist 430. A status bar 440 displays user messages to indicate or confirm a current status of the player. Operation of status bar 440 is described in more detail below. Also described below are the operation of an add button 450, which is used to add songs to playlist 430, a remove button 460, and a clear-all button 470 that is activated to remove all songs from playlist 430.
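The playlist behavior just described can be modeled as a simple ordered queue. The sketch below is illustrative only; the class name and method names are assumptions made for this example.

```python
class Playlist:
    """Ordered queue of songs supporting the add/remove/clear and
    skip operations described for playlist 430 above (names invented)."""

    def __init__(self):
        self.songs = []
        self.position = 0

    def add(self, song):
        self.songs.append(song)          # new songs are queued at the end

    def remove(self, song):
        if song in self.songs:
            self.songs.remove(song)

    def clear_all(self):
        self.songs = []
        self.position = 0

    def skip_forward(self):
        if self.position < len(self.songs) - 1:
            self.position += 1
        return self.current()

    def skip_backward(self):
        if self.position > 0:
            self.position -= 1
        return self.current()

    def current(self):
        return self.songs[self.position] if self.songs else None

pl = Playlist()
for s in ("Song A", "Song B", "Song C"):
    pl.add(s)
print(pl.current())        # -> Song A
print(pl.skip_forward())   # -> Song B
pl.remove("Song C")
print(len(pl.songs))       # -> 2
```

A shuffle mode, as mentioned above, would replace the sequential position updates with a randomized playback order.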
In
Adding All Songs from an Album Using Connecting User Interface
As shown in
In
It should be noted that images presented in screen images of
The connecting user interface can confirm the function initiated by the user in a number of ways. In the embodiment shown in
As shown in
According to another embodiment of the present invention, option indicators also are provided to assist a user in knowing what actions may be available based on the user's current actions. For example, similar to the example shown in
To provide the user with an indication of what actions are available or needed to, for example, add songs to playlist 430, option indicators also may be provided. In
Option indicators may be context-sensitive. Thus, for example, as is described below in connection with
Further, in addition to option indicator 560 identifying a next action available to the user, result indicator 570 may be provided to show what actions will result from the user selecting the option represented by option indicator 560. As shown in
In any of the cases illustrated above in connection with
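The core dispatch implied by the preceding paragraphs, in which a function fires and a connection indicator is emitted once both an operator and at least one operand have been identified, might be sketched as follows. All names and the operator table are illustrative assumptions, not part of the patent.

```python
# Hedged sketch of operator/operand recognition. When the current set of
# selections contains both an operator and at least one operand, the
# associated function is recognized and a connection indicator is produced
# joining each operand to the operator. Names are invented for this example.

OPERATORS = {
    "add_button": "add_to_playlist",
    "remove_button": "remove_from_playlist",
}

def recognize(selections):
    """Split the current selections into (operator, operands);
    the order of identification does not matter here."""
    operator = next((s for s in selections if s in OPERATORS), None)
    operands = [s for s in selections if s not in OPERATORS]
    return operator, operands

def connect(selections):
    operator, operands = recognize(selections)
    if operator is None or not operands:
        return None   # incomplete combination: nothing to confirm yet
    function = OPERATORS[operator]
    # The visual indication joins each operand to the operator.
    indicators = [(operand, operator) for operand in operands]
    return function, indicators

print(connect(["album1", "add_button"]))
# -> ('add_to_playlist', [('album1', 'add_button')])
print(connect(["album1"]))   # -> None (no operator identified yet)
```

The `None` branch corresponds to the state in which an option indicator, rather than a connection indicator, would be shown to prompt the user's next action.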
Adding Songs from Multiple Albums Using Connecting User Interface
In
As indicated in
It should also be noted that embodiments of the present invention do not restrict a user to selecting displayed objects one at a time. Users can make multiple selections at one time, as shown in
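Supporting several selections at once amounts to resolving each simultaneous contact point to the displayed element beneath it. The sketch below is hypothetical; the element layout and function names are invented for illustration.

```python
# Sketch (invented names and layout): resolving several simultaneous
# contact points to displayed elements, so that multiple albums and an
# operator can be identified at the same time.

ELEMENT_BOUNDS = {
    "album1": (0, 0, 100, 100),      # (x0, y0, x1, y1), made-up layout
    "album2": (110, 0, 210, 100),
    "add_button": (0, 200, 80, 240),
}

def hit_test(x, y):
    """Return the element whose bounds contain (x, y), if any."""
    for name, (x0, y0, x1, y1) in ELEMENT_BOUNDS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def resolve_contacts(contacts):
    """Map a set of simultaneous contact points to the elements they touch."""
    hits = {hit_test(x, y) for (x, y) in contacts}
    hits.discard(None)
    return sorted(hits)

print(resolve_contacts([(50, 50), (150, 40), (40, 220)]))
# -> ['add_button', 'album1', 'album2']
```

On the interactive display table, the contact points would come from the blobs detected in the IR camera image rather than from explicit coordinates.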
Adding Selected Songs from an Album Using Connecting User Interface
As shown in
As shown in
It should be noted that connection indicator 750a could appear between chosen song 712 and playlist 430, between chosen album cover 704 and playlist 430, or to indicate connections between other displayed elements to visually signal the results of the user's actions. It also should be noted that connection indicator 750a differs from connection indicators 550a (
Changing the Order of Identification Using Connecting User Interface
Because embodiments of the present invention are adaptable to present different connection indicators to indicate different user identifications and actions, connection indicators also can be changed to communicate different options or responses that are available when a user performs a task in a different sequence.
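The order-independence described above can be sketched as a recognizer that treats the operator and operand as an unordered combination. This is a simplifying assumption about how recognition might be implemented, not the disclosed method itself:

```python
# Illustrative sketch: recognize a function regardless of whether the
# operator or the operand was identified first.
def recognize_function(first: tuple[str, str], second: tuple[str, str]):
    """Each identification is a (kind, name) pair, where kind is
    'operator' or 'operand'. Returns a function label, or None if the
    combination does not indicate a valid function."""
    kinds = {first[0]: first[1], second[0]: second[1]}
    if {"operator", "operand"} <= kinds.keys():
        # e.g., ('operator', 'add') + ('operand', 'album') -> 'add:album'
        return f"{kinds['operator']}:{kinds['operand']}"
    # Two operators or two operands indicate no valid function.
    return None
```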
Removing Songs Using Connecting User Interface
Embodiments of the connecting user interface may be used for functions other than adding songs to a playlist. For example, the connecting user interface can be employed to provide a visual confirmation of the removal of songs from a playlist.
In a manner comparable to that used for adding songs to playlist 430, songs may be removed by a user placing a finger or other physical object over remove button 460.
As a result, if the user, by tapping a finger as previously described, has deleted more songs from playlist 430 than intended, the appearance of the deletion indicator may allow the user to change his or her mind.
Employing the Connecting User Interface to Provide User Help
Additional Uses of the Connecting User Interface
Although use of the connecting user interface has been described extensively for the music player application, it should be emphasized that embodiments of the connecting user interface are usable with almost any operating system interface and many other types of applications.
It will also be appreciated that the connecting user interface is usable with any operation in which a data file is operated on by an application. The connecting user interface may thus be employed with any number of document functions, such as open, copy, save, move, attach, and delete. In these cases, the user would identify the document or other data file to be accessed, using a finger or other physical object, and the user also would identify the application, insertion point, function button, or other displayed information associated with achieving the desired function.
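A minimal dispatch over the document functions listed above might look like the following sketch. The handler bodies are placeholders for illustration only and are not drawn from the disclosure:

```python
# Illustrative dispatch of document functions invoked through the
# connecting user interface. Handler bodies are placeholder assumptions.
def perform_document_function(function: str, document: str) -> str:
    handlers = {
        "open":   lambda d: f"opened {d}",
        "copy":   lambda d: f"copied {d}",
        "save":   lambda d: f"saved {d}",
        "move":   lambda d: f"moved {d}",
        "attach": lambda d: f"attached {d} to e-mail",
        "delete": lambda d: f"deleted {d}",
    }
    if function not in handlers:
        raise ValueError(f"unrecognized document function: {function}")
    return handlers[function](document)
```

In use, the user's two identifications (the document operand and the function operator) would supply the two arguments.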
Embodiments of the connecting interface also are usable with games and recreational applications. For example, a solitaire game is typically included with Microsoft Corporation's Windows™ operating systems. In the conventional version of this game, a user manipulates virtual cards with a mouse or other pointing device.
It should be appreciated that embodiments of the present invention allow multiple objects to be associated at one time.
Methods for Receiving User Input and Presenting Connection Indicators
At a decision step 1308, it is determined if the first item has been held for a requisite interval.
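The requisite-interval test at decision step 1308 can be sketched as a simple dwell-time comparison. The 0.5-second threshold below is an assumed value; the description does not fix a particular interval:

```python
# Illustrative sketch of the hold test at decision step 1308.
REQUISITE_INTERVAL = 0.5  # seconds; an assumed, not disclosed, value

def is_held(touch_start: float, touch_end: float,
            interval: float = REQUISITE_INTERVAL) -> bool:
    """True if the physical object remained over the item for at least
    `interval` seconds."""
    return (touch_end - touch_start) >= interval
```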
At a decision step 1312, it is determined if the identifier, whether it be a finger or another physical object, has been dragged to identify other items.
Once a first item is selected, at a decision step 1318, it is determined if a second item, such as an operator, has been selected. If not, flow diagram 1300 continues to loop back to decision step 1318. On the other hand, once it is determined at decision step 1318 that a second item has been selected, at a decision step 1320, it is determined if the first item(s) and second item indicate a function. For example, if the first and second items are both album covers, the selections made do not indicate a valid function. On the other hand, if the first item is an album cover and the second item is the add button, a function is clearly indicated. Thus, if it is determined at decision step 1320 that the first and second items do not indicate a function, flow diagram 1300 loops to decision step 1318 for identification of another item. However, if a function is indicated, at a step 1322, a connection indicator illustrative of the function is presented. At a step 1324, the function is initiated, and at a step 1326, flow diagram 1300 ends.
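The generalized loop of flow diagram 1300 can be sketched as follows. The (kind, name) representation of items and the validity rule (exactly one operator plus one operand) are simplifying assumptions:

```python
# Illustrative sketch of the selection loop in flow diagram 1300.
def process_selections(selections):
    """`selections` is an iterable of (kind, name) pairs in the order
    identified, where kind is 'operator' or 'operand'. Returns the list
    of (connection_indicator, function) events produced, mimicking
    steps 1322 and 1324."""
    events = []
    first = None
    for item in selections:
        if first is None:
            first = item  # a first item has been selected
        elif {first[0], item[0]} == {"operator", "operand"}:
            # Decision step 1320: the combination indicates a function,
            # so present a connection indicator (step 1322) and
            # initiate the function (step 1324).
            function = f"{first[1]}+{item[1]}"
            events.append(("connection_indicator", function))
            first = None
        # Otherwise (e.g., two album covers), loop back for another item.
    return events
```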
It should be noted that flow diagram 1300 illustrates just one generalized approach usable by an embodiment of the present invention. However, as described above, an embodiment of the present invention may enable a user to select either operands or operators first, and generally, embodiments of the present invention are not limited to a particular selection sequence.
On the other hand, once it is determined at a decision step 1456 that one or more songs have been selected, at a step 1458, a connection indicator is shown. The connection indicator may show a connection between the delete button and the playlist, or the connection indicator may show another connection. At a step 1460, the songs identified are removed from the playlist. It will be understood that the songs could be removed from the playlist before the connection indicator is shown, or both steps could occur simultaneously. Flow diagram 1450 ends at a step 1452.
Although the present invention has been described in connection with the preferred form of practicing it and modifications thereto, those of ordinary skill in the art will understand that many other modifications can be made to the present invention within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.
Claims
1. A method for controlling functions of a plurality of elements presented on a display associated with a computing system, comprising the steps of:
- (a) responding to identifications of the plurality of elements presented on the display;
- (b) recognizing a function performable by the computing system as a result of a combination of the plurality of elements that were identified; and
- (c) performing the function involving one of: (i) each of the plurality of elements identified; and (ii) at least one of the plurality of elements that were identified and a portion of the display associated with the function that was recognized.
2. The method of claim 1, further comprising presenting a visual indication on the display indicating a connection between one of:
- (a) each of the plurality of elements identified; and
- (b) at least one of the plurality of elements that were identified and a portion of the display associated with the function that was recognized.
3. The method of claim 2, wherein the visual indication presented varies according to the function that was recognized.
4. The method of claim 1, further comprising presenting a visual option indicator on the display upon detecting a first identification indicating an option of a next identification, the visual option indicator including one of:
- (a) visually highlighting a choice of available operands when an operator is selected with the first identification; and
- (b) visually highlighting a choice of available operators when at least one operand is selected with the first identification.
5. The method of claim 1, wherein the identifications of the plurality of elements presented on the display are received substantially simultaneously.
6. The method of claim 1, wherein at least one of the identifications is performed by detecting a physical object disposed adjacent to the display, approximately at a location where one of the plurality of elements is displayed, and wherein the physical object includes at least one of:
- (a) an inanimate object; and
- (b) a part of a user's body.
7. The method of claim 6, wherein a plurality of identifications are made by a combination of physical objects including a plurality of:
- (a) a part of a first user's body;
- (b) a part of a second user's body;
- (c) an inanimate object manipulated by a first user; and
- (d) an inanimate object manipulated by a second user.
8. The method of claim 6, wherein the function recognized varies based on a time interval during which the physical object is disposed adjacent to the display at the location where said one of the plurality of elements is displayed.
9. The method of claim 1, wherein an order in which the plurality of elements are identified changes at least one of:
- (a) the function recognized; and
- (b) the visual indication presented.
10. The method of claim 1, wherein the combination of the plurality of elements includes at least one operator and at least one operand, such that identification of the operator and the at least one operand causes a predefined function represented by the operator to be performed on the at least one operand.
11. The method of claim 10, further comprising the step of enabling a plurality of operands to be identified when a first operand is identified, and while an action causing the first operand to be identified continues, enabling an additional operand to be identified.
12. The method of claim 11, wherein the action causing the first operand to be identified includes the step of enabling pointing to the first operand; and wherein the action continues while the pointing is directed by dragging, to identify the additional operand.
13. The method of claim 10, where in a media player application, the operator represents a player function, and the at least one operand includes a representation of a media recording, and wherein the player function includes at least one of:
- (a) an add function that adds a media selection to a playlist;
- (b) a delete function that removes a media selection from the playlist; and
- (c) a play function that plays one or more media selections in the playlist.
14. The method of claim 13, wherein the portion of the display associated with the function recognized includes one of:
- (a) a displayed playlist; and
- (b) the operator representing the player function.
15. The method of claim 10, where in a document processing application, the operator represents a document processing function, and the at least one operand represents a document, and wherein the document processing function includes at least one of:
- (a) a function to attach the document to an e-mail;
- (b) a function to delete the document;
- (c) an open document function;
- (d) a save document function;
- (e) a copy document function; and
- (f) a move document function.
16. The method of claim 10, where in a gaming application, the at least one operand represents a game element, and the operator represents a game function.
17. The method of claim 1, where in a help facility, the function recognized is providing instruction to a user regarding operation of the computing system, and the visual indication illustrates consequences of a particular user action.
18. The method of claim 1, wherein the visual indication presented on the display indicating the connection includes at least one of:
- (a) a linear connection; and
- (b) an animation.
19. A memory medium on which are stored machine instructions for carrying out the steps of claim 1.
20. A method for providing a visual feedback of a selection of an operator and at least one operand presented on a display that is coupled to a computing system, comprising the steps of:
- (a) receiving identifications of the operator and the at least one operand;
- (b) recognizing a function performable by the computing system by a pattern of the identifications of the operator and the at least one operand;
- (c) presenting a visual indication on the display that is indicative of the function recognized, indicating a connection between one of: (i) the operator and the at least one operand; and (ii) the at least one operand and a portion of the display associated with the function recognized; and
- (d) causing the function recognized to be initiated on the at least one operand.
21. The method of claim 20, further comprising presenting a visual option indicator on the display upon detecting a first identification indicating an option of a next identification, the visual option indicator including one of:
- (a) visually highlighting a choice of available operands when an operator is selected with the first identification; and
- (b) visually highlighting a choice of available operators when at least one operand is selected with the first identification.
22. The method of claim 20, wherein the step of receiving the identifications of the operator and the at least one operand occurs one of:
- (a) substantially simultaneously;
- (b) sequentially, without regard to order; and
- (c) sequentially, in a predefined order.
23. The method of claim 20, wherein at least one of the identifications is performed by detecting a physical object disposed adjacent to the display, approximately at a location where one of the plurality of elements is displayed, and the physical object includes at least one of:
- (a) a portion of a user's body; and
- (b) an inanimate object.
24. The method of claim 23, wherein a plurality of identifications are made by a combination of physical objects including a plurality of:
- (a) a part of a first user's body;
- (b) a part of a second user's body;
- (c) an inanimate object manipulated by a first user; and
- (d) an inanimate object manipulated by a second user.
25. The method of claim 23, wherein the function recognized varies based on a time interval during which the physical object is disposed adjacent to the display at the location where one of the plurality of elements is displayed.
26. The method of claim 20, further comprising the steps of enabling a plurality of operands to be identified when a first operand is identified; and while the first operand is being identified, identifying an additional operand.
27. The method of claim 26, wherein the first operand is identified by a pointing directed to the first operand, and wherein the additional operand is identified by dragging the pointing to be directed to the additional operand.
28. The method of claim 20, where in a media player application, the operator represents a player function, and the at least one operand includes a representation of a media recording, and wherein the player function includes at least one of:
- (a) a function to add a selection to a playlist;
- (b) a delete function to delete a selection from the playlist; and
- (c) a play function to play any selection in the playlist.
29. The method of claim 28, wherein the portion of the display associated with the function recognized includes one of:
- (a) the playlist; and
- (b) the operator representing the player function.
30. The method of claim 20, where in a data processing application, the operator represents a data processing function, and the at least one operand represents a data file, and wherein the data processing function includes at least one of:
- (a) a function to attach the data file to an e-mail;
- (b) a delete data file function;
- (c) an open data file function;
- (d) a save data file function;
- (e) an edit data file function;
- (f) a copy data file function; and
- (g) a move data file function.
31. The method of claim 20, where in a gaming application, the at least one operand represents a game element, and the operator represents a game function.
32. The method of claim 20, where in a help facility, the function recognized is providing instruction to a user regarding operation of the computing system; and wherein the visual indication illustrates consequences of a particular user action.
33. The method of claim 20, wherein the visual indication presented on the display indicating the connection includes at least one of:
- (a) a linear connection; and
- (b) an animation.
34. A memory medium on which are stored machine instructions for carrying out the steps of claim 20.
35. A system for providing an input to an application that is being executed, comprising:
- (a) an interactive display surface adjacent to which a plurality of physical objects are manipulated, said interactive display surface presenting visual entities;
- (b) a sensor that detects a location of the physical objects that are adjacent to the interactive display surface;
- (c) a processor in communication with the sensor and the interactive display surface; and
- (d) a memory in communication with the processor, the memory storing data and machine instructions that cause the processor to carry out a plurality of functions, including: (i) receiving identifications of a plurality of the visual entities, the identifications being made by the processor as a function of the locations of the physical objects adjacent to the plurality of visual entities presented on the interactive display surface; (ii) recognizing a function that is to be performed by the system based upon a combination of the visual entities that were identified; and (iii) performing the function between one of: (A) each of the visual entities that were identified; and (B) at least one of the visual entities that was identified and a portion of the interactive display surface that is associated with the function recognized.
36. The system of claim 35, wherein the machine language instructions further cause the processor to present a visual indication on the interactive display surface indicating a connection between one of:
- (a) each of the visual entities that were identified; and
- (b) at least one of the visual entities that was identified and the portion of the interactive display surface that is associated with the function recognized.
37. The system of claim 36, wherein the machine language instructions further cause the processor to vary the presentation of the visual indication according to the function that was recognized.
38. The system of claim 35, wherein the machine language instructions further cause the processor to present a visual option indicator on the display upon detecting a first identification indicating an option of a next identification, the visual option indicator including one of:
- (a) visually highlighting a choice of available operands when an operator is selected with the first identification; and
- (b) visually highlighting a choice of available operators when at least one operand is selected with the first identification.
39. The system of claim 35, wherein the identifications of the plurality of visual entities presented on the display are received substantially simultaneously.
40. The system of claim 35, wherein an identification of one of the visual entities comprising the combination is made by the sensor and is determined to be maintained while an identification of another of the visual entities comprising the combination is being made by the sensor.
Type: Application
Filed: Nov 18, 2004
Publication Date: May 25, 2006
Patent Grant number: 7925996
Applicant: Microsoft Corporation (Redmond, WA)
Inventors: Nicholas Hofmeister (Seattle, WA), Dane Storrusten (Seattle, WA), Ann Valencia (Seattle, WA)
Application Number: 10/991,813
International Classification: G06F 3/00 (20060101); G06F 17/00 (20060101); G06F 9/00 (20060101);