Multi-touch GUI featuring directional compression and expansion of graphical content
A computing system receives user input via a touch-interface that involves motion of one or more touches relative to the touch-interface. Responsive to the user input including two or more concurrent touches of the touch-interface involving motion in a coordinate direction, the computing system compresses graphical content within a graphical user interface in the coordinate direction toward a reference datum line. Responsive to the user input including two or more concurrent touches of the touch-interface involving motion in an opposite direction from the coordinate direction, the computing system expands graphical content within the graphical user interface in the opposite direction and away from the reference datum line.
The present application is a continuation application that claims the benefit of and priority to U.S. patent application Ser. No. 14/618,443, titled MULTI-TOUCH GUI FEATURING DIRECTIONAL COMPRESSION AND EXPANSION OF GRAPHICAL CONTENT, filed Feb. 10, 2015, and issuing as U.S. Pat. No. 10,031,638 on Jul. 24, 2018. The entire contents of this priority application are incorporated herein by reference in their entirety for all purposes.
BACKGROUND

Computing systems rely on user input to control their operations. User input may take various forms including keystrokes, mouse clicks, voice commands, touches of a touch-interface, etc. Computing systems that include or are otherwise operatively linked with a touch-interface may support user input in the form of single-touch and multi-touch gestures involving motion of one or more touches relative to the touch-interface. Respective commands may be associated with a variety of single-touch and multi-touch gestures to control operations of the computing system. Examples include a multi-touch pinch gesture to zoom into a region of a graphical user interface (GUI) and a single-touch sliding gesture to translate graphical content within the GUI, such as for scrolling or panning.
SUMMARY

According to an aspect of the present disclosure, a computing system receives user input via a touch-interface that involves motion of one or more touches relative to the touch-interface. Responsive to the user input including two or more concurrent touches of the touch-interface involving motion in a coordinate direction, the computing system compresses graphical content within a graphical user interface in the coordinate direction toward a reference datum line. Responsive to the user input including two or more concurrent touches of the touch-interface involving motion in an opposite direction from the coordinate direction, the computing system expands graphical content within the graphical user interface in the opposite direction and away from the reference datum line. This summary introduces a selection of concepts described in further detail herein. Accordingly, this summary is intended to be non-limiting with respect to the subject matter further described by the detailed description and associated drawings.
Users of a computing system incorporating or linked to a touch-interface benefit from a wide range of touch-based user inputs supported by the computing system. Users may encounter a variety of use-scenarios while interacting with a graphical user interface (GUI) of an operating system or the near-infinite quantity and variety of application programs that may be executed by the computing system. As more interaction occurs within GUIs, and as graphical displays vary in size and functionality across devices, there is a growing need to display information in an intuitive and scalable manner.
The present disclosure is directed to compressing and/or expanding graphical content within a GUI in one or two dimensions responsive to motion of a touch-based user input. Within a single dimension, graphical content within the GUI is compressed toward a reference datum line responsive to motion of a touch-based user input toward the reference datum line, and the graphical content is expanded away from the reference datum line responsive to opposite motion of the touch-based user input. The touch-based user input may take a pre-defined form involving a particular number of concurrent touches, such as a multi-touch user input involving two or more concurrent touches, to thereby distinguish compression and expansion commands from translation-based scrolling or panning commands involving a single touch.
Computing system 100 outputs a graphical user interface (GUI) 140 for presentation at a graphical display 132 via input/output subsystem 130. Graphical display 132 may form part of and/or may be integrated with computing system 100 in a common enclosure. Alternatively, graphical display 132 may be implemented by or as a standalone device that is in communication with and operatively linked to computing system 100 via a wired or wireless communications link. GUI 140 is depicted in
Computing system 100 receives user input at a touch-interface 134 via input/output subsystem 130. Touch-interface 134 may form part of and/or may be integrated with computing system 100 in a common enclosure. Alternatively, touch-interface 134 may be implemented as or by a standalone device that is in communication with and operatively linked to computing system 100 via a wired or wireless communications link. Touch-interface 134 may use any suitable technology to identify a position of one or more touches of physical objects upon a surface or within an observed region. Such technologies may include optical sensing, capacitive sensing, resistive sensing, acoustic sensing, etc.
In at least some implementations, touch-interface 134 may be implemented in combination with graphical display 132, such as with touch-sensitive graphical displays. Non-limiting examples include tablet computers or mobile computers (e.g., mobile smart phones) that include integrated touch-sensitive display devices. In these implementations, touch interactions with touch-interface 134 correspond to the same point within GUI 140 of graphical display 132. For example, in
Storage subsystem 120 of computing system 100 includes one or more physical, non-transitory storage devices (e.g., hard drive, flash memory device, etc.) having instructions 170 stored thereon that are executable by one or more physical, non-transitory logic devices (e.g., one or more processor devices and/or other suitable logic devices) of logic subsystem 110 to perform one or more tasks, operations, processes, etc.
Instructions 170 may include software and/or firmware held by storage subsystem 120 in non-transitory form. As a non-limiting example, instructions 170 include an operating system 172 and one or more application programs 174. An application program may include a stand-alone application program or a network-linked application program, such as a web browser that downloads and executes network resources received over a communications network. Operating system 172 may include an application programming interface (API) 176 that enables application programs 174 to interact with operating system 172 and other components of computing system 100. As an example, operating system 172 receives and processes user inputs directed at touch-interface 134, which may be communicated to an application program via API 176. The application program receives, processes, and responds to the user inputs received from the operating system, for example, by directing the operating system to update GUI 140 or a portion thereof. It will be understood that the logic subsystem of computing system 100 generates and initiates display of a GUI based on instructions held in the storage subsystem.
Within the example sequence depicted in
Compression in the examples depicted in
At 510, the method includes receiving user input via a touch-interface. The user input includes one or more touches of the touch-interface, and may involve motion relative to the touch-interface. As an example, a user may touch a surface of the touch-interface with a single touch at a first location and translate that single touch along the surface to a second location of the touch-interface. As another example, a user may touch a surface of the touch-interface with two concurrent touches at a first location or region and translate the two concurrent touches along the surface to a second location or region of the touch-interface. As yet another example, a user may touch a surface of the touch-interface with three or more concurrent touches at a first location or region and translate the three or more concurrent touches along the surface to a second location or region of the touch-interface. Touches may be provided by a body part of the user (e.g., a finger) or may be provided by an implement (e.g., a stylus), as non-limiting examples.
At 512, the method includes identifying whether the user input includes a single-touch user input or a multi-touch user input. A multi-touch user input includes two or more concurrent touches of the touch-interface.
At 514, the method includes identifying a direction and/or magnitude of motion for the touch event (e.g., including the single-touch user input or the multi-touch user input) along each coordinate axis. Motion may be measured as one or more of: (1) a distance traveled (i.e., translation) by the touch along and relative to the touch-interface, (2) a velocity of the touch, and/or (3) an acceleration of the touch. Motion of each touch received via the touch-interface may be defined by a motion vector having a direction and magnitude. A motion vector may be formed from a combination of two vector components measured along each coordinate axis of the touch-interface and/or associated GUI. For example, a motion vector may include a horizontal vector component having a direction and magnitude along the horizontal coordinate axis and a vertical vector component having a direction and magnitude along the vertical coordinate axis.
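The per-axis decomposition described at 514 can be sketched in Python; the `MotionVector` type and the start/end position tuples are illustrative assumptions, not part of this disclosure:

```python
from dataclasses import dataclass

@dataclass
class MotionVector:
    dx: float  # horizontal vector component (sign encodes direction)
    dy: float  # vertical vector component (sign encodes direction)

def motion_vector(start, end):
    """Decompose the motion of a touch into per-axis vector components.

    `start` and `end` are (x, y) positions of the touch relative to the
    touch-interface; each returned component carries both a direction
    (its sign) and a magnitude along its coordinate axis.
    """
    return MotionVector(dx=end[0] - start[0], dy=end[1] - start[1])
```

Velocity or acceleration components could be derived the same way by dividing by elapsed time, since the disclosure permits motion to be measured as distance, velocity, or acceleration.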
At 516, the method includes applying filter criteria to motion along each coordinate axis to obtain filtered user input. In at least some implementations, filtering of touch-based user input involving motion may be used to determine whether compression, expansion, translation, etc. are to be performed for the touch event, or whether the motion is insignificant or incidental to a touch-based user input intended by the user.
Vector components of the motion along two coordinate axes may be independently filtered based on the same or different filter criteria. In at least some implementations, filter criteria may include a minimum magnitude (e.g., a minimum threshold distance of travel for a touch) along a coordinate axis to be considered a significant user input. In such case, a distance of travel of the touch along a coordinate axis that is less than the minimum threshold distance may be considered an insignificant or incidental user input and removed from the filtered user input.
This technique may be used, for example, to resolve motion in two coordinate axes into a significant user input along a first coordinate axis and an insignificant user input along a second coordinate axis.
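A minimal sketch of the filter criteria at 516, assuming per-axis minimum travel thresholds (the function name and default threshold values are hypothetical):

```python
def filter_motion(dx, dy, min_x=8.0, min_y=8.0):
    """Apply per-axis minimum-magnitude filter criteria to a motion vector.

    A component whose travel distance falls below its axis threshold is
    treated as insignificant or incidental and zeroed out of the filtered
    user input; setting one threshold to float('inf') constrains the
    recognized motion entirely to the other coordinate axis.
    """
    fx = dx if abs(dx) >= min_x else 0.0
    fy = dy if abs(dy) >= min_y else 0.0
    return fx, fy
```

For example, `filter_motion(42.0, 3.0)` resolves diagonal-ish motion into a significant horizontal input and an insignificant vertical one, returning `(42.0, 0.0)`.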
In at least some implementations, touch-based motion may be constrained (e.g., by an operating system and/or application program) to a single coordinate axis, such as a vertical coordinate axis or a horizontal coordinate axis. In this implementation, a lower minimum threshold distance may be applied to the single coordinate axis and a higher minimum threshold distance (e.g., an infinite threshold) may be applied to the other coordinate axis to exclude motion in that other coordinate axis. For example,
At 518, if the user input or the filtered user input includes a multi-touch user input, then the process flow proceeds to operations 520-526. The filtered user input may be used in operations 520-526 if filter criteria are applied to the user input. In implementations where filter criteria are not applied, the unfiltered user input may be used in operations 520-526. Unfiltered user input and its filtered user input may be associated with the touch event via the touch event identifier.
At 520, the method includes identifying a current compression or expansion state of the GUI or portion thereof. In at least some implementations, a GUI may support or otherwise include two or more discrete states of compression or expansion. As an example, a GUI may include a first state (e.g., a compressed state) and a second state (e.g., an expanded state). As another example, a GUI may include a first state (e.g., an expanded state), a second state (e.g., a regular state), and a third state (e.g., a compressed state). In yet another example, a GUI may include 3 or more, 10 or more, hundreds or more, thousands or more, or a near-infinite quantity of states to provide the appearance of continuous compression and/or expansion across a range of states. Individual states may be referred to as snap-points that enable a user to compress or expand the GUI or a targeted region of the GUI between a limited quantity of predefined snap-points, such as a maximum snap-point and minimum snap-point.
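The snap-point behavior can be illustrated as follows, under the assumption (made only for illustration) that compression states are represented as numeric levels between a minimum and maximum snap-point:

```python
def nearest_snap_point(level, snap_points):
    """Snap a continuous compression level to the closest predefined state.

    `snap_points` is a sequence of supported compression states, e.g.
    0.0 = fully expanded and 1.0 = fully compressed; with many closely
    spaced snap-points the result approximates continuous compression,
    while two or three points give discrete compressed/regular/expanded
    states as described in the disclosure.
    """
    return min(snap_points, key=lambda s: abs(s - level))
```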
At 522, the method includes identifying a location of the user input within the GUI or portion thereof. A user may direct a touch-based user input at the GUI to define a limited region within the GUI to which compression or expansion is performed. For example, a user may direct a touch-based user input at a window or a graphical content item of a GUI to indicate focus on that window or graphical content item. Subsequent movement of the touch-based user input may limit compression or expansion to that window or graphical content item. However, compression or expansion may be applied to the entire GUI in other examples.
In at least some implementations, two reference datum lines are provided, one on each opposing side of the graphical content of a GUI. For example,
At 524, the method includes compressing graphical content within a graphical user interface in the coordinate direction toward a reference datum line. The method at 524 is performed responsive to the user input including two or more touches (or another suitable pre-defined quantity of concurrent touches) of the touch-interface involving motion relative to the touch-interface in a coordinate direction. Compression performed at 524 may include compressing the graphical content in the coordinate direction by an amount that is based on a magnitude of the vector component in that coordinate direction. In at least some implementations, a scaling factor may be applied to the magnitude of the vector component in a coordinate direction to obtain a compression magnitude that defines an amount of compression in that coordinate direction.
Compressing the graphical content within the graphical user interface in the coordinate direction toward the reference datum line may include compressing the graphical content from a first state (e.g., the current state identified at 520) to a second state (e.g., a state of higher compression). Compressing from the first state to the second state may be through one or more intermediate states, and may provide the appearance of continuous compression across a range of states. As previously described, the reference datum line may be identified based on a location of the multi-touch user input event and/or the vector components of motion of the user input identified at 514.
Compression and expansion of graphical content may take various forms. In at least some implementations, compressing may include reducing a geometric size dimension of graphical content in one or more coordinate directions, and expanding may include increasing a geometric size dimension of graphical content in one or more coordinate directions.
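One possible reading of compression as a reduction of a geometric size dimension, with a scaling factor applied to the motion magnitude as described at 524 (the scaling factor and clamp value are hypothetical):

```python
def compress_width(width, motion_magnitude, scale=0.5, min_width=40.0):
    """Compress a horizontal size dimension toward a reference datum line.

    The compression magnitude is the magnitude of the motion vector
    component along the coordinate axis multiplied by a scaling factor;
    the width is clamped at a minimum so the graphical content remains
    visible rather than collapsing entirely.
    """
    return max(min_width, width - motion_magnitude * scale)
```

Expansion at 526 would be the mirror operation, adding the scaled magnitude back up to some maximum size for the least-compressed state.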
For example,
In further implementations, compressing and/or expanding the GUI or a portion thereof may include varying an amount and/or type of text information that is presented within a given graphical element, such as depicted in
Compression may be performed across the GUI beginning at the touch-based user input and ending at the reference datum line. In at least some implementations, compression may begin at and be applied from the furthest touch from the reference datum line (e.g., touch 220 in
Where two opposing reference datum lines are provided, the previously described reference datum line may refer to a first reference datum line that is perpendicular to the coordinate direction that is located on a first side of the graphical content. Method 500 may further include, responsive to the user input including two or more concurrent touches of the touch-interface involving motion relative to the touch-interface in an opposite direction to the coordinate direction, compressing the graphical content within the graphical user interface in the opposite direction toward a second reference datum line. For example, compression may be performed toward reference datum line 241 rather than toward reference datum line 240 in
At 526, the method includes expanding graphical content within the graphical user interface in the direction opposite to the coordinate direction and away from the reference datum line. The method at 526 is performed responsive to the user input including two or more touches (or another suitable pre-defined quantity of concurrent touches) of the touch-interface involving motion relative to the touch-interface in a direction opposite to the coordinate direction. Expansion performed at 526 may include expanding the graphical content in the opposite direction by an amount that is based on a magnitude of the vector component in that opposite direction. In at least some implementations, a scaling factor may be applied to the magnitude of the vector component in the opposite direction to obtain an expansion magnitude that defines an amount of expansion in that coordinate direction.
Expanding the graphical content within the graphical user interface away from the reference datum line may include expanding the graphical content from a first state (e.g., the current state identified at 520) to a second state (e.g., a state of lesser compression). Expanding from the first state to the second state may be through one or more intermediate states. A quantity of states of compression or expansion may vary depending on implementation. In at least some implementations, a fully compressed state may correspond to an entire view of a GUI or a portion thereof within a given field of view, providing the user with an entire view of the graphical content within one or more dimensions.
As previously described, the reference datum line may be identified based on a location of the multi-touch user input event and/or the vector components of motion of the user input identified at 514. For example, a location of the user input within the GUI or portion thereof may inform whether expansion is to be performed away from a first reference datum line located on a first side of the GUI or away from a second reference datum line located on a second side of the GUI opposing the first side. Depending on implementation, expansion may be performed away from the closest reference datum line or away from the furthest opposing reference datum line from the location of the user input.
Within the context of compression in two dimensions involving two orthogonal coordinate axes, the above-described coordinate direction may refer to a first coordinate direction, and the reference datum line refers to a first reference datum line having an axis perpendicular to an axis of the first coordinate direction. In the context of two-dimensional compression, the method at 524 further includes, responsive to the user input including two or more touches of the touch-interface involving motion relative to the touch-interface in a second coordinate direction having an axis perpendicular to the axis of the first coordinate direction, compressing graphical content within the graphical user interface in the second coordinate direction toward a second reference datum line that is perpendicular to the first reference datum line.
In the context of two-dimensional expansion, the method at 526 further includes, responsive to the user input including two or more touches of the touch-interface involving motion relative to the touch-interface in an opposite direction to the second coordinate direction, expanding the graphical content within the graphical user interface in the opposite direction to the second coordinate direction and away from the second reference datum line that is perpendicular to the first reference datum line. Hence, compression and/or expansion may be supported in two different dimensions using a common multi-touch user input. It will be appreciated that some implementations may support compression in a first coordinate direction with concurrent expansion in a second coordinate direction orthogonal to the first coordinate direction by applying the previously described aspects of method 500.
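Two-dimensional compression and expansion from a single multi-touch gesture might be sketched as below, assuming for illustration that the two reference datum lines sit at the top and left edges, so a negative motion component compresses its dimension and a positive one expands it (a simplification of the disclosed method):

```python
def resize_2d(size, motion, scale=0.5, min_size=40.0):
    """Apply 2D compression/expansion from one multi-touch motion vector.

    Each orthogonal motion component independently adjusts the matching
    size dimension: motion toward a datum line (negative, with datum
    lines assumed at the top-left) compresses, motion away (positive)
    expands. This also permits concurrent compression along one axis
    with expansion along the other, as the disclosure notes.
    """
    w, h = size
    dx, dy = motion
    new_w = max(min_size, w + dx * scale)
    new_h = max(min_size, h + dy * scale)
    return new_w, new_h
```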
Alternatively, if the user input includes a single-touch user input, then the process flow proceeds to operations 528 and/or 530. At 528, the method includes translating graphical content within the graphical user interface in the coordinate direction toward the reference datum line without compressing the graphical content responsive to the user input including a single touch (or another suitable pre-defined quantity of concurrent touches) of the touch-interface involving motion relative to the touch-interface in the coordinate direction. At 530, the method includes translating graphical content within the graphical user interface in the direction opposite to the coordinate direction and away from the reference datum line without expanding the graphical content responsive to the user input including a single touch (or another suitable pre-defined quantity of concurrent touches) of the touch-interface involving motion relative to the touch-interface in the direction opposite to the coordinate direction. The method at 528 and 530 may be used to provide scrolling or panning of the GUI or a portion thereof.
While method 500 describes compression and/or expansion being performed responsive to multi-touch user inputs, and translation being performed responsive to single-touch user inputs, in other implementations, compression and/or expansion may be performed responsive to single-touch user inputs and translation may be performed responsive to multi-touch user inputs. In further implementations, compression may be performed responsive to two concurrent touches of a multi-touch user input, expansion may be performed responsive to three or more concurrent touches of a multi-touch user input, and translation may be performed responsive to a single-touch user input. In still further implementations, compression may be performed responsive to three or more concurrent touches of a multi-touch user input, expansion may be performed responsive to two concurrent touches of a multi-touch user input, and translation may be performed responsive to a single-touch user input.
Hence, translating/panning of a GUI may be distinguished from compression and/or expansion by a quantity of concurrent touches directed at the GUI or at a region of its content. Similarly, compression and expansion may be distinguished from each other based on a quantity of concurrent touches, in addition to or as an alternative to the direction of motion of the one or more touches. Within this context, operations 512 and/or 518 of method 500 may additionally or alternatively include determining a quantity of concurrent touches (e.g., single touch, two touches, three touches, four touches, five touches, etc.), in which a different quantity of concurrent touches is associated with a respective command (e.g., translate/pan, compress, expand, etc.).
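The touch-count-to-command mapping described above can be sketched as follows; the particular assignments (one touch translates, two touches compress or expand) are just one of the configurations the disclosure contemplates, and the left-edge datum line is an illustrative assumption:

```python
# Hypothetical mapping of concurrent-touch count to GUI command; the
# disclosure also permits other assignments, including user-defined ones.
COMMANDS = {1: "translate", 2: "compress_or_expand"}

def dispatch(touch_count, dx):
    """Select a command from the quantity of concurrent touches and, for
    multi-touch input, from the direction of motion: motion toward the
    reference datum line (assumed here at the left edge, so dx < 0)
    compresses, and motion away from it expands.
    """
    command = COMMANDS.get(touch_count, "ignore")
    if command == "compress_or_expand":
        return "compress" if dx < 0 else "expand"
    return command
```

Making `COMMANDS` user-editable would implement the user-defined touch-count assignment discussed in the following paragraph of the disclosure.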
In at least some implementations, the quantity of concurrent touches associated with each command may be user-defined to enable a user to set or adjust the quantity of concurrent touches associated with translating/panning the GUI or a portion thereof, compressing the GUI or a portion thereof, and/or expanding the GUI or a portion thereof. Setting or otherwise defining the quantity of concurrent touches may be on an application-specific basis, an operating system-wide basis, and/or may span two or more application programs and/or an operating system.
While the present disclosure provides numerous examples of touch-based user input that define compression, expansion, and/or translating/panning commands for a GUI, other suitable forms of user input may be used. In one example, compression and/or expansion may be performed responsive to input received via a pointing device such as a mouse. In this example, a single mouse click of a GUI or a portion thereof may provide a translating/panning command, whereas a single mouse click plus a keyboard input (e.g., concurrent pressing of a shift key of a keyboard in combination with the mouse click) provides a compression and/or expansion command. In this example, a graphical selector of the pointing device may take the form of at least one touch input, and motion of the graphical selector may define the direction of compression and/or expansion relative to one or more reference datum lines. In still further examples, voice-based user input may be used to initiate compression and/or expansion within a GUI or a portion thereof. Hence, it will be understood that different user inputs received via any suitable user input device may be used to define whether translating/panning, compression, or expansion is to be performed.
In at least some implementations, a reference datum line may correspond to a boundary of the GUI (e.g., a border of GUI 140 of
Furthermore, while the examples described herein refer to compression toward a reference datum line and expansion away from the reference datum line, in other implementations, expansion may be performed toward a reference datum line in a direction of motion of the user input and/or compression may be performed away from the reference datum line in a direction of motion of the user input.
It is to be understood that the configurations and/or techniques described herein are exemplary in nature, and that specific examples or embodiments are not to be considered in a limiting sense, because numerous variations are possible. The specific methods or processes described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed. The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various methods, processes, systems, configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof. Variations to the disclosed embodiments that fall within the metes and bounds of the claims, now or later presented, or the equivalence of such metes and bounds are embraced by the claims.
Claims
1. A method performed by a computing system, the method comprising:
- presenting a graphical user interface including at least one window containing graphical content via a graphical display device;
- receiving user input directed at the window via a touch-interface, the user input including two or more concurrent touches of the touch-interface involving motion relative to the touch-interface; and
- responsive to the motion of each of the two or more concurrent touches of the user input having a motion vector component in an identical direction along a same coordinate axis, compressing the graphical content, within a region of the graphical user interface limited to the window, in a coordinate direction corresponding to the identical direction toward a reference datum line while continuing to present the graphical content within the window of the graphical user interface and while maintaining a size of the window within the graphical user interface;
- wherein the coordinate direction is parallel to or collinear with the same coordinate axis, and the identical direction is toward the reference datum line;
- wherein the reference datum line is located at an edge of the window of the graphical user interface that contains the graphical content; and
- wherein additional graphical content associated with the window emerges from an opposite edge of the window and is presented within the window responsive to compressing the graphical content.
2. The method of claim 1, further comprising:
- receiving additional user input via the touch-interface, the additional user input including two or more concurrent touches of the touch-interface involving motion relative to the touch-interface; and
- responsive to the motion of each of the two or more concurrent touches of the additional user input having a motion vector component in an identical direction that is opposite the coordinate direction, expanding graphical content within the graphical user interface in an opposite coordinate direction from the coordinate direction and away from the reference datum line.
3. The method of claim 1, further comprising:
- receiving additional user input via the touch-interface, the additional user input including a single touch of the touch-interface involving motion relative to the touch-interface; and
- responsive to the motion of the single touch of the additional user input, translating graphical content within the graphical user interface.
4. The method of claim 3, wherein the translating of the graphical content is performed without compressing or expanding the graphical content.
5. The method of claim 1,
- wherein the same coordinate axis is a horizontal coordinate axis; and
- wherein the reference datum line is a vertical reference datum line orthogonal to the horizontal coordinate axis.
6. The method of claim 1,
- wherein the same coordinate axis is a vertical coordinate axis; and
- wherein the reference datum line is a horizontal reference datum line orthogonal to the vertical coordinate axis.
7. The method of claim 1, wherein a position of the reference datum line within the graphical user interface is user-defined; and
- wherein the method further comprises receiving another user input defining the position of the reference datum line.
8. The method of claim 1, wherein compressing the graphical content includes revealing a greater quantity of graphical elements within the graphical user interface.
9. The method of claim 1, wherein compressing the graphical content includes compressing the graphical content from a first state to a second state.
10. The method of claim 9, wherein compressing the graphical content from the first state to the second state includes compressing the graphical content through one or more intermediate states.
11. A computing device, comprising:
- one or more storage devices having instructions stored thereon, executable by one or more logic devices to:
- present a graphical user interface including at least one window containing graphical content via a graphical display device;
- receive user input directed at the window via a touch-interface, the user input including two or more concurrent touches of the touch-interface involving motion relative to the touch-interface; and
- responsive to the motion of each of the two or more concurrent touches of the user input having a motion vector component in an identical direction along a same coordinate axis that points toward a reference datum line, compress the graphical content, within a region of the graphical user interface limited to the window, in a coordinate direction corresponding to the identical direction toward the reference datum line while continuing to present the graphical content within the window of the graphical user interface and while maintaining a size of the window within the graphical user interface;
- wherein the coordinate direction is parallel to or collinear with the same coordinate axis, and the identical direction is toward the reference datum line;
- wherein the reference datum line is located at an edge of the window of the graphical user interface that contains the graphical content; and
- wherein additional graphical content associated with the window emerges from an opposite edge of the window and is presented within the window responsive to compressing the graphical content.
12. The computing device of claim 11,
- wherein the same coordinate axis is a horizontal coordinate axis of the graphical user interface; and
- wherein the reference datum line is a vertical reference datum line orthogonal to the horizontal coordinate axis.
13. The computing device of claim 11,
- wherein the same coordinate axis is a vertical coordinate axis of the graphical user interface; and
- wherein the reference datum line is a horizontal reference datum line orthogonal to the vertical coordinate axis.
14. The computing device of claim 11, wherein the instructions are further executable by the one or more logic devices to:
- receive additional user input via the touch-interface, the additional user input including a single touch of the touch-interface involving motion relative to the touch-interface; and
- responsive to the motion of the single touch of the additional user input, translate graphical content within the graphical user interface.
15. The computing device of claim 11, wherein the graphical content is compressed by revealing a greater quantity of graphical elements within the graphical user interface; and
- wherein the graphical content is expanded by revealing a lesser quantity of graphical elements within the graphical user interface.
7030861 | April 18, 2006 | Westerman |
8345014 | January 1, 2013 | Lim |
20060026521 | February 2, 2006 | Hotelling |
20080165255 | July 10, 2008 | Christie |
20100134425 | June 3, 2010 | Storrusten |
20100177053 | July 15, 2010 | Yasutake |
20110078560 | March 31, 2011 | Weeldreyer |
20110202834 | August 18, 2011 | Mandryk |
20110239155 | September 29, 2011 | Christie |
20120162265 | June 28, 2012 | Heinrich |
20120274583 | November 1, 2012 | Haggerty |
20130141375 | June 6, 2013 | Ludwig |
20130254662 | September 26, 2013 | Dunko |
20140218393 | August 7, 2014 | Lee |
20140298253 | October 2, 2014 | Jin |
20140298266 | October 2, 2014 | Lapp |
20140309868 | October 16, 2014 | Ricci |
20150301673 | October 22, 2015 | Peshkin |
20170357317 | December 14, 2017 | Chaudhri |
- C. Wang, J. Dai and L. Wang, “3D Multi-touch recognition based virtual interaction,” 2010 3rd International Congress on Image and Signal Processing, Yantai, 2010, pp. 1478-1481, doi: 10.1109/CISP.2010.5646357. (Year: 2010).
Type: Grant
Filed: Jul 24, 2018
Date of Patent: Nov 17, 2020
Patent Publication Number: 20190018549
Assignee: ETTER STUDIO LTD. (Zurich)
Inventor: Christian Etter (Zurich)
Primary Examiner: William L Bashore
Assistant Examiner: Nathan K Shrewsbury
Application Number: 16/044,282
International Classification: G06F 3/0481 (20130101); G06F 3/0482 (20130101); G06F 3/0484 (20130101); G06F 3/0488 (20130101);