Abstract: A system and method for providing a 3D gesture based user interface with haptic feedback is disclosed. A processing system providing the 3D user interface captures image data, and a gesture is detected in the captured image data. The detected gesture is then used to determine an appropriate haptic feedback. A signal indicating the appropriate haptic feedback is generated and provided to a haptic feedback device, which then provides the appropriate haptic feedback.
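The pipeline in the abstract above (detect gesture, look up an appropriate feedback, signal the device) could be sketched as follows. The gesture names, feedback parameters, and `actuate` interface are illustrative assumptions, not taken from the patent.

```python
# Hypothetical gesture-to-haptic dispatch. Each detected gesture maps to a
# feedback signal: (waveform name, duration in ms, intensity in [0, 1]).
FEEDBACK_TABLE = {
    "tap":   ("click", 20, 0.6),
    "swipe": ("ramp",  80, 0.4),
    "grab":  ("buzz", 120, 0.9),
}

def haptic_signal_for(gesture):
    """Return the haptic feedback signal for a detected gesture,
    or None when the gesture has no associated feedback."""
    return FEEDBACK_TABLE.get(gesture)

def send_to_device(signal, device):
    """Forward a non-empty signal to a haptic feedback device (here,
    any object exposing an `actuate` callable)."""
    if signal is not None:
        device.actuate(*signal)
```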
Abstract: Embodiments in accordance with this invention disclose systems and methods for implementing head tracking based graphical user interfaces that incorporate gesture reactive interface objects. The disclosed embodiments perform a method in which a GUI that includes interface objects is rendered and displayed. Image data of an interaction zone is captured. A targeting gesture targeting a targeted interface object is detected in the captured image data, and a set of 3D head interaction gestures is enabled. Additional image data is captured. Motion of at least a portion of a human head is detected, and one of the 3D head interaction gestures is identified. The rendering of the interface is modified in response to the detected 3D head interaction, and the modified interface is displayed.
Type:
Grant
Filed:
January 7, 2015
Date of Patent:
November 29, 2016
Assignee:
Aquifi, Inc.
Inventors:
Carlo Dal Mutto, Giulio Marin, Abbas Rafii, Tony Zuccarino
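The two-stage flow in the abstract above (a targeting gesture must land on an interface object before 3D head interaction gestures take effect) could be sketched as a small state machine. The event names and gesture set here are illustrative assumptions.

```python
# Hypothetical sketch: head gestures are ignored until a targeting gesture
# has selected an interface object; afterwards they modify its rendering.
class HeadGestureUI:
    HEAD_GESTURES = {"nod", "tilt_left", "tilt_right"}

    def __init__(self):
        self.targeted_object = None  # set once a targeting gesture is detected

    def handle_event(self, event, payload=None):
        if event == "targeting_gesture":
            # Targeting gesture detected: enable head gestures for this object.
            self.targeted_object = payload
            return f"targeted:{payload}"
        if event in self.HEAD_GESTURES:
            if self.targeted_object is None:
                return None  # head gestures not yet enabled
            # Modify the rendering of the targeted object in response.
            return f"modify:{self.targeted_object}:{event}"
        return None
```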
Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting a GUI on the display screen and creating user feedback, including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
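The mapping of (x,y,z) interaction-zone locations to two-dimensional display screen locations, as the abstract above describes, could look something like this minimal sketch. The zone bounds, clamping behavior, and screen resolution are illustrative assumptions.

```python
# Hypothetical linear mapping from a 3D position inside an interaction zone
# to pixel coordinates on a 2D display; z is left available for depth cues.
def map_to_screen(pos, zone_min, zone_max, screen_w, screen_h):
    x, y, _z = pos
    # Normalize x and y to [0, 1] within the zone's extent.
    u = (x - zone_min[0]) / (zone_max[0] - zone_min[0])
    v = (y - zone_min[1]) / (zone_max[1] - zone_min[1])
    # Clamp so a hand straying slightly outside the zone stays on screen.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return (round(u * (screen_w - 1)), round(v * (screen_h - 1)))
```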
Abstract: A method for operating a real-time gesture based interactive system includes: obtaining a sequence of frames of data from an acquisition system; comparing successive frames of the data for portions that change between frames; determining whether any of the portions that changed are part of an interaction medium detected in the sequence of frames of data; defining a 3D interaction zone relative to an initial position of the part of the interaction medium detected in the sequence of frames of data; tracking a movement of the interaction medium to generate a plurality of 3D positions of the interaction medium; detecting movement of the interaction medium from inside to outside the 3D interaction zone at a boundary 3D position; shifting the 3D interaction zone relative to the boundary 3D position; computing a plurality of computed positions based on the 3D positions; and supplying the computed positions to control an application.
Type:
Grant
Filed:
September 30, 2016
Date of Patent:
April 11, 2017
Assignee:
Aquifi, Inc.
Inventors:
Carlo Dal Mutto, Giuliano Pasqualotto, Giridhar Murali, Michele Stoppa, Amir hossein Khalili, Ahmed Tashrif Kamal, Britta Hummel
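The zone-shifting step in the abstract above (detecting the interaction medium crossing the 3D interaction zone boundary, then shifting the zone relative to that boundary position) could be sketched as below. The cubic zone shape, its half-width, and the per-axis shift rule are illustrative assumptions.

```python
# Hypothetical sketch: a cubic 3D interaction zone follows the tracked
# interaction medium (e.g. a hand) by shifting whenever the medium exits it.
def inside(pos, center, half):
    """True if pos lies within the cube of half-width `half` around center."""
    return all(abs(p - c) <= half for p, c in zip(pos, center))

def shift_zone(center, pos, half):
    """Shift the zone center just enough, per axis, to contain pos."""
    new = list(center)
    for i, (p, c) in enumerate(zip(pos, center)):
        if p > c + half:
            new[i] = p - half
        elif p < c - half:
            new[i] = p + half
    return tuple(new)

def track(positions, start, half=1.0):
    """Process a sequence of 3D positions; return the final zone center."""
    center = start
    for pos in positions:
        if not inside(pos, center, half):
            center = shift_zone(center, pos, half)
    return center
```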
Abstract: User interaction with a display is detected substantially simultaneously using at least two cameras whose intersecting FOVs define a three-dimensional hover zone within which user interactions can be imaged. Separately and collectively, image data is analyzed to identify a relatively small number of user landmarks. A substantially unambiguous correspondence is established between the same landmark in each acquired image, and a three-dimensional reconstruction is made in a common coordinate system. Preferably cameras are modeled to have characteristics of pinhole cameras, enabling rectified epipolar geometric analysis to facilitate more rapid disambiguation among potential landmark points. Consequently processing overhead is substantially reduced, as are latency times. Landmark identification and position information is convertible into a command causing the display to respond appropriately to a user gesture.
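The pinhole-camera reconstruction step in the abstract above can be illustrated for the rectified case: with rectified epipolar geometry the same landmark lies on the same image row in both views, so depth follows from the horizontal disparity as Z = f·B/d. The focal length and baseline values below are illustrative assumptions, not parameters from the patent.

```python
# Hypothetical rectified-stereo triangulation of a single landmark seen by
# two pinhole cameras with focal length f (pixels) and baseline (meters).
def triangulate(xl, xr, y, f, baseline):
    """Recover (X, Y, Z) in the left camera's frame from a landmark at
    column xl in the left image and xr in the right image, shared row y."""
    d = xl - xr                    # disparity in pixels
    if d <= 0:
        raise ValueError("landmark must have positive disparity")
    Z = f * baseline / d           # depth from similar triangles
    X = xl * Z / f                 # back-project through the pinhole model
    Y = y * Z / f
    return (X, Y, Z)
```

In practice the correspondence step (matching the same landmark across views) is what rectification accelerates: the search for a match collapses from the whole image to a single row.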