Abstract: In various embodiments, the present invention provides a system and associated methods of calibration and use for an interactive imaging environment based on the optimization of parameters used in various segmentation algorithm techniques. These methods address the challenge of automatically calibrating an interactive imaging system, so that it is capable of aligning human body motion, or the like, to a visual display. As such, the present invention provides a system and method of automatically and rapidly aligning the motion of an object to a visual display.
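The abstract above describes calibrating an interactive imaging system by optimizing the parameters of a segmentation algorithm. A minimal sketch of that idea, assuming a simple background-subtraction segmenter scored against a known reference mask (all function names, the threshold parameter, and the scoring scheme are illustrative assumptions, not the patent's actual method):

```python
# Hypothetical calibration sketch: search over a segmentation parameter
# (here, a background-difference threshold) and keep the value whose
# foreground mask best matches a known reference silhouette, e.g. a
# person standing on a marked calibration spot.

def segment(frame, background, threshold):
    """Binary foreground mask: a pixel is foreground when it differs
    from the background by more than `threshold`."""
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def calibrate(frame, background, reference_mask, thresholds):
    """Return the candidate threshold whose mask agrees most closely
    with the reference mask (fraction of matching pixels)."""
    def score(mask):
        total = sum(len(row) for row in mask)
        agree = sum(m == r
                    for mrow, rrow in zip(mask, reference_mask)
                    for m, r in zip(mrow, rrow))
        return agree / total
    return max(thresholds,
               key=lambda t: score(segment(frame, background, t)))
```

A real system would search over many parameters (thresholds, blur radii, morphological kernel sizes) rather than one, but the structure — segment, score against ground truth, keep the best parameters — is the same.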
Abstract: Systems and methods configured to facilitate multi-modal user inputs in lieu of physical input for a processing device configured to execute an application include obtaining non-physical input for the processing device and the application, wherein the physical input comprises one or more of touch-based input and tilt input; processing the non-physical input to convert it into appropriate physical input commands for the application; and providing the physical input commands to the processing device.
Type:
Application
Filed:
October 27, 2015
Publication date:
February 25, 2016
Applicant:
PLAYVISION LABS, INC.
Inventors:
Matthew Flagg, Jeremy Barrett, Scott Wills, Sean Durkin, Vinod Valloppillil
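The multi-modal input abstract above converts non-physical input into the touch or tilt commands an unmodified application already understands. A minimal sketch of that conversion step, assuming a named-gesture event format and a lookup-table mapping (the table entries and command dictionaries are illustrative assumptions, not the patent's encoding):

```python
# Illustrative non-physical -> physical input conversion: a recognized
# gesture name is looked up and translated into the synthetic touch or
# tilt command that is then handed to the processing device.

GESTURE_TO_COMMAND = {
    "swipe_left":  {"type": "touch", "action": "swipe", "dx": -200, "dy": 0},
    "swipe_right": {"type": "touch", "action": "swipe", "dx": 200, "dy": 0},
    "lean_left":   {"type": "tilt",  "roll": -15.0},
    "lean_right":  {"type": "tilt",  "roll": 15.0},
}

def convert(non_physical_event):
    """Translate a non-physical input event into a physical input command."""
    command = GESTURE_TO_COMMAND.get(non_physical_event)
    if command is None:
        raise ValueError(f"unrecognized gesture: {non_physical_event!r}")
    return command
```

The key design point the abstract implies is that the application is untouched: all adaptation happens in the conversion layer, which emits exactly the touch/tilt commands the application was built to receive.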
Abstract: A computer-implemented method, a system, and software include providing output from a touch-based device to an external display; detecting gestures from a user located away from and not physically touching the touch-based device; and translating the detected gestures into appropriate commands for the touch-based device. The systems and methods provide alternative control of touch-based devices such as mobile devices. The systems and methods can include a mobile device coupled to an external display device and controlled via user gestures monitored by a collocated sensor. Accordingly, the systems and methods allow users to operate applications ("apps") on the mobile device, displayed on the external display device, without touching the mobile device, using gestures monitored by the collocated sensor. This enables a wide variety of rich apps to be operated in a new manner.
Type:
Application
Filed:
November 8, 2013
Publication date:
May 8, 2014
Applicant:
PLAYVISION LABS, INC.
Inventors:
Matthew Flagg, Jeremy Barrett, Scott Wills, Sean Durkin, Vinod Valloppillil
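The gesture-control abstract above has two stages: detect a gesture from a sensor watching the user, then translate it into a device command. A minimal sketch of the detection stage, assuming the collocated sensor yields a short trail of tracked hand positions per gesture window (the trail format, pixel threshold, and gesture names are assumptions for illustration):

```python
# Hypothetical gesture classifier: decide whether a recent trail of
# (x, y) hand positions constitutes a swipe, and in which direction.
# A recognized gesture would then be translated into a synthetic touch
# command for the mobile device.

def classify_gesture(trail, min_travel=100):
    """trail: list of (x, y) hand positions over recent sensor frames.
    Returns a gesture name, or None if the motion is too small."""
    if len(trail) < 2:
        return None
    dx = trail[-1][0] - trail[0][0]
    dy = trail[-1][1] - trail[0][1]
    if abs(dx) >= min_travel and abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    if abs(dy) >= min_travel:
        return "swipe_down" if dy > 0 else "swipe_up"
    return None
```

Small motions return None, which is how such a loop ignores incidental hand movement rather than flooding the device with spurious commands.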
Abstract: The present disclosure provides a system and method for enabling meaningful body-to-body interaction with virtual video-based characters or objects in an interactive imaging environment, including: capturing a corpus of video-based interaction data; processing the captured video using a segmentation process that corresponds to the capture setup in order to generate binary video data; labeling the corpus by assigning a description to clips of silhouette video; processing the labeled corpus of silhouette motion data to extract horizontal and vertical projection histograms for each frame of silhouette data; and estimating the motion state automatically from each frame of segmentation data using the processed model. Virtual characters or objects are represented using video captured from video-based motion, thereby creating the illusion of real characters or objects in an interactive imaging experience.
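The abstract above builds its per-frame motion feature from horizontal and vertical projection histograms of a binary silhouette. That feature is simple to state directly: each horizontal bin counts the foreground pixels in one row, and each vertical bin counts them in one column. A minimal sketch (the plain-list mask representation is an assumption; a real pipeline would use image arrays):

```python
# Projection histograms of a binary silhouette frame, as described in
# the abstract: row sums (horizontal) and column sums (vertical) of the
# 0/1 foreground mask. A motion-state model would then classify these
# histogram vectors frame by frame.

def projection_histograms(mask):
    """mask: 2-D list of 0/1 silhouette pixels (rows x columns).
    Returns (horizontal, vertical) projection histograms."""
    horizontal = [sum(row) for row in mask]       # foreground count per row
    vertical = [sum(col) for col in zip(*mask)]   # foreground count per column
    return horizontal, vertical
```

These histograms are a compact, translation-tolerant summary of body pose, which is why they make a practical input for the frame-by-frame motion-state estimation the abstract describes.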