Abstract: An image sensor frame rate can be increased by “interlaced” mode operation whereby only half the number of lines of an image is transported to the readout circuitry. This halves the integration time but also halves the resolution of the sensor. Accordingly, in one embodiment, an image sensor operated in an interlaced fashion is first exposed to a scene under a first form of illumination (e.g., narrowband illumination), and a first set of alternating (horizontal or vertical) lines constituting half of the pixels is read out of the array; the sensor is then exposed to the same scene under a second form of illumination (e.g., existing ambient illumination with the illumination source turned off), and a second set of alternating lines, representing the other half of the pixel array, is read out. The two images are compared and noise removed from the image obtained under narrowband illumination.
Abstract: An image sensor frame rate can be increased by “interlaced” mode operation whereby only half the number of lines (alternating between odd and even lines) of an image is transported to the readout circuitry. This halves the integration time but also halves the resolution of the sensor. The reduction is tolerable for motion characterization as long as sufficient image resolution remains. Accordingly, in one embodiment, an image sensor operated in an interlaced fashion is first exposed to a scene under a first illumination form (e.g., narrowband illumination), and a first set of alternating (horizontal or vertical) lines constituting half of the pixels is read out of the array; the sensor is then exposed to the same scene under a second illumination form (e.g., existing ambient illumination with the illumination source turned off), and a second set of alternating lines, representing the other half of the pixel array, is read out.
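The differencing scheme these two abstracts describe can be sketched in a few lines: one interlaced half-frame is captured under narrowband illumination, the other under ambient light only, and the ambient half-frame is subtracted to suppress background light. This is a minimal NumPy sketch under assumed 8-bit half-frames; the function and parameter names are illustrative, not from the patents.

```python
import numpy as np

def denoise_interlaced(lit_lines, ambient_lines):
    """Combine two interlaced half-frames: alternating lines captured
    under narrowband illumination (`lit_lines`) and under ambient light
    with the source off (`ambient_lines`). Subtracting the ambient
    half-frame suppresses background illumination."""
    # Upsample each half-frame to full height by repeating rows
    # (a simple stand-in for proper de-interlacing).
    lit_full = np.repeat(lit_lines, 2, axis=0)
    ambient_full = np.repeat(ambient_lines, 2, axis=0)
    # Ambient light contributes to both exposures; the difference keeps
    # mostly the signal due to the narrowband source.
    diff = lit_full.astype(int) - ambient_full.astype(int)
    return np.clip(diff, 0, 255).astype(np.uint8)
```

The full-resolution output trades spatial detail for a doubled frame rate, as the abstracts note.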
Abstract: First and second detection systems coupled to a controller are synchronized, with the first detection system including first emission and detection modules while the second detection system includes a second detection module, for emitting radiation towards and detecting radiation from a region. A pulse of radiation emitted from the first emission module is detected by the first and second detection modules for a first time interval starting at time T1 and for a second time interval starting at time T2, respectively. The radiation received is compared to determine a radiation difference measurement. The starting time T2 is adjusted relative to starting time T1 based at least in part upon the radiation difference measurement to determine a revised starting time T2, thereby aiding the synchronization of starting time T2 with starting time T1.
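The synchronization loop in this abstract — adjusting the second start time T2 based on a radiation difference measurement — can be sketched as a simple feedback iteration. The callback, step size, and convergence tolerance below are assumptions for illustration; the patent leaves the measurement itself hardware-specific.

```python
def refine_start_time(t2, measure_difference, step=1e-6, iterations=20):
    """Iteratively nudge the second detector's start time T2 toward
    synchronization with T1. `measure_difference(t2)` is a hypothetical
    callback returning the signed radiation difference between the two
    detection intervals."""
    for _ in range(iterations):
        diff = measure_difference(t2)
        if abs(diff) < 1e-12:
            break  # intervals agree: T2 is synchronized with T1
        # Step T2 against the sign of the measured difference.
        t2 -= step if diff > 0 else -step
    return t2
```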
Type:
Grant
Filed:
January 24, 2014
Date of Patent:
May 24, 2016
Assignee:
Leap Motion, Inc.
Inventors:
Ryan Christopher Julian, Hongyuan He, David Samuel Holz
Abstract: Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.
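Because intensity from a source near the camera falls off roughly as 1/r², nearby object pixels appear much brighter than the distant background, and a simple threshold can separate them. A minimal NumPy sketch, with an illustrative threshold value not taken from the patent:

```python
import numpy as np

def segment_by_brightness(ir_image, threshold=128):
    """With an IR source next to the camera, nearby object pixels are
    far brighter than background pixels (intensity falls off roughly
    as 1/r^2). A global threshold is a minimal way to distinguish
    object pixels from background pixels."""
    return np.asarray(ir_image) >= threshold
```

A production system would likely combine this with the lit/unlit frame differencing described elsewhere in this listing rather than rely on a fixed threshold.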
Abstract: Imaging systems and methods optimize illumination of objects for purposes of detection, recognition and/or tracking by tailoring the illumination to the position of the object within the detection space. For example, feedback from a tracking system may be used to control and aim the lighting elements so that the illumination can be reduced or increased depending on the need.
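The feedback idea here — scaling illumination with the tracked object's position — can be sketched as a power controller that compensates for 1/r² falloff. All names, limits, and the reference distance below are illustrative assumptions, not values from the patent.

```python
def adjust_illumination(distance, min_power=0.1, max_power=1.0,
                        ref_distance=0.3):
    """Scale LED drive power with the tracked object's distance so the
    received brightness stays roughly constant (intensity falls off as
    1/r^2). Power is clamped to the emitter's operating range."""
    power = max_power * (distance / ref_distance) ** 2
    return min(max_power, max(min_power, power))
```

An object at half the reference distance needs only a quarter of the power, which is the power-saving behavior the abstract alludes to.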
Abstract: The technology disclosed relates to highly functional/highly accurate motion sensory control devices for use in automotive and industrial control systems capable of capturing and providing images to motion capture systems that detect gestures in a three dimensional (3D) sensory space.
Type:
Application
Filed:
August 13, 2015
Publication date:
February 18, 2016
Applicant:
LEAP MOTION, INC.
Inventors:
David S. Holz, Justin Schunick, Neeloy Roy, Chen Zheng, Ward Travis
Abstract: The technology disclosed relates to a motion sensory and imaging device capable of acquiring imaging information of the scene and providing at least a near real time pass-through of imaging information to a user. The sensory and imaging device can be used stand-alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtualized or created presentations of information.
Abstract: The technology disclosed relates to selecting among devices in a room to interact with. It also relates to operating a smart phone with reduced power consumption. It further relates to gesturally interacting with devices that lack gestural responsiveness. The technology disclosed also relates to distinguishing control gestures from proximate non-control gestures in a pervasive three dimensional (3D) sensory space. The technology disclosed further relates to selecting among virtual interaction modalities to interact with.
Type:
Application
Filed:
February 19, 2015
Publication date:
December 3, 2015
Applicant:
Leap Motion, Inc.
Inventors:
Robert Samuel Gordon, Maxwell Sills, Paul Alan Durdik
Abstract: The technology disclosed can provide capabilities such as calibrating an imaging device based on images taken by device cameras of reflections of the device itself. Implementations exploit device components that are easily recognizable in the images, such as one or more light-emitting diodes (LEDs) or other light sources, to eliminate the need for specialized calibration hardware; calibration can be accomplished, instead, with hardware readily available to a user of the device—the device itself and a reflecting surface, such as a computer screen. The user may hold the device near the screen under varying orientations and capture a series of images of the reflection with the device's cameras. These images are analyzed to determine camera parameters based on the known positions of the light sources. If the positions of the light sources themselves are subject to errors requiring calibration, they may be solved for as unknowns in the analysis.
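Solving for camera parameters from known light-source positions reduces to a least-squares fit between observed image points and their pinhole projections. The sketch below estimates just one parameter (focal length) under a strongly simplified model — camera at the origin, optical axis along +Z, no distortion; a real calibration would also solve for principal point, lens distortion, and pose, and the function name is illustrative.

```python
import numpy as np

def estimate_focal_length(points_3d, points_2d):
    """Least-squares focal-length estimate from known 3D positions of
    the device's light sources (as seen reflected in a mirror-like
    screen) and their observed image coordinates."""
    X, Y, Z = points_3d.T
    u, v = points_2d.T
    # Pinhole projection: u = f * X / Z, v = f * Y / Z.
    a = np.concatenate([X / Z, Y / Z])
    b = np.concatenate([u, v])
    # Closed-form 1D least squares: f = (a . b) / (a . a).
    return float(a @ b / (a @ a))
```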
Abstract: Disclosed are a system and a device including a motion capture device and an input device, hereafter referred to as a stylus, which has additional functionality. The motion capture device detects the motion of the stylus and the detected motion is used as an input to a computer system. The system is able to differentiate identical movements of a stylus as different inputs by varying a detectable property of the stylus. The stylus may exhibit a variable reflective property that is detectable by the motion capture device. The variable reflective property gives the stylus additional functionality with an extended vocabulary. The extended vocabulary includes supplemental information and/or instructions detected by the motion capture device.
Type:
Application
Filed:
May 21, 2014
Publication date:
November 26, 2015
Applicant:
Leap Motion, Inc.
Inventors:
David Samuel HOLZ, Kevin A. Horowitz, Justin Schunick
Abstract: The technology disclosed relates to providing devices and methods for attaching motion capture devices to head mounted displays (HMDs) using existing features of the HMDs, with no modification to the design of the HMDs. A motion capture device is attached with an adapter to a wearable device that can be a personal HMD having a goggle form factor. The motion capture device is operable to be attached to or detached from an adapter, and the adapter is operable to be attached to or detached from an HMD. The motion capture device is attached to the HMD with an adapter in a fixed position and orientation. In embodiments, the attachment mechanism coupling the adapter to the HMD utilizes existing functional or ornamental elements of an HMD. Functional or ornamental elements of the HMD include: air vents, bosses, grooves, recessed channels, slots formed where two parts connect, openings for head straps, etc.
Abstract: The technology disclosed relates to tracking movement of a real world object in three-dimensional (3D) space. In particular, it relates to mapping, to image planes of a camera, projections of observation points on a curved volumetric model of the real world object. The projections are used to calculate a retraction of the observation points at different times during which the real world object has moved. The retraction is then used to determine translational and rotational movement of the real world object between the different times.
Abstract: Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby.
Abstract: Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof.
Abstract: System and methods for locating objects within a region of interest involve, in various embodiments, scanning the region with light of temporally variable direction and detecting reflections of objects therein; positional information about the objects can then be inferred from the resulting reflections.
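Inferring position from a scanned beam and its detected reflection is, in 2D, a triangulation: intersect the emitted ray with the ray along which the reflection reached a detector a known baseline away. A minimal sketch; the angle convention (measured from the x-axis, emitter at the origin, detector on the x-axis) is an assumption for illustration.

```python
import math

def locate_by_scanning(theta, phi, baseline):
    """Triangulate an object's 2D position from the scan angle `theta`
    at which the swept beam hit it (emitter at the origin) and the
    angle `phi` at which the reflection arrived at a detector located
    `baseline` along the x-axis."""
    # Emitter ray: (t*cos(theta), t*sin(theta))
    # Detector ray: (baseline + s*cos(phi), s*sin(phi))
    # Solving the intersection gives t = baseline*sin(phi)/sin(phi-theta).
    t = baseline * math.sin(phi) / math.sin(phi - theta)
    return (t * math.cos(theta), t * math.sin(theta))
```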