Patents by Inventor Vivek Pradeep

Vivek Pradeep has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9342147
    Abstract: Examples relating to using non-visual feedback to alert a viewer of a display that a visual change has been triggered are disclosed. One disclosed example provides a method comprising using gaze tracking data from a gaze tracking system to determine that a viewer has changed gaze location. Based on that determination, a visual change is triggered and non-visual feedback indicating the triggering of the visual change is provided to the viewer. If a cancel change input is received within a predetermined timeframe, the visual change is not displayed; if no cancel change input is received within the timeframe, the visual change is displayed via the display.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: May 17, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Weerapan Wilairat, Ibrahim Eden, Vaibhav Thukral, David Nister, Vivek Pradeep
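
The cancel-window behavior this abstract describes is essentially a small state machine. Below is a minimal Python sketch of that logic under assumptions of ours, not the patent's: the class name, the `display.apply_visual_change()` hook, and the audible feedback stand-in are all hypothetical.

```python
import time

class GazeChangeController:
    """Hypothetical controller for the gaze-triggered, cancelable change."""

    def __init__(self, cancel_window_s=0.5):
        self.cancel_window_s = cancel_window_s  # predetermined timeframe
        self.pending_since = None               # when the change was triggered
        self.last_gaze = None

    def on_gaze_sample(self, gaze_location):
        # A change in gaze location triggers the (not yet displayed) visual
        # change and emits non-visual feedback about the trigger.
        if self.last_gaze is not None and gaze_location != self.last_gaze:
            self.pending_since = time.monotonic()
            print("beep")                       # stand-in for audio/haptic cue
        self.last_gaze = gaze_location

    def on_cancel_input(self):
        # A cancel input inside the window suppresses the pending change.
        if (self.pending_since is not None and
                time.monotonic() - self.pending_since <= self.cancel_window_s):
            self.pending_since = None           # change is never displayed

    def tick(self, display):
        # Once the window elapses without a cancel, display the change.
        if (self.pending_since is not None and
                time.monotonic() - self.pending_since > self.cancel_window_s):
            display.apply_visual_change()       # hypothetical display hook
            self.pending_since = None
```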
  • Patent number: 9330302
    Abstract: Embodiments that relate to determining gaze locations are disclosed. In one embodiment a method includes shining light along an outbound light path to the eyes of a user wearing glasses. Upon detecting the glasses, the light is dynamically polarized in a pattern that switches between a random polarization phase and a single polarization phase, wherein the random polarization phase uses a first polarization along the outbound light path and a second polarization, orthogonal to the first, along a reflected light path, while the single polarization phase uses one polarization throughout. During the random polarization phases, glares reflected from the glasses are filtered out and pupil images are captured; during the single polarization phase, glint images are captured. Based on pupil characteristics and glint characteristics, gaze locations are repeatedly detected.
    Type: Grant
    Filed: February 26, 2014
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vaibhav Thukral, Sudipta Sinha, Vivek Pradeep, Timothy Andrew Large, Nigel Stuart Keam, David Nister
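
The polarization schedule in this abstract alternates between glare-suppressing and glint-preserving captures. Here is a hedged Python sketch of that alternation; the `camera`, `polarizer`, and `estimate_gaze` objects are hypothetical stand-ins for real hardware and a real gaze model:

```python
from itertools import cycle

def track_gaze(camera, polarizer, estimate_gaze, n_samples=100):
    """Alternate polarization phases and fuse pupil + glint images."""
    gaze_locations = []
    pupil_img = glint_img = None
    for phase in cycle(["cross", "single"]):
        if len(gaze_locations) >= n_samples:
            break
        if phase == "cross":
            # Outbound light at one polarization, return path filtered at the
            # orthogonal polarization: specular glare from the glasses is
            # rejected, leaving a clean pupil image.
            polarizer.set(outbound_deg=0, inbound_deg=90)
            pupil_img = camera.capture()
        else:
            # Single polarization end to end: corneal glints stay visible.
            polarizer.set(outbound_deg=0, inbound_deg=0)
            glint_img = camera.capture()
        if pupil_img is not None and glint_img is not None:
            # Combine pupil center and glint positions into a gaze estimate.
            gaze_locations.append(estimate_gaze(pupil_img, glint_img))
    return gaze_locations
```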
  • Patent number: 9329727
    Abstract: Object detection techniques for use in conjunction with optical sensors are described. In one or more implementations, a plurality of inputs is received, each input being received from a respective one of a plurality of optical sensors. Each of the plurality of inputs is classified using machine learning as to whether the input is indicative of detection of an object by the respective optical sensor.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Liang Wang, Sing Bing Kang, Jamie Daniel Joseph Shotton, Matheen Siddiqui, Vivek Pradeep, Steven Nabil Bathiche, Luis E. Cabrera-Cordon, Pablo Sala
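
The abstract's key move is classifying each sensor's input with a learned model rather than a fixed threshold. A minimal sketch using scikit-learn with synthetic data follows; the one-classifier-per-sensor layout is our illustrative assumption, not necessarily the patent's architecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sensors, n_train = 16, 500

# Synthetic training data: one reading per optical sensor per example,
# with per-sensor ground-truth labels (1 = object over that sensor).
readings = rng.uniform(0.0, 1.0, size=(n_train, n_sensors))
labels = (readings > 0.6).astype(int)

# Train one classifier per sensor so each learns its own local response.
classifiers = [
    LogisticRegression().fit(readings[:, [i]], labels[:, i])
    for i in range(n_sensors)
]

# Classify a new frame: each input is judged by its own sensor's model.
frame = rng.uniform(0.0, 1.0, size=(1, n_sensors))
detected = [clf.predict(frame[:, [i]])[0] for i, clf in enumerate(classifiers)]
print(detected)
```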
  • Patent number: 9179021
    Abstract: Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: November 3, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Kenneth P. Hinckley, Kevin Geisner, Steven Nabil Bathiche, Hrvoje Benko, Vivek Pradeep
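
The controls this abstract mentions (which devices may receive photos, and whether sharing is automatic) amount to a sharing policy checked before any transfer. A small hypothetical sketch, with made-up field names and thresholds:

```python
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    max_distance_m: float = 10.0        # what counts as "close proximity"
    allowed_devices: set = field(default_factory=set)
    auto_share: bool = False            # share without an explicit prompt

    def may_share(self, device_id, distance_m, connected, user_approved):
        if not connected or distance_m > self.max_distance_m:
            return False                # requires a connection and proximity
        if device_id not in self.allowed_devices:
            return False                # device-level restriction
        return self.auto_share or user_approved

policy = SharingPolicy(allowed_devices={"alice-phone"}, auto_share=True)
print(policy.may_share("alice-phone", 2.5, connected=True, user_approved=False))
```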
  • Publication number: 20150293587
    Abstract: Examples relating to using non-visual feedback to alert a viewer of a display that a visual change has been triggered are disclosed. One disclosed example provides a method comprising using gaze tracking data from a gaze tracking system to determine that a viewer has changed gaze location. Based on that determination, a visual change is triggered and non-visual feedback indicating the triggering of the visual change is provided to the viewer. If a cancel change input is received within a predetermined timeframe, the visual change is not displayed; if no cancel change input is received within the timeframe, the visual change is displayed via the display.
    Type: Application
    Filed: April 10, 2014
    Publication date: October 15, 2015
    Inventors: Weerapan Wilairat, Ibrahim Eden, Vaibhav Thukral, David Nister, Vivek Pradeep
  • Publication number: 20150279083
    Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
    Type: Application
    Filed: March 26, 2014
    Publication date: October 1, 2015
    Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
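
The efficiency claim rests on fusing many noisy per-frame depth maps into one global structure so errors average out. The toy Python sketch below shows that averaging effect on a 2D grid of running means; a real pipeline would fuse into a 3D truncated signed distance volume using each frame's camera pose, which this sketch deliberately omits:

```python
import numpy as np

H, W, n_frames, sigma = 64, 64, 30, 0.05
rng = np.random.default_rng(1)
true_depth = 2.0 + 0.5 * np.sin(np.linspace(0, np.pi, W))[None, :].repeat(H, 0)

fused = np.zeros((H, W))     # the single global structure, updated in place
weight = np.zeros((H, W))

for _ in range(n_frames):
    depth_map = true_depth + rng.normal(0.0, sigma, size=(H, W))  # one frame
    # Weighted running average: each new depth map refines the estimate.
    fused = (fused * weight + depth_map) / (weight + 1)
    weight += 1

print("per-frame noise:", sigma)
print("fused error:", np.abs(fused - true_depth).mean())  # roughly sigma/sqrt(30)
```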
  • Publication number: 20150271449
    Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
    Type: Application
    Filed: June 3, 2015
    Publication date: September 24, 2015
    Inventors: Vivek Pradeep, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Alice Jane Bernheim Brush
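
One concrete piece of this system is keeping the remote video streams synchronized. Below is a hedged sketch of timestamp-based frame pairing, with streams represented as (timestamp, frame) lists; the skew tolerance and data layout are illustrative assumptions:

```python
import bisect

def pair_frames(stream_a, stream_b, max_skew_s=0.020):
    """Match each frame in stream_a to the nearest-in-time frame in stream_b."""
    times_b = [t for t, _ in stream_b]       # assumed sorted by timestamp
    pairs = []
    for t_a, frame_a in stream_a:
        i = bisect.bisect_left(times_b, t_a)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times_b)]
        j = min(candidates, key=lambda k: abs(times_b[k] - t_a))
        if abs(times_b[j] - t_a) <= max_skew_s:
            pairs.append((frame_a, stream_b[j][1]))
    return pairs

a = [(0.000, "a0"), (0.033, "a1"), (0.066, "a2")]
b = [(0.001, "b0"), (0.034, "b1"), (0.070, "b2")]
print(pair_frames(a, b))  # [('a0', 'b0'), ('a1', 'b1'), ('a2', 'b2')]
```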
  • Publication number: 20150242680
    Abstract: Embodiments that relate to determining gaze locations are disclosed. In one embodiment a method includes shining light along an outbound light path to the eyes of a user wearing glasses. Upon detecting the glasses, the light is dynamically polarized in a pattern that switches between a random polarization phase and a single polarization phase, wherein the random polarization phase uses a first polarization along the outbound light path and a second polarization, orthogonal to the first, along a reflected light path, while the single polarization phase uses one polarization throughout. During the random polarization phases, glares reflected from the glasses are filtered out and pupil images are captured; during the single polarization phase, glint images are captured. Based on pupil characteristics and glint characteristics, gaze locations are repeatedly detected.
    Type: Application
    Filed: February 26, 2014
    Publication date: August 27, 2015
    Inventors: Vaibhav Thukral, Sudipta Sinha, Vivek Pradeep, Timothy Andrew Large, Nigel Stuart Keam, David Nister
  • Publication number: 20150205445
    Abstract: Global and local light detection techniques in optical sensor systems are described. In one or more implementations, a global lighting value is generated that describes a global lighting level for a plurality of optical sensors based on a plurality of inputs received from the plurality of optical sensors. An illumination map is generated that describes local lighting conditions of respective ones of the plurality of optical sensors based on the plurality of inputs received from the plurality of optical sensors. Object detection is performed using an image captured using the plurality of optical sensors along with the global lighting value and the illumination map.
    Type: Application
    Filed: January 23, 2014
    Publication date: July 23, 2015
    Applicant: Microsoft Corporation
    Inventors: Vivek Pradeep, Liang Wang, Pablo Sala, Luis Eduardo Cabrera-Cordon, Steven Nabil Bathiche
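
The abstract separates a single global lighting value from a per-sensor illumination map, and uses both during detection. Here is a synthetic-data Python sketch of that normalization; the mean/ratio formulation is our simplification, not the patent's exact computation:

```python
import numpy as np

rng = np.random.default_rng(2)
sensor_inputs = rng.uniform(0.4, 0.9, size=(32, 32))  # one reading per sensor

global_lighting = sensor_inputs.mean()                # overall light level
illumination_map = sensor_inputs / global_lighting    # local deviation per sensor

# A later capture in which an object shadows part of the sensor array.
image = sensor_inputs.copy()
image[10:16, 10:16] *= 0.3                            # occluded 6x6 patch

# Dividing out both estimates cancels the ambient lighting pattern, so only
# genuinely darkened pixels stand out as object candidates.
normalized = image / (global_lighting * illumination_map)
detected = normalized < 0.5
print(detected.sum(), "sensor pixels flagged")        # 36
```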
  • Publication number: 20150199018
    Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of image pairs may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes it. In processing the image pairs, silhouettes are extracted for both the retroreflector and the object, and these are used to generate a 3D contour for each.
    Type: Application
    Filed: January 14, 2014
    Publication date: July 16, 2015
    Inventors: David Kim, Shahram Izadi, Vivek Pradeep, Steven Bathiche, Timothy Andrew Large, Karlton David Powell
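
Against a bright retroreflector, an occluding object reads as a dark hole, which makes silhouette extraction a thresholding problem. A toy sketch of that step for one image of the stereo pair; the bounding-box proxy for "enclosed by the retroreflector" and all thresholds are our simplifications:

```python
import numpy as np

def silhouettes(image, bright_thresh=0.8):
    """Split one camera image into retroreflector and object silhouettes."""
    retro = image > bright_thresh               # bright retroreflective sheet
    ys, xs = np.nonzero(retro)
    box = np.zeros_like(retro)
    box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    obj = box & ~retro                          # dark pixels inside the sheet
    return retro, obj

frame = np.full((48, 48), 0.05)                 # dark background
frame[8:40, 8:40] = 0.95                        # retroreflector
frame[18:30, 20:28] = 0.10                      # occluding object (e.g. a hand)
retro, obj = silhouettes(frame)
print(obj.sum(), "object silhouette pixels")    # 96
```

Running the same extraction on both images of the stereo pair yields the two silhouette contours that the system triangulates into 3D contours.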
  • Patent number: 9077846
    Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
    Type: Grant
    Filed: February 6, 2012
    Date of Patent: July 7, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Alice Jane Bernheim Brush
  • Publication number: 20150160785
    Abstract: Object detection techniques for use in conjunction with optical sensors are described. In one or more implementations, a plurality of inputs is received, each input being received from a respective one of a plurality of optical sensors. Each of the plurality of inputs is classified using machine learning as to whether the input is indicative of detection of an object by the respective optical sensor.
    Type: Application
    Filed: December 11, 2013
    Publication date: June 11, 2015
    Applicant: Microsoft Corporation
    Inventors: Liang Wang, Sing Bing Kang, Jamie Daniel Joseph Shotton, Matheen Siddiqui, Vivek Pradeep, Steven Nabil Bathiche, Luis E. Cabrera-Cordon, Pablo Sala
  • Publication number: 20150103011
    Abstract: A holographic interaction device is described. In one or more implementations, an input device includes an input portion comprising a plurality of controls that are configured to generate signals to be processed as inputs by a computing device communicatively coupled to the controls. The input device also includes a holographic recording mechanism disposed over a surface of the input portion and configured to output, in response to light received from a light source, a hologram that is viewable by a user over the input portion.
    Type: Application
    Filed: October 15, 2013
    Publication date: April 16, 2015
    Applicant: Microsoft Corporation
    Inventors: Timothy Andrew Large, Neil Emerton, Moshe R. Lutz, Vivek Pradeep, John G. A. Weiss, Quintus Travis
  • Patent number: 8855406
    Abstract: A system and method are disclosed for estimating camera motion of a visual input scene using points and lines detected in the visual input scene. The system includes a camera server comprising a stereo pair of calibrated cameras, a feature processing module, a trifocal motion estimation module and an optional adjustment module. The stereo pair of calibrated cameras and its corresponding stereo pair of cameras after camera motion form a first and a second trifocal tensor. The feature processing module is configured to detect points and lines in the visual input data comprising a plurality of image frames, and to find point correspondences between detected points and line correspondences between detected lines in different views. The trifocal motion estimation module is configured to estimate the camera motion using the detected points and lines associated with the first and the second trifocal tensor.
    Type: Grant
    Filed: August 26, 2011
    Date of Patent: October 7, 2014
    Assignee: Honda Motor Co., Ltd.
    Inventors: Jongwoo Lim, Vivek Pradeep
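
The patent's trifocal-tensor formulation over points and lines is involved; as a deliberately simpler stand-in, the sketch below recovers the camera motion by rigidly aligning 3D points triangulated by the stereo pair before and after the move (the Kabsch algorithm). Line correspondences and the tensor machinery are omitted:

```python
import numpy as np

def rigid_motion(points_before, points_after):
    """Least-squares R, t such that points_after = R @ points_before + t."""
    mu_b = points_before.mean(axis=0)
    mu_a = points_after.mean(axis=0)
    H = (points_before - mu_b).T @ (points_after - mu_a)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_a - R @ mu_b
    return R, t

rng = np.random.default_rng(3)
before = rng.uniform(-1, 1, size=(50, 3))    # triangulated scene points
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
after = before @ R_true.T + np.array([0.1, 0.0, 0.3])
R, t = rigid_motion(before, after)
print(np.allclose(R, R_true), np.round(t, 3))  # True [0.1 0.  0.3]
```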
  • Publication number: 20140132595
    Abstract: A display that renders realistic objects allows a designer to redesign a living space in real time based on an existing layout. A computer system renders simulated objects on the display such that the simulated objects appear to the viewer to be in substantially the same place as actual objects in the scene. The displayed simulated objects can be spatially manipulated on the display through various user gestures. A designer can visually simulate a redesign of the space in many ways, for example, by adding selected objects, or by removing or rearranging existing objects, or by changing properties of those objects. Such objects also can be associated with shopping resources to enable related goods and services to be purchased, or other commercial transactions to be engaged in.
    Type: Application
    Filed: November 14, 2012
    Publication date: May 15, 2014
    Applicant: Microsoft Corporation
    Inventors: Catherine N. Boulanger, Matheen Siddiqui, Vivek Pradeep, Paul Dietz, Steven Bathiche
  • Publication number: 20130286223
    Abstract: Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared.
    Type: Application
    Filed: April 25, 2012
    Publication date: October 31, 2013
    Applicant: Microsoft Corporation
    Inventors: Stephen G. Latta, Kenneth P. Hinckley, Kevin Geisner, Steven Nabil Bathiche, Hrvoje Benko, Vivek Pradeep
  • Publication number: 20130201276
    Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
    Type: Application
    Filed: February 6, 2012
    Publication date: August 8, 2013
    Applicant: Microsoft Corporation
    Inventors: Vivek Pradeep, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Alice Jane Bernheim Brush
  • Publication number: 20130201095
    Abstract: Techniques involving presentations are described. In one or more implementations, a user interface is output by a computing device that includes a slide of a presentation, the slide having an object that is output for display in three dimensions. Responsive to receipt of one or more inputs by the computing device, how the object in the slide is output for display in the three dimensions is altered.
    Type: Application
    Filed: February 7, 2012
    Publication date: August 8, 2013
    Applicant: Microsoft Corporation
    Inventors: Paul Henry Dietz, Vivek Pradeep, Stephen G. Latta, Kenneth P. Hinckley, Hrvoje Benko, Alice Jane Bernheim Brush
  • Publication number: 20130131985
    Abstract: The system comprises a wearable electronic image acquisition and processing system (or visual enhancement system) that guides visually impaired individuals through their environment, providing information about nearby objects of interest, potentially dangerous obstacles, the user's location, and potential paths to the destination.
    Type: Application
    Filed: April 11, 2012
    Publication date: May 23, 2013
    Inventors: James D. Weiland, Mark S. Humayun, Gerard Medioni, Armand R. Tanguay, Jr., Vivek Pradeep, Laurent Itti
  • Publication number: 20120063638
    Abstract: A system and method are disclosed for estimating camera motion of a visual input scene using points and lines detected in the visual input scene. The system includes a camera server comprising a stereo pair of calibrated cameras, a feature processing module, a trifocal motion estimation module and an optional adjustment module. The stereo pair of calibrated cameras and its corresponding stereo pair of cameras after camera motion form a first and a second trifocal tensor. The feature processing module is configured to detect points and lines in the visual input data comprising a plurality of image frames, and to find point correspondences between detected points and line correspondences between detected lines in different views. The trifocal motion estimation module is configured to estimate the camera motion using the detected points and lines associated with the first and the second trifocal tensor.
    Type: Application
    Filed: August 26, 2011
    Publication date: March 15, 2012
    Applicant: Honda Motor Co., Ltd.
    Inventors: Jongwoo Lim, Vivek Pradeep