Patents by Inventor Shahram Izadi

Shahram Izadi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9857470
    Abstract: Detecting material properties such as reflectivity, true color and other properties of surfaces in a real world environment is described in various examples using a single hand-held device. For example, the detected material properties are calculated using a photometric stereo system which exploits known relationships between lighting conditions, surface normals, true color and image intensity. In examples, a user moves around in an environment capturing color images of surfaces in the scene from different orientations under known lighting conditions. In various examples, surface normals of patches of surfaces are calculated using the captured data to enable fine detail such as human hair, netting and textured surfaces to be modeled. In examples, the modeled data is used to render images depicting the scene with realism or to superimpose virtual graphics on the real world in a realistic manner.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: January 2, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Otmar Hilliges, Malte Hanno Weiss, Shahram Izadi, David Kim, Carsten Curt Eckard Rother
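
    The photometric-stereo relationship this abstract relies on (image intensity as a function of lighting direction, surface normal and albedo) can be illustrated with a minimal least-squares solve. This is a generic Lambertian photometric-stereo sketch, not the patented pipeline; the light directions, intensities and NumPy usage are illustrative assumptions.

    ```python
    import numpy as np

    # Lambertian model: intensity i = albedo * dot(n, l) for a unit light direction l.
    # With three or more known light directions L (rows) and measured intensities I at
    # one pixel, solve L @ g = I for g = albedo * n, then split magnitude and direction.
    L = np.array([[0.0, 0.0, 1.0],
                  [0.7071, 0.0, 0.7071],
                  [0.0, 0.7071, 0.7071]])      # assumed known lighting conditions

    n_true = np.array([0.2, 0.3, 0.933])       # made-up ground-truth surface normal
    n_true /= np.linalg.norm(n_true)
    albedo_true = 0.8                          # made-up true-color / reflectivity scale
    I = albedo_true * L @ n_true               # synthetic measured image intensities

    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # least-squares solve of L g = I
    albedo = np.linalg.norm(g)                 # recovered albedo
    normal = g / albedo                        # recovered unit surface normal
    print(round(float(albedo), 3), np.round(normal, 3))
    ```
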
  • Publication number: 20170372126
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and, in response, one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Application
    Filed: September 11, 2017
    Publication date: December 28, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph SHOTTON, Cem KESKIN, Christoph RHEMANN, Toby SHARP, Duncan Paul ROBERTSON, Pushmeet KOHLI, Andrew William FITZGIBBON, Shahram IZADI
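
    The pipeline described above (raw time-of-flight frame, trained region detector, depth computed only inside the returned regions) can be sketched at a high level. The sliding-window scorer and the region-only depth step below are placeholders standing in for the trained detector and the camera's real phase-to-depth computation; the window size, stride and threshold are assumptions.

    ```python
    import numpy as np

    def detect_regions(raw, window=32, stride=16, threshold=0.45):
        """Stand-in for the trained region detector: score sliding windows on the
        raw time-of-flight image and keep those above a threshold."""
        regions = []
        for y in range(0, raw.shape[0] - window + 1, stride):
            for x in range(0, raw.shape[1] - window + 1, stride):
                patch = raw[y:y + window, x:x + window]
                if patch.mean() / (raw.max() + 1e-9) > threshold:
                    regions.append((y, x, window, window))
        return regions

    def compute_depth(raw_frames, regions):
        """Placeholder depth computation: only image elements inside the regions of
        interest are processed; the rest of the frame is skipped entirely."""
        depth = np.full(raw_frames[0].shape, np.nan)
        for (y, x, h, w) in regions:
            block = [f[y:y + h, x:x + w] for f in raw_frames]
            depth[y:y + h, x:x + w] = np.mean(block, axis=0)  # real ToF would combine phase images
        return depth

    raw = np.random.rand(128, 128)
    depth_map = compute_depth([raw, raw], detect_regions(raw))
    ```
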
  • Patent number: 9851809
    Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
    Type: Grant
    Filed: March 14, 2016
    Date of Patent: December 26, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
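
    The mapping described above (key-press sequence to physical key locations, to a movement path, to UI coordinates) is simple to illustrate. The key spacing, layout and pixel scaling below are invented for the sketch and are not taken from the patent.

    ```python
    # Hypothetical physical key centres (cm) for three rows of a simplified QWERTY layout.
    KEY_POS = {ch: (col * 1.9, row * 1.9)
               for row, keys in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
               for col, ch in enumerate(keys)}

    def movement_path(key_presses):
        """Convert a sequence of key-presses into physical locations on the keyboard."""
        return [KEY_POS[k] for k in key_presses if k in KEY_POS]

    def to_ui_coords(path, scale=40.0, origin=(100.0, 100.0)):
        """Map the relative motion over the keyboard to a sequence of UI coordinates."""
        if not path:
            return []
        x0, y0 = path[0]
        return [(origin[0] + (x - x0) * scale, origin[1] + (y - y0) * scale)
                for x, y in path]

    # A digit dragged along the home row produces a horizontal movement of the object.
    print(to_ui_coords(movement_path("asdf")))
    ```
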
  • Publication number: 20170285763
    Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
    Type: Application
    Filed: June 14, 2017
    Publication date: October 5, 2017
    Inventors: David KIM, Shahram IZADI, Vivek PRADEEP, Steven BATHICHE, Timothy Andrew LARGE, Karlton David POWELL
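
    A much-reduced sketch of the silhouette-to-3D-contour idea in this abstract (and in granted patent 9720506 below): threshold each image of the stereo pair against the bright retroreflector, treat the dark region it encloses as the occluding object's silhouette, and lift matched contour points to 3D with the rectified-stereo relation Z = f·B/d. The thresholds, focal length and baseline are assumed values, not ones from the application.

    ```python
    import numpy as np

    def silhouettes(image, retro_threshold=0.8):
        """Very simplified segmentation: bright pixels are the retroreflector; dark
        pixels inside its bounding box are the silhouette of the occluding object."""
        retro = image > retro_threshold
        ys, xs = np.nonzero(retro)
        if ys.size == 0:
            return retro, np.zeros_like(retro)
        box = np.zeros_like(retro)
        box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
        return retro, box & ~retro

    def contour_point_to_3d(xl, xr, y, focal=500.0, baseline=0.1):
        """Lift a matched pair of contour points (left/right x on the same row) to 3D."""
        disparity = max(xl - xr, 1e-6)
        z = focal * baseline / disparity
        return (xl * z / focal, y * z / focal, z)

    left, right = np.random.rand(120, 160), np.random.rand(120, 160)
    retro_l, object_l = silhouettes(left)
    print(contour_point_to_3d(xl=80.0, xr=76.0, y=60.0))
    ```
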
  • Patent number: 9779508
    Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
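
    The fusion step described above, where several per-frame depth maps are merged into one global structure so that per-frame errors average out, behaves like a running weighted average. The sketch below fuses depth maps that are assumed to be already registered to a common view; the real system maintains a volumetric structure and per-frame camera poses.

    ```python
    import numpy as np

    class DepthFusion:
        """Running weighted average of per-frame depth maps into one global buffer,
        so noise in any individual frame averages out over several frames."""
        def __init__(self, shape):
            self.depth = np.zeros(shape)    # fused depth estimate
            self.weight = np.zeros(shape)   # number of valid observations per element

        def integrate(self, depth_frame):
            valid = depth_frame > 0         # zero marks missing depth
            self.depth[valid] = (self.depth[valid] * self.weight[valid] +
                                 depth_frame[valid]) / (self.weight[valid] + 1.0)
            self.weight[valid] += 1.0
            return self.depth

    fusion = DepthFusion((240, 320))
    for _ in range(5):                      # five noisy observations of the same scene
        fused = fusion.integrate(2.0 + 0.05 * np.random.randn(240, 320))
    print(round(float(fused.std()), 4))     # spread shrinks as frames are fused
    ```
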
  • Patent number: 9773155
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and, in response, one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: September 26, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
  • Publication number: 20170270390
    Abstract: Correspondences in content items may be determined using a trained decision tree to detect distinctive matches between portions of content items. The techniques described include determining a first group of patches associated with a first content item and processing a first patch based at least partly on causing the first patch to move through a decision tree, and determining a second group of patches associated with a second content item and processing a second patch based at least partly on causing the second patch to move through the decision tree. The techniques described include determining that the first patch and the second patch are associated with a same leaf node of the decision tree and determining that the first patch and the second patch are corresponding patches based at least partly on determining that the first patch and the second patch are associated with the same leaf node.
    Type: Application
    Filed: March 15, 2016
    Publication date: September 21, 2017
    Inventors: Sean Ryan Francesco Fanello, Shahram Izadi, Pushmeet Kohli, Christoph Rhemann, Shenlong Wang
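
    The matching rule in this abstract, two patches correspond when they reach the same leaf of the trained decision tree, can be sketched with a tiny hand-built tree of pixel-difference tests. The split tests, tree depth and patch size below are invented; a real system learns them from data.

    ```python
    import numpy as np

    def leaf_index(patch, depth=4, seed=0):
        """Route a patch through a small binary tree of pixel-difference tests and
        return the index of the leaf it reaches. The fixed seed means every patch
        is routed through the same (illustrative) tree."""
        rng = np.random.default_rng(seed)
        flat = patch.ravel()
        index = 0
        for _ in range(depth):
            a, b = rng.integers(0, flat.size, size=2)
            index = 2 * index + int(flat[a] - flat[b] > 0)   # go left or right
        return index

    def corresponding(patch_a, patch_b):
        """Declare two patches corresponding when they land in the same leaf node."""
        return leaf_index(patch_a) == leaf_index(patch_b)

    p = np.random.rand(8, 8)
    print(corresponding(p, p + 0.001))             # near-identical patches share a leaf
    print(corresponding(p, np.random.rand(8, 8)))  # unrelated patches usually do not
    ```
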
  • Publication number: 20170236286
    Abstract: Techniques for determining depth for a visual content item using machine-learning classifiers include obtaining a visual content item of a reference light pattern projected onto an object, and determining shifts in locations of pixels relative to other pixels representing the reference light pattern. Disparity, and thus depth, for pixels may be determined by executing one or more classifiers trained to identify disparity for pixels based on the shifts in locations of the pixels relative to other pixels of a visual content item depicting the reference light pattern. Disparity for pixels may be determined using a visual content item of a reference light pattern projected onto an object without having to match pixels between two visual content items, such as a reference light pattern and a captured visual content item.
    Type: Application
    Filed: March 15, 2016
    Publication date: August 17, 2017
    Inventors: Sean Ryan Francesco Fanello, Christoph Rhemann, Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, David Kim, Shahram Izadi
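
    The core relation behind this abstract, classify a per-pixel disparity from how the projected reference light pattern has shifted and then convert disparity to depth, reduces to the standard triangulation formula Z = f·B/d. The classifier is replaced by a trivial placeholder here, and the focal length and baseline are assumed values.

    ```python
    FOCAL_PX = 580.0     # assumed focal length in pixels
    BASELINE_M = 0.075   # assumed projector-to-camera baseline in metres

    def classify_disparity(observed_x, reference_x):
        """Placeholder for the trained per-pixel classifier: the 'predicted' disparity
        is simply the horizontal shift of the observed pattern feature."""
        return observed_x - reference_x

    def depth_from_disparity(d):
        return FOCAL_PX * BASELINE_M / d if d > 0 else float("inf")

    d = classify_disparity(observed_x=412.0, reference_x=400.0)   # a 12-pixel shift
    print(round(depth_from_disparity(d), 3), "m")                 # about 3.625 m
    ```
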
  • Patent number: 9734424
    Abstract: Filtering sensor data is described, for example, where filters conditioned on a local appearance of the signal are predicted by a machine learning system, and used to filter the sensor data. In various examples the sensor data is a stream of noisy video image data and the filtering process denoises the video stream. In various examples the sensor data is a depth image and the filtering process refines the depth image which may then be used for gesture recognition or other purposes. In various examples the sensor data is one dimensional measurement data from an electric motor and the filtering process denoises the measurements. In examples the machine learning system comprises a random decision forest where trees of the forest store filters at their leaves. In examples, the random decision forest is trained using a training objective with a data dependent regularization term.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: August 15, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Sean Ryan Francesco Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Daniel Joseph Shotton, Antonio Criminisi
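
    The filtering scheme above stores a filter at each leaf of a random decision forest and selects the filter per pixel according to the local appearance of the signal. The sketch below replaces the forest with a single two-leaf split on local variance that chooses between a smoothing filter and an identity filter; the test and the stored filters are illustrative, not the trained forest.

    ```python
    import numpy as np

    FILTERS = {
        "smooth": np.full((3, 3), 1.0 / 9.0),    # averaging filter stored at one leaf
        "keep":   np.pad([[1.0]], 1),            # identity filter stored at the other leaf
    }

    def leaf_for(patch, variance_threshold=0.01):
        """Stand-in split test: flat patches go to the smoothing leaf, detailed
        patches go to the identity leaf so structure is preserved."""
        return "smooth" if patch.var() < variance_threshold else "keep"

    def filter_image(image):
        out = image.copy()
        for y in range(1, image.shape[0] - 1):
            for x in range(1, image.shape[1] - 1):
                patch = image[y - 1:y + 2, x - 1:x + 2]
                out[y, x] = float((patch * FILTERS[leaf_for(patch)]).sum())
        return out

    noisy = np.clip(0.5 + 0.05 * np.random.randn(32, 32), 0.0, 1.0)
    print(round(noisy.var(), 5), round(filter_image(noisy).var(), 5))   # variance drops
    ```
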
  • Patent number: 9729860
    Abstract: A depth-sensing method for a time-of-flight depth camera includes irradiating a subject with pulsed light of spatially alternating bright and dark features, and receiving the pulsed light reflected back from the subject onto an array of pixels. At each pixel of the array, a signal is presented that depends on distance from the depth camera to the subject locus imaged onto that pixel. In this method, the subject is mapped based on the signal from pixels that image subject loci directly irradiated by the bright features, while omitting or weighting negatively the signal from pixels that image subject loci under the dark features.
    Type: Grant
    Filed: May 24, 2013
    Date of Patent: August 8, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: David Cohen, Giora Yahav, Asaf Pellman, Amir Nevet, Shahram Izadi
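
    The weighting rule in this abstract, use the signal from pixels under the bright features and omit or weight negatively the pixels under the dark features, amounts to a masked (or signed-weight) per-pixel combination. The checkerboard pattern, signal values and weights below are invented for illustration.

    ```python
    import numpy as np

    def map_subject(signal, bright_mask, dark_weight=0.0):
        """Keep the per-pixel ToF signal for subject loci directly irradiated by the
        bright features; dark_weight = 0 omits dark-feature pixels, a negative value
        marks them as a penalising contribution."""
        weights = np.where(bright_mask, 1.0, dark_weight)
        depth = np.full(signal.shape, np.nan)
        depth[weights > 0] = signal[weights > 0]
        return depth, weights

    signal = 2.0 + 0.01 * np.random.randn(8, 8)              # fake distance-dependent signal
    bright = np.indices((8, 8)).sum(axis=0) % 2 == 0         # checkerboard of bright features
    depth, weights = map_subject(signal, bright, dark_weight=-0.5)
    print(float(np.isnan(depth).mean()), "of pixels omitted or weighted negatively")
    ```
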
  • Patent number: 9720506
    Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: August 1, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Kim, Shahram Izadi, Vivek Pradeep, Steven Bathiche, Timothy Andrew Large, Karlton David Powell
  • Publication number: 20170199580
    Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments 3D positions of a first one of the types of particles, kinematic particles, are updated according to the tracked real object; and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples a real-time optic flow process is used to track motion of the real object.
    Type: Application
    Filed: January 24, 2017
    Publication date: July 13, 2017
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, Malte Hanno Weiss
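
    The two particle types in this abstract can be sketched directly: kinematic particles take their positions from the tracked real object, and passive particles follow their linked kinematic particles while being pushed out of any virtual object they would penetrate. Modelling the virtual object as a single bounding sphere, and all of the coordinates below, are assumptions made for the sketch.

    ```python
    import numpy as np

    def update_particles(passive_offsets, links, tracked, obstacles):
        """Kinematic particles copy the tracked positions of the real object; each
        passive particle keeps its offset from a linked kinematic particle but is
        projected back onto the surface of any virtual object it would penetrate."""
        kinematic = {name: np.asarray(pos, float) for name, pos in tracked.items()}
        passive = []
        for offset, link in zip(passive_offsets, links):
            p = kinematic[link] + np.asarray(offset, float)
            for centre, radius in obstacles:
                d = p - centre
                dist = np.linalg.norm(d)
                if 0 < dist < radius:              # would penetrate the virtual object
                    p = centre + d / dist * radius
            passive.append(p)
        return kinematic, passive

    offsets = [(0.01, 0.0, 0.0), (0.0, 0.01, 0.0)]
    links = ["fingertip", "fingertip"]
    sphere = [(np.array([0.05, 0.0, 0.0]), 0.03)]   # crude bounding sphere of a virtual cube
    print(update_particles(offsets, links, {"fingertip": (0.03, 0.0, 0.0)}, sphere))
    ```
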
  • Patent number: 9703398
    Abstract: A pointing device using proximity sensing is described. In an embodiment, a pointing device comprises a movement sensor and a proximity sensor. The movement sensor generates a first data sequence relating to sensed movement of the pointing device relative to a surface. The proximity sensor generates a second data sequence relating to sensed movement relative to the pointing device of one or more objects in proximity to the pointing device. In embodiments, data from the movement sensor of the pointing device is read and the movement of the pointing device relative to the surface is determined. Data from the proximity sensor is also read, and a sequence of sensor images of one or more objects in proximity to the pointing device are generated. The sensor images are analyzed to determine the movement of the one or more objects relative to the pointing device.
    Type: Grant
    Filed: June 16, 2009
    Date of Patent: July 11, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Alexander Butler, Nicolas Villar, John Helmes, Shahram Izadi, Stephen E. Hodges, Daniel Rosenfeld, Hrvoje Benko
  • Patent number: 9697424
    Abstract: The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: July 4, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Oliver A. Whyte, Christoph Rhemann, Shahram Izadi
  • Patent number: 9690984
    Abstract: A signal encoding an infrared (IR) image including a plurality of IR pixels is received from an IR camera. Each IR pixel specifies one or more IR parameters of that IR pixel. IR-skin pixels that image a human hand are identified in the IR image. For each IR-skin pixel, a depth of a human hand portion imaged by that IR-skin pixel is estimated based on the IR parameters of that IR-skin pixel. A skeletal hand model including a plurality of hand joints is derived. Each hand joint is defined with three independent position coordinates inferred from the estimated depths of each human hand portion.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: June 27, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ben Butler, Vladimir Tankovich, Cem Keskin, Sean Ryan Francesco Fanello, Shahram Izadi, Emad Barsoum, Simon P. Stachniak, Yichen Wei
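
    Two steps of this abstract are easy to illustrate: estimating the depth of a skin pixel from its IR intensity (active IR illumination falls off roughly with the square of distance) and packing each hand joint as three independent position coordinates. The fall-off constant and joint names below are assumptions, not the patented model.

    ```python
    import math

    IR_CONSTANT = 0.9    # assumed calibration constant: intensity ~ IR_CONSTANT / depth**2

    def depth_from_ir(intensity):
        """Estimate the depth of an IR-skin pixel from its IR intensity using an
        inverse-square fall-off assumption."""
        return math.sqrt(IR_CONSTANT / max(intensity, 1e-6))

    HAND_JOINTS = ["wrist", "thumb_tip", "index_tip", "middle_tip", "ring_tip", "pinky_tip"]

    def skeletal_hand_model(joint_pixels):
        """joint_pixels maps joint name -> (x_px, y_px, ir_intensity); the estimated
        depth supplies the third position coordinate of each hand joint."""
        return {name: (x, y, depth_from_ir(ir)) for name, (x, y, ir) in joint_pixels.items()}

    print(skeletal_hand_model({"wrist": (120, 200, 0.4), "index_tip": (150, 110, 0.55)}))
    ```
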
  • Patent number: 9665278
    Abstract: Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key.
    Type: Grant
    Filed: February 26, 2010
    Date of Patent: May 30, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Scott, Shahram Izadi, Nicolas Villar, Ravin Balakrishnan
  • Publication number: 20170116471
    Abstract: Tracking hand or body pose from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose and the selection process uses a 3D model of the hand or body.
    Type: Application
    Filed: January 4, 2017
    Publication date: April 27, 2017
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Jonathan Taylor, Toby Sharp, Shahram Izadi, Andrew William Fitzgibbon, Pushmeet Kohli, Duncan Paul Robertson
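
    The loop in this abstract, keep a pool of candidate poses, refine them iteratively, replace some candidates with samples from the per-frame predicted distribution and select the best against a model, can be sketched as a generic stochastic optimiser over a toy one-dimensional "pose". The scoring function, pool size and Gaussian prediction are placeholders, not the patented tracker.

    ```python
    import random

    def track_frame(pool, predicted_mean, predicted_std, score, iters=10, replace=4):
        """One frame of tracking: refine a pool of candidate poses, inject samples from
        the prediction engine's distribution, and return the best candidate."""
        for _ in range(iters):
            pool.sort(key=score)                   # worst candidates move to the end
            for i in range(replace):               # replace them with predicted samples
                pool[-(i + 1)] = random.gauss(predicted_mean, predicted_std)
            for i, pose in enumerate(pool):        # keep local perturbations that improve the fit
                candidate = pose + random.gauss(0.0, 0.05)
                if score(candidate) < score(pose):
                    pool[i] = candidate
        return min(pool, key=score)

    true_pose = 1.3
    fit = lambda pose: abs(pose - true_pose)       # stand-in for the 3D-model fit term
    pool = [random.uniform(-2.0, 2.0) for _ in range(20)]
    print(round(track_frame(pool, predicted_mean=1.0, predicted_std=0.3, score=fit), 3))
    ```
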
  • Publication number: 20170109938
    Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
    Type: Application
    Filed: December 26, 2016
    Publication date: April 20, 2017
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
  • Patent number: 9613298
    Abstract: Tracking using sensor data is described, for example, where a plurality of machine learning predictors are used to predict a plurality of complementary, or diverse, parameter values of a process describing how the sensor data arises. In various examples a selector selects which of the predicted values are to be used, for example, to control a computing device. In some examples the tracked parameter values are pose of a moving camera or pose of an object moving in the field of view of a static camera; in some examples the tracked parameter values are of a 3D model of a hand or other articulated or deformable entity. The machine learning predictors have been trained in series, with training examples being reweighted after training an individual predictor, to favor training examples on which the set of predictors already trained performs poorly.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: April 4, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Abner Guzmán-Rivera, Pushmeet Kohli, Benjamin Michael Glocker, Jamie Daniel Joseph Shotton, Shahram Izadi, Toby Sharp, Andrew William Fitzgibbon
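
    The training scheme above, predictors trained one after another with training examples reweighted toward cases the already-trained set handles poorly, followed by a selector that picks among the complementary predictions, can be sketched with trivial constant predictors on one-dimensional data. The loss, reweighting rule and selector score are illustrative assumptions.

    ```python
    def train_diverse_predictors(targets, n_predictors=3):
        """Train simple constant predictors in series; after each one, examples that
        the predictors trained so far handle poorly get a larger weight, so the next
        predictor focuses on them."""
        weights = [1.0] * len(targets)
        predictors = []
        for _ in range(n_predictors):
            total = sum(weights)
            predictors.append(sum(w * y for w, y in zip(weights, targets)) / total)
            for i, y in enumerate(targets):
                best_err = min(abs(p - y) for p in predictors)
                weights[i] = best_err ** 4 + 1e-3        # emphasise poorly-handled examples
        return predictors

    def select(predictions, observation_score):
        """Selector: choose the predicted value that best explains the current data."""
        return min(predictions, key=observation_score)

    targets = [0.1, 0.2, 0.15, 2.0, 2.1]                 # two clusters of toy 'poses'
    preds = train_diverse_predictors(targets)
    print([round(p, 3) for p in preds])                  # complementary, diverse predictions
    print(round(select(preds, observation_score=lambda p: abs(p - 2.05)), 3))
    ```
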
  • Publication number: 20170094008
    Abstract: A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.
    Type: Application
    Filed: December 14, 2016
    Publication date: March 30, 2017
    Inventors: Shahram Izadi, Behrooz Chitsaz