Patents by Inventor Shahram Izadi

Shahram Izadi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150082068
    Abstract: A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.
    Type: Application
    Filed: November 24, 2014
    Publication date: March 19, 2015
    Inventors: Shahram Izadi, Behrooz Chitsaz
  • Patent number: 8982051
    Abstract: Embodiments are disclosed herein that are related to input devices with curved multi-touch surfaces. One disclosed embodiment comprises a touch-sensitive input device having a curved geometric feature comprising a touch sensor, the touch sensor comprising an array of sensor elements integrated into the curved geometric feature and being configured to detect a location of a touch made on a surface of the curved geometric feature.
    Type: Grant
    Filed: June 19, 2009
    Date of Patent: March 17, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Rosenfeld, Jonathan Westhues, Shahram Izadi, Nicolas Villar, Hrvoje Benko, John Helmes, Kurt Allen Jenkins
  • Patent number: 8971612
    Abstract: Learning image processing tasks from scene reconstructions is described, where the tasks may include but are not limited to: image de-noising, image in-painting, optical flow detection, interest point detection. In various embodiments, training data is generated from a two- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: March 3, 2015
    Assignee: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Pushmeet Kohli, Stefan Johannes Josef Holzer, Shahram Izadi, Carsten Curt Eckard Rother, Sebastian Nowozin, David Kim, David Molyneaux, Otmar Hilliges
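    The supervision scheme in the abstract above can be illustrated with a toy sketch: "clean" targets rendered from a scene reconstruction are paired with noisy empirical pixels, and a regressor is trained on those pairs. All data here is synthetic, and a linear least-squares filter stands in for the patent's random decision forest; only the training-data construction follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in data: "clean" images rendered from the dense
    # scene reconstruction, and noisy empirical images of the same views.
    base = rng.uniform(0.0, 1.0, size=(100, 8, 8))
    clean = (base + np.roll(base, 1, 1) + np.roll(base, -1, 1)
             + np.roll(base, 1, 2) + np.roll(base, -1, 2)) / 5.0   # smooth scenes
    noisy = clean + rng.normal(0.0, 0.1, size=clean.shape)

    def make_training_data(noisy, clean, k=1):
        """Pair each noisy (2k+1)x(2k+1) patch with the clean centre value."""
        X, y = [], []
        for n_img, c_img in zip(noisy, clean):
            for i in range(k, n_img.shape[0] - k):
                for j in range(k, n_img.shape[1] - k):
                    X.append(n_img[i - k:i + k + 1, j - k:j + k + 1].ravel())
                    y.append(c_img[i, j])
        return np.asarray(X), np.asarray(y)

    X, y = make_training_data(noisy, clean)

    # A linear least-squares filter stands in for the random decision
    # forest of the abstract; the supervision signal is the same.
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    learned_mse = np.mean((A @ w - y) ** 2)
    raw_mse = np.mean((X[:, 4] - y) ** 2)   # just trusting the noisy centre pixel
    ```

    The learned filter de-noises better than the raw pixel because the reconstruction supplies a cheap, dense source of ground truth, which is the point of training from scene reconstructions.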
  • Publication number: 20150043770
    Abstract: Speckle sensing for motion tracking is described, for example, to track a user's finger or head in an environment to control a graphical user interface, to track a hand-held device, to track digits of a hand for gesture-based control, and to track 3D motion of other objects or parts of objects in a real-world environment. In various examples a stream of images of a speckle pattern from at least one coherent light source illuminating the object, or which is generated by a light source at the object to be tracked, is used to compute an estimate of 3D position of the object. In various examples the estimate is transformed using information about position and/or orientation of the object from another source. In various examples the other source is a time of flight system, a structured light system, a stereo system, a sensor at the object, or other sources.
    Type: Application
    Filed: August 9, 2013
    Publication date: February 12, 2015
    Inventors: Nicholas Yen-Cherng Chen, Stephen Edward Hodges, Andrew William Fitzgibbon, Andrew Clark Goris, Brian Lee Hastings, Shahram Izadi
  • Patent number: 8948184
    Abstract: A modular development platform is described which enables creation of reliable, compact, physically robust and power efficient embedded device prototypes. The platform consists of a base module which holds a processor and one or more peripheral modules each having an interface element. The base module and the peripheral modules may be electrically and/or physically connected together. The base module communicates with peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet.
    Type: Grant
    Filed: April 4, 2012
    Date of Patent: February 3, 2015
    Assignee: Microsoft Corporation
    Inventors: Stephen E. Hodges, David Alexander Butler, Shahram Izadi, Chih-Chieh Han
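    The addressed-packet scheme described above can be sketched in a few lines. The framing below (one address byte, one length byte) is invented for illustration and is not the patent's wire format; only the idea of an addressing portion that selects the recipient peripheral module comes from the abstract.

    ```python
    import struct

    # Hypothetical framing for the base-module bus: an addressing portion
    # names the peripheral module that should consume the payload.
    HEADER = struct.Struct(">BB")  # (peripheral address, payload length)

    def encode_packet(address: int, payload: bytes) -> bytes:
        return HEADER.pack(address, len(payload)) + payload

    def decode_packet(frame: bytes):
        address, length = HEADER.unpack_from(frame)
        return address, frame[HEADER.size:HEADER.size + length]

    class Peripheral:
        def __init__(self, address):
            self.address = address
            self.received = []

        def offer(self, frame):
            """Consume the frame only if the addressing portion matches."""
            address, payload = decode_packet(frame)
            if address == self.address:
                self.received.append(payload)

    # The base module broadcasts one frame; only the addressed module keeps it.
    sensor, display = Peripheral(0x01), Peripheral(0x02)
    frame = encode_packet(0x02, b"draw")
    for module in (sensor, display):
        module.offer(frame)
    ```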
  • Patent number: 8933931
    Abstract: A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: January 13, 2015
    Assignee: Microsoft Corporation
    Inventors: Alexandru Balan, Jason Flaks, Steve Hodges, Michael Isard, Oliver Williams, Paul Barham, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim
  • Publication number: 20140368504
    Abstract: Scalable volumetric reconstruction is described whereby data from a mobile environment capture device is used to form a 3D model of a real-world environment. In various examples, a hierarchical structure is used to store the 3D model where the structure comprises a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node. In various examples, parallel processing is used to enable captured data to be integrated into the 3D model and/or to enable images to be rendered from the 3D model. In an example, metadata is computed and stored in the hierarchical structure and used to enable space skipping and/or pruning of the hierarchical structure.
    Type: Application
    Filed: June 12, 2013
    Publication date: December 18, 2014
    Inventors: Jiawen Chen, Dennis Bautembach, Shahram Izadi
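    The hierarchy in the abstract above can be reduced to a minimal two-level sketch: a coarse root grid whose occupied cells own finer leaf voxel grids, allocated on demand. The class name, resolutions, and dictionary-backed storage are all illustrative choices, not the patent's data structure.

    ```python
    import numpy as np

    LEAF_RES = 8   # fine voxels per leaf axis (illustrative)

    class VolumeTree:
        def __init__(self):
            self.leaves = {}   # coarse cell index -> fine voxel grid

        def set(self, x, y, z, value):
            cell = (x // LEAF_RES, y // LEAF_RES, z // LEAF_RES)
            leaf = self.leaves.setdefault(cell, np.zeros((LEAF_RES,) * 3))
            leaf[x % LEAF_RES, y % LEAF_RES, z % LEAF_RES] = value

        def get(self, x, y, z):
            cell = (x // LEAF_RES, y // LEAF_RES, z // LEAF_RES)
            leaf = self.leaves.get(cell)
            # Space skipping: unallocated coarse cells are known-empty,
            # so no fine-resolution memory is touched for them.
            if leaf is None:
                return 0.0
            return leaf[x % LEAF_RES, y % LEAF_RES, z % LEAF_RES]

    vol = VolumeTree()
    vol.set(10, 3, 25, 1.0)   # allocates exactly one leaf grid
    ```

    Only the coarse cells that surfaces actually pass through pay for fine-resolution storage, which is what lets such structures scale to large environments.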
  • Publication number: 20140347443
    Abstract: A depth-sensing method for a time-of-flight depth camera includes irradiating a subject with pulsed light of spatially alternating bright and dark features, and receiving the pulsed light reflected back from the subject onto an array of pixels. At each pixel of the array, a signal is presented that depends on distance from the depth camera to the subject locus imaged onto that pixel. In this method, the subject is mapped based on the signal from pixels that image subject loci directly irradiated by the bright features, while omitting or weighting negatively the signal from pixels that image subject loci under the dark features.
    Type: Application
    Filed: May 24, 2013
    Publication date: November 27, 2014
    Inventors: David Cohen, Giora Yahav, Asaf Pellman, Amir Nevet, Shahram Izadi
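    The omit-or-downweight step in the abstract above is easy to show numerically. The scene, stripe pattern, and error magnitudes below are made up for the example; the only idea taken from the abstract is keeping the depth signal from pixels under bright features and discarding pixels under dark ones.

    ```python
    import numpy as np

    # Illustrative setup: a flat subject at 2 m under a spatially
    # alternating bright/dark stripe pattern. Pixels under the dark
    # features see mostly stray light, so their signal is unreliable.
    depth = np.full((4, 4), 2.0)                 # true distance in metres
    bright = np.zeros((4, 4), dtype=bool)
    bright[:, ::2] = True                        # alternating stripes

    error = np.where(bright, 0.01, 0.5)          # dark pixels badly biased
    signal = depth + error

    masked = np.where(bright, signal, np.nan)    # omit dark-feature pixels
    estimate = np.nanmean(masked)                # close to the true 2 m
    naive = signal.mean()                        # pulled off by dark pixels
    ```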
  • Patent number: 8898398
    Abstract: A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.
    Type: Grant
    Filed: March 9, 2010
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, Behrooz Chitsaz
  • Publication number: 20140327637
    Abstract: Methods and devices for providing a user input to a device through sensing of user-applied forces are described. A user applies forces to a rigid body as if to deform it and these applied forces are detected by force sensors in or on the rigid body. The resultant force on the rigid body is determined from the sensor data and this resultant force is used to identify a user input. In an embodiment, the user input may be a user input to a software program running on the device. In an embodiment the rigid body is the rigid case of a computing device which includes a display and which is running the software program.
    Type: Application
    Filed: July 15, 2014
    Publication date: November 6, 2014
    Inventors: James W. Scott, Shahram Izadi, Stephen E. Hodges, Daniel A. Rosenfeld, Michael G. Molloy
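    The pipeline in the abstract, summing per-sensor forces into a resultant and mapping it to an input, can be sketched as below. The 2-D sensor layout, threshold, and input labels are invented for illustration.

    ```python
    # Toy sketch: combine force-sensor readings on the rigid case into a
    # resultant vector, then classify it as a user input.
    def resultant(readings):
        """readings: list of (fx, fy) force vectors from case sensors."""
        fx = sum(r[0] for r in readings)
        fy = sum(r[1] for r in readings)
        return fx, fy

    def classify(readings, threshold=1.0):
        fx, fy = resultant(readings)
        if abs(fx) <= threshold and abs(fy) <= threshold:
            return "none"                      # below the input threshold
        return "bend_x" if abs(fx) > abs(fy) else "bend_y"

    # Opposing squeezes along x dominate, so the resultant reads as an
    # x-axis input even though the case never actually deforms.
    print(classify([(2.0, 0.1), (1.5, -0.3)]))  # -> bend_x
    ```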
  • Publication number: 20140307058
    Abstract: The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.
    Type: Application
    Filed: June 24, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Oliver A. Whyte, Sing Bing Kang, Charles Lawrence Zitnick, III, Richard S. Szeliski, Shahram Izadi, Christoph Rhemann, Andreas Georgiou, Avronil Bhattacharjee
  • Publication number: 20140307047
    Abstract: The subject disclosure is directed towards stereo matching based upon active illumination, including using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations may be used to determine similarity of patches corresponding to the pixels. In order to obtain meaningful adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
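    The idea of taking support weights from the clean image can be sketched as follows. The Gaussian-of-intensity-difference weight, `gamma`, and the 3x3 patch are conventional adaptive-support-weight choices assumed for illustration, not details from the patent.

    ```python
    import numpy as np

    # Weights come from the clean (non-actively-illuminated) image:
    # pixels whose clean intensity resembles the patch centre count more.
    def support_weights(clean_patch, gamma=10.0):
        c = clean_patch[clean_patch.shape[0] // 2, clean_patch.shape[1] // 2]
        return np.exp(-np.abs(clean_patch - c) / gamma)

    def weighted_similarity(left_ir, right_ir, weights):
        """Weighted mean absolute difference between actively lit IR patches."""
        sad = np.abs(left_ir.astype(float) - right_ir.astype(float))
        return (weights * sad).sum() / weights.sum()

    # A depth edge runs through the clean patch: the right column belongs
    # to a different surface, so it gets near-zero weight.
    clean = np.array([[50.0, 50.0, 200.0],
                      [50.0, 50.0, 200.0],
                      [50.0, 50.0, 200.0]])
    w = support_weights(clean)

    # The IR patches disagree only in that down-weighted column, so the
    # match cost stays low despite the raw difference.
    left_ir = np.full((3, 3), 100.0)
    right_ir = left_ir.copy()
    right_ir[:, 2] = 0.0
    score = weighted_similarity(left_ir, right_ir, w)
    ```

    Computing the weights from the clean image matters because the projected dot pattern in the actively illuminated images would otherwise corrupt the intensity-similarity term.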
  • Publication number: 20140307057
    Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. Via the moving light pattern captured over a set of frames, e.g., by a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, higher resolution depth information at a sub-pixel level may be computed than is captured by the native camera resolution.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Sing Bing Kang, Shahram Izadi
  • Publication number: 20140307953
    Abstract: The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Oliver A. Whyte, Christoph Rhemann, Shahram Izadi
  • Publication number: 20140297014
    Abstract: The subject disclosure is directed towards three-dimensional object fabrication using an implicit surface representation as a model for surface geometries. A voxelized space for the implicit surface representation, in which each machine-addressable unit includes indirect surface data, may be used to control components of an apparatus when that apparatus fabricates a three-dimensional object. Instructions generated using this representation may cause these components to move to surface positions and deposit source material.
    Type: Application
    Filed: June 24, 2013
    Publication date: October 2, 2014
    Inventors: Kristofer N. Iverson, Christopher C. White, Yulin Jin, Jesse D. McGatha, Shahram Izadi
  • Publication number: 20140247212
    Abstract: In one or more implementations, a static geometry model is generated, from one or more images of a physical environment captured using a camera, using one or more static objects to model corresponding one or more objects in the physical environment. Interaction of a dynamic object with at least one of the static objects is identified by analyzing at least one image and a gesture is recognized from the identified interaction of the dynamic object with the at least one of the static objects to initiate an operation of the computing device.
    Type: Application
    Filed: May 16, 2014
    Publication date: September 4, 2014
    Applicant: Microsoft Corporation
    Inventors: David Kim, Otmar D. Hilliges, Shahram Izadi, Patrick L. Olivier, Jamie Daniel Joseph Shotton, Pushmeet Kohli, David G. Molyneaux, Stephen E. Hodges, Andrew W. Fitzgibbon
  • Publication number: 20140247263
    Abstract: A steerable display system includes a projector and a projector steering mechanism that selectively changes a projection direction of the projector. An aiming controller causes the projector steering mechanism to aim the projector at a target location of a physical environment. An image controller supplies the aimed projector with information for projecting an image that is geometrically corrected for the target location.
    Type: Application
    Filed: September 26, 2013
    Publication date: September 4, 2014
    Applicant: Microsoft Corporation
    Inventors: Andrew Wilson, Hrvoje Benko, Shahram Izadi
  • Publication number: 20140241612
    Abstract: Real-time stereo matching is described, for example, to find depths of objects in an environment from an image capture device capturing a stream of stereo images of the objects. For example, the depths may be used to control augmented reality, robotics, natural user interface technology, gaming and other applications. Streams of stereo images, or single stereo images, obtained with or without patterns of illumination projected onto the environment are processed using a parallel-processing unit to obtain depth maps. In various embodiments a parallel-processing unit propagates values related to depth in rows or columns of a disparity map in parallel. In examples, the values may be propagated according to a measure of similarity between two images of a stereo pair; propagation may be temporal between disparity maps of frames of a stream of stereo images and may be spatial within a left or right disparity map.
    Type: Application
    Filed: February 23, 2013
    Publication date: August 28, 2014
    Applicant: Microsoft Corporation
    Inventors: Christoph Rhemann, Carsten Curt Eckard Rother, Christopher Zach, Shahram Izadi, Adam Garnet Kirk, Oliver Whyte, Michael Bleyer
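    The propagation idea above can be shown on a single scanline: sweeping along a row, a pixel may adopt its left neighbour's disparity when that hypothesis matches the right image better than the exhaustive candidates. The toy images, absolute-difference cost, and sequential (rather than parallel) sweep are all simplifications for illustration.

    ```python
    import numpy as np

    def cost(left_row, right_row, x, d):
        """Matching cost for disparity hypothesis d at pixel x."""
        xr = x - d
        if xr < 0:
            return np.inf
        return abs(float(left_row[x]) - float(right_row[xr]))

    def propagate_row(left_row, right_row, max_d=3):
        n = len(left_row)
        disp = np.zeros(n, dtype=int)
        for x in range(n):
            # Candidates: exhaustive hypotheses plus the propagated one.
            cands = list(range(max_d + 1))
            if x > 0:
                cands.append(disp[x - 1])
            disp[x] = min(cands, key=lambda d: cost(left_row, right_row, x, d))
        return disp

    right = np.array([10, 20, 30, 40, 50, 60])
    left = np.roll(right, 2)     # the scene shifted by a disparity of 2
    left[:2] = 0                 # pixels with no match in the right image
    print(propagate_row(left, right))  # -> [0 1 2 2 2 2]
    ```

    In the patented system this propagation runs over whole rows or columns of the disparity map in parallel on a GPU, and can also run temporally between frames; the sequential loop here only shows the value being carried along the scanline.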
  • Publication number: 20140241617
    Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene, with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
    Type: Application
    Filed: February 22, 2013
    Publication date: August 28, 2014
    Applicant: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
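    The pose-inference stage described above can be sketched once the learned associations exist: given image elements paired with 3-D scene coordinates, a rigid camera pose can be recovered from the correspondences. Here the "predictions" are simulated by transforming known points, and a standard Kabsch/SVD solver stands in for the patent's pose inference engine, which additionally copes with noisy and incomplete associations.

    ```python
    import numpy as np

    def kabsch(cam_pts, world_pts):
        """Least-squares rigid transform with world ≈ R @ cam + t."""
        cc, wc = cam_pts.mean(0), world_pts.mean(0)
        H = (cam_pts - cc).T @ (world_pts - wc)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                            # proper rotation
        return R, wc - R @ cc

    rng = np.random.default_rng(1)
    cam = rng.normal(size=(20, 3))          # points in the camera frame

    # Simulated ground-truth pose: 30° rotation about z, plus translation.
    a = np.deg2rad(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.5, -1.0, 2.0])
    world = cam @ R_true.T + t_true         # noiseless "scene coordinates"

    R_est, t_est = kabsch(cam, world)       # recovers the pose exactly
    ```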
  • Patent number: 8803797
    Abstract: Methods and devices for providing a user input to a device through sensing of user-applied forces are described. A user applies forces to a rigid body as if to deform it and these applied forces are detected by force sensors in or on the rigid body. The resultant force on the rigid body is determined from the sensor data and this resultant force is used to identify a user input. In an embodiment, the user input may be a user input to a software program running on the device. In an embodiment the rigid body is the rigid case of a computing device which includes a display and which is running the software program.
    Type: Grant
    Filed: January 18, 2008
    Date of Patent: August 12, 2014
    Assignee: Microsoft Corporation
    Inventors: James W. Scott, Shahram Izadi, Stephen E. Hodges, Daniel A. Rosenfeld, Michael G. Molloy