Patents by Inventor Joseph Bertolami

Joseph Bertolami has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9761054
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
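The fusion described in the abstract above (fast inertial measurements corrected by slower, drift-free vision pose estimates) can be illustrated with a complementary filter, a standard technique chosen here only as a sketch of the general idea, not the claimed method; the function name, time step, and blend factor are illustrative.

```python
# Minimal 1D complementary-filter sketch: integrate a gyro angular rate for a
# fast prediction, then pull the estimate toward a vision-derived angle to
# cancel gyro drift. All constants are illustrative.

def fuse_orientation(gyro_rates, vision_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate with a vision-derived angle at each step."""
    angle = vision_angles[0]
    history = []
    for rate, vision_angle in zip(gyro_rates, vision_angles):
        predicted = angle + rate * dt                           # inertial prediction
        angle = alpha * predicted + (1 - alpha) * vision_angle  # drift correction
        history.append(angle)
    return history
```

With a biased gyro and a stationary vision reference, the vision term bounds the drift instead of letting the integrated error grow without limit, which is the efficiency the abstract alludes to.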
  • Patent number: 9313376
    Abstract: Disclosed herein are systems and methods to control the power consumption of a battery powered platform comprising at least one depth camera. The battery powered platform may adjust the power consumption of one or more systems of the depth camera, or of other systems on the platform, to alter the overall power consumption of the battery powered platform.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: April 12, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph Bertolami, John Allen Tardif
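One way to picture the power adjustment described above is a stepped policy that lowers the depth camera's frame rate and illuminator duty cycle as the battery drains; the function name, thresholds, and values below are invented for illustration and are not taken from the patent.

```python
# Hypothetical power-policy sketch: scale a depth camera's frame rate and
# emitter duty cycle with remaining battery (thresholds are illustrative).

def depth_camera_power_settings(battery_fraction):
    """Return (fps, emitter_duty) stepped down as the battery drains."""
    if battery_fraction > 0.5:
        return 30, 1.0      # full fidelity
    if battery_fraction > 0.2:
        return 15, 0.5      # halve frame rate and illumination
    return 5, 0.25          # minimal sensing to extend runtime
```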
  • Publication number: 20150235432
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
  • Patent number: 9024972
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: May 5, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
  • Patent number: 8839121
    Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the coordinate system.
    Type: Grant
    Filed: May 6, 2009
    Date of Patent: September 16, 2014
    Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
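The transformation data the devices exchange could take the form of a rigid transform, i.e., a rotation plus a translation, mapping each device's local coordinates into the agreed common frame. The 2D sketch below is one illustrative form of that mapping; the function name and parameters are not from the patent.

```python
import math

# Sketch: map a device-local (x, y) point into an agreed common frame by
# rotating it by theta and translating it by the agreed origin.

def to_common_frame(point, origin, theta):
    """Apply a rigid 2D transform (rotation theta, then translation)."""
    x, y = point
    ox, oy = origin
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return (xr + ox, yr + oy)
```

Once every device applies its own transform into the common frame, a virtual object placed by one device appears at a consistent location for all of them.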
  • Patent number: 8797321
    Abstract: A method and apparatus for rendering the lighting of virtual objects in an augmented reality display. The method includes determining local and ambient light sources based on data provided by one or more light sensors. The light in the physical lighting environment is accounted for by attributing the light to local light sources and/or ambient light sources. A synthesized physical lighting environment is constructed based on the light characteristics of the local and/or ambient light sources, and is used in properly rendering virtual objects in the augmented reality display.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: August 5, 2014
    Assignee: Microsoft Corporation
    Inventors: Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig
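The rendering step described above, shading a virtual object with the ambient and local light sources attributed from sensor data, can be illustrated with a minimal Lambertian diffuse term. This is textbook shading used as a stand-in, not the patented rendering method; the intensities are illustrative.

```python
# Illustrative Lambertian shading with one ambient term and one local light,
# mirroring the abstract's split of sensed light into ambient vs. local sources.

def shade(normal, light_dir, ambient=0.2, local_intensity=0.8):
    """Diffuse brightness: ambient plus N·L contribution from a local light."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + local_intensity * max(0.0, ndotl)
```

A surface facing the local light gets the full contribution; a surface facing away still receives the ambient term, so the virtual object never goes unrealistically black.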
  • Patent number: 8775916
    Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices, each including at least one instance of a target recognition, analysis, and tracking pipeline, to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set in response to a request to test the pipeline and receives the tracking data the pipeline outputs on that subset. A report generator outputs an analysis of the tracking data relative to the ground truth in that subset, providing a measure of the error relative to the ground truth.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: July 8, 2014
    Assignee: Microsoft Corporation
    Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Forbes, Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper, Michael Johnson
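The scoring step in the abstract above, comparing pipeline output against ground-truth tracking data and reporting error, can be sketched as a simple metric. Mean Euclidean distance is chosen here only for illustration; the patent does not fix a specific error measure, and the function name is invented.

```python
import math

# Sketch of the evaluation idea: score a tracking pipeline's predicted 3D
# points against ground truth and report the mean Euclidean error.

def mean_tracking_error(predicted, ground_truth):
    """Average Euclidean distance between predicted and ground-truth points."""
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(predicted, ground_truth):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(predicted)
```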
  • Publication number: 20130251204
    Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices, each including at least one instance of a target recognition, analysis, and tracking pipeline, to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set in response to a request to test the pipeline and receives the tracking data the pipeline outputs on that subset. A report generator outputs an analysis of the tracking data relative to the ground truth in that subset, providing a measure of the error relative to the ground truth.
    Type: Application
    Filed: May 17, 2013
    Publication date: September 26, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper, Michael Johnson
  • Patent number: 8448056
    Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices, each including at least one instance of a target recognition, analysis, and tracking pipeline, to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set in response to a request to test the pipeline and receives the tracking data the pipeline outputs on that subset. A report generator outputs an analysis of the tracking data relative to the ground truth in that subset, providing a measure of the error relative to the ground truth.
    Type: Grant
    Filed: December 17, 2010
    Date of Patent: May 21, 2013
    Assignee: Microsoft Corporation
    Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper
  • Patent number: 8379057
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: February 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
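The modification step described above, adjusting the scene-facing capture according to the user's position, can be approximated by shifting a crop window against the user's displacement so the view exhibits parallax. This is a toy 1D sketch under invented names and gains, not the patented method.

```python
# Toy sketch of perspective-dependent cropping: pick a window of the
# scene-facing capture shifted opposite the user's head displacement, so the
# displayed view moves as the user does (a crude parallax approximation).

def view_window(image_width, window_width, user_offset, gain=1.0):
    """Return (left, right) crop bounds shifted against the user's offset."""
    center = image_width / 2 - gain * user_offset
    half = window_width / 2
    left = max(0.0, min(image_width - window_width, center - half))
    return left, left + window_width
```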
  • Patent number: 8314832
    Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: November 20, 2012
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami
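Generating the two stereoscopic views requires one viewpoint per eye. A user-facing detector could supply an eye midpoint, from which per-eye positions follow from the interpupillary distance (IPD); the sketch below is illustrative, with invented names and units, not the patented mechanism.

```python
# Sketch: derive left/right eye positions (meters) from a detected eye
# midpoint and an interpupillary distance, one viewpoint per rendered eye.

def eye_positions(midpoint, ipd=0.063):
    """Return (left_eye, right_eye) positions around a detected midpoint."""
    x, y, z = midpoint
    half = ipd / 2
    return (x - half, y, z), (x + half, y, z)
```

Each returned position would drive its own perspective-corrected render, giving the disparity the stereoscopic display needs.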
  • Publication number: 20120223967
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Application
    Filed: May 14, 2012
    Publication date: September 6, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael Dougherty, Robert M. Craig, John A. Tardif
  • Publication number: 20120198531
    Abstract: One or more techniques and/or systems are disclosed for joining two or more devices in a multi-device communication session. A request is received from a first device, such as at a session hosting service on a remote server, to initiate a multi-device communication session, such as on the session hosting service. A visual tag is sent to the first device, such as from the session service, where the visual tag comprises device-session pairing information, such as session service identification and session authorization. A multi-device communication session joining request is received from a second device, where the request from the second device comprises the device-session pairing information retrieved from the visual tag displayed by the first device and captured by the second device.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Jeffrey Ort, Joseph Bertolami, Shyam Habarakada
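The pairing flow above hinges on round-tripping session information through a visual tag: the service encodes it, the first device displays it, and the second device decodes what it captures. The sketch below shows one plausible encoding (JSON in base64, as a QR code might carry); the functions, field names, and format are invented for illustration.

```python
import base64
import json

# Sketch of round-tripping device-session pairing info through a visual tag
# payload. The encoding here is illustrative, not the patented format.

def make_tag_payload(session_id, auth_token):
    """Pack pairing info into a base64 string a displayed tag could carry."""
    blob = json.dumps({"session": session_id, "auth": auth_token})
    return base64.b64encode(blob.encode()).decode()

def parse_tag_payload(payload):
    """Recover pairing info captured from the first device's displayed tag."""
    return json.loads(base64.b64decode(payload).decode())
```

The second device would submit the parsed session identifier and authorization in its join request, completing the pairing without manual entry.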
  • Publication number: 20120159290
    Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices, each including at least one instance of a target recognition, analysis, and tracking pipeline, to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set in response to a request to test the pipeline and receives the tracking data the pipeline outputs on that subset. A report generator outputs an analysis of the tracking data relative to the ground truth in that subset, providing a measure of the error relative to the ground truth.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper
  • Patent number: 8194101
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: June 5, 2012
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
  • Publication number: 20100287485
    Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the coordinate system.
    Type: Application
    Filed: May 6, 2009
    Publication date: November 11, 2010
    Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
  • Publication number: 20100257252
    Abstract: Example embodiments of the present disclosure provide techniques for capturing and analyzing information gathered by a mobile device equipped with one or more sensors. Recognition and tracking software and localization techniques may be used to extrapolate pertinent information about the surrounding environment and transmit the information to a service that can analyze the transmitted information. In one embodiment, when a user views a particular object or landmark on a device with image capture capability, the device may be provided with information through a wireless connection via a database that may provide the user with rich metadata regarding the objects in view. Information may be presented through a rendering means such as a web browser, rendered as a 2D overlay on top of the live image, or rendered in augmented reality.
    Type: Application
    Filed: April 1, 2009
    Publication date: October 7, 2010
    Applicant: Microsoft Corporation
    Inventors: Michael A. Dougherty, Samuel A. Mann, Matthew L. Bronder, Joseph Bertolami, Robert M. Craig
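The metadata step described above reduces, at its simplest, to a lookup: a recognized object label keys into a service-side database whose record is then rendered as an overlay. The catalog, labels, and fields below are placeholders invented for illustration.

```python
# Sketch of the lookup step: map a recognized landmark label to rich metadata
# for display as an overlay. Catalog contents are placeholder examples.

LANDMARK_DB = {
    "space_needle": {"city": "Seattle", "height_m": 184},
}

def metadata_for(label):
    """Return metadata for a recognized object, or None if it is unknown."""
    return LANDMARK_DB.get(label)
```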
  • Publication number: 20100253766
    Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
    Type: Application
    Filed: April 1, 2009
    Publication date: October 7, 2010
    Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami