Patents by Inventor Jean-Yves Bouguet

Jean-Yves Bouguet has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11244649
    Abstract: Techniques are described for calibrating a device having a first sensor and a second sensor. Techniques include capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. Techniques include determining a calibration level associated with the calibration profile at a first time. Techniques include determining, based on the calibration level, to perform a calibration process. Techniques include performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter. An illustrative sketch of this calibration flow follows this entry.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: February 8, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
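Below is a minimal sketch of the calibration-profile flow described in the abstract above, assuming the profile stores a 3-vector translation and a 3x3 rotation matrix, and that the calibration level is a scalar score derived from point correspondences between the two sensors. All names (CalibrationProfile, calibration_level, maybe_recalibrate) and the Kabsch-style re-estimation are illustrative assumptions, not Magic Leap's actual implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CalibrationProfile:
    translation: np.ndarray  # 3-vector offset between the two sensors
    rotation: np.ndarray     # 3x3 rotation matrix relating their orientations

def calibration_level(profile, pts1, pts2):
    """Score how well the stored profile explains fresh correspondences
    (pts1, pts2: Nx3 points observed by sensor 1 and sensor 2)."""
    predicted = (profile.rotation @ pts1.T).T + profile.translation
    residual = np.linalg.norm(predicted - pts2, axis=1).mean()
    return 1.0 / (1.0 + residual)  # 1.0 means perfect agreement

def maybe_recalibrate(profile, pts1, pts2, threshold=0.8):
    """If the calibration level is too low, re-estimate the extrinsics
    (Kabsch/Procrustes) and replace the stored parameters."""
    if calibration_level(profile, pts1, pts2) >= threshold:
        return profile  # current profile is still adequate
    c1, c2 = pts1.mean(axis=0), pts2.mean(axis=0)
    H = (pts1 - c1).T @ (pts2 - c2)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # calibrated rotation
    t = c2 - R @ c1                           # calibrated translation
    return CalibrationProfile(translation=t, rotation=R)
```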
  • Publication number: 20210118401
    Abstract: Techniques are described for calibrating a device having a first sensor and a second sensor. Techniques include capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. Techniques include determining a calibration level associated with the calibration profile at a first time. Techniques include determining, based on the calibration level, to perform a calibration process. Techniques include performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
    Type: Application
    Filed: November 3, 2020
    Publication date: April 22, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
  • Patent number: 10854165
    Abstract: A method for calibrating a device having a first sensor and a second sensor. The method includes capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. The method also includes determining a calibration level associated with the calibration profile at a first time. The method further includes determining, based on the calibration level, to perform a calibration process. The method further includes performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: December 1, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
  • Patent number: 10592744
    Abstract: A system and method are provided for determining the location of a device based on images of objects captured by the device. In one aspect, an interior space includes a plurality of objects having discernable visual characteristics disposed throughout the space. The device captures an image containing one or more of the objects and identifies the portions of the image associated with the objects based on the visual characteristics. The visual appearance of the objects may also be used to determine the distance of the object to other objects or relative to a reference point. Based on the foregoing and the size and shape of the image portion occupied by the object, such as the height of an edge or its surface area, relative to another object or a reference, the device may calculate its location. An illustrative sketch follows this entry.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: March 17, 2020
    Assignee: Google LLC
    Inventors: Ehud Rivlin, Brian McClendon, Jean-Yves Bouguet
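A brief, hypothetical illustration of the idea in the abstract above: under a pinhole camera model, an object of known physical height seen at a given pixel height yields a range estimate, and ranges to several objects at known positions localize the device. The function names, the assumed focal length, and the simple least-squares trilateration are illustrative assumptions, not the patented method.

```python
import numpy as np

def range_from_apparent_height(true_height_m, pixel_height, focal_length_px):
    # Pinhole model: pixel_height / focal_length = true_height / distance
    return true_height_m * focal_length_px / pixel_height

def locate_2d(anchors, ranges):
    """Least-squares 2D position from known object positions and range estimates."""
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three objects of 0.5 m true height seen at different pixel heights.
f = 1400.0  # assumed focal length in pixels
ranges = [range_from_apparent_height(0.5, px, f) for px in (70.0, 100.0, 140.0)]
print(locate_2d([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)], ranges))
```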
  • Publication number: 20190197982
    Abstract: A method for calibrating a device having a first sensor and a second sensor. The method includes capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. The method also includes determining a calibration level associated with the calibration profile at a first time. The method further includes determining, based on the calibration level, to perform a calibration process. The method further includes performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 27, 2019
    Applicant: Magic Leap, Inc.
    Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
  • Patent number: 9965682
    Abstract: A system and method are provided for determining the location of a device based on images of objects captured by the device. In one aspect, an interior space includes a plurality of objects having discernable visual characteristics disposed throughout the space. The device captures an image containing one or more of the objects and identifies the portions of the image associated with the objects based on the visual characteristics. The visual appearance of the objects may also be used to determine the distance of the object to other objects or relative to a reference point. Based on the foregoing and the size and shape of the image portion occupied by the object, such as the height of an edge or its surface area, relative to another object or a reference, the device may calculate its location.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: May 8, 2018
    Assignee: Google LLC
    Inventors: Ehud Rivlin, Brian McClendon, Jean-Yves Bouguet
  • Patent number: 9400921
    Abstract: A method and system using a data-driven model for monocular face tracking are disclosed, providing a versatile system for tracking three-dimensional (3D) images, e.g., a face, using a single camera. For one method, stereo data based on input image sequences is obtained. A 3D model is built using the obtained stereo data. A monocular image sequence is tracked using the built 3D model. Principal Component Analysis (PCA) can be applied to the stereo data to learn, e.g., possible facial deformations, and to build a data-driven 3D model (“3D face model”). The 3D face model can be used to approximate a generic shape (e.g., facial pose) as a linear combination of shape basis vectors based on the PCA analysis. An illustrative sketch of the PCA shape basis follows this entry.
    Type: Grant
    Filed: May 9, 2001
    Date of Patent: July 26, 2016
    Assignee: Intel Corporation
    Inventors: Jean-Yves Bouguet, Radek Grzeszczuk, Salih Gokturk
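The sketch below shows the PCA step named in the abstract above: learn a mean shape and a deformation basis from a set of (here synthetic) stereo-captured shapes, then approximate a new shape as the mean plus a linear combination of the basis vectors. The data, dimensions, and number of retained modes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_points = 200, 50
shapes = rng.normal(size=(n_frames, n_points * 3))  # each row: a flattened 3D shape

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
_, _, Vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
k = 10
basis = Vt[:k]  # k strongest deformation modes (shape basis vectors)

def project(shape):
    """Coefficients of a shape in the learned deformation basis."""
    return basis @ (shape - mean_shape)

def reconstruct(coeffs):
    """Approximate a shape as the mean plus a linear combination of basis vectors."""
    return mean_shape + coeffs @ basis

approx = reconstruct(project(shapes[0]))
print("reconstruction error:", np.linalg.norm(approx - shapes[0]))
```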
  • Patent number: 9189853
    Abstract: Methods and systems for automatically generating pose estimates from uncalibrated, unordered panoramas are provided. An exemplary method of automatically generating pose estimates includes receiving a plurality of uncalibrated and unordered panoramic images that include at least one interior building image, and extracting feature points for each panoramic image. The method includes generating a match matrix for all the panoramic images based on the extracted feature points, constructing a minimal spanning tree based on the match matrix, and identifying, based on the minimal spanning tree, a first panoramic image and a second panoramic image, wherein the second panoramic image is associated with the first panoramic image, providing navigation from the first panoramic image to the second panoramic image. An illustrative sketch of the match-matrix and spanning-tree step follows this entry.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: November 17, 2015
    Assignee: Google Inc.
    Inventors: Mohamed Aly, Jean-Yves Bouguet
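The following sketch illustrates the match-matrix and minimum-spanning-tree step described in the abstract above, using ORB features from OpenCV and SciPy's MST routine as stand-ins. Treating MST edges as the (first, second) panorama pairs is an illustrative reading, not the full patented method.

```python
import cv2
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def match_count(desc_a, desc_b, ratio=0.75):
    """Number of ratio-test feature matches between two descriptor sets."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(desc_a, desc_b, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def build_match_matrix(images):
    """Symmetric matrix of pairwise match counts for unordered panoramas."""
    orb = cv2.ORB_create(nfeatures=2000)
    descs = [orb.detectAndCompute(img, None)[1] for img in images]
    n = len(images)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j] = M[j, i] = match_count(descs[i], descs[j])
    return M

def linked_pairs(match_matrix):
    """Edges of a minimum spanning tree over dissimilarity = -match count."""
    tree = minimum_spanning_tree(-match_matrix)
    return list(zip(*tree.nonzero()))  # (first, second) panorama index pairs
```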
  • Patent number: 9098905
    Abstract: A system and method is provided for determining the location of a device based on image of objects captured by the device. In one aspect, an interior space includes a plurality of objects having discernable visual characteristics disposed throughout the space. The device captures an image containing one or more of the objects and identifies the portions of the image associated with the objects based on the visual characteristics. The visual appearance of the objects may also be used to determine the distance of the object to other objects or relative to a reference point. Based on the foregoing and the size and shape of the image portion occupied by the object, such as the height of an edge or its surface area, relative to another object or a reference, the device may calculate its location.
    Type: Grant
    Filed: March 12, 2010
    Date of Patent: August 4, 2015
    Assignee: Google Inc.
    Inventors: Ehud Rivlin, Brian McClendon, Jean-Yves Bouguet
  • Publication number: 20150178565
    Abstract: A system and method is provided for determining the location of a device based on image of objects captured by the device. In one aspect, an interior space includes a plurality of objects having discernable visual characteristics disposed throughout the space. The device captures an image containing one or more of the objects and identifies the portions of the image associated with the objects based on the visual characteristics. The visual appearance of the objects may also be used to determine the distance of the object to other objects or relative to a reference point. Based on the foregoing and the size and shape of the image portion occupied by the object, such as the height of an edge or its surface area, relative to another object or a reference, the device may calculate its location.
    Type: Application
    Filed: March 12, 2010
    Publication date: June 25, 2015
    Applicant: Google Inc.
    Inventors: Ehud Rivlin, Brian McClendon, Jean-Yves Bouguet
  • Patent number: 8958661
    Abstract: Methods and apparatus to generate templates from web images for searching an image database are described. In one embodiment, one or more retrieved images (e.g., from the Web) may be used to generate one or more templates. The templates may be used to search an image database based on features commonly shared between sub-images of the retrieved images. Other embodiments are also described. An illustrative sketch follows this entry.
    Type: Grant
    Filed: March 30, 2007
    Date of Patent: February 17, 2015
    Assignee: Intel Corporation
    Inventors: Navneet Panda, Yi Wu, Jean-Yves Bouguet, Ara Nefian
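A rough sketch of the "template from commonly shared features" idea in the abstract above: keep only the local descriptors that find a close match in most of the retrieved web images, then score database images against that reduced set. The matching criterion, thresholds, and function names are assumptions for illustration.

```python
import numpy as np

def shared_descriptors(image_descs, min_support=0.6, max_dist=0.3):
    """image_descs: list of (n_i, d) descriptor arrays, one per retrieved image.
    Returns descriptors from the first image that recur in most of the others."""
    template = []
    for d in image_descs[0]:
        support = sum(
            1 for other in image_descs[1:]
            if np.min(np.linalg.norm(other - d, axis=1)) < max_dist
        )
        if support >= min_support * (len(image_descs) - 1):
            template.append(d)
    return np.array(template)

def score_against_template(template, db_descs, max_dist=0.3):
    """Fraction of template descriptors matched by a database image's descriptors."""
    hits = sum(
        1 for d in template
        if np.min(np.linalg.norm(db_descs - d, axis=1)) < max_dist
    )
    return hits / max(len(template), 1)
```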
  • Patent number: 8787700
    Abstract: Methods and systems for automatically generating pose estimates from uncalibrated, unordered panoramas are provided. An exemplary method of automatically generating pose estimates includes receiving a plurality of uncalibrated and unordered panoramic images that include at least one interior building image, and extracting feature points for each panoramic image. The method includes generating a match matrix for all the panoramic images based on the extracted feature points, constructing a minimal spanning tree based on the match matrix, and identifying, based on the minimal spanning tree, a first panoramic image and a second panoramic image, wherein the second panoramic image is associated with the first panoramic image, providing navigation from the first panoramic image to the second panoramic image.
    Type: Grant
    Filed: November 30, 2011
    Date of Patent: July 22, 2014
    Assignee: Google Inc.
    Inventors: Mohamed Aly, Jean-Yves Bouguet
  • Patent number: 8565537
    Abstract: A processing system may receive an example image for use in querying a collection of digital images. The processing system may use local and global feature descriptors to perform a content-based image comparison of the digital images with the example image, to automatically rank the digital images with respect to similarity to the example image. A local feature descriptor may represent a portion of the contents of a digital image. A global feature descriptor may represent substantially all of the contents of that digital image. The global feature descriptor may be content based, not keyword based. Intermediate and final classifiers may be used to perform the automatic ranking. Different intermediate classifiers may generate intermediate relevance metrics with respect to different modalities. The final classifier may use results from the intermediate classifiers to produce a final relevance metric for the digital images. Other embodiments are described and claimed. An illustrative sketch of the two-stage ranking follows this entry.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: October 22, 2013
    Assignee: Intel Corporation
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara Nefian
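A small sketch of the two-stage ranking described in the abstract above: intermediate classifiers score each image per modality (local features, global features), and a final classifier combines those intermediate relevance metrics into one final metric. Logistic regression and the synthetic data are assumed stand-ins, not the patented classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d_local, d_global = 500, 64, 32
X_local = rng.normal(size=(n, d_local))    # aggregated local-feature descriptors
X_global = rng.normal(size=(n, d_global))  # global (whole-image) descriptors
y = rng.integers(0, 2, size=n)             # relevant / not relevant labels

# Intermediate classifiers, one per modality.
clf_local = LogisticRegression(max_iter=1000).fit(X_local, y)
clf_global = LogisticRegression(max_iter=1000).fit(X_global, y)

# Final classifier consumes the intermediate relevance metrics.
Z = np.column_stack([
    clf_local.predict_proba(X_local)[:, 1],
    clf_global.predict_proba(X_global)[:, 1],
])
final = LogisticRegression().fit(Z, y)
relevance = final.predict_proba(Z)[:, 1]
ranking = np.argsort(-relevance)  # most similar images first
```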
  • Publication number: 20120173549
    Abstract: A processing system may receive an example image for use in querying a collection of digital images. The processing system may use local and global feature descriptors to perform a content-based image comparison of the digital images with the example image, to automatically rank the digital images with respect to similarity to the example image. A local feature descriptor may represent a portion of the contents of a digital image. A global feature descriptor may represent substantially all of the contents of that digital image. The global feature descriptor may be content based, not keyword based. Intermediate and final classifiers may be used to perform the automatic ranking. Different intermediate classifiers may generate intermediate relevance metrics with respect to different modalities. The final classifier may use results from the intermediate classifiers to produce a final relevance metric for the digital images. Other embodiments are described and claimed.
    Type: Application
    Filed: March 15, 2012
    Publication date: July 5, 2012
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara V. Nefian
  • Patent number: 8200027
    Abstract: An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed. An illustrative sketch follows this entry.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: June 12, 2012
    Assignee: Intel Corporation
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara V. Nefian
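The sketch below shows, in simplified form, the ranking idea in the abstract above: describe each image both globally (whole image) and locally (per tile of a fixed grid), then order the collection by combined distance to the user-selected positive example. The histogram descriptors and the weighting are illustrative assumptions, not the patented mining module.

```python
import numpy as np

def global_descriptor(img, bins=16):
    """Intensity histogram of the whole image (img: HxWx3 uint8 array)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist

def local_descriptors(img, grid=4, bins=16):
    """One histogram per tile of a grid x grid division of the image."""
    H, W = img.shape[:2]
    tiles = []
    for i in range(grid):
        for j in range(grid):
            tile = img[i * H // grid:(i + 1) * H // grid,
                       j * W // grid:(j + 1) * W // grid]
            tiles.append(global_descriptor(tile, bins))
    return np.concatenate(tiles)

def rank_by_example(positive, collection, w_local=0.5):
    """Indices of `collection` ordered from most to least similar to `positive`."""
    g_pos, l_pos = global_descriptor(positive), local_descriptors(positive)
    def dist(img):
        return ((1 - w_local) * np.linalg.norm(g_pos - global_descriptor(img))
                + w_local * np.linalg.norm(l_pos - local_descriptors(img)))
    return sorted(range(len(collection)), key=lambda i: dist(collection[i]))
```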
  • Publication number: 20110081090
    Abstract: An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed.
    Type: Application
    Filed: November 23, 2010
    Publication date: April 7, 2011
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara V. Nefian
  • Patent number: 7840076
    Abstract: An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 22, 2006
    Date of Patent: November 23, 2010
    Assignee: Intel Corporation
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara V. Nefian
  • Patent number: 7739662
    Abstract: Methods and apparatus are disclosed to analyze a processor system. An example method to analyze execution of a multi-threaded program on a processor system includes generating a first program trace associated with the execution of a first thread, generating a first list of execution frequencies associated with the first program trace, generating a second program trace associated with the execution of a second thread, generating a second list of execution frequencies associated with the second program trace, generating a first set of one or more vectors for the first list of execution frequencies, generating a second set of one or more vectors for the second list of execution frequencies, and analyzing the one or more vectors to identify one or more program phases. An illustrative sketch follows this entry.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: June 15, 2010
    Assignee: Intel Corporation
    Inventors: Jean-Yves Bouguet, Marzia Polito, Carole Dulong, Erez Perelman
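A toy version of the phase-analysis idea in the abstract above: split each thread's trace into fixed-size intervals, count how often each basic block executes per interval to form frequency vectors, and cluster the vectors so intervals in the same cluster are treated as the same program phase. The block ids, interval size, and k-means clustering are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def frequency_vectors(trace, interval=1000, n_blocks=256):
    """trace: sequence of basic-block ids executed by one thread."""
    trace = np.asarray(trace)
    vectors = []
    for start in range(0, len(trace), interval):
        counts = np.bincount(trace[start:start + interval], minlength=n_blocks)
        vectors.append(counts / counts.sum())  # normalized execution frequencies
    return np.array(vectors)

def identify_phases(traces, n_phases=4):
    """One phase label per interval, pooled across all threads' traces."""
    V = np.vstack([frequency_vectors(t) for t in traces])
    return KMeans(n_clusters=n_phases, n_init=10).fit_predict(V)

# Two synthetic threads that swap between two behaviors halfway through.
rng = np.random.default_rng(2)
t1 = np.concatenate([rng.integers(0, 32, 5000), rng.integers(128, 160, 5000)])
t2 = np.concatenate([rng.integers(128, 160, 5000), rng.integers(0, 32, 5000)])
print(identify_phases([t1, t2], n_phases=2))
```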
  • Publication number: 20080240575
    Abstract: Methods and apparatus to generate templates from web images for searching an image database are described. In one embodiment, one or more retrieved images (e.g., from the Web) may be used to generate one or more templates. The templates may be used to search an image database based on features commonly shared between sub-images of the retrieved images. Other embodiments are also described.
    Type: Application
    Filed: March 30, 2007
    Publication date: October 2, 2008
    Inventors: Navneet Panda, Yi Wu, Jean-Yves Bouguet, Ara Nefian
  • Publication number: 20080118151
    Abstract: An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed.
    Type: Application
    Filed: November 22, 2006
    Publication date: May 22, 2008
    Inventors: Jean-Yves Bouguet, Carole Dulong, Igor V. Kozintsev, Yi Wu, Ara V. Nefian