Patents by Inventor Gabriel Blanco Saldana

Gabriel Blanco Saldana is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230326076
    Abstract: The description relates to cameras and camera calibration for enhancing user experiences. One example can receive a first image of a user at a first location relative to a camera. The first image can include the user's upper body but does not include the user from head to toe. The example can receive a second image of the user at a second location relative to the camera. The second image can include the user's upper body but does not include the user from head to toe. The example can estimate the distance between the first and second locations relative to the camera, and can calibrate a height and tilt angle of the camera from the first image, the second image, and the estimated distance, without a full-body image of the user.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 12, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Hongli DENG, Duong NGUYEN, Gabriel BLANCO SALDANA, Ryan S. MENEZES
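
A minimal geometric sketch of the two-image calibration idea in the abstract above. It assumes, rather than takes from the patent, a pinhole camera with a known focal length, detectable head-top and shoulder rows in both frames, and a nominal 0.25 m head-to-shoulder drop; the solver setup and initial guess are likewise illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

F_PX = 1000.0  # assumed focal length in pixels

def image_row(point_height, dist, cam_height, tilt):
    """Image row (pixels from the principal point, +down) of a point at
    `point_height` above the ground, `dist` metres in front of a camera
    mounted at `cam_height` and pitched down by `tilt` radians."""
    dy = point_height - cam_height                   # vertical offset in the world frame
    y_c = dy * np.cos(tilt) + dist * np.sin(tilt)    # camera "up" component
    z_c = -dy * np.sin(tilt) + dist * np.cos(tilt)   # camera "forward" component
    return -F_PX * y_c / z_c

def calibrate(rows_1, rows_2, delta_dist):
    """rows_1, rows_2: (head_row, shoulder_row) observed in the two frames.
    delta_dist: estimated distance the person moved between the frames."""
    def residuals(params):
        cam_h, tilt, d1, head_h = params
        shoulder_h = head_h - 0.25                   # assumed head-to-shoulder drop (m)
        pred = [image_row(head_h, d1, cam_h, tilt),
                image_row(shoulder_h, d1, cam_h, tilt),
                image_row(head_h, d1 + delta_dist, cam_h, tilt),
                image_row(shoulder_h, d1 + delta_dist, cam_h, tilt)]
        return np.array(pred) - np.array([*rows_1, *rows_2])

    x0 = np.array([2.5, 0.2, 2.0, 1.7])              # camera height, tilt, first distance, head height
    sol = least_squares(residuals, x0)
    cam_h, tilt, _, _ = sol.x
    return cam_h, np.degrees(tilt)

# Example: synthesize observations from a known setup and try to recover it.
obs1 = (image_row(1.75, 2.5, 3.2, 0.3), image_row(1.50, 2.5, 3.2, 0.3))
obs2 = (image_row(1.75, 4.0, 3.2, 0.3), image_row(1.50, 4.0, 3.2, 0.3))
print(calibrate(obs1, obs2, delta_dist=1.5))         # ideally close to (3.2, ~17.2°) if the fit converges
```
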
  • Patent number: 11776160
    Abstract: Techniques for improved camera calibration are disclosed. An image is analyzed to identify a first set of key points for an object. A virtual object is generated. The virtual object has a second set of key points. A reprojected version of the second set is fitted to the first set in 2D space until a fitting threshold is satisfied. To do so, a 3D alignment of the second set is generated in an attempt to fit (e.g., in 2D space) the second set to the first set. Another operation includes reprojecting the second set into 2D space. In response to comparing the reprojected second set to the first set, another operation includes determining whether a fitting error between those sets satisfies the fitting threshold. A specific 3D alignment of the second set is selected. The camera is calibrated based on resulting reprojection parameters.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: October 3, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hongli Deng, Ryan Savio Menezes, Gabriel Blanco Saldana, Zicheng Liu
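
As a rough illustration only, the sketch below runs the fit-and-reproject loop the abstract describes over a toy grid of 3D alignments, stopping when the 2D fitting error drops below a threshold. The three-point stick figure, the camera model, and the grid ranges are assumptions introduced for the example, not the patent's parameters.

```python
import itertools
import numpy as np

F_PX = 1000.0                                  # assumed focal length in pixels
# Virtual object: a 1.7 m "stick person" with key points at the feet, hip and
# head, expressed in a frame with x right, y up, z forward, standing at the origin.
VIRTUAL_KEYPOINTS = np.array([[0.0, 0.0, 0.0],
                              [0.0, 0.9, 0.0],
                              [0.0, 1.7, 0.0]])

def reproject(points_world, cam_height, tilt):
    """Pinhole reprojection of world points into 2D for a camera at height
    `cam_height`, pitched down by `tilt` radians and looking along +z."""
    rel = points_world - np.array([0.0, cam_height, 0.0])
    c, s = np.cos(tilt), np.sin(tilt)
    x_c = rel[:, 0]
    y_c = rel[:, 1] * c + rel[:, 2] * s
    z_c = -rel[:, 1] * s + rel[:, 2] * c
    return np.stack([F_PX * x_c / z_c, -F_PX * y_c / z_c], axis=1)

def fit_alignment(detected_2d, fitting_threshold=5.0):
    """Grid-search candidate 3D alignments (object distance, camera height,
    tilt), reproject each into 2D, and stop at the first alignment whose mean
    key-point error satisfies the threshold; otherwise keep the best found."""
    best_error, best_params = np.inf, None
    for dist, cam_h, tilt in itertools.product(np.linspace(2.0, 15.0, 40),
                                               np.linspace(2.0, 5.0, 16),
                                               np.linspace(0.0, 0.6, 25)):
        placed = VIRTUAL_KEYPOINTS + np.array([0.0, 0.0, dist])
        error = np.mean(np.linalg.norm(reproject(placed, cam_h, tilt) - detected_2d, axis=1))
        if error < best_error:
            best_error = error
            best_params = {"distance": dist, "camera_height": cam_h, "tilt": tilt}
        if error < fitting_threshold:
            break
    return best_params, best_error
```
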
  • Publication number: 20230252783
    Abstract: Techniques for inferring whether an event is occurring in 3D space based on 2D image data and for maintaining a camera's calibration are disclosed. An image of an environment is accessed. Input is received, where the input includes a 2D rule imposed against a ground plane. The 2D rule includes conditions indicative of an event. A bounding box is generated and encompasses a detected object. A point within the bounding box is projected from a 2D-space image plane of the image into 3D space to generate a 3D-space point. Based on the 3D-space point, a 3D-space ground contact point is generated. That 3D-space ground contact point is reprojected onto the ground plane of the image to generate a synthesized 2D ground contact point. A location of the synthesized 2D ground contact point is determined to satisfy the conditions.
    Type: Application
    Filed: April 20, 2023
    Publication date: August 10, 2023
    Inventors: Hongli DENG, Joseph Milan FILCIK, Hao YAN, Tony Ducheng JIN, Gabriel BLANCO SALDANA, Ryan Savio MENEZES
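
As a rough illustration of the pipeline the abstract describes, the sketch below projects a point from inside a detection's bounding box into 3D, derives a ground contact point, reprojects it into the image, and tests it against a 2D rule polygon. The calibration constants, the assumed object mid-height, and the simple even-odd polygon test are placeholders chosen for the example.

```python
import numpy as np

F_PX, CAM_HEIGHT, TILT = 1000.0, 3.0, 0.35     # assumed (already calibrated) camera
MID_HEIGHT = 0.9                                # assumed object mid-height in metres

def _rotation():
    c, s = np.cos(TILT), np.sin(TILT)
    # Rows are the camera's x/y/z axes expressed in the world frame (y up, z forward).
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def pixel_to_world(u, v, plane_height):
    """Intersect the viewing ray through pixel (u, v) with the horizontal
    plane y = plane_height (pixels measured from the principal point, +v down)."""
    ray_world = _rotation().T @ np.array([u, -v, F_PX])
    origin = np.array([0.0, CAM_HEIGHT, 0.0])
    t = (plane_height - origin[1]) / ray_world[1]
    return origin + t * ray_world

def world_to_pixel(p):
    rel = _rotation() @ (p - np.array([0.0, CAM_HEIGHT, 0.0]))
    return np.array([F_PX * rel[0] / rel[2], -F_PX * rel[1] / rel[2]])

def point_in_polygon(pt, poly):
    """Even-odd test of a 2D point against the rule polygon (N x 2 array)."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, np.roll(poly, -1, axis=0)):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def event_triggered(bbox, rule_polygon_2d):
    """bbox = (left, top, right, bottom) in pixels relative to the principal point."""
    left, top, right, bottom = bbox
    center_u, center_v = (left + right) / 2.0, (top + bottom) / 2.0
    p3d = pixel_to_world(center_u, center_v, plane_height=MID_HEIGHT)  # point within the box -> 3D
    ground_contact_3d = np.array([p3d[0], 0.0, p3d[2]])                # drop to the ground plane
    contact_2d = world_to_pixel(ground_contact_3d)                     # synthesized 2D ground contact
    return point_in_polygon(contact_2d, np.asarray(rule_polygon_2d, dtype=float))
```
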
  • Patent number: 11663822
    Abstract: Techniques for inferring whether an event is occurring in 3D space based on 2D image data and for maintaining a camera's calibration are disclosed. An image of an environment is accessed. Input is received, where the input includes a 2D rule imposed against a ground plane. The 2D rule includes conditions indicative of an event. A bounding box is generated and encompasses a detected object. A point within the bounding box is projected from a 2D-space image plane of the image into 3D space to generate a 3D-space point. Based on the 3D-space point, a 3D-space ground contact point is generated. That 3D-space ground contact point is reprojected onto the ground plane of the image to generate a synthesized 2D ground contact point. A location of the synthesized 2D ground contact point is determined to satisfy the conditions.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: May 30, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hongli Deng, Joseph Milan Filcik, Hao Yan, Tony Ducheng Jin, Gabriel Blanco Saldana, Ryan Savio Menezes
  • Publication number: 20230154224
    Abstract: A method to predict a traversal-time interval for traversal of a service queue comprises receiving video of a region including the service queue, recognizing in the video, via machine vision, a plurality of persons awaiting service within the region, estimating an average crossing-time interval between successive crossings, by the plurality of persons, of a fixed boundary along the service queue, wherein such estimating is based on features of the service queue and of the one or more persons awaiting service, and returning an estimate of the traversal-time interval based on a count of the persons awaiting service and on the average crossing-time interval as estimated.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 18, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Chenyang LI, Hongli DENG, Gabriel BLANCO SALDANA, Joseph Milan FILCIK, Ryan Savio MENEZES
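
The final arithmetic in the abstract above reduces to the head count times the average gap between boundary crossings. A tiny sketch of that step, with the machine-vision detection and the feature-based estimation left out of scope:

```python
from statistics import mean

def average_crossing_interval(crossing_times):
    """Mean gap, in seconds, between successive crossings of the fixed boundary."""
    gaps = [later - earlier for earlier, later in zip(crossing_times, crossing_times[1:])]
    return mean(gaps)

def estimated_traversal_time(people_in_queue, crossing_times):
    """Expected time to traverse the queue: head count times the average crossing gap."""
    return people_in_queue * average_crossing_interval(crossing_times)

# Example: 6 people waiting, recent crossings at these timestamps (seconds).
print(estimated_traversal_time(6, [0.0, 14.0, 31.0, 44.0, 61.0]))  # 91.5
```
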
  • Publication number: 20230050504
    Abstract: Techniques for improved camera calibration are disclosed. An image is analyzed to identify a first set of key points for an object. A virtual object is generated. The virtual object has a second set of key points. A reprojected version of the second set is fitted to the first set in 2D space until a fitting threshold is satisfied. To do so, a 3D alignment of the second set is generated in an attempt to fit (e.g., in 2D space) the second set to the first set. Another operation includes reprojecting the second set into 2D space. In response to comparing the reprojected second set to the first set, another operation includes determining whether a fitting error between those sets satisfies the fitting threshold. A specific 3D alignment of the second set is selected. The camera is calibrated based on resulting reprojection parameters.
    Type: Application
    Filed: October 24, 2022
    Publication date: February 16, 2023
    Inventors: Hongli DENG, Ryan Savio MENEZES, Gabriel BLANCO SALDANA, Zicheng LIU
  • Patent number: 11488325
    Abstract: Techniques for improved camera calibration are disclosed. An image is analyzed to identify a first set of key points for an object. A virtual object is generated. The virtual object has a second set of key points. A reprojected version of the second set is fitted to the first set in 2D space until a fitting threshold is satisfied. To do so, a 3D alignment of the second set is generated in an attempt to fit (e.g., in 2D space) the second set to the first set. Another operation includes reprojecting the second set into 2D space. In response to comparing the reprojected second set to the first set, another operation includes determining whether a fitting error between those sets satisfies the fitting threshold. A specific 3D alignment of the second set is selected. The camera is calibrated based on resulting reprojection parameters.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: November 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hongli Deng, Ryan Savio Menezes, Gabriel Blanco Saldana, Zicheng Liu
  • Publication number: 20220335646
    Abstract: Improved techniques for determining an object's 3D orientation. An image is analyzed to identify a 2D object and a first set of key points. The first set defines a first polygon. A 3D virtual object is generated. This 3D virtual object has a second set of key points defining a second polygon representing an orientation of the 3D virtual object. The second polygon is rotated a selected number of times. For each rotation, each rotated polygon is reprojected into 2D space, and a matching score is determined between each reprojected polygon and the first polygon. A specific reprojected polygon is selected whose corresponding matching score is lowest. The orientation of the 3D virtual object is set to an orientation corresponding to the specific reprojected polygon. Based on the orientation of the 3D virtual object, an area of focus of the 2D object is determined.
    Type: Application
    Filed: May 9, 2022
    Publication date: October 20, 2022
    Inventors: Hongli DENG, Ryan Savio MENEZES, Gabriel BLANCO SALDANA, Zicheng LIU
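
A compact sketch of the rotate-reproject-score loop the abstract describes, sweeping yaw angles for a virtual polygon and keeping the one whose 2D reprojection best matches the detected polygon. The camera constants, the number of rotations, and the mean-vertex-distance score are illustrative assumptions rather than the patent's actual choices.

```python
import numpy as np

F_PX, CAM_HEIGHT, TILT = 1000.0, 3.0, 0.35     # assumed calibration

def reproject(points_world):
    """Pinhole reprojection into 2D for a camera at CAM_HEIGHT pitched down by TILT."""
    c, s = np.cos(TILT), np.sin(TILT)
    rel = points_world - np.array([0.0, CAM_HEIGHT, 0.0])
    x_c = rel[:, 0]
    y_c = rel[:, 1] * c + rel[:, 2] * s
    z_c = -rel[:, 1] * s + rel[:, 2] * c
    return np.stack([F_PX * x_c / z_c, -F_PX * y_c / z_c], axis=1)

def estimate_orientation(detected_polygon_2d, virtual_polygon_3d, num_rotations=72):
    """Rotate the virtual polygon about the vertical axis `num_rotations` times,
    reproject each rotation into 2D, and return the yaw whose reprojection best
    matches the detected polygon (lowest mean vertex distance)."""
    centroid = virtual_polygon_3d.mean(axis=0)
    best_yaw, best_score = None, np.inf
    for yaw in np.linspace(0.0, 2.0 * np.pi, num_rotations, endpoint=False):
        c, s = np.cos(yaw), np.sin(yaw)
        rot_y = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        rotated = (virtual_polygon_3d - centroid) @ rot_y.T + centroid
        score = np.mean(np.linalg.norm(reproject(rotated) - detected_polygon_2d, axis=1))
        if score < best_score:
            best_yaw, best_score = yaw, score
    return best_yaw, best_score
```
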
  • Publication number: 20220164578
    Abstract: Techniques for inferring whether an event is occurring in 3D space based on 2D image data and for maintaining a camera's calibration are disclosed. An image of an environment is accessed. Input is received, where the input includes a 2D rule imposed against a ground plane. The 2D rule includes conditions indicative of an event. A bounding box is generated and encompasses a detected object. A point within the bounding box is projected from a 2D-space image plane of the image into 3D space to generate a 3D-space point. Based on the 3D-space point, a 3D-space ground contact point is generated. That 3D-space ground contact point is reprojected onto the ground plane of the image to generate a synthesized 2D ground contact point. A location of the synthesized 2D ground contact point is determined to satisfy the conditions.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Inventors: Hongli DENG, Joseph Milan FILCIK, Hao YAN, Tony Ducheng JIN, Gabriel BLANCO SALDANA, Ryan Savio MENEZES
  • Patent number: 11341674
    Abstract: Improved techniques for determining an object's 3D orientation. An image is analyzed to identify a 2D object and a first set of key points. The first set defines a first polygon. A 3D virtual object is generated. This 3D virtual object has a second set of key points defining a second polygon representing an orientation of the 3D virtual object. The second polygon is rotated a selected number of times. For each rotation, each rotated polygon is reprojected into 2D space, and a matching score is determined between each reprojected polygon and the first polygon. A specific reprojected polygon is selected whose corresponding matching score is lowest. The orientation of the 3D virtual object is set to an orientation corresponding to the specific reprojected polygon. Based on the orientation of the 3D virtual object, an area of focus of the 2D object is determined.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hongli Deng, Ryan Savio Menezes, Gabriel Blanco Saldana, Zicheng Liu
  • Publication number: 20210398318
    Abstract: Techniques for improved camera calibration are disclosed. An image is analyzed to identify a first set of key points for an object. A virtual object is generated. The virtual object has a second set of key points. A reprojected version of the second set is fitted to the first set in 2D space until a fitting threshold is satisfied. To do so, a 3D alignment of the second set is generated in an attempt to fit (e.g., in 2D space) the second set to the first set. Another operation includes reprojecting the second set into 2D space. In response to comparing the reprojected second set to the first set, another operation includes determining whether a fitting error between those sets satisfies the fitting threshold. A specific 3D alignment of the second set is selected. The camera is calibrated based on resulting reprojection parameters.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Inventors: Hongli DENG, Ryan Savio MENEZES, Gabriel BLANCO SALDANA, Zicheng LIU
  • Publication number: 20210397871
    Abstract: Improved techniques for determining an object's 3D orientation. An image is analyzed to identify a 2D object and a first set of key points. The first set defines a first polygon. A 3D virtual object is generated. This 3D virtual object has a second set of key points defining a second polygon representing an orientation of the 3D virtual object. The second polygon is rotated a selected number of times. For each rotation, each rotated polygon is reprojected into 2D space, and a matching score is determined between each reprojected polygon and the first polygon. A specific reprojected polygon is selected whose corresponding matching score is lowest. The orientation of the 3D virtual object is set to an orientation corresponding to the specific reprojected polygon. Based on the orientation of the 3D virtual object, an area of focus of the 2D object is determined.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Inventors: Hongli DENG, Ryan Savio MENEZES, Gabriel BLANCO SALDANA, Zicheng LIU
  • Patent number: 11037071
    Abstract: A machine learning engine may be used to identify items in a second item category that have a visual appearance similar to the visual appearance of a first item selected from a first item category. Image data and text data associated with a large number of items from different item categories may be processed and used by an association model created by a machine learning engine. The association model may extract item attributes from the image data and text data of the first item. The machine learning engine may determine weights for parameter types, and the weights may calibrate the influence of the respective parameter types on the search results. The association model may be deployed to identify items from different item categories that have a visual appearance similar to the first item. The association model may be updated over time by the machine learning engine as data correlations evolve.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: June 15, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Karolina Tekiela, Gabriel Blanco Saldana, Rui Luo
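
A loose sketch of the weighted cross-category similarity search the abstract above describes, with per-feature-type weights standing in for the weights the machine learning engine would learn; the embeddings, catalog layout, and weight values are assumptions made for illustration.

```python
import numpy as np

def similarity(query_features, candidate_features, weights):
    """Weighted cosine similarity summed over feature types (e.g. 'image', 'text')."""
    score = 0.0
    for feature_type, weight in weights.items():
        a, b = query_features[feature_type], candidate_features[feature_type]
        score += weight * float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score

def similar_items(query_item, catalog, target_category, weights, top_k=5):
    """Rank items from a different category by weighted similarity to the query item.
    Each item is a dict: {"category": str, "features": {"image": vec, "text": vec}}."""
    candidates = [item for item in catalog if item["category"] == target_category]
    ranked = sorted(candidates,
                    key=lambda item: similarity(query_item["features"], item["features"], weights),
                    reverse=True)
    return ranked[:top_k]

# Example weighting: let image features dominate text features when matching,
# say, curtains to a selected rug.
weights = {"image": 0.8, "text": 0.2}
```
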