Patents by Inventor Benjamin James Kadlec

Benjamin James Kadlec has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11720755
    Abstract: Systems and methods are provided for generating sets of candidates comprising images and places within a threshold geographic proximity based on geographic information associated with each of a plurality of images and geographic information associated with each place. For each set of candidates, the systems and methods generate a similarity score based on a similarity between text extracted from each image and a place name, and the geographic information associated with each image and each place. For each place with an associated image as a potential match, the systems and methods generate a name similarity score based on matching the extracted text of the image to the place name, and store an image as place data associated with a place based on determining that the name similarity score for the extracted text associated with the image is higher than a second predetermined threshold.
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: August 8, 2023
    Assignee: Uber Technologies, Inc.
    Inventors: Jeremy Hintz, Lionel Gueguen, Kapil Gupta, Benjamin James Kadlec, Susmit Biswas
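The matching flow in the abstract above can be illustrated with a minimal sketch. This is not the patented implementation: the function names, the haversine proximity test, the `difflib` string-similarity measure, and both thresholds are hypothetical stand-ins for the candidate-generation and name-similarity steps the abstract describes.

```python
# Illustrative sketch (not the patented method): pair images with nearby
# places, score name similarity, and keep matches above a threshold.
from difflib import SequenceMatcher
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def match_images_to_places(images, places, max_dist_m=100.0, name_threshold=0.8):
    """images: [(extracted_text, lat, lon)]; places: [(name, lat, lon)].
    Returns (image_text, place_name, score) triples for geographically
    close candidates whose name similarity exceeds the threshold."""
    matches = []
    for text, ilat, ilon in images:
        for name, plat, plon in places:
            if haversine_m(ilat, ilon, plat, plon) > max_dist_m:
                continue  # outside the geographic proximity threshold
            score = SequenceMatcher(None, text.lower(), name.lower()).ratio()
            if score > name_threshold:
                matches.append((text, name, score))
    return matches
```

A geographic pre-filter like this keeps the expensive string comparison restricted to plausible candidate pairs, which is the efficiency point of generating candidate sets first.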
  • Publication number: 20220027667
    Abstract: Systems and methods are provided for generating sets of candidates comprising images and places within a threshold geographic proximity based on geographic information associated with each of a plurality of images and geographic information associated with each place. For each set of candidates, the systems and methods generate a similarity score based on a similarity between text extracted from each image and a place name, and the geographic information associated with each image and each place. For each place with an associated image as a potential match, the systems and methods generate a name similarity score based on matching the extracted text of the image to the place name, and store an image as place data associated with a place based on determining that the name similarity score for the extracted text associated with the image is higher than a second predetermined threshold.
    Type: Application
    Filed: October 5, 2021
    Publication date: January 27, 2022
    Inventors: Jeremy Hintz, Lionel Gueguen, Kapil Gupta, Benjamin James Kadlec, Susmit Biswas
  • Patent number: 11164038
    Abstract: Systems and methods are provided for generating sets of candidates comprising images and places within a threshold geographic proximity based on geographic information associated with each of a plurality of images and geographic information associated with each place. For each set of candidates, the systems and methods generate a similarity score based on a similarity between text extracted from each image and a place name, and the geographic information associated with each image and each place. For each place with an associated image as a potential match, the systems and methods generate a name similarity score based on matching the extracted text of the image to the place name, and store an image as place data associated with a place based on determining that the name similarity score for the extracted text associated with the image is higher than a second predetermined threshold.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: November 2, 2021
    Assignee: Uber Technologies, Inc.
    Inventors: Jeremy Hintz, Lionel Gueguen, Kapil Gupta, Benjamin James Kadlec, Susmit Biswas
  • Publication number: 20200057914
    Abstract: Systems and methods are provided for generating sets of candidates comprising images and places within a threshold geographic proximity based on geographic information associated with each of a plurality of images and geographic information associated with each place. For each set of candidates, the systems and methods generate a similarity score based on a similarity between text extracted from each image and a place name, and the geographic information associated with each image and each place. For each place with an associated image as a potential match, the systems and methods generate a name similarity score based on matching the extracted text of the image to the place name, and store an image as place data associated with a place based on determining that the name similarity score for the extracted text associated with the image is higher than a second predetermined threshold.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 20, 2020
    Inventors: Jeremy Hintz, Lionel Gueguen, Kapil Gupta, Benjamin James Kadlec, Susmit Biswas
  • Publication number: 20200058158
    Abstract: Example systems and methods improve a location detection process. A system accesses image data and image metadata, wherein the image data captures images of a plurality of objects from different views, each image having corresponding image metadata. The system then detects each object in the plurality of objects in the image data. A plurality of rays in three-dimensional space is generated, wherein each ray of the plurality of rays is generated based on the detected objects and the corresponding image metadata. The system predicts object locations using the generated rays based on a probabilistic triangulation of the rays. The system updates map data using the predicted object locations. The updating includes adding objects at their predicted object locations to the map data. The map data is used to generate a map.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 20, 2020
    Inventors: Fritz Obermeyer, Jonathan Chen, Vladimir Lyapunov, Lionel Gueguen, Noah Goodman, Benjamin James Kadlec, Douglas Bemis
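The core geometric step in the abstract above, intersecting rays cast from multiple camera views, can be sketched with a simplified least-squares triangulation. The patent describes a probabilistic triangulation; this hedged stand-in only finds the single point minimizing squared perpendicular distance to all rays, and the function name and interface are assumptions for illustration.

```python
# Simplified sketch: least-squares triangulation of rays to estimate an
# object's 3-D location (the patent describes a probabilistic variant).
import numpy as np

def triangulate_rays(origins, directions):
    """Each ray is o + t*d for origin o and direction d. Returns the point
    minimizing the sum of squared perpendicular distances to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)          # ensure unit direction
        P = np.eye(3) - np.outer(d, d)     # projector onto plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With noisy detections from many views, the least-squares point is a reasonable deterministic estimate; a probabilistic formulation would instead return a distribution over locations, which supports the map-update decision described in the abstract.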
  • Patent number: 9905032
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. Such capturing scenarios, however, often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: February 27, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
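The movement classification mentioned in the abstract above can be illustrated with a minimal sketch: compare each detected object's lidar point-cloud centroid across two sequential scans and label it moving or stationary. The function name, the dict-of-point-arrays interface, and the displacement threshold are hypothetical; the patented method is not limited to centroid comparison.

```python
# Hedged sketch: label lidar-detected objects as moving or stationary by
# comparing their point-cloud centroids across two sequential scans.
import numpy as np

def classify_movement(objects_t0, objects_t1, move_threshold=0.5):
    """objects_t0/objects_t1: dicts mapping object id -> (N, 3) arrays of
    lidar points from two scans. Returns {id: 'moving' | 'stationary'}
    for every id present in both scans."""
    labels = {}
    for oid, pts0 in objects_t0.items():
        if oid not in objects_t1:
            continue  # object not re-detected in the second scan
        c0 = np.asarray(pts0).mean(axis=0)
        c1 = np.asarray(objects_t1[oid]).mean(axis=0)
        displacement = float(np.linalg.norm(c1 - c0))
        labels[oid] = "moving" if displacement > move_threshold else "stationary"
    return labels
```

Objects labeled "moving" would then be candidates for removal from the rendered environment, while stationary structure is kept.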
  • Publication number: 20170098323
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. Such capturing scenarios, however, often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Application
    Filed: December 19, 2016
    Publication date: April 6, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
  • Patent number: 9523772
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. Such capturing scenarios, however, often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
  • Publication number: 20150362587
    Abstract: Lidar scanning is used in a variety of scenarios to detect the locations, sizes, shapes, and/or orientations of a variety of objects. The accuracy of such scanning techniques is dependent upon the calibration of the orientation of the lidar sensor, because small discrepancies between a presumed orientation and an actual orientation may result in significant differences in the detected properties of various objects. Such errors are often avoided by calibrating the lidar sensor before use for scanning, and/or registering the lidar data set, but lidar sensors in the field may still become miscalibrated and may generate inaccurate data. Presented herein are techniques for identifying, verifying, and/or correcting for lidar calibration by projecting a lidar pattern on a surface of the environment, and detecting changes in detected geometry from one or more locations. Comparing detected angles with predicted angles according to a predicted calibration enables the detection of calibration differences.
    Type: Application
    Filed: June 17, 2014
    Publication date: December 17, 2015
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec, Michael Riley Harrell
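The comparison of detected versus predicted angles described in the abstract above can be sketched as follows. This is a hedged illustration, not the patented technique: the function names, the unit-vector interface, and the tolerance value are assumptions, and the sketch covers only the final angle-comparison step, not the pattern projection itself.

```python
# Hedged sketch: flag possible lidar miscalibration by comparing the
# directions at which projected pattern points are detected against the
# directions predicted from the presumed sensor orientation.
import numpy as np

def calibration_error_deg(predicted_dirs, detected_dirs):
    """Mean angular difference (degrees) between paired direction vectors."""
    errors = []
    for p, d in zip(predicted_dirs, detected_dirs):
        p = np.asarray(p, dtype=float)
        d = np.asarray(d, dtype=float)
        p = p / np.linalg.norm(p)
        d = d / np.linalg.norm(d)
        cos_angle = np.clip(p @ d, -1.0, 1.0)  # guard arccos domain
        errors.append(np.degrees(np.arccos(cos_angle)))
    return float(np.mean(errors))

def is_miscalibrated(predicted_dirs, detected_dirs, tolerance_deg=0.1):
    """True if the mean angular error exceeds the tolerance."""
    return calibration_error_deg(predicted_dirs, detected_dirs) > tolerance_deg
```

Because small orientation errors translate into large position errors at range, even a fraction-of-a-degree tolerance can be meaningful, which is why the abstract emphasizes detecting miscalibration in the field rather than relying solely on pre-use calibration.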
  • Publication number: 20140368493
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. Such capturing scenarios, however, often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Application
    Filed: June 14, 2013
    Publication date: December 18, 2014
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec