Patents by Inventor Jerome Berclaz

Jerome Berclaz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087365
    Abstract: A multisensor processing platform includes, in at least some embodiments, a face detector and embedding network for analyzing unstructured data to detect, identify and track any combination of objects (including people) or activities through computer vision algorithms and machine learning. In some embodiments, the unstructured data is compressed by identifying the appearance of an object across a series of frames of the data, aggregating those appearances and effectively summarizing those appearances of the object by a single representative image displayed to a user for each set of aggregated appearances to enable the user to assess the summarized data substantially at a glance. The data can be filtered into tracklets, groups and clusters, based on system confidence in the identification of the object or activity, to provide multiple levels of granularity.
    Type: Application
    Filed: January 19, 2021
    Publication date: March 14, 2024
    Applicant: PERCIPIENT.AI INC.
    Inventors: Timo PYLVAENAEINEN, Craig SENNABAUM, Mike HIGUERA, Ivan KOVTUN, Atul KANAUJIA, Alison HIGUERA, Jerome BERCLAZ, Rajendra SHAH, Balan AYYAR, Vasudev PARAMESWARAN
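The appearance-aggregation idea in the abstract above can be sketched in a few lines. This is an illustrative simplification only, not code from the patent: the detection records, their field names, and the highest-score rule for picking the representative image are all assumptions.

```python
from collections import defaultdict

def summarize_appearances(detections):
    """Group frame-level detections by object id into tracklets and pick
    one representative detection per object (here: the highest-scoring
    one), so each object can be reviewed from a single image."""
    tracklets = defaultdict(list)
    for det in detections:
        tracklets[det["object_id"]].append(det)
    summaries = {}
    for obj_id, dets in tracklets.items():
        dets.sort(key=lambda d: d["frame"])
        best = max(dets, key=lambda d: d["score"])
        summaries[obj_id] = {
            "frames": (dets[0]["frame"], dets[-1]["frame"]),  # span of the tracklet
            "representative": best["frame"],                  # image shown to the user
            "count": len(dets),
        }
    return summaries
```

Collapsing many appearances into one representative per object is what lets a user assess hours of footage "at a glance," as the abstract describes.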
  • Publication number: 20230334291
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects that any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Application
    Filed: April 24, 2023
    Publication date: October 19, 2023
    Applicant: PERCIPIENT.AI INC.
    Inventors: Vasudev PARAMESWARAN, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
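The teacher-student merge described above can be illustrated with a minimal pseudo-labeling sketch. The teachers here are stand-in callables rather than the patent's detector models, and the merging rule is an assumption; a real pipeline would then train the student detector on these merged labels.

```python
def merge_teacher_labels(image_ids, teachers):
    """Each teacher labels only the classes it knows; the merged label
    set per image covers the union of all teachers' classes, forming
    training data for a single student model."""
    merged = {}
    for img in image_ids:
        labels = []
        for teacher in teachers:
            labels.extend(teacher(img))  # each teacher contributes its own classes
        merged[img] = labels
    return merged
```

The point of the merge is that the student needs only one inference pass to cover every class any individual teacher was trained on.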
  • Patent number: 11636312
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects that any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 25, 2023
    Assignee: PERCIPIENT.AI INC.
    Inventors: Vasudev Parameswaran, Atul Kanaujia, Simon Chen, Jerome Berclaz, Ivan Kovtun, Alison Higuera, Vidyadayini Talapady, Derek Young, Balan Ayyar, Rajendra Shah, Timo Pylvanainen
  • Publication number: 20230023164
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects that any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Application
    Filed: October 4, 2022
    Publication date: January 26, 2023
    Applicant: PERCIPIENT.AI INC.
    Inventors: Vasudev PARAMESWARAN, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
  • Patent number: 10495760
    Abstract: A location detection system performs a process that uses data from a satellite navigation sensor in conjunction with data from a second sensor to determine the accuracy of location estimates provided by the satellite navigation sensor. The system uses the satellite navigation sensor to determine location estimates at a first time and a second time. The system also uses data from the second sensor to determine a third location estimate, which represents another estimate of the system's location at the second time. The system uses the three location estimates to determine whether the second location estimate satisfies an accuracy condition. If the accuracy condition is satisfied, then the second location estimate may be provided as input to a process.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: December 3, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: Jerome Berclaz, Vasudev Parameswaran
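The three-estimate accuracy check in the abstract can be illustrated with a small sketch. All names, the dead-reckoning step, and the simple Euclidean threshold are assumptions; a real system would work in a proper map projection with sensor-specific error models.

```python
import math

def passes_accuracy_check(gps_t1, gps_t2, displacement, max_error):
    """Dead-reckon a third estimate from the first GPS fix plus the
    second sensor's measured displacement (e.g., from odometry), then
    accept the second GPS fix only if it agrees with that prediction."""
    predicted = (gps_t1[0] + displacement[0], gps_t1[1] + displacement[1])
    error = math.dist(gps_t2, predicted)  # disagreement between the two estimates
    return error <= max_error
```

When the check fails, the GPS fix would be withheld from downstream consumers rather than passed along, matching the abstract's accuracy condition.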
  • Publication number: 20180276875
    Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is then rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes, representing an orthographic map projection that can be viewed at various levels of detail. Map features, such as lower-level roads that are at lower elevations and are hidden by higher-level overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
    Type: Application
    Filed: May 25, 2018
    Publication date: September 27, 2018
    Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
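The orthographic compositing idea above, including exposing lower roads hidden by overpasses, can be sketched as a top-down raster with a per-cell elevation buffer. The point format, tile size, and `level` filter are illustrative assumptions, not the patent's actual data model.

```python
def render_ortho(points, tile_size=1.0, max_level=None):
    """Composite 3-D samples into a top-down (orthographic) raster:
    each ground cell keeps the highest sample at or below max_level,
    so lowering max_level reveals roads hidden beneath overpasses."""
    raster = {}
    for x, y, z, color, level in points:
        if max_level is not None and level > max_level:
            continue  # filter out higher road levels on request
        cell = (int(x // tile_size), int(y // tile_size))
        if cell not in raster or z > raster[cell][0]:
            raster[cell] = (z, color)  # keep the highest remaining sample
    return {cell: color for cell, (z, color) in raster.items()}
```

Because each cell is independent, this compositing parallelizes naturally across tiles, which is the role the cluster nodes play in the abstract.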
  • Publication number: 20180156923
    Abstract: A location detection system performs a process that uses data from a satellite navigation sensor in conjunction with data from a second sensor to determine the accuracy of location estimates provided by the satellite navigation sensor. The system uses the satellite navigation sensor to determine location estimates at a first time and a second time. The system also uses data from the second sensor to determine a third location estimate, which represents another estimate of the system's location at the second time. The system uses the three location estimates to determine whether the second location estimate satisfies an accuracy condition. If the accuracy condition is satisfied, then the second location estimate may be provided as input to a process.
    Type: Application
    Filed: April 1, 2017
    Publication date: June 7, 2018
    Inventors: Jerome Berclaz, Vasudev Parameswaran
  • Patent number: 9984494
    Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is then rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes, representing an orthographic map projection that can be viewed at various levels of detail. Map features, such as lower-level roads that are at lower elevations and are hidden by higher-level overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
    Type: Grant
    Filed: January 26, 2015
    Date of Patent: May 29, 2018
    Assignee: UBER TECHNOLOGIES, INC.
    Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
  • Publication number: 20160217611
    Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is then rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes, representing an orthographic map projection that can be viewed at various levels of detail. Map features, such as lower-level roads that are at lower elevations and are hidden by higher-level overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
    Type: Application
    Filed: January 26, 2015
    Publication date: July 28, 2016
    Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
  • Patent number: 8615107
    Abstract: Trajectories of objects are estimated by determining the optimal solution(s) of a tracking model on the basis of an occupancy probability distribution. The occupancy probability distribution is the probability of presence of objects over a set of discrete points in the spatio-temporal space at a number of time steps. The tracking model is defined by the set of discrete points, a virtual source location and a virtual sink location, wherein objects in the tracking model are creatable in the virtual source location and are removable in the virtual sink location.
    Type: Grant
    Filed: January 11, 2012
    Date of Patent: December 24, 2013
    Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
    Inventors: François Fleuret, Jérôme Berclaz, Engin Türetken, Pascal Fua
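The abstract above formulates tracking as finding optimal trajectories through an occupancy-probability grid with a virtual source and sink. A single-object dynamic-programming simplification over a 1-D discretized space gives the flavor; the grid, scores, and stay-or-move-adjacent neighborhood are illustrative assumptions, not the patent's full flow formulation.

```python
def best_trajectory(occupancy):
    """DP sketch over occupancy[t][k] (probability of presence in cell k
    at time t): at each step the object stays or moves to an adjacent
    cell; the best-scoring final cell plays the role of the sink."""
    T, K = len(occupancy), len(occupancy[0])
    score = [occupancy[0][:]] + [[0.0] * K for _ in range(T - 1)]
    back = [[None] * K for _ in range(T)]
    for t in range(1, T):
        for k in range(K):
            # best predecessor among cells k-1, k, k+1 at time t-1
            prev, j = max((score[t - 1][n], n)
                          for n in range(max(0, k - 1), min(K, k + 2)))
            score[t][k] = occupancy[t][k] + prev
            back[t][k] = j
    # backtrack from the best final cell (the virtual sink)
    k = max(range(K), key=lambda c: score[T - 1][c])
    path = [k]
    for t in range(T - 1, 0, -1):
        k = back[t][k]
        path.append(k)
    return path[::-1]
```

The published formulation instead solves a min-cost flow over the full spatio-temporal graph, which handles an unknown number of objects entering and leaving via the source and sink.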
  • Publication number: 20130177200
    Abstract: Trajectories of objects are estimated by determining the optimal solution(s) of a tracking model on the basis of an occupancy probability distribution. The occupancy probability distribution is the probability of presence of objects over a set of discrete points in the spatio-temporal space at a number of time steps. The tracking model is defined by the set of discrete points, a virtual source location and a virtual sink location, wherein objects in the tracking model are creatable in the virtual source location and are removable in the virtual sink location.
    Type: Application
    Filed: January 11, 2012
    Publication date: July 11, 2013
    Inventors: François FLEURET, Jérôme BERCLAZ, Engin TÜRETKEN, Pascal FUA