Patents by Inventor Jerome Berclaz
Jerome Berclaz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240087365
Abstract: A multisensor processing platform includes, in at least some embodiments, a face detector and embedding network for analyzing unstructured data to detect, identify and track any combination of objects (including people) or activities through computer vision algorithms and machine learning. In some embodiments, the unstructured data is compressed by identifying the appearance of an object across a series of frames of the data, aggregating those appearances and effectively summarizing those appearances of the object by a single representative image displayed to a user for each set of aggregated appearances to enable the user to assess the summarized data substantially at a glance. The data can be filtered into tracklets, groups and clusters, based on system confidence in the identification of the object or activity, to provide multiple levels of granularity.
Type: Application
Filed: January 19, 2021
Publication date: March 14, 2024
Applicant: PERCIPIENT.AI INC.
Inventors: Timo PYLVAENAEINEN, Craig SENNABAUM, Mike HIGUERA, Ivan KOVTUN, Atul KANAUJIA, Alison HIGUERA, Jerome BERCLAZ, Rajendra SHAH, Balan AYYAR, Vasudev PARAMESWARAN
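The summarization step the abstract describes — grouping per-frame appearances of an object and showing one representative image per group — can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; the `Detection` record and the pick-highest-confidence heuristic are assumptions:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Detection:
    frame: int
    object_id: str    # identity assigned by a (hypothetical) embedding network
    confidence: float
    thumbnail: str    # stand-in for the image crop of this appearance

def summarize(detections):
    """Group per-frame appearances by object identity and pick one
    representative thumbnail per group (here: the appearance the
    detector was most confident about)."""
    groups = defaultdict(list)
    for d in detections:
        groups[d.object_id].append(d)
    summary = {}
    for obj, appearances in groups.items():
        best = max(appearances, key=lambda d: d.confidence)
        summary[obj] = {
            "appearances": len(appearances),
            "frames": (min(a.frame for a in appearances),
                       max(a.frame for a in appearances)),
            "representative": best.thumbnail,
        }
    return summary
```

A user scanning the output sees one thumbnail per object instead of every frame it appeared in, which is the "at a glance" compression the abstract claims.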
-
Publication number: 20230334291
Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
Type: Application
Filed: April 24, 2023
Publication date: October 19, 2023
Applicant: PERCIPIENT.AI INC.
Inventors: Vasudev PARAMESWARAN, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
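Two of the mechanisms in this abstract lend themselves to a compact sketch: merging supervision from several class-specific teacher detectors, and an active-learning step that routes manual labeling to the images the current model is least sure about. Everything below is a hypothetical illustration (function names and the confidence-score interface are assumptions, not taken from the patent):

```python
def merge_teacher_labels(image_id, teachers):
    """Each teacher labels only the classes it was trained on; the merged
    pseudo-label set is their union, so a student trained on it learns to
    detect every class any teacher knows."""
    merged = []
    for teacher in teachers:
        merged.extend(teacher(image_id))
    return merged

def active_learning_round(confidence, unlabeled_ids, budget):
    """Rank unlabeled images by the current detector's confidence and
    return the `budget` least-confident ones, so manual labeling effort
    goes where it improves the model most."""
    return sorted(unlabeled_ids, key=confidence)[:budget]
```

Iterating these two steps — retrain the student on merged labels, then label the hardest remaining images — matches the abstract's claim that accuracy improves with each iteration while manual effort stays small.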
-
Patent number: 11636312
Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
Type: Grant
Filed: October 4, 2022
Date of Patent: April 25, 2023
Assignee: PERCIPIENT.AI INC.
Inventors: Vasudev Parameswaran, Atul Kanaujia, Simon Chen, Jerome Berclaz, Ivan Kovtun, Alison Higuera, Vidyadayini Talapady, Derek Young, Balan Ayyar, Rajendra Shah, Timo Pylvanainen
-
Publication number: 20230023164
Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
Type: Application
Filed: October 4, 2022
Publication date: January 26, 2023
Applicant: PERCIPIENT.AI INC.
Inventors: Vasudev PARAMESWARAN, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
-
Patent number: 10495760
Abstract: A location detection system performs a process that uses data from a satellite navigation sensor in conjunction with data from a second sensor to determine the accuracy of location estimates provided by the satellite navigation sensor. The system uses the satellite navigation sensor to determine location estimates at a first time and a second time. The system also uses data from the second sensor to determine a third location estimate, which represents another estimate of the system's location at the second time. The system uses the three location estimates to determine whether the second location estimate satisfies an accuracy condition. If the accuracy condition is satisfied, then the second location estimate may be provided as input to a process.
Type: Grant
Filed: April 1, 2017
Date of Patent: December 3, 2019
Assignee: Uber Technologies, Inc.
Inventors: Jerome Berclaz, Vasudev Parameswaran
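The accuracy condition described above can be illustrated in a few lines: accept the second GPS fix only if it agrees with an independent estimate of the same position derived from the second sensor (e.g., dead reckoning from the first fix). This is a simplified 2-D sketch under assumed names and a made-up threshold, not the patented process:

```python
import math

def distance(a, b):
    """Euclidean distance between two 2-D positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gps_estimate_acceptable(gps_t1, gps_t2, second_sensor_t2, threshold_m=10.0):
    """Accept the GPS fix at the second time only if it lies within
    `threshold_m` of the second sensor's independent estimate of the
    same position (itself derived starting from the fix at t1)."""
    return distance(gps_t2, second_sensor_t2) <= threshold_m
```

Downstream consumers (routing, fare calculation, etc.) would then receive `gps_t2` only when the check passes, which is the gating behaviour the abstract describes.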
-
Publication number: 20180276875
Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes representing an orthographic map projection that can be viewed at various levels of detail. Map features such as lower-level roads, which are at lower elevations than higher-level roads and are hidden by overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
Type: Application
Filed: May 25, 2018
Publication date: September 27, 2018
Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
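The core rendering step — an orthographic (top-down) projection — amounts to dropping elevation and mapping world x/y coordinates into a tile's pixel grid. A minimal sketch, with an assumed tile origin and metres-per-pixel scale (the patent's polygon-surface texturing and distributed tiling are far more involved):

```python
def ortho_project(points_xyz, origin, meters_per_pixel):
    """Orthographic projection of 3-D world points into tile pixel
    coordinates: elevation (z) is discarded and x/y are scaled relative
    to the tile origin."""
    ox, oy = origin
    return [(int((x - ox) / meters_per_pixel),
             int((y - oy) / meters_per_pixel))
            for x, y, _z in points_xyz]
```

Because every point projects independently, tiles can be rendered on separate cluster nodes, consistent with the distributed processing the abstract describes.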
-
Publication number: 20180156923
Abstract: A location detection system performs a process that uses data from a satellite navigation sensor in conjunction with data from a second sensor to determine the accuracy of location estimates provided by the satellite navigation sensor. The system uses the satellite navigation sensor to determine location estimates at a first time and a second time. The system also uses data from the second sensor to determine a third location estimate, which represents another estimate of the system's location at the second time. The system uses the three location estimates to determine whether the second location estimate satisfies an accuracy condition. If the accuracy condition is satisfied, then the second location estimate may be provided as input to a process.
Type: Application
Filed: April 1, 2017
Publication date: June 7, 2018
Inventors: Jerome Berclaz, Vasudev Parameswaran
-
Patent number: 9984494
Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes representing an orthographic map projection that can be viewed at various levels of detail. Map features such as lower-level roads, which are at lower elevations than higher-level roads and are hidden by overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
Type: Grant
Filed: January 26, 2015
Date of Patent: May 29, 2018
Assignee: UBER TECHNOLOGIES, INC.
Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
-
Publication number: 20160217611
Abstract: Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach across cluster nodes. The collection is processed into image tiles on the separate cluster nodes representing an orthographic map projection that can be viewed at various levels of detail. Map features such as lower-level roads, which are at lower elevations than higher-level roads and are hidden by overpassing roads, can be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
Type: Application
Filed: January 26, 2015
Publication date: July 28, 2016
Inventors: Timo Pylvaenaeinen, Thommen Korah, Jerome Berclaz, Myra Nam
-
Patent number: 8615107
Abstract: Trajectories of objects are estimated by determining the optimal solution(s) of a tracking model on the basis of an occupancy probability distribution. The occupancy probability distribution is the probability of presence of objects over a set of discrete points in the spatio-temporal space at a number of time steps. The tracking model is defined by the set of discrete points, a virtual source location and a virtual sink location, wherein objects in the tracking model are creatable in the virtual source location and are removable in the virtual sink location.
Type: Grant
Filed: January 11, 2012
Date of Patent: December 24, 2013
Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
Inventors: François Fleuret, Jérôme Berclaz, Engin Türetken, Pascal Fua
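The tracking model above links per-time-step occupancy probabilities into trajectories on a graph with virtual source and sink nodes. A heavily simplified single-object sketch: treat each (time, cell) pair as a graph node with cost -log(occupancy), let the source feed every cell at the first step and the sink absorb every cell at the last, and take the cheapest source-to-sink path as the most probable trajectory. The unrestricted cell-to-cell transitions are an assumption for brevity; the patented multi-object formulation constrains motion and solves a flow problem:

```python
import math

def best_trajectory(occupancy):
    """occupancy[t][i]: probability that grid cell i is occupied at step t.
    Dynamic program over the time-ordered DAG: best[t][i] is the minimum
    cost of reaching cell i at step t from the virtual source."""
    T, N = len(occupancy), len(occupancy[0])
    cost = [[-math.log(max(p, 1e-9)) for p in row] for row in occupancy]
    best, back = [cost[0][:]], []
    for t in range(1, T):
        prev = best[-1]
        j = min(range(N), key=lambda k: prev[k])  # cheapest predecessor
        best.append([prev[j] + cost[t][i] for i in range(N)])
        back.append([j] * N)
    # Walk back from the cheapest final cell (the virtual sink absorbs
    # it at no extra cost).
    i = min(range(N), key=lambda k: best[-1][k])
    path = [i]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With occupancy peaks at cell 1, then cell 0, then cell 1 again, the recovered path follows the peaks, mirroring how the full model follows the occupancy probability distribution through space-time.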
-
Publication number: 20130177200
Abstract: Trajectories of objects are estimated by determining the optimal solution(s) of a tracking model on the basis of an occupancy probability distribution. The occupancy probability distribution is the probability of presence of objects over a set of discrete points in the spatio-temporal space at a number of time steps. The tracking model is defined by the set of discrete points, a virtual source location and a virtual sink location, wherein objects in the tracking model are creatable in the virtual source location and are removable in the virtual sink location.
Type: Application
Filed: January 11, 2012
Publication date: July 11, 2013
Inventors: François Fleuret, Jérôme Berclaz, Engin Türetken, Pascal Fua