Patents by Inventor Ehud Benyamin Rivlin
Ehud Benyamin Rivlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 11961303

Abstract: Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.

Type: Grant
Filed: May 6, 2022
Date of Patent: April 16, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Eli Osherovich, Ehud Benyamin Rivlin, Yacov Hel-Or, Dmitri Veikherman, Dilip Kumar, Gerard Guy Medioni, George Leifman
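The comparison step this abstract describes can be sketched as follows; the cosine-similarity metric, the averaging over vector pairs, and the 0.8 threshold are illustrative assumptions, not details from the patent:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def score_candidates(probe_model, candidate_models):
    # Average the pairwise similarities between the probe agent model's
    # feature vectors and each candidate agent model's feature vectors.
    scores = {}
    for agent_id, model in candidate_models.items():
        sims = [cosine_similarity(p, c) for p in probe_model for c in model]
        scores[agent_id] = sum(sims) / len(sims)
    return scores

def reverify(probe_model, candidate_models, tracked_id, threshold=0.8):
    # The three outcomes named in the abstract: confirm the tracked
    # identity, detect that identifiers were flipped, or conclude that
    # the track was dropped.
    scores = score_candidates(probe_model, candidate_models)
    best_id = max(scores, key=scores.get)
    if scores[best_id] < threshold:
        return "dropped", None
    if best_id != tracked_id:
        return "flipped", best_id
    return "verified", best_id
```

In practice the feature vectors would come from a learned embedding; here they are plain lists so the scoring logic stands alone.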
Patent number: 11918178

Abstract: The present disclosure is directed towards systems and methods that leverage machine-learned models to decrease the rate at which abnormal sites are missed during a gastroenterological procedure. In particular, the systems and methods of the present disclosure can use machine-learning techniques to determine the coverage rate achieved during a gastroenterological procedure. Measuring the coverage rate allows medical professionals to be alerted when coverage is deficient, so that additional coverage can be achieved and, as a result, the detection rate for abnormal sites (e.g., adenoma, polyp, lesion, tumor, etc.) can be increased.

Type: Grant
Filed: February 26, 2021
Date of Patent: March 5, 2024
Assignee: Verily Life Sciences LLC
Inventors: Daniel Freedman, Yacob Yochai Blau, Liran Katzir, Amit Aides, Ilan Moshe Shimshoni, Ehud Benyamin Rivlin, Yossi Matias
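A minimal sketch of the alerting logic this abstract implies, assuming per-segment coverage estimates are already available (in the patent these would come from the machine-learned models); the segment names and the 0.9 threshold are hypothetical:

```python
def coverage_alert(segment_coverage, threshold=0.9):
    # segment_coverage maps an anatomical segment name to the estimated
    # fraction of its surface visualized during the procedure (0.0-1.0).
    # Returns overall coverage plus the segments needing a second pass.
    overall = sum(segment_coverage.values()) / len(segment_coverage)
    deficient = sorted(s for s, c in segment_coverage.items() if c < threshold)
    return overall, deficient
```

A deficient list would trigger the alert so the endoscopist can revisit those segments before withdrawing the scope.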
Publication number: 20220254017

Abstract: The present disclosure provides systems and methods for improving detection and location determination accuracy of abnormalities during a gastroenterological procedure. One example method includes obtaining a video data stream generated by an endoscopic device during a gastroenterological procedure for a patient. The method includes generating a three-dimensional model of at least a portion of an anatomical structure viewed by the endoscopic device, based at least in part on the video data stream. The method includes obtaining location data associated with one or more detected abnormalities based on localization data generated from the video data stream of the endoscopic device. The method includes generating a visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities, and providing that visual presentation for use in diagnosis of the patient.

Type: Application
Filed: May 22, 2020
Publication date: August 11, 2022
Inventors: Ehud Benyamin Rivlin, Yossi Matias
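The step of attaching location data to detected abnormalities can be sketched as below; `pose_for_frame` is a hypothetical stand-in for the localization data the application derives from the video stream, which is not specified here:

```python
def localize_abnormalities(detections, pose_for_frame):
    # detections: (frame_index, label) pairs found in the video stream.
    # pose_for_frame: callable returning an estimated 3D position for a
    # frame index, standing in for the real localization pipeline.
    return [{"label": label, "frame": f, "position": pose_for_frame(f)}
            for f, label in detections]
```

The resulting records could then be rendered onto the three-dimensional model for the visual presentation the abstract describes.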
Patent number: 11393207

Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.

Type: Grant
Filed: August 3, 2020
Date of Patent: July 19, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
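The replication and confidence-pruning steps can be sketched as follows; the `to_world`/`from_world` mappings are hypothetical stand-ins for the real camera calibration, and the 0.5 threshold is an assumption:

```python
def replicate_annotation(point, source_cam, cameras):
    # Map a 2D annotation from the source camera into every other
    # camera's simultaneously captured frame via a shared world
    # coordinate system established by calibration.
    world = cameras[source_cam]["to_world"](point)
    return {cam: spec["from_world"](world)
            for cam, spec in cameras.items() if cam != source_cam}

def prune_low_confidence(annotations, min_confidence=0.5):
    # Keep only annotations whose tracker confidence meets the
    # threshold; per the abstract, the operator could instead be
    # prompted to update them before deletion.
    return {k: v for k, v in annotations.items()
            if v["confidence"] >= min_confidence}
```

Replicating from one annotated frame to every synchronized view is what lets a single operator action label all of the video files at once.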
Patent number: 11354885

Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.

Type: Grant
Filed: June 29, 2020
Date of Patent: June 7, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
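One simple way to realize the mask-combination step the abstract describes is to treat each device's mask as a vote and take, per environment cell, the fraction of observing devices that flag it; this voting scheme is an illustrative assumption, not the patent's specified combination rule:

```python
def unified_illumination_map(masks):
    # masks: one dict per imaging device, mapping an environment cell
    # to 1 (flagged as high illumination) or 0. The combined value for
    # each cell is the fraction of devices observing that cell whose
    # masks flag it, yielding a per-cell probability.
    votes, counts = {}, {}
    for mask in masks:
        for cell, flag in mask.items():
            votes[cell] = votes.get(cell, 0) + flag
            counts[cell] = counts.get(cell, 0) + 1
    return {cell: votes[cell] / counts[cell] for cell in votes}
```

Cells seen by only one device fall back to that device's binary decision, while overlapping coverage smooths out per-camera errors.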
Patent number: 11328513

Abstract: Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.

Type: Grant
Filed: November 7, 2017
Date of Patent: May 10, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Eli Osherovich, Ehud Benyamin Rivlin, Yacov Hel-Or, Dmitri Veikherman, Dilip Kumar, Gerard Guy Medioni, George Leifman
Patent number: 11315262

Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.

Type: Grant
Filed: June 23, 2020
Date of Patent: April 26, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
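The Kalman-filter prediction the abstract mentions, reduced for illustration to a scalar constant-velocity filter applied per coordinate (a real tracker would use the full matrix form over a 2D or 3D state); the process-noise and measurement-noise values are assumptions:

```python
def kalman_predict(x, v, p, dt=1.0, q=0.01):
    # Predict: advance the position by the velocity and inflate the
    # position variance p by the process noise q.
    return x + v * dt, v, p + q

def kalman_update(x, v, p, z, dt=1.0, r=0.1):
    # Update: blend the predicted position with the measured position z
    # using the Kalman gain k (prediction variance vs measurement
    # variance), and nudge the velocity toward the observed motion.
    k = p / (p + r)
    innovation = z - x
    return x + k * innovation, v + k * innovation / dt, (1 - k) * p
```

The predicted location seeds the search window for the object in the next captured frame, which is how the filter "enhances the prediction" the abstract refers to.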
Publication number: 20210280312

Abstract: The present disclosure is directed towards systems and methods that leverage machine-learned models to decrease the rate at which abnormal sites are missed during a gastroenterological procedure. In particular, the systems and methods of the present disclosure can use machine-learning techniques to determine the coverage rate achieved during a gastroenterological procedure. Measuring the coverage rate allows medical professionals to be alerted when coverage is deficient, so that additional coverage can be achieved and, as a result, the detection rate for abnormal sites (e.g., adenoma, polyp, lesion, tumor, etc.) can be increased.

Type: Application
Filed: February 26, 2021
Publication date: September 9, 2021
Inventors: Daniel Freedman, Yacob Yochai Blau, Liran Katzir, Amit Aides, Ilan Moshe Shimshoni, Ehud Benyamin Rivlin, Yossi Matias
Patent number: 10733450

Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.

Type: Grant
Filed: March 4, 2019
Date of Patent: August 4, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
Patent number: 10699152

Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.

Type: Grant
Filed: December 13, 2017
Date of Patent: June 30, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
Patent number: 10699421

Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.

Type: Grant
Filed: March 29, 2017
Date of Patent: June 30, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
Patent number: 10664962

Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.

Type: Grant
Filed: December 13, 2017
Date of Patent: May 26, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
Patent number: 10482625

Abstract: Described is a multiple-imaging device system and process for calibrating each of the imaging devices to a global color space so that image pixel values representative of an imaged object are the same or similar regardless of the imaging device that produced the image data or the lighting conditions surrounding the object.

Type: Grant
Filed: March 28, 2017
Date of Patent: November 19, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Amit Adam, Ehud Benyamin Rivlin, Gerard Guy Medioni
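One simple calibration model consistent with this abstract is a per-channel gain and offset fitted by least squares against shared reference values (e.g., a color chart seen by every camera); the patent does not specify the form of the correction, so this linear model is an assumption:

```python
def fit_channel_transform(device_vals, reference_vals):
    # Fit gain and offset mapping one device's channel responses onto
    # the shared reference values by ordinary least squares.
    n = len(device_vals)
    mx = sum(device_vals) / n
    my = sum(reference_vals) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(device_vals, reference_vals))
    var = sum((x - mx) ** 2 for x in device_vals)
    gain = cov / var
    return gain, my - gain * mx

def to_global(value, gain, offset):
    # Map a raw channel value into the global color space,
    # clamped to the 8-bit range.
    return min(255.0, max(0.0, gain * value + offset))
```

Fitting one transform per device and channel makes pixel values comparable across cameras and lighting conditions, which is the stated goal of the calibration.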
Patent number: 10223591

Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.

Type: Grant
Filed: March 30, 2017
Date of Patent: March 5, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar