Patents by Inventor Ehud Benyamin Rivlin

Ehud Benyamin Rivlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961303
    Abstract: Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: April 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Eli Osherovich, Ehud Benyamin Rivlin, Yacov Hel-Or, Dmitri Veikherman, Dilip Kumar, Gerard Guy Medioni, George Leifman
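The abstract above describes comparing probe feature vectors against candidate agent models and acting on the similarity scores. A minimal sketch of that comparison, assuming cosine similarity and a fixed score threshold (both choices are illustrative, not taken from the patent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def reverify(probe_model, candidate_models, threshold=0.8):
    """Compare each probe feature vector against every candidate agent
    model and return the best-matching agent id, or None when no
    candidate clears the threshold (suggesting the track was dropped
    or the identifiers were flipped)."""
    best_id, best_score = None, threshold
    for agent_id, model in candidate_models.items():
        # Average pairwise similarity between probe and model vectors.
        scores = [cosine_similarity(p, m) for p in probe_model for m in model]
        score = sum(scores) / len(scores)
        if score > best_score:
            best_id, best_score = agent_id, score
    return best_id, best_score
```

A real system would use learned embeddings and many vectors per model; the averaging and thresholding here only illustrate the decision structure the abstract outlines.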
  • Patent number: 11918178
    Abstract: The present disclosure is directed towards systems and methods that leverage machine-learned models to decrease the rate at which abnormal sites are missed during a gastroenterological procedure. In particular, the systems and methods of the present disclosure can use machine-learning techniques to determine the coverage rate achieved during a gastroenterological procedure. Measuring the coverage rate allows medical professionals to be alerted when coverage is deficient, so that additional coverage can be achieved and, as a result, the detection rate for abnormal sites (e.g., adenoma, polyp, lesion, tumor, etc.) during the procedure can be increased.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: March 5, 2024
    Assignee: Verily Life Sciences LLC
    Inventors: Daniel Freedman, Yacob Yochai Blau, Liran Katzir, Amit Aides, Ilan Moshe Shimshoni, Ehud Benyamin Rivlin, Yossi Matias
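The alerting logic this abstract describes can be sketched simply, under the assumption that the anatomy is discretized into segments and a machine-learned model (not reproduced here) has marked each segment as viewed or not:

```python
def coverage_rate(segment_seen):
    """Fraction of anatomical segments the model marked as viewed."""
    return sum(segment_seen.values()) / len(segment_seen)

def needs_reinspection(segment_seen, min_coverage=0.95):
    """Alert when estimated coverage is deficient, listing the missed
    segments so additional coverage can be achieved."""
    missed = [s for s, seen in segment_seen.items() if not seen]
    return coverage_rate(segment_seen) < min_coverage, missed
```

The segment names and the 95% threshold are hypothetical; the patent's contribution is the model that estimates coverage, which this sketch takes as given.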
  • Publication number: 20220254017
    Abstract: The present disclosure provides systems and methods for improving detection and location determination accuracy of abnormalities during a gastroenterological procedure. One example method includes obtaining a video data stream generated by an endoscopic device during a gastroenterological procedure for a patient. The method includes generating a three-dimensional model of at least a portion of an anatomical structure viewed by the endoscopic device based at least in part on the video data stream. The method includes obtaining location data associated with one or more detected abnormalities based on localization data generated from the video data stream of the endoscopic device. The method includes generating a visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities; and providing the visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities for use in diagnosis of the patient.
    Type: Application
    Filed: May 22, 2020
    Publication date: August 11, 2022
    Inventors: Ehud Benyamin Rivlin, Yossi Matias
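One clinically meaningful way to report a detection's location, sketched under the assumption that per-frame camera positions have already been recovered from the video stream (the reconstruction itself is outside this sketch), is the distance travelled along the endoscope trajectory up to the detection frame:

```python
import math

def arc_length_to_frame(camera_positions, frame_index):
    """Distance travelled along the endoscope trajectory up to the frame
    where an abnormality was detected. `camera_positions` is a list of
    (x, y, z) camera centres, assumed here to come from a monocular
    reconstruction of the video (a hypothetical input)."""
    dist = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(camera_positions,
                                          camera_positions[1:frame_index + 1]):
        dist += math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
    return dist
```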
  • Patent number: 11393207
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may add an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: July 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
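The replication step described above depends on camera calibration. A toy sketch, assuming (purely for illustration) that the annotated point lies on a plane shared by both views so a 3x3 homography maps it between synchronized frames, together with the confidence-based pruning:

```python
def replicate_annotation(point, homography):
    """Map an annotated pixel from one camera's frame into a second
    camera's simultaneous frame using a 3x3 homography from calibration
    (assumes the point lies on a plane common to both views)."""
    x, y = point
    h = homography
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

def prune_annotation(confidence, threshold=0.5):
    """Flag a tracked annotation for operator review once its tracker
    confidence falls below the threshold."""
    return "keep" if confidence >= threshold else "review"
```

In the general (non-planar) case the transfer would go through a full 3-D reconstruction rather than a homography; the threshold value is likewise a placeholder.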
  • Patent number: 11354885
    Abstract: Described is a method for processing image data to determine whether a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce, for each imaging device, a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
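The combination step the abstract describes can be sketched as a vote across cameras, assuming each per-camera mask has already been projected onto a common grid over the environment (the projection is outside this sketch):

```python
def unified_illumination_map(masks):
    """Combine per-camera illumination masks (2-D grids of 0/1 over the
    same area of the environment) into a per-cell probability of high
    illumination by averaging the cameras' votes."""
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[r][c] for m in masks) / len(masks) for c in range(cols)]
            for r in range(rows)]
```

A production system might weight cameras by viewing angle or confidence rather than averaging uniformly; uniform voting is the simplest reading of "combined to produce a unified illumination map."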
  • Patent number: 11328513
    Abstract: Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: May 10, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Eli Osherovich, Ehud Benyamin Rivlin, Yacov Hel-Or, Dmitri Veikherman, Dilip Kumar, Gerard Guy Medioni, George Leifman
  • Patent number: 11315262
    Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: April 26, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
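The abstract above mentions a Kalman filter for predicting an object's location in subsequent frames. A minimal one-dimensional sketch with scalar variances (a real tracker would run a constant-velocity filter per coordinate; the noise values here are placeholders):

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    """Minimal 1-D Kalman filter smoothing a track of positions.
    q is the process-noise variance, r the measurement-noise variance;
    returns the filtered position after each measurement."""
    x, p = measurements[0], 1.0   # initial state estimate and variance
    out = []
    for z in measurements:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update: move toward the measurement
        p *= (1 - k)              # uncertainty shrinks after the update
        out.append(x)
    return out
```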
  • Publication number: 20210280312
    Abstract: The present disclosure is directed towards systems and methods that leverage machine-learned models to decrease the rate at which abnormal sites are missed during a gastroenterological procedure. In particular, the systems and methods of the present disclosure can use machine-learning techniques to determine the coverage rate achieved during a gastroenterological procedure. Measuring the coverage rate allows medical professionals to be alerted when coverage is deficient, so that additional coverage can be achieved and, as a result, the detection rate for abnormal sites (e.g., adenoma, polyp, lesion, tumor, etc.) during the procedure can be increased.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 9, 2021
    Inventors: Daniel Freedman, Yacob Yochai Blau, Liran Katzir, Amit Aides, Ilan Moshe Shimshoni, Ehud Benyamin Rivlin, Yossi Matias
  • Patent number: 10733450
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may add an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: August 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 10699152
    Abstract: Described is a method for processing image data to determine whether a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce, for each imaging device, a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: June 30, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
  • Patent number: 10699421
    Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: June 30, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 10664962
    Abstract: Described is a method for processing image data to determine whether a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce, for each imaging device, a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: May 26, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Amit Adam, Igor Kviatkovsky, Ehud Benyamin Rivlin, Gerard Guy Medioni
  • Patent number: 10482625
    Abstract: Described is a multiple-imaging device system and process for calibrating each of the imaging devices to a global color space so that image pixel values representative of an imaged object are the same or similar regardless of the imaging device that produced the image data or the lighting conditions surrounding the object.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: November 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Amit Adam, Ehud Benyamin Rivlin, Gerard Guy Medioni
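Calibrating each imaging device to a global color space, as the abstract above describes, amounts to fitting a per-device color mapping. A per-channel linear gain fitted by least squares against a shared reference target is the simplest version of such a mapping (the patent may well use a richer model; this is only an illustration):

```python
def fit_channel_gain(device_values, reference_values):
    """Least-squares gain mapping one camera's channel values onto the
    global color space, estimated from pixels of a reference target
    observed by that camera."""
    num = sum(d * g for d, g in zip(device_values, reference_values))
    den = sum(d * d for d in device_values)
    return num / den

def calibrate_pixel(pixel, gains):
    """Apply per-channel gains so the same object yields similar RGB
    values regardless of which camera imaged it."""
    return tuple(v * g for v, g in zip(pixel, gains))
```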
  • Patent number: 10223591
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may add an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: March 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar