Patents by Inventor Ofer Meidan

Ofer Meidan has filed for patents to protect the following inventions. Listings of this kind can include pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO); every entry shown here is a granted patent. The entries fall into two families with shared abstracts, and brief illustrative sketches of the two techniques they describe appear after the listing.

  • Patent number: 12367673
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: July 22, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 12073571
    Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: August 27, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 11393207
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: July 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 11315262
    Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: April 26, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 10733450
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: August 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 10699421
    Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: June 30, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
  • Patent number: 10223591
    Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: March 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
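
The first family above (patent numbers 12367673, 11393207, 10733450, and 10223591) describes replicating one operator annotation across image frames captured at the same time by other calibrated cameras, and deleting or flagging tracked annotations whose confidence falls below a threshold. The Python sketch below is only an illustration of those two ideas under simplifying assumptions: the annotated point is taken to already be available as a 3D world point (e.g., from a depth sensor or triangulation), and the camera names, 3x4 projection matrices, confidence values, and the 0.5 threshold are all hypothetical rather than taken from the patents.

```python
"""Minimal sketch (not the patented implementation): replicate a single-frame
annotation across synchronized, calibrated cameras and prune tracked
annotations whose confidence drops below a threshold."""
import numpy as np

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cut-off; below this, flag or delete


def project(P, point_3d):
    """Project a 3D world point into a camera with 3x4 projection matrix P."""
    homog = P @ np.append(point_3d, 1.0)
    return homog[:2] / homog[2]  # pixel coordinates (u, v)


def replicate_annotation(point_3d, cameras):
    """Replicate one annotation (given as a 3D point recovered from the
    annotated frame) into every other camera's frame captured at the same time."""
    return {name: project(P, point_3d) for name, P in cameras.items()}


def prune_low_confidence(tracked, threshold=CONFIDENCE_THRESHOLD):
    """Split tracked annotations into those kept and those flagged for the
    operator to update or delete, based on the confidence threshold."""
    kept = {k: v for k, v in tracked.items() if v["confidence"] >= threshold}
    flagged = {k: v for k, v in tracked.items() if v["confidence"] < threshold}
    return kept, flagged


if __name__ == "__main__":
    # Two toy cameras with hypothetical calibration (identity-like projections).
    cameras = {
        "cam_a": np.hstack([np.eye(3), np.zeros((3, 1))]),
        "cam_b": np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])]),
    }
    annotation_3d = np.array([0.2, 0.1, 2.0])  # hypothetical world point
    print(replicate_annotation(annotation_3d, cameras))

    tracked = {"frame_001": {"confidence": 0.9}, "frame_002": {"confidence": 0.3}}
    print(prune_low_confidence(tracked))
```

The sketch only shows why calibrated projection matrices make a single annotation sufficient for all synchronized views; in practice the 3D point and the tracker's confidence would come from the system described in the abstracts.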
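
The second family (patent numbers 12073571, 11315262, and 10699421) mentions a Kalman filter or other motion modeling technique for predicting an object's location in subsequently captured frames. Below is a minimal constant-velocity Kalman filter sketch of that predict/update loop; the state layout, noise parameters, and toy detections are assumptions for illustration, not the patented method.

```python
"""Minimal sketch (not the patented implementation): a constant-velocity
Kalman filter predicting where a tracked object will appear in the next frame."""
import numpy as np


class ConstantVelocityKalman:
    """2D position + velocity filter: state = [x, y, vx, vy]."""

    def __init__(self, dt=1.0, process_noise=1e-2, measurement_noise=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is observed
        self.Q = process_noise * np.eye(4)               # process noise covariance
        self.R = measurement_noise * np.eye(2)           # measurement noise covariance
        self.x = np.zeros(4)                             # state estimate
        self.P = np.eye(4)                               # state covariance

    def predict(self):
        """Predict the object's position in the next frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, measurement):
        """Correct the prediction with the detector's measured position."""
        y = np.asarray(measurement, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


if __name__ == "__main__":
    kf = ConstantVelocityKalman()
    # Hypothetical per-frame detections of an object drifting right and down.
    for detection in [(10.0, 5.0), (11.0, 5.5), (12.1, 6.1), (13.0, 6.4)]:
        predicted = kf.predict()
        kf.update(detection)
        print(f"predicted {predicted.round(2)}, observed {detection}")
```

In the setting the abstracts describe, the measurements would come from detections correlated across visual and depth cameras, and the filter's predictions would help the tracker follow each object through subsequently captured frames.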