Patents by Inventor Roman Goldenberg
Roman Goldenberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12367673
Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
Type: Grant
Filed: July 18, 2022
Date of Patent: July 22, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
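Replicating an annotation between synchronized, calibrated cameras amounts to back-projecting the annotated pixel to a 3D point and reprojecting it into the other view. A minimal sketch of that geometry, not the patented implementation: the intrinsics `K`, pose `(R, t)`, and the single-depth assumption are all illustrative.

```python
import numpy as np

# Sketch: back-project an annotated pixel from camera A to 3D using its
# depth and intrinsics, then project the 3D point into camera B.
# All matrix values below are illustrative assumptions.

def replicate(pixel, depth, K_a, K_b, R, t):
    """pixel: (u, v) annotation in camera A; depth: distance along A's
    optical axis; K_a, K_b: camera intrinsics; (R, t): pose of B
    relative to A. Returns the corresponding pixel in camera B."""
    u, v = pixel
    p_a = depth * np.linalg.inv(K_a) @ np.array([u, v, 1.0])  # 3D point in A
    p_b = R @ p_a + t                                         # 3D point in B
    uvw = K_b @ p_b                                           # project into B
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])  # B offset 0.1 m along x
print(replicate((320, 240), 2.0, K, K, R, t))  # annotation shifts 25 px: [345. 240.]
```

With a real rig, `K_a`, `K_b`, `R`, and `t` would come from calibration, and the depth from a depth camera or triangulation.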
-
Patent number: 12230052
Abstract: Images of a hand are obtained by a camera. A pose of the hand relative to the camera may vary due to rotation, translation, articulation of joints in the hand, and so forth. Avatars comprising texture maps from images of actual hands and three-dimensional models that describe the shape of those hands are manipulated into different poses and articulations to produce synthetic images. Given that the mapping of points on an avatar to the synthetic image is known, highly accurate annotation data is produced that relates particular points on the avatar to the synthetic image. An artificial neural network (ANN) is trained using the synthetic images and corresponding annotation data. The trained ANN processes a first image of a hand to produce a second image of the hand that appears to be in a standardized or canonical pose. The second image may then be processed to identify the user.
Type: Grant
Filed: December 12, 2019
Date of Patent: February 18, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Igor Kviatkovsky, Nadav Israel Bhonker, Yevgeni Nogin, Roman Goldenberg, Manoj Aggarwal, Gerard Guy Medioni
-
Patent number: 12213654
Abstract: Embodiments of a system, a machine-accessible storage medium, and a computer-implemented method are described in which operations are performed. The operations comprise receiving a plurality of image frames associated with a video of an endoscopy procedure, generating a probability estimate for one or more image frames included in the plurality of image frames, and identifying a transition in the video when the endoscopy procedure transitions from a first phase to a second phase based, at least in part, on the probability estimate for the one or more image frames. The probability estimate includes a first probability that the one or more image frames are associated with a first phase of the endoscopy procedure.
Type: Grant
Filed: December 28, 2021
Date of Patent: February 4, 2025
Assignee: Verily Life Sciences LLC
Inventors: Daniel Freedman, Ehud Rivlin, Valentin Dashinsky, Roman Goldenberg, Liran Katzir, Dmitri Veikherman
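Given per-frame probabilities of being in the first phase, one simple way to localize the transition is to smooth the estimates and find where they drop below a threshold. This is a minimal sketch of that idea only; the moving-average window and the 0.5 threshold are illustrative assumptions, not the method claimed in the patent.

```python
# Sketch: locate a phase transition in a procedure video from per-frame
# probability estimates of being in phase 1.

def find_transition(phase1_probs, window=5, threshold=0.5):
    """Return the index of the first frame whose smoothed phase-1
    probability falls below the threshold, or None if no transition."""
    n = len(phase1_probs)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        smoothed = sum(phase1_probs[lo:hi]) / (hi - lo)  # moving average
        if smoothed < threshold:
            return i
    return None

probs = [0.95, 0.93, 0.90, 0.88, 0.40, 0.20, 0.10, 0.05]
print(find_transition(probs))  # 4
```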
-
Patent number: 12073571
Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
Type: Grant
Filed: April 22, 2022
Date of Patent: August 27, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
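The Kalman filter mentioned in the abstract predicts an object's next location from a motion model and then corrects the prediction with each new measurement. A minimal one-dimensional constant-velocity sketch follows; the state model and noise values are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

# Sketch: one predict/update cycle of a constant-velocity Kalman filter
# tracking an object's position. Noise parameters are illustrative.

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
    """x = [position, velocity], P = state covariance, z = measured
    position. Returns the updated (x, P)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # only position is observed
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise
    # Predict the next state from the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct the prediction with the measurement.
    y = np.array([z]) - H @ x              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0]:             # object moving ~1 unit per frame
    x, P = kalman_step(x, P, z)
print(x[0])  # estimated position approaches the last measurement
```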
-
Publication number: 20240257497
Abstract: Methods, systems, and devices for classifying a target feature in a medical video are presented herein. Some methods may include the steps of: receiving a plurality of frames of the medical video, where the plurality of frames includes the target feature; generating, by a first pretrained machine learning model, an embedding vector for each frame of the plurality of frames, each embedding vector having a predetermined number of values; and generating, by a second pretrained machine learning model, a classification of the target feature using the plurality of embedding vectors, where the second pretrained machine learning model analyzes the plurality of embedding vectors jointly.
Type: Application
Filed: January 26, 2024
Publication date: August 1, 2024
Inventors: Roman Goldenberg, Ehud Rivlin, Amir Livne, Israel Or Weinstein
-
Patent number: 11957302
Abstract: A user-interface for visualizing a colonoscopy procedure includes a video region and a navigational map upon which coverage annotations are displayed. A live video feed received from a colonoscope is displayed in the video region. The navigational map depicts longitudinal sections of a colon. The coverage annotations are presented on the navigational map and indicate whether one or more of the longitudinal sections is deemed adequately inspected or inadequately inspected during the colonoscopy procedure.
Type: Grant
Filed: October 12, 2021
Date of Patent: April 16, 2024
Assignee: Verily Life Sciences LLC
Inventors: Erik Lack, Roman Goldenberg, Daniel Freedman, Ehud Rivlin
-
Patent number: 11832787
Abstract: A user-interface for aiding navigation of an endoscope through a lumen of a tubular anatomical structure during an endoscopy procedure includes a video region in which a live video feed received from the endoscope is displayed and an observation location map. The observation location map depicts a point of observation from which the live video feed is acquired within the lumen relative to a cross-sectional depiction of the lumen as the endoscope longitudinally traverses the tubular anatomical structure within the lumen during the endoscopy procedure.
Type: Grant
Filed: December 27, 2021
Date of Patent: December 5, 2023
Assignee: Verily Life Sciences LLC
Inventors: Daniel Freedman, Yacob Yochai Blau, Dmitri Veikherman, Roman Goldenberg, Ehud Rivlin
-
Patent number: 11636286
Abstract: Described are systems and methods for training machine learning models of an ensemble of models that are de-correlated. For example, two or more machine learning models may be concurrently trained (e.g., co-trained) while adding a decorrelation component to one or both models that decreases the pairwise correlation between the outputs of the models. Unlike traditional approaches, in accordance with the disclosed implementations, only the negative results need to be decorrelated.
Type: Grant
Filed: May 1, 2020
Date of Patent: April 25, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Miriam Farber, George Leifman, Gerard Guy Medioni
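A decorrelation component of the kind the abstract describes can be expressed as a penalty on the correlation between two models' outputs, computed only on negative samples. The sketch below uses a squared Pearson correlation as that penalty; the specific form, and how it would be weighted into a training loss, are illustrative assumptions rather than the patented formulation.

```python
import numpy as np

# Sketch: a decorrelation penalty restricted to negative samples
# (label == 0), to be added to the loss when co-training two models.

def decorrelation_penalty(out_a, out_b, labels):
    """Squared Pearson correlation of two models' scores over
    negative samples only. Near 0 when decorrelated, near 1 when
    the models agree (or anti-agree) on the negatives."""
    neg = labels == 0
    a = out_a[neg] - out_a[neg].mean()
    b = out_b[neg] - out_b[neg].mean()
    corr = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-8)
    return corr ** 2

labels = np.array([0, 0, 0, 0, 1, 1])
out_a = np.array([0.1, 0.4, 0.2, 0.3, 0.9, 0.8])
out_b = np.array([0.1, 0.4, 0.2, 0.3, 0.7, 0.9])  # identical on negatives
print(decorrelation_penalty(out_a, out_b, labels))  # ≈ 1.0, fully correlated
```

During co-training, this penalty would be added (with some weight) to the models' classification losses so that gradient descent pushes their negative-sample outputs apart.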
-
Patent number: 11526693
Abstract: Disclosed are systems and methods for training an ensemble of machine learning models with a focus on feature engineering. For example, the training of the models encourages each machine learning model of the ensemble to rely on a different set of input features from the training data samples used to train the machine learning models of the ensemble. However, instead of telling each model explicitly which features to learn, in accordance with the disclosed implementations, the models of the ensemble may be trained sequentially, with each new model trained to disregard input features learned by previously trained models of the ensemble and to learn based on other features included in the training data samples.
Type: Grant
Filed: May 1, 2020
Date of Patent: December 13, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Miriam Farber, George Leifman, Gerard Guy Medioni
-
Publication number: 20220369899
Abstract: A user-interface for visualizing a colonoscopy procedure includes a video region and a navigational map upon which coverage annotations are displayed. A live video feed received from a colonoscope is displayed in the video region. The navigational map depicts longitudinal sections of a colon. The coverage annotations are presented on the navigational map and indicate whether one or more of the longitudinal sections is deemed adequately inspected or inadequately inspected during the colonoscopy procedure.
Type: Application
Filed: October 12, 2021
Publication date: November 24, 2022
Inventors: Erik Lack, Roman Goldenberg, Daniel Freedman, Ehud Rivlin
-
Publication number: 20220369895
Abstract: A user-interface for aiding navigation of an endoscope through a lumen of a tubular anatomical structure during an endoscopy procedure includes a video region in which a live video feed received from the endoscope is displayed and an observation location map. The observation location map depicts a point of observation from which the live video feed is acquired within the lumen relative to a cross-sectional depiction of the lumen as the endoscope longitudinally traverses the tubular anatomical structure within the lumen during the endoscopy procedure.
Type: Application
Filed: December 27, 2021
Publication date: November 24, 2022
Inventors: Daniel Freedman, Yacob Yochai Blau, Dmitri Veikherman, Roman Goldenberg, Ehud Rivlin
-
Publication number: 20220369920
Abstract: Embodiments of a system, a machine-accessible storage medium, and a computer-implemented method are described in which operations are performed. The operations comprise receiving a plurality of image frames associated with a video of an endoscopy procedure, generating a probability estimate for one or more image frames included in the plurality of image frames, and identifying a transition in the video when the endoscopy procedure transitions from a first phase to a second phase based, at least in part, on the probability estimate for the one or more image frames. The probability estimate includes a first probability that the one or more image frames are associated with a first phase of the endoscopy procedure.
Type: Application
Filed: December 28, 2021
Publication date: November 24, 2022
Inventors: Daniel Freedman, Ehud Rivlin, Valentin Dashinsky, Roman Goldenberg, Liran Katzir, Dmitri Veikherman
-
Patent number: 11393207
Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
Type: Grant
Filed: August 3, 2020
Date of Patent: July 19, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
-
Patent number: 11315262
Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
Type: Grant
Filed: June 23, 2020
Date of Patent: April 26, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
-
Patent number: 10733450
Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
Type: Grant
Filed: March 4, 2019
Date of Patent: August 4, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
-
Patent number: 10699421
Abstract: The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames.
Type: Grant
Filed: March 29, 2017
Date of Patent: June 30, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Boris Cherevatsky, Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
-
Patent number: 10534965
Abstract: Techniques for analyzing stored video upon request are described. For example, a method is described that includes: receiving a first application programming interface (API) request to analyze a stored video, the API request including a location of the stored video and at least one analysis action to perform on the stored video; accessing the location of the stored video to retrieve the stored video; segmenting the accessed video into chunks; processing each chunk with a chunk processor to perform the at least one analysis action, each chunk processor utilizing at least one machine learning model in performing the at least one analysis action; joining the results of the processing of each chunk to generate a final result; storing the final result; and providing the final result to a requestor in response to a second API request.
Type: Grant
Filed: March 20, 2018
Date of Patent: January 14, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Nitin Singhal, Vivek Bhadauria, Ranju Das, Gaurav D. Ghare, Roman Goldenberg, Stephen Gould, Kuang Han, Jonathan Andrew Hedley, Gowtham Jeyabalan, Vasant Manohar, Andrea Olgiati, Stefano Stefani, Joseph Patrick Tighe, Praveen Kumar Udayakumar, Renjun Zheng
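The segment-process-join flow the abstract describes can be sketched in a few lines. This is only an illustration of the dataflow: the fixed chunk size, the dictionary frame format, and the label-set stand-in for an ML chunk processor are all assumptions, not the patented system.

```python
# Sketch: split a video into chunks, analyze each chunk independently,
# then join the per-chunk results into one final result.

def segment(frames, chunk_size):
    """Split the frame list into consecutive fixed-size chunks."""
    return [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]

def process_chunk(chunk):
    # Stand-in for a chunk processor backed by a machine learning model;
    # here it just collects labels already attached to the toy frames.
    return {label for frame in chunk for label in frame["labels"]}

def analyze(frames, chunk_size=2):
    results = [process_chunk(c) for c in segment(frames, chunk_size)]
    final = set().union(*results)  # join the per-chunk results
    return sorted(final)

video = [{"labels": ["person"]}, {"labels": ["dog"]},
         {"labels": ["person", "car"]}, {"labels": []}]
print(analyze(video))  # ['car', 'dog', 'person']
```

In a production service the chunks would be processed in parallel, which is the point of segmenting: the join step is what makes the independent results look like a single analysis of the whole video.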
-
Publication number: 20190156124
Abstract: Techniques for analyzing stored video upon request are described. For example, a method is described that includes: receiving a first application programming interface (API) request to analyze a stored video, the API request including a location of the stored video and at least one analysis action to perform on the stored video; accessing the location of the stored video to retrieve the stored video; segmenting the accessed video into chunks; processing each chunk with a chunk processor to perform the at least one analysis action, each chunk processor utilizing at least one machine learning model in performing the at least one analysis action; joining the results of the processing of each chunk to generate a final result; storing the final result; and providing the final result to a requestor in response to a second API request.
Type: Application
Filed: March 20, 2018
Publication date: May 23, 2019
Inventors: Nitin Singhal, Vivek Bhadauria, Ranju Das, Gaurav D. Ghare, Roman Goldenberg, Stephen Gould, Kuang Han, Jonathan Andrew Hedley, Gowtham Jeyabalan, Vasant Manohar, Andrea Olgiati, Stefano Stefani, Joseph Patrick Tighe, Praveen Kumar Udayakumar, Renjun Zhang
-
Patent number: 10223591
Abstract: Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
Type: Grant
Filed: March 30, 2017
Date of Patent: March 5, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Roman Goldenberg, Gerard Guy Medioni, Ofer Meidan, Ehud Benyamin Rivlin, Dilip Kumar
-
Patent number: 8103074
Abstract: A method of defining a heart region from imaging data is provided. Received imaging data is projected into a first plane. A first threshold is applied to the first plane of data to eliminate data associated with air. A largest first connected component is identified from the first threshold applied data. A first center of mass of the identified largest first connected component is calculated to define a first coordinate and a second coordinate of the heart region. The received imaging data is projected into a second plane, wherein the second plane is perpendicular to the first plane. A second threshold is applied to the second plane of data to eliminate data associated with air. A largest second connected component is identified from the second threshold applied data. A second center of mass of the identified largest second connected component is calculated to define a third coordinate of the heart region.
Type: Grant
Filed: June 22, 2011
Date of Patent: January 24, 2012
Assignee: Rcadia Medical Imaging Ltd.
Inventors: Grigory Begelman, Roman Goldenberg, Shai Levanon, Shay Ohayon, Eugene Walach
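The per-plane steps in the abstract, project, threshold out air, keep the largest connected component, take its center of mass, map directly onto standard image-processing primitives. A minimal sketch for one projection plane follows; the toy volume and the air-threshold value are illustrative assumptions, not the clinical parameters.

```python
import numpy as np
from scipy import ndimage

# Sketch: project a volume onto a plane, threshold away air, keep the
# largest connected component, and return its center of mass, which gives
# two coordinates of the region of interest.

def region_center(volume, axis, air_threshold=0.0):
    plane = volume.sum(axis=axis)            # project onto a plane
    mask = plane > air_threshold             # eliminate data associated with air
    labeled, n = ndimage.label(mask)         # connected components
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1      # label of the largest component
    return ndimage.center_of_mass(mask, labeled, largest)

# Toy volume: a dense blob standing in for the heart plus a small distractor.
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1.0   # main blob
vol[0, 7, 7] = 1.0         # tiny distractor, ignored as a small component
print(region_center(vol, axis=0))  # (3.5, 3.5)
```

Repeating the same routine on a perpendicular projection would supply the third coordinate, as the abstract describes.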