Patents by Inventor Moshe Bouhnik

Moshe Bouhnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127468
    Abstract: Various of the disclosed embodiments contemplate systems and methods for assessing structural complexity within an intra-surgical environment. For example, in some embodiments, surface characteristics from three-dimensional models of a patient interior, such as a colon, bronchial tube, esophagus, etc. may be used to infer the surface's level of complexity. Once determined, complexity may inform a number of downstream operations, such as assisting surgical operators to identify complex regions requiring more thorough review, the automated recognition of healthy or unhealthy tissue states, etc. While some embodiments apply to generally cylindrical internal structures, such as a colon or branching pulmonary pathways, etc., other embodiments may be used within other structures, such as inflated laparoscopic regions between organs, joints, etc. Various embodiments also consider graphical and feedback indicia for representing the complexity assessments.
    Type: Application
    Filed: October 9, 2023
    Publication date: April 18, 2024
    Inventors: Erez Posner, Moshe Bouhnik, Daniel Dobkin, Netanel Frank, Liron Leist, Emmanuelle Muhlethaler, Roee Shibolet, Aniruddha Tamhane, Adi Zholkover
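The surface-complexity idea in the abstract above can be illustrated with a minimal per-vertex metric: how strongly a vertex normal disagrees with its neighbours' normals, so a locally flat patch scores zero and a wrinkled region scores higher. This is a hedged stand-in, not the patented method; the function name, the unit-normal input, and the neighbour-list mesh representation are all assumptions for illustration.

```python
def _dot(a, b):
    """Dot product of two 3-D vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def surface_complexity(normals, neighbors):
    """Per-vertex complexity score: 1 minus the mean cosine similarity
    between a vertex's unit normal and its neighbours' unit normals.
    A locally flat patch (parallel normals) scores 0.0."""
    scores = []
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            scores.append(0.0)
            continue
        sims = [_dot(normals[i], normals[j]) for j in nbrs]
        scores.append(1.0 - sum(sims) / len(sims))
    return scores
```

On a flat patch every normal is identical and every score is 0.0; a vertex whose neighbours' normals are orthogonal to its own scores close to 1.0, which is the kind of region the abstract suggests flagging for more thorough review.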
  • Publication number: 20240106998
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing miscalibration detection. One of the methods includes receiving sensor data from each of multiple sensors of a device in a system configured to provide augmented reality or mixed reality output to a user. Feature values are determined based on the sensor data for a predetermined set of features. The determined feature values are processed using a miscalibration detection model that has been trained, based on examples of captured sensor data from one or more devices, to predict whether a miscalibration condition of one or more of the multiple sensors has occurred. Based on the output of the miscalibration detection model, the system determines whether to initiate recalibration of extrinsic parameters for at least one of the multiple sensors or to bypass recalibration.
    Type: Application
    Filed: November 29, 2021
    Publication date: March 28, 2024
    Inventors: Gil Sokol, Moshe Bouhnik, Ankur Gupta, David Gadot Kabasu, Konstantinos Zampogiannis
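The detection pipeline described above (sensor data, then feature values, then a trained model that gates recalibration) can be sketched as follows. The feature choices (mean reprojection residual, camera/IMU rotation disagreement), the weights, and the logistic scoring function are illustrative assumptions standing in for the trained miscalibration detection model, not the actual model from the application.

```python
import math
from dataclasses import dataclass
from statistics import mean

@dataclass
class Detection:
    miscalibrated: bool
    score: float

def extract_features(residuals_px, imu_cam_angle_deg):
    """Reduce raw sensor data to a small predetermined feature set
    (hypothetical features: mean reprojection error in pixels and the
    rotation disagreement between camera and IMU in degrees)."""
    return [mean(residuals_px), imu_cam_angle_deg]

def miscalibration_score(features, weights=(0.4, 0.1), bias=-1.0):
    """Stand-in for the trained model: a linear score squashed to (0, 1)."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def should_recalibrate(residuals_px, imu_cam_angle_deg, threshold=0.5):
    """Decide whether to initiate extrinsic recalibration or bypass it."""
    score = miscalibration_score(extract_features(residuals_px, imu_cam_angle_deg))
    return Detection(miscalibrated=score > threshold, score=score)
```

Small residuals and near-zero sensor disagreement fall below the threshold (bypass recalibration); large residuals push the score above it (initiate recalibration).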
  • Publication number: 20230108794
    Abstract: A portable electronic system receives a set of one or more canonical maps and determines the sparse map based at least in part upon one or more anchors pertaining to the physical environment. The sparse map is localized to at least one canonical map in the set of one or more canonical maps, and a new canonical map is created at least by merging sparse map data of the sparse map into the at least one canonical map. The set of one or more canonical maps may be determined from a universe of canonical maps comprising a plurality of canonical maps by applying a hierarchical filtering scheme to the universe. The sparse map may be localized to the at least one canonical map at least by splitting the sparse map into a plurality of connected components and by one or more merger operations.
    Type: Application
    Filed: November 17, 2022
    Publication date: April 6, 2023
    Applicant: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
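The hierarchical filtering of a universe of canonical maps described above can be sketched as a two-level funnel: a coarse attribute filter prunes the universe cheaply, and the survivors are ranked by a cheap descriptor similarity. The dictionary schema (`area_id`, feature-word sets) and the overlap-count similarity are assumptions for illustration, not the application's actual descriptors.

```python
def filter_canonical_maps(universe, area_id, query_words, top_k=2):
    """Two-level hierarchical filtering sketch: first keep only maps whose
    coarse attribute (an area identifier) matches, then rank the survivors
    by descriptor similarity (overlap of feature-word sets)."""
    coarse = [m for m in universe if m["area_id"] == area_id]
    ranked = sorted(coarse, key=lambda m: len(m["words"] & query_words),
                    reverse=True)
    return ranked[:top_k]
```

The point of the hierarchy is cost: the coarse test is a constant-time comparison, so the more expensive similarity ranking only ever runs on a small candidate set rather than the whole universe.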
  • Patent number: 11532124
    Abstract: A cross reality system receives tracking information in a tracking map and first location metadata associated with at least a portion of the tracking map. A sub-portion of a canonical map is determined based at least in part on a correspondence between the first location metadata associated with the at least the portion of the tracking map and second location metadata associated with the sub-portion of the canonical map. The sub-portion of the canonical map may be merged with the at least the portion of the tracking map into a merged map. The cross reality system may further generate the tracking map by using at least the pose information from one or more images and localize the tracking map to the canonical map at least by using a persistent coordinate frame in the canonical map and the location metadata associated with the location represented in the tracking map.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: December 20, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
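Localizing a tracking map to a canonical map via a persistent coordinate frame, as in the abstract above, amounts to solving for the transform that carries the frame's pose in one map onto its pose in the other. A minimal 2-D rigid-transform sketch (the real system works with 6-DoF poses; the 2-D reduction and function names are assumptions for illustration):

```python
import math

def localize(pcf_in_canonical, pcf_in_tracking):
    """Rigid 2-D transform (theta, tx, ty) mapping tracking-map coordinates
    into the canonical map, derived from one persistent coordinate frame
    observed in both maps. Each pose is (x, y, theta)."""
    xc, yc, tc = pcf_in_canonical
    xt, yt, tt = pcf_in_tracking
    theta = tc - tt
    # rotate the tracking-frame PCF position, then translate it onto
    # the canonical-frame PCF position
    tx = xc - (xt * math.cos(theta) - yt * math.sin(theta))
    ty = yc - (xt * math.sin(theta) + yt * math.cos(theta))
    return theta, tx, ty

def apply(transform, point):
    """Carry a tracking-map point into canonical-map coordinates."""
    theta, tx, ty = transform
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)
```

Once every tracking-map point can be carried into canonical coordinates this way, merging the two maps reduces to transforming the tracking-map content and appending it.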
  • Publication number: 20210279953
    Abstract: A cross reality system receives tracking information in a tracking map and first location metadata associated with at least a portion of the tracking map. A sub-portion of a canonical map is determined based at least in part on a correspondence between the first location metadata associated with the at least the portion of the tracking map and second location metadata associated with the sub-portion of the canonical map. The sub-portion of the canonical map may be merged with the at least the portion of the tracking map into a merged map. The cross reality system may further generate the tracking map by using at least the pose information from one or more images and localize the tracking map to the canonical map at least by using a persistent coordinate frame in the canonical map and the location metadata associated with the location represented in the tracking map.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 9, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
  • Patent number: 10860190
    Abstract: Techniques for presenting and interacting with composite images on a computing device are described. In an example, the device presents a first article of a first outfit and a first portion of a first composite image showing a second article. The first composite image shows a first outfit combination that is different from the first outfit. The device receives a first user interaction indicating a request to change the second article and presents the first article and a second portion of a second composite image showing a third article of a second outfit. The second composite image shows a second outfit combination that is different from the first outfit, the second outfit, and the first outfit combination. The device receives a second user interaction indicating a request to use the third article and presents the second composite image showing the second outfit combination.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: December 8, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Nicholas Robert Ditzler, Lee David Thompson, Devesh Sanghvi, Hilit Unger, Moshe Bouhnik, Siddharth Jacob Thazhathu, Anton Fedorenko
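The interaction flow above (one article held fixed while user gestures page through precomputed composite images of alternative combinations) can be sketched as a small state machine. The class name, the list-of-composites representation, and the single `swipe` gesture are hypothetical simplifications of the described user interactions.

```python
class OutfitBrowser:
    """Sketch of browsing outfit combinations: the fixed article stays on
    screen while each swipe advances to the next precomputed composite
    image, wrapping around at the end of the list."""

    def __init__(self, fixed_article, composites):
        self.fixed_article = fixed_article
        self.composites = composites  # one composite image per combination
        self.index = 0

    def current(self):
        """Return what the device presents: the fixed article plus the
        currently selected composite image."""
        return (self.fixed_article, self.composites[self.index])

    def swipe(self):
        """Handle a user interaction requesting a different combination."""
        self.index = (self.index + 1) % len(self.composites)
        return self.current()
```

Precomputing the composites is what makes each swipe cheap: the interaction only selects which already-rendered combination to show, rather than re-rendering clothing on the fly.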
  • Patent number: 10580453
    Abstract: A system and method for determining video clips including interesting content from video data. The system may receive annotation data identifying time and positions corresponding to objects represented in the video data and the system may determine priority metrics associated with each of the objects. By associating the priority metrics with the time and positions corresponding to the objects, the system may generate a priority metric map indicating a time and position of interesting moments in the video data. The system may generate moments and/or video clips based on the priority metric map. The system may determine a time (e.g., video frames) and/or space (e.g., pixel coordinates) associated with the moments/video clips and may simulate camera motion such as panning and/or zooming with the moments/video clips. The system may generate a Master Clip Table including the moments, video clips and/or annotation data associated with the moments/video clips.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: March 3, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Moshe Bouhnik, Konstantin Kraimer, Eduard Oks
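The priority metric map described above can be sketched along its time axis: annotations carrying a time, a position, and a per-object priority are accumulated per frame, and the highest-scoring frames become candidate moments. The tuple layout and function names are assumptions; a faithful version would also bin the spatial coordinates, which are ignored here for brevity.

```python
from collections import defaultdict

def build_priority_map(annotations):
    """annotations: iterable of (frame, x, y, priority) tuples for objects
    detected in the video data. Returns {frame: summed priority}, i.e. the
    time axis of the priority metric map (positions are ignored here)."""
    pmap = defaultdict(float)
    for frame, _x, _y, priority in annotations:
        pmap[frame] += priority
    return dict(pmap)

def top_moments(pmap, count=1):
    """Frames with the highest accumulated priority, best first."""
    return sorted(pmap, key=pmap.get, reverse=True)[:count]
```

A clip generator would then expand each top frame into a frame range, and the retained pixel positions (dropped in this sketch) would drive the simulated panning and zooming the abstract mentions.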
  • Patent number: 10540757
    Abstract: A computer-implemented method includes receiving first pose data for a first human represented in a first image, receiving second pose data for a second human represented in a second image, receiving first semantic segmentation data for the first image, and receiving second semantic segmentation data for the second image. A pose-aligned second image can be generated by modifying the second image based on the first pose data, the second pose data, the first semantic segmentation data, and the second semantic segmentation data. A mixed image can be determined by combining pixel values from the first image and pixel values of the pose-aligned second image according to mask data. In some embodiments, the mixed image includes a representation of an outfit that includes first clothing represented in the first image and second clothing represented in the second image.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: January 21, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Moshe Bouhnik, Hilit Unger, Eduard Oks, Noam Sorek
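The final mixing step described above (combining pixel values from the first image and the pose-aligned second image according to mask data) is a per-pixel select. A minimal sketch using nested lists in place of real image arrays; the binary mask convention (1 keeps the first image's pixel) is an assumption:

```python
def mix_images(first, second_aligned, mask):
    """Per-pixel blend: take the pixel from `first` where the mask is 1,
    otherwise from the pose-aligned second image. All three inputs are
    equally sized 2-D grids (nested lists) of pixel values."""
    return [[first[r][c] if mask[r][c] else second_aligned[r][c]
             for c in range(len(first[0]))]
            for r in range(len(first))]
```

With a mask derived from the semantic segmentation (for example, 1 over the upper-body clothing region), the result shows the first image's top together with the second image's bottom, which is the mixed-outfit representation the abstract describes.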
  • Patent number: 9620168
    Abstract: A system and method for determining video clips including interesting content from video data. The system may receive annotation data identifying time and positions corresponding to objects represented in the video data and the system may determine priority metrics associated with each of the objects. By associating the priority metrics with the time and positions corresponding to the objects, the system may generate a priority metric map indicating a time and position of interesting moments in the video data. The system may generate moments and/or video clips based on the priority metric map. The system may determine a time (e.g., video frames) and/or space (e.g., pixel coordinates) associated with the moments/video clips and may simulate camera motion such as panning and/or zooming with the moments/video clips. The system may generate a Master Clip Table including the moments, video clips and/or annotation data associated with the moments/video clips.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: April 11, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Moshe Bouhnik, Konstantin Kraimer, Eduard Oks