Patents by Inventor Yochay Tzur

Yochay Tzur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11176696
    Abstract: Embodiments provide 3D coordinates for points in a scene, verified to lie in the correct physical position across a series of images. A method may comprise: obtaining a plurality of images, including a base image having at least one annotated point corresponding to a point of an object shown in the base image, and a plurality of side images showing the object from viewpoints different from that of the base image, wherein the side images are given with camera poses relative to the base image; extracting, from at least some of the side images, a plurality of sets of image patches showing the annotated point, wherein each set of image patches is extracted at one of a plurality of corresponding candidate depth values; classifying each set as having a corresponding candidate depth value that is correct or incorrect; and outputting a correct depth value.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: November 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Joseph Shtok, Yochay Tzur
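The depth search described in this abstract can be pictured with a short sketch. The following is a minimal, hypothetical illustration rather than the patented method: normalized cross-correlation against the base patch stands in for the learned correct/incorrect classifier, and the pinhole model, patch size, and pose format are all assumptions.

```python
# Hypothetical sketch: score candidate depths for an annotated point by
# how consistently patches around its projections match the base patch.
# NCC replaces the learned classifier; all names are illustrative.
import numpy as np

def backproject(K, pixel, depth):
    """Lift a base-image pixel to a 3D point at the given depth."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return ray * depth

def project(K, R, t, point):
    """Project a 3D point into a side image with relative pose (R, t)."""
    p = K @ (R @ point + t)
    return p[:2] / p[2]

def patch(img, uv, r=4):
    """Square patch around a pixel; assumes uv lies safely inside img."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    return img[v - r:v + r + 1, u - r:u + r + 1]

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def best_depth(K, base_img, pixel, side_imgs, poses, depths):
    """Return the candidate depth whose patch set is most self-consistent."""
    base = patch(base_img, pixel)
    def score(d):
        X = backproject(K, pixel, d)
        return np.mean([ncc(base, patch(img, project(K, R, t, X)))
                        for img, (R, t) in zip(side_imgs, poses)])
    return max(depths, key=score)
```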
  • Patent number: 11164037
    Abstract: A system and method for resolving an ambiguity between similar objects in an image is disclosed. A three-dimensional representation of a room is generated, and objects in the room are identified from an image of the room. A determination is made that at least two objects are visually similar and that the position of the two objects is ambiguous. At least one question addressing the determined ambiguity is programmatically generated from information known about the room, and is phrased such that the ambiguity can be resolved by an answer to the question. Based on the answer received, one of the objects is selected. At least one property of the selected object is modified based upon the selection of one of the at least two objects.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Mattias Marder, Yochay Tzur
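As a rough illustration of the question-generation flow in this abstract, here is a minimal sketch. The room landmarks, object records, and the substring-based answer matching are invented for the example and are not the patented implementation.

```python
# Hypothetical sketch: disambiguate two similar objects by asking a
# question phrased around the nearest known landmark to each candidate.
import numpy as np

def nearest_landmark(pos, landmarks):
    """Name of the landmark closest to a 3D position."""
    names, points = zip(*landmarks.items())
    dists = np.linalg.norm(np.array(points) - np.array(pos), axis=1)
    return names[int(np.argmin(dists))]

def generate_question(label, candidates, landmarks):
    """Phrase a question whose answer distinguishes the candidates."""
    refs = [nearest_landmark(c["pos"], landmarks) for c in candidates]
    options = " or ".join(f"the {label} near the {r}" for r in refs)
    return f"Did you mean {options}?", refs

def resolve(answer, candidates, refs):
    """Select the candidate whose landmark appears in the user's answer."""
    for cand, ref in zip(candidates, refs):
        if ref in answer.lower():
            return cand
    return None

landmarks = {"window": (0.0, 2.0, 1.0), "door": (4.0, 0.0, 1.0)}
chairs = [{"id": 1, "pos": (0.5, 1.8, 0.4)}, {"id": 2, "pos": (3.6, 0.2, 0.4)}]
question, refs = generate_question("chair", chairs, landmarks)
selected = resolve("the one near the door", chairs, refs)  # -> chair 2
```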
  • Patent number: 11145129
    Abstract: Automatically generating augmented reality (AR) content by: constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from camera positions defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes; registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions, thereby determining a session-to-model transform; translating, using the transform, positions of points of interest (POIs) indicated on the object during the session to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions relative to the second axes; and generating a content package including the model, the model POI positions, and POI annotations provided during the session.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
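The trajectory-matching registration lends itself to a compact sketch. Assuming, for illustration, that the two trajectories are given as corresponding 3D camera centers, a Umeyama-style similarity alignment recovers a plausible session-to-model transform; the patent does not specify this particular estimator.

```python
# Hypothetical sketch: align the session camera trajectory to the model
# trajectory with a similarity transform, then carry POIs across.
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform (s, R, t) mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: the model trajectory is a scaled, shifted copy of the
# session trajectory, so the recovered transform should match exactly.
session_traj = np.random.rand(50, 3)                 # session camera centers
model_traj = 2.0 * session_traj + np.array([1.0, 0.0, -0.5])
s, R, t = umeyama(session_traj, model_traj)

session_poi = np.array([0.3, 0.4, 0.1])              # POI in session axes
model_poi = s * R @ session_poi + t                  # POI in model axes
```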
  • Patent number: 11080938
    Abstract: Receiving a recording of a remotely-guided augmented reality (AR) session, which includes: images of a scene captured by a camera of a local user; position and orientation data of the camera; and annotations generated by a remote user at points-of-interest (POIs) in a three-dimensional (3D) representation of the scene. Automatically generating a summary of the session by projecting the annotations to matching locations in some of the prominent images, based on the POIs of the annotations and on the position and orientation data of the camera, and including in the summary the prominent images, among them those having the projected annotations.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Yochay Tzur, Eyal Mush
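A simplified sketch of the projection step follows: 3D annotation POIs are mapped into prominent frames via the recorded camera pose, and frames in which an annotation is visible are kept for the summary. The pinhole camera model and the frame/POI data layout are assumptions.

```python
# Hypothetical sketch: project session annotations into prominent frames
# using the recorded camera position and orientation.
import numpy as np

def project_poi(K, R, t, poi):
    """World-to-pixel projection; None if the POI is behind the camera."""
    cam = R @ poi + t
    if cam[2] <= 0:
        return None
    uv = K @ cam
    return uv[:2] / uv[2]

def summarize(frames, K, width, height, pois):
    """Keep prominent frames, attaching annotations that project in-view."""
    summary = []
    for frame in frames:                   # frame: {"image", "R", "t"}
        hits = []
        for name, poi in pois.items():
            uv = project_poi(K, frame["R"], frame["t"], poi)
            if uv is not None and 0 <= uv[0] < width and 0 <= uv[1] < height:
                hits.append((name, uv))
        if hits:
            summary.append({"image": frame["image"], "annotations": hits})
    return summary
```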
  • Publication number: 20210142570
    Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes, registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform, translating, using the transform, positions of points of interest (POIs) indicated on the object during the session, to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes, and generating a content package including the model, model POI positions, and POI annotations provided during the session.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 13, 2021
    Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
  • Patent number: 10930012
    Abstract: Embodiments of the present systems and methods may provide techniques for automatic, reliable performance of the point cloud cleanup task. Embodiments may provide the capability to progressively learn the object-background segmentation from tracking sessions in augmented reality (AR) applications. The 3D point cloud may be used for tracking in AR applications by matching points from the cloud to regions in the live video. A 3D point that has many matches is more likely to be part of the object, while a 3D point that rarely has matches is more likely to be part of the background. Advantages of this approach include that no manual work is needed to do the segmentation, and the results may be constantly improved over time as the object is being tracked in multiple environments.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventor: Yochay Tzur
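The match-frequency heuristic at the heart of this abstract is easy to sketch. The threshold, counts, and simulated session below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: count how often each cloud point matches the live
# video; frequently matched points are kept as object, the rest dropped.
import numpy as np

class MatchStats:
    """Per-point match counters accumulated across AR tracking sessions."""
    def __init__(self, num_points):
        self.matches = np.zeros(num_points)
        self.frames = 0

    def update(self, matched_indices):
        """Record which cloud points matched the live video this frame."""
        self.frames += 1
        self.matches[matched_indices] += 1

    def object_mask(self, min_rate=0.1):
        """Points matched in at least min_rate of frames count as object."""
        return self.matches / max(self.frames, 1) >= min_rate

# Simulated sessions: points 0-399 belong to the object and match often;
# points 400-999 are background and match only occasionally.
stats = MatchStats(num_points=1000)
for _ in range(300):
    obj = np.random.choice(400, size=60, replace=False)
    bg = 400 + np.random.choice(600, size=10, replace=False)
    stats.update(np.concatenate([obj, bg]))

cleaned_indices = np.flatnonzero(stats.object_mask())  # mostly 0-399
```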
  • Publication number: 20200372676
    Abstract: Embodiments of the present systems and methods may provide techniques for automatic, reliable performance of the point cloud cleanup task. Embodiments may provide the capability to progressively learn the object-background segmentation from tracking sessions in augmented reality (AR) applications. The 3D point cloud may be used for tracking in AR applications by matching points from the cloud to regions in the live video. A 3D point that has many matches is more likely to be part of the object, while a 3D point that rarely has matches is more likely to be part of the background. Advantages of this approach include that no manual work is needed to do the segmentation, and the results may be constantly improved over time as the object is being tracked in multiple environments.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventor: Yochay Tzur
  • Publication number: 20200364893
    Abstract: Embodiments provide 3D coordinates for points in a scene, verified to lie in the correct physical position across a series of images. A method may comprise: obtaining a plurality of images, including a base image having at least one annotated point corresponding to a point of an object shown in the base image, and a plurality of side images showing the object from viewpoints different from that of the base image, wherein the side images are given with camera poses relative to the base image; extracting, from at least some of the side images, a plurality of sets of image patches showing the annotated point, wherein each set of image patches is extracted at one of a plurality of corresponding candidate depth values; classifying each set as having a corresponding candidate depth value that is correct or incorrect; and outputting a correct depth value.
    Type: Application
    Filed: May 13, 2019
    Publication date: November 19, 2020
    Inventors: Joseph Shtok, Yochay Tzur
  • Patent number: 10832417
    Abstract: Embodiments may provide a fusion of visual-inertial odometry (VIO) with object tracking (OT) for physically anchored augmented reality content presentation. For example, a method of generating augmented reality content may comprise: receiving a video stream comprising a plurality of frames, the video stream generated using a camera that is moving about a scene; determining a VIO camera pose for each frame in the video stream in a VIO coordinate system using visual-inertial odometry; determining an OT camera pose for at least some of the frames of the video stream in an OT coordinate system using object tracking; determining a relation between the VIO coordinate system and the OT coordinate system; and generating augmented reality content that is placed in the video stream of the scene at a location based on the determined relation between the VIO coordinate system and the OT coordinate system.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventor: Yochay Tzur
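A minimal sketch of the coordinate-system handoff, assuming 4x4 camera-from-world transforms: in a frame where both VIO and OT produce a pose, the relation between the two world frames is one pose composed with the inverse of the other, after which content anchored in OT coordinates can be expressed in VIO coordinates.

```python
# Hypothetical sketch: derive the VIO-to-OT relation from a single frame
# where both systems report a camera pose; all values are illustrative.
import numpy as np

def pose(R, t):
    """Build a 4x4 camera-from-world transform from rotation and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def ot_from_vio(T_cam_from_vio, T_cam_from_ot):
    """Transform taking VIO world coordinates to OT world coordinates."""
    return np.linalg.inv(T_cam_from_ot) @ T_cam_from_vio

# One frame where both systems produced a camera pose.
T_cam_from_vio = pose(np.eye(3), np.array([0.0, 0.0, 2.0]))
T_cam_from_ot = pose(np.eye(3), np.array([0.1, 0.0, 1.5]))
T_ot_from_vio = ot_from_vio(T_cam_from_vio, T_cam_from_ot)

# AR content anchored on the tracked object (OT coordinates, homogeneous),
# re-expressed in VIO coordinates for frames where OT drops out.
content_ot = np.array([0.0, 0.2, 0.0, 1.0])
content_vio = np.linalg.inv(T_ot_from_vio) @ content_ot
```

In practice the relation would be estimated over many frames and refined as object-tracking confidence varies; the single-frame composition above is only the simplest possible version.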
  • Patent number: 10748027
    Abstract: A method of constructing hierarchical representation models of compound 3D physical objects, comprising: receiving a plurality of image sets depicting a compound 3D physical object constructed of a static component having a single representation state and dynamic component(s) having multiple representation states, each pair of the image sets depicting a respective dynamic component in one of its representation states; generating a pair of point cloud structures from each pair of image sets; analyzing each point cloud structure pair to identify common region(s) associated with the static component and region(s) of difference associated with the dynamic component; generating a hierarchical 3D representation model comprising a high-level model of the static component, generated from the common region(s), with a surface outline of the region(s) of difference associated with multiple lower-level models constructed for the representation states of the dynamic component; and outputting the hierarchical 3D representation model.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: August 18, 2020
    Assignee: International Business Machines Corporation
    Inventors: Joseph Shtok, Yochay Tzur
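The common-region/difference-region analysis can be sketched with a nearest-neighbor test between the two state clouds; the tolerance and the brute-force distance computation are illustrative assumptions.

```python
# Hypothetical sketch: points shared by both state clouds form the
# static component; the remainder is state-specific (dynamic).
import numpy as np

def nn_dist(A, B):
    """Distance from each point in A to its nearest neighbor in B."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).min(axis=1)

def split_static_dynamic(cloud_a, cloud_b, tol=0.02):
    """Shared points form the static part; the rest is state-specific."""
    a_common = nn_dist(cloud_a, cloud_b) < tol
    b_common = nn_dist(cloud_b, cloud_a) < tol
    return cloud_a[a_common], cloud_a[~a_common], cloud_b[~b_common]

cloud_a = np.random.rand(500, 3)           # object with component in state A
cloud_b = np.random.rand(500, 3)           # same object, component in state B
static, dyn_a, dyn_b = split_static_dynamic(cloud_a, cloud_b)

# Hierarchical model: one high-level static part, one lower-level model
# per representation state of the dynamic component.
model = {"static": static, "dynamic": {"state_a": dyn_a, "state_b": dyn_b}}
```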
  • Patent number: 10628703
    Abstract: Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with locations on the 3D model across multiple videos taken at different points in time.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: April 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Udi Barzelay, Ophir Azulai, Yochay Tzur
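A minimal sketch of the tagging-and-pairing idea, with hypothetical location identifiers and file paths: each captured image is tagged with the 3D-model location it shows, and images sharing a tag across surveys are paired so the same component can be compared with itself over time.

```python
# Hypothetical sketch: index images by 3D-model location, then pair
# same-location images from different surveys for anomaly comparison.
from collections import defaultdict

index = defaultdict(list)                 # model location id -> observations

def tag_image(location_id, survey_date, image_path):
    """Register an image of a model location captured in a given survey."""
    index[location_id].append((survey_date, image_path))

def matched_pairs(location_id):
    """All cross-time image pairs showing the same infrastructure part."""
    obs = sorted(index[location_id])
    return [(a, b) for i, a in enumerate(obs) for b in obs[i + 1:]
            if a[0] != b[0]]

tag_image("tower7/crossarm2", "2017-06-01", "vid1/frame_0412.png")
tag_image("tower7/crossarm2", "2018-06-03", "vid2/frame_0388.png")
pairs = matched_pairs("tower7/crossarm2")  # feed to anomaly comparison
```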
  • Publication number: 20200104626
    Abstract: A method of constructing hierarchical representation models of compound 3D physical objects, comprising: receiving a plurality of image sets depicting a compound 3D physical object constructed of a static component having a single representation state and dynamic component(s) having multiple representation states, each pair of the image sets depicting a respective dynamic component in one of its representation states; generating a pair of point cloud structures from each pair of image sets; analyzing each point cloud structure pair to identify common region(s) associated with the static component and region(s) of difference associated with the dynamic component; generating a hierarchical 3D representation model comprising a high-level model of the static component, generated from the common region(s), with a surface outline of the region(s) of difference associated with multiple lower-level models constructed for the representation states of the dynamic component; and outputting the hierarchical 3D representation model.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 2, 2020
    Inventors: Joseph Shtok, Yochay Tzur
  • Patent number: 10607107
    Abstract: Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with locations on the 3D model across multiple videos taken at different points in time.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventors: Udi Barzelay, Ophir Azulai, Yochay Tzur
  • Publication number: 20200042824
    Abstract: A system and method for resolving an ambiguity between similar objects in an image is disclosed. A three-dimensional representation of a room is generated, and objects in the room are identified from an image of the room. A determination is made that at least two objects are visually similar and that the position of the two objects is ambiguous. At least one question addressing the determined ambiguity is programmatically generated from information known about the room, and is phrased such that the ambiguity can be resolved by an answer to the question. Based on the answer received, one of the objects is selected. At least one property of the selected object is modified based upon the selection of one of the at least two objects.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: Mattias Marder, Yochay Tzur
  • Publication number: 20200005077
    Abstract: Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with locations on the 3D model across multiple videos taken at different points in time.
    Type: Application
    Filed: September 11, 2019
    Publication date: January 2, 2020
    Inventors: Udi Barzelay, Ophir Azulai, Yochay Tzur
  • Patent number: 10467756
    Abstract: There is provided a method of computing a camera pose of a digital image, comprising: computing query-regions of the digital image, each query-region mapping to training image region(s) of training image(s) by a 2D translation and/or a 2D scaling, each training image associated with a reference camera pose, and each query-region associated with a center point and a computed weighted mask that weights the query-region pixels according to computed correlations with the corresponding training image region; mapping cloud points corresponding to pixels of matched training image region(s) to corresponding image pixels of the matched query-regions, according to a statistically significant correlation requirement between the center point of the query-region and the matched training image region and according to the computed weighted mask; and computing the camera pose according to an aggregation of the reference camera poses, the mapped cloud points, and the corresponding image pixels of the matched query-regions.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: November 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Leonid Karlinsky, Uzi Shvadron, Asaf Tzadok, Yochay Tzur
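The final aggregation resembles a weighted PnP problem. The sketch below, which assumes OpenCV is available, filters 2D-3D correspondences by their mask weight and delegates pose computation to cv2.solvePnP; the patent's own aggregation scheme is not spelled out here, so this is a stand-in, not the claimed method.

```python
# Hypothetical sketch: weight-filtered 2D-3D correspondences from the
# matched query-regions, solved with a standard PnP routine.
import numpy as np
import cv2  # OpenCV supplies the PnP solver used as a stand-in here

def pose_from_regions(matches, K, weight_thresh=0.5):
    """matches: list of (cloud_point_xyz, image_pixel_uv, mask_weight)."""
    kept = [m for m in matches if m[2] >= weight_thresh]
    pts3d = np.array([m[0] for m in kept], dtype=np.float64)
    pts2d = np.array([m[1] for m in kept], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, distCoeffs=None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> matrix
    return R, tvec.ravel()                 # camera pose of the query image
```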
  • Publication number: 20190188521
    Abstract: Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with locations on the 3D model across multiple videos taken at different points in time.
    Type: Application
    Filed: December 19, 2017
    Publication date: June 20, 2019
    Inventors: Udi Barzelay, Ophir Azulai, Yochay Tzur
  • Patent number: 10311593
    Abstract: A system for identifying specific instances of objects in a three-dimensional (3D) scene, comprising: a camera for capturing an image of multiple objects at a site; at least one processor executable to: use a location and orientation of the camera to create a 3D model of the site including multiple instances of objects expected to be in proximity to the camera, and generate multiple candidate clusters each representing a different projection of the 3D model; detect at least two objects in the image, and determine a spatial configuration for each detected object; and match the detected image objects to one of the multiple candidate clusters using the spatial configurations, associate the detected objects with the expected object instances of the matched cluster, and retrieve information of one of the detected objects that is stored with the associated expected object instance; and a head-wearable display configured to display the information.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: June 4, 2019
    Assignee: International Business Machines Corporation
    Inventors: Benjamin M Cohen, Asaf Tzadok, Yochay Tzur
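A simplified sketch of the cluster-matching step: each candidate cluster holds the expected 2D layout of known object instances for one projection of the site model, and the cluster whose layout best explains the detected configuration determines the identities. The nearest-neighbor assignment and all data values are assumptions.

```python
# Hypothetical sketch: pick the candidate cluster whose expected layout
# best matches the detected spatial configuration, then let detections
# inherit the matched instances' identities (and stored information).
import numpy as np

def match_cluster(detections, clusters):
    """detections: (N,2) detected positions; clusters: {name: (M,2) layout}."""
    best, best_cost, best_assign = None, np.inf, None
    for name, layout in clusters.items():
        d = np.linalg.norm(detections[:, None] - layout[None, :], axis=2)
        assign = d.argmin(axis=1)          # nearest expected instance
        cost = d[np.arange(len(detections)), assign].sum()
        if cost < best_cost:
            best, best_cost, best_assign = name, cost, assign
    return best, best_assign

detections = np.array([[110.0, 200.0], [310.0, 205.0]])
clusters = {"view_a": np.array([[100.0, 195.0], [300.0, 210.0]]),
            "view_b": np.array([[500.0, 60.0], [520.0, 400.0]])}
cluster, assign = match_cluster(detections, clusters)
# assign[i] indexes the expected instance whose stored info to display.
```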
  • Publication number: 20180330504
    Abstract: There is provided a method of computing a camera pose of a digital image, comprising: computing query-regions of the digital image, each query-region mapping to training image region(s) of training image(s) by a 2D translation and/or a 2D scaling, each training image associated with a reference camera pose, and each query-region associated with a center point and a computed weighted mask that weights the query-region pixels according to computed correlations with the corresponding training image region; mapping cloud points corresponding to pixels of matched training image region(s) to corresponding image pixels of the matched query-regions, according to a statistically significant correlation requirement between the center point of the query-region and the matched training image region and according to the computed weighted mask; and computing the camera pose according to an aggregation of the reference camera poses, the mapped cloud points, and the corresponding image pixels of the matched query-regions.
    Type: Application
    Filed: July 12, 2017
    Publication date: November 15, 2018
    Inventors: Leonid Karlinsky, Uzi Shvadron, Asaf Tzadok, Yochay Tzur
  • Publication number: 20180137386
    Abstract: A system for identifying specific instances of objects in a three-dimensional (3D) scene, comprising: a camera for capturing an image of multiple objects at a site; at least one processor executable to: use a location and orientation of the camera to create a 3D model of the site including multiple instances of objects expected to be in proximity to the camera, and generate multiple candidate clusters each representing a different projection of the 3D model; detect at least two objects in the image, and determine a spatial configuration for each detected object; and match the detected image objects to one of the multiple candidate clusters using the spatial configurations, associate the detected objects with the expected object instances of the matched cluster, and retrieve information of one of the detected objects that is stored with the associated expected object instance; and a head-wearable display configured to display the information.
    Type: Application
    Filed: November 16, 2016
    Publication date: May 17, 2018
    Inventors: Benjamin M Cohen, Asaf Tzadok, Yochay Tzur