Patents by Inventor Adi Raz Goldfarb

Adi Raz Goldfarb has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11651538
    Abstract: An approach is disclosed for creating instructional 3D animated videos without requiring physical access to the object or to the object's CAD models as a prerequisite. The approach allows the user to submit images or a video of the object together with knowledge of the required procedure, including the instructions and text annotations to be added. The approach builds a 3D model from the submitted images and/or video, then generates the instructional animated video based on the 3D model and the required procedure.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: May 16, 2023
    Assignee: International Business Machines Corporation
    Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
  • Patent number: 11620796
    Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: April 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
  • Patent number: 11501502
    Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
  • Publication number: 20220301266
    Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 22, 2022
    Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
  • Publication number: 20220301247
    Abstract: An approach is disclosed for creating instructional 3D animated videos without requiring physical access to the object or to the object's CAD models as a prerequisite. The approach allows the user to submit images or a video of the object together with knowledge of the required procedure, including the instructions and text annotations to be added. The approach builds a 3D model from the submitted images and/or video, then generates the instructional animated video based on the 3D model and the required procedure.
    Type: Application
    Filed: March 17, 2021
    Publication date: September 22, 2022
    Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
  • Publication number: 20220291981
    Abstract: In an approach for deducing a root cause analysis model, a processor trains a classifier based on labeled data to identify entities. A processor trains the classifier with a first taxonomy and ontology. A processor uses the classifier to classify each component from one or more augmented reality peer assistance sessions into a class. A processor generates a root cause analysis model based on the identified entities and the classified components.
    Type: Application
    Filed: March 9, 2021
    Publication date: September 15, 2022
    Inventors: Adi Raz Goldfarb, Oded Dubovsky, Erez Lev Meir Bilgory
  • Publication number: 20220277524
    Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
  • Patent number: 11361515
    Abstract: Generating a self-guided augmented reality (AR) session plan from a remotely-guided AR session held between a remote user and a local user, by: Receiving data recorded during the remotely-guided AR session. Segmenting the data into temporal segments that correspond to steps performed by the local user during the remotely-guided AR session. The steps are detected using at least one of: a Natural-Language Understanding (NLU) algorithm applied to a conversation included in the data, to detect utterances indicative of step-to-step transitions; location analysis of annotations included in the data, to detect location differences indicative of step-to-step transitions; and analysis of camera pose data included in the data, to detect pose transitions indicative of step-to-step transitions. Generating the self-guided AR session plan based on the segmented data and a 3D representation of the scene, the AR session plan including step-by-step AR guidance on how to perform the various steps.
    Type: Grant
    Filed: October 18, 2020
    Date of Patent: June 14, 2022
    Assignee: International Business Machines Corporation
    Inventors: Erez Lev Meir Bilgory, Adi Raz Goldfarb
  • Publication number: 20220122327
    Abstract: Generating a self-guided augmented reality (AR) session plan from a remotely-guided AR session held between a remote user and a local user, by: Receiving data recorded during the remotely-guided AR session. Segmenting the data into temporal segments that correspond to steps performed by the local user during the remotely-guided AR session. The steps are detected using at least one of: a Natural-Language Understanding (NLU) algorithm applied to a conversation included in the data, to detect utterances indicative of step-to-step transitions; location analysis of annotations included in the data, to detect location differences indicative of step-to-step transitions; and analysis of camera pose data included in the data, to detect pose transitions indicative of step-to-step transitions. Generating the self-guided AR session plan based on the segmented data and a 3D representation of the scene, the AR session plan including step-by-step AR guidance on how to perform the various steps.
    Type: Application
    Filed: October 18, 2020
    Publication date: April 21, 2022
    Inventors: Erez Lev Meir Bilgory, Adi Raz Goldfarb
  • Publication number: 20220083777
    Abstract: Receiving data recorded during a remotely-assisted augmented reality session held between a remote user and a local user, the data including: drawn graphic annotations that are associated with locations in a 3D model representing a physical scene adjacent the local user, and a transcript of a conversation between the remote and local users. Generating at least one candidate label for each location, each candidate label being textually descriptive of a physical entity that is located, in the physical scene, at a location corresponding to the respective location in the 3D model. The generation of each candidate label includes: for each graphic annotation, automatically analyzing the transcript to detect at least one potential entity name that was mentioned, by the remote and/or local user, temporally adjacent the drawing of the respective graphic annotation. Accepting or rejecting each candidate label, to define it as a true label of the respective physical entity.
    Type: Application
    Filed: September 15, 2020
    Publication date: March 17, 2022
    Inventors: Adi Raz Goldfarb, Erez Lev Meir Bilgory
  • Patent number: 11275946
    Abstract: Receiving data recorded during a remotely-assisted augmented reality session held between a remote user and a local user, the data including: drawn graphic annotations that are associated with locations in a 3D model representing a physical scene adjacent the local user, and a transcript of a conversation between the remote and local users. Generating at least one candidate label for each location, each candidate label being textually descriptive of a physical entity that is located, in the physical scene, at a location corresponding to the respective location in the 3D model. The generation of each candidate label includes: for each graphic annotation, automatically analyzing the transcript to detect at least one potential entity name that was mentioned, by the remote and/or local user, temporally adjacent the drawing of the respective graphic annotation. Accepting or rejecting each candidate label, to define it as a true label of the respective physical entity.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Adi Raz Goldfarb, Erez Lev Meir Bilgory
  • Patent number: 11145129
    Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes, registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform, translating, using the transform, positions of points of interest (POIs) indicated on the object during the session, to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes, and generating a content package including the model, model POI positions, and POI annotations provided during the session.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
  • Publication number: 20210142570
    Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes, registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform, translating, using the transform, positions of points of interest (POIs) indicated on the object during the session, to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes, and generating a content package including the model, model POI positions, and POI annotations provided during the session.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 13, 2021
    Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
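
As an illustration of the step-segmentation idea in patent 11361515 above, the sketch below cuts a recorded remotely-guided AR session into temporal segments at detected step-to-step transitions, using (a) transition phrases in utterances and (b) camera-pose jumps. This is a minimal hypothetical rendering of the concept, not the patented method: the phrase list, threshold, and data shapes are all illustrative assumptions (a real system would use an NLU model and annotation-location analysis as the abstract describes).

```python
# Hypothetical sketch of step segmentation for a remotely-guided AR session:
# detect step-to-step transitions from utterances and camera-pose jumps,
# then split the event timeline into temporal segments.

from dataclasses import dataclass

# Stand-in for an NLU model detecting transition utterances (illustrative).
TRANSITION_PHRASES = ("next step", "move on", "now do", "that's done")

@dataclass
class Event:
    t: float          # timestamp in seconds
    utterance: str    # what was said at time t (may be empty)
    pose: tuple       # camera position (x, y, z)

def pose_jump(a, b, threshold=0.5):
    """True if the camera moved more than `threshold` between two events."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5 > threshold

def segment_session(events):
    """Split a session's events into segments at detected step transitions."""
    segments, current = [], [events[0]]
    for prev, ev in zip(events, events[1:]):
        is_transition = (
            any(p in ev.utterance.lower() for p in TRANSITION_PHRASES)
            or pose_jump(prev.pose, ev.pose)
        )
        if is_transition:
            segments.append(current)
            current = []
        current.append(ev)
    segments.append(current)
    return segments
```

Each resulting segment would then be paired with the 3D scene representation to form one step of the self-guided AR session plan.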
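
The candidate-label generation described in patent 11275946 can likewise be sketched: for each drawn graphic annotation, scan the session transcript for entity names mentioned temporally adjacent to the drawing and propose them as candidate labels. Here a simple stopword filter stands in for real entity-name extraction, and the window size and data shapes are assumptions for illustration only.

```python
# Hypothetical sketch: propose candidate labels for AR annotations by
# collecting words spoken within a time window around each drawing event.

STOPWORDS = {"the", "a", "an", "this", "that", "you", "i", "see", "here", "ok", "now"}

def candidate_labels(annotations, transcript, window=5.0):
    """annotations: [(annotation_id, t_drawn)]; transcript: [(t, text)].

    Returns {annotation_id: sorted list of candidate label words}."""
    out = {}
    for ann_id, t_drawn in annotations:
        words = []
        for t, text in transcript:
            if abs(t - t_drawn) <= window:  # temporally adjacent utterances
                words += [w.strip(".,!?").lower() for w in text.split()]
        out[ann_id] = sorted({w for w in words if w.isalpha() and w not in STOPWORDS})
    return out
```

Per the abstract, each candidate would then be accepted or rejected to define the true label of the physical entity at the annotated location.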
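
Finally, the session-to-model registration in patent 11145129 rests on a standard geometric operation: given the same camera trajectory expressed in two coordinate frames (session axes vs. model axes), estimate the rigid transform between them, then use it to carry session POI positions into the model frame. The sketch below uses the Kabsch algorithm with NumPy; it is a generic illustration of that operation under assumed point-matched trajectories, not the patent's specific trajectory-matching procedure.

```python
# Hypothetical sketch: recover a session-to-model rigid transform (R, t)
# from matched camera positions, then map session POIs into the model frame.

import numpy as np

def fit_rigid_transform(session_pts, model_pts):
    """Least-squares rigid transform with model_i ~= R @ session_i + t (Kabsch)."""
    P, Q = np.asarray(session_pts), np.asarray(model_pts)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def translate_pois(pois, R, t):
    """Map session-frame POI positions into the model frame."""
    return [R @ np.asarray(p) + t for p in pois]
```

With the transform in hand, the POI positions and their annotations can be packaged together with the 3D model as self-contained AR content, as the abstract describes.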