Patents by Inventor Adi Raz Goldfarb
Adi Raz Goldfarb has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11651538
Abstract: An approach is disclosed for creating instructional 3D animated videos without requiring physical access to the object or to its CAD models as a prerequisite. The approach allows the user to submit images or a video of the object along with knowledge of the required procedure, including instructions and text annotations. The approach builds a 3D model from the submitted images and/or video, then generates the instructional animated video based on the 3D model and the required procedure.
Type: Grant
Filed: March 17, 2021
Date of Patent: May 16, 2023
Assignee: International Business Machines Corporation
Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
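The abstract above describes a pipeline: images in, reconstructed 3D model, then an animation driven by per-step instructions. The patent does not publish its implementation, so the following is only a minimal sketch of that data flow; the function names, the dictionary-based "model", and the `ProcedureStep` type are all illustrative assumptions standing in for real photogrammetry and rendering components.

```python
from dataclasses import dataclass

@dataclass
class ProcedureStep:
    instruction: str   # text instruction shown during the step
    annotation: str    # text annotation anchored to the model

def reconstruct_3d_model(images):
    """Stand-in for the reconstruction stage (e.g. structure-from-motion):
    the real system would build a textured mesh from the submitted
    images and/or video frames."""
    return {"source_frames": len(images), "mesh": "placeholder"}

def plan_instructional_video(model, steps):
    """Pair each procedure step with the reconstructed model to form an
    ordered animation plan; actual rendering is out of scope here."""
    return [{"frames": model["source_frames"],
             "instruction": step.instruction,
             "annotation": step.annotation}
            for step in steps]

steps = [ProcedureStep("Remove the cover", "4 screws"),
         ProcedureStep("Disconnect the cable", "orange connector")]
plan = plan_instructional_video(reconstruct_3d_model(["f1.jpg", "f2.jpg"]), steps)
```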
-
Patent number: 11620796
Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
Type: Grant
Filed: March 1, 2021
Date of Patent: April 4, 2023
Assignee: International Business Machines Corporation
Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
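The "anchoring" step above amounts to expressing geometry captured in the expert's coordinate frame in the user's frame. As a minimal sketch (the patent does not specify its math, so the rigid-transform formulation and function name here are assumptions), a rotation plus translation maps expert-frame points into the user's scene:

```python
import numpy as np

def anchor_to_user_perspective(expert_points, rotation, translation):
    """Map 3D points recorded in the expert's coordinate frame into the
    user's frame with a rigid transform, so the virtual rendering of the
    expert's demonstration lands correctly in the user's scene."""
    return (rotation @ expert_points.T).T + translation

# Example: the user's frame is the expert's frame rotated 90 degrees
# about z and shifted by (1, 0, 0).
rot_z_90 = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
expert_hand = np.array([[1.0, 0.0, 0.0]])   # a tracked point on the expert
user_frame = anchor_to_user_perspective(expert_hand, rot_z_90,
                                        np.array([1.0, 0.0, 0.0]))
```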
-
Patent number: 11501502
Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
Type: Grant
Filed: March 19, 2021
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
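The abstract describes a display/capture/classify loop: show an instruction, capture an image of the object, run it through a machine learning model, and show the output. A minimal sketch of that control flow, with toy stand-ins for the camera and the trained model (all names here are illustrative, not the patented implementation):

```python
def run_guidance(instructions, capture_image, model):
    """For each instruction: display it, capture a camera frame of the
    object, feed the frame to the ML model, and display the output."""
    shown = []
    for instruction in instructions:
        shown.append(instruction)       # displayed as an AR overlay
        image = capture_image()         # camera frame of the object
        shown.append(model(image))      # model output shown on screen
    return shown

# Toy stand-ins for the device camera and the first ML model.
frames = iter(["frame-1", "frame-2"])
result = run_guidance(
    ["Point the camera at the valve", "Turn the valve clockwise"],
    capture_image=lambda: next(frames),
    model=lambda img: f"checked: {img}",
)
```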
-
Publication number: 20220301266
Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
Type: Application
Filed: March 19, 2021
Publication date: September 22, 2022
Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
-
Publication number: 20220301247
Abstract: An approach is disclosed for creating instructional 3D animated videos without requiring physical access to the object or to its CAD models as a prerequisite. The approach allows the user to submit images or a video of the object along with knowledge of the required procedure, including instructions and text annotations. The approach builds a 3D model from the submitted images and/or video, then generates the instructional animated video based on the 3D model and the required procedure.
Type: Application
Filed: March 17, 2021
Publication date: September 22, 2022
Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
-
Publication number: 20220291981
Abstract: In an approach for deducing a root cause analysis model, a processor trains a classifier based on labeled data to identify entities. A processor trains the classifier with first taxonomy and ontology. A processor uses the classifier to classify each component from one or more augmented reality peer assistance sessions into a class. A processor generates a root cause analysis model based on the identified entities and the classified components.
Type: Application
Filed: March 9, 2021
Publication date: September 15, 2022
Inventors: Adi Raz Goldfarb, Oded Dubovsky, Erez Lev Meir Bilgory
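The steps above — train a classifier on labeled data, classify each session component, then group the results into a root cause analysis model — can be sketched with a toy keyword classifier. This is a hedged illustration only: the publication does not disclose its classifier, and the word-counting approach, function names, and failure labels below are all invented for the example.

```python
from collections import Counter, defaultdict

def train_keyword_classifier(labeled_utterances):
    """Toy stand-in for classifier training: count how often each word
    co-occurs with each class label."""
    word_votes = defaultdict(Counter)
    for text, label in labeled_utterances:
        for word in text.lower().split():
            word_votes[word][label] += 1
    return word_votes

def classify(word_votes, text):
    """Classify a session component by summing per-word label votes."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(word_votes.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

def build_rca_model(word_votes, session_components):
    """Group components by predicted class; each class bucket
    approximates one candidate root cause."""
    model = defaultdict(list)
    for component in session_components:
        model[classify(word_votes, component)].append(component)
    return dict(model)

training = [("pump is leaking oil", "seal-failure"),
            ("oil leaking near the gasket", "seal-failure"),
            ("motor will not start", "electrical")]
rca = build_rca_model(train_keyword_classifier(training),
                      ["leaking oil again", "motor start issue"])
```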
-
Publication number: 20220277524
Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
Type: Application
Filed: March 1, 2021
Publication date: September 1, 2022
Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
-
Patent number: 11361515
Abstract: Generating a self-guided augmented reality (AR) session plan from a remotely-guided AR session held between a remote user and a local user, by: Receiving data recorded during the remotely-guided AR session. Segmenting the data into temporal segments that correspond to steps performed by the local user during the remotely-guided AR session. The steps are detected using at least one of: a Natural-Language Understanding (NLU) algorithm applied to a conversation included in the data, to detect utterances indicative of step-to-step transitions; location analysis of annotations included in the data, to detect location differences indicative of step-to-step transitions; and analysis of camera pose data included in the data, to detect pose transitions indicative of step-to-step transitions. Generating the self-guided AR session plan based on the segmented data and a 3D representation of the scene, the AR session plan including step-by-step AR guidance on how to perform the various steps.
Type: Grant
Filed: October 18, 2020
Date of Patent: June 14, 2022
Assignee: International Business Machines Corporation
Inventors: Erez Lev Meir Bilgory, Adi Raz Goldfarb
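The first detection route above — an NLU pass that spots utterances indicating step-to-step transitions — can be sketched with simple cue-phrase matching. The real patent calls for an NLU algorithm, so the hard-coded cue list and function below are a deliberately simplified stand-in, not the claimed method:

```python
# Hypothetical cue phrases signalling a step-to-step transition; a real
# system would use an NLU model rather than a fixed list.
TRANSITION_CUES = ("next step", "moving on", "that's done")

def segment_by_utterance(utterances):
    """Split session utterances into temporal segments, starting a new
    segment whenever an utterance contains a transition cue."""
    segments, current = [], []
    for utterance in utterances:
        if current and any(cue in utterance.lower() for cue in TRANSITION_CUES):
            segments.append(current)
            current = []
        current.append(utterance)
    if current:
        segments.append(current)
    return segments

session = ["First loosen the bolts",
           "Good, next step is the cover",
           "Lift it straight up",
           "That's done, now the filter"]
steps = segment_by_utterance(session)
```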
-
Publication number: 20220122327
Abstract: Generating a self-guided augmented reality (AR) session plan from a remotely-guided AR session held between a remote user and a local user, by: Receiving data recorded during the remotely-guided AR session. Segmenting the data into temporal segments that correspond to steps performed by the local user during the remotely-guided AR session. The steps are detected using at least one of: a Natural-Language Understanding (NLU) algorithm applied to a conversation included in the data, to detect utterances indicative of step-to-step transitions; location analysis of annotations included in the data, to detect location differences indicative of step-to-step transitions; and analysis of camera pose data included in the data, to detect pose transitions indicative of step-to-step transitions. Generating the self-guided AR session plan based on the segmented data and a 3D representation of the scene, the AR session plan including step-by-step AR guidance on how to perform the various steps.
Type: Application
Filed: October 18, 2020
Publication date: April 21, 2022
Inventors: Erez Lev Meir Bilgory, Adi Raz Goldfarb
-
Publication number: 20220083777
Abstract: Receiving data recorded during a remotely-assisted augmented reality session held between a remote user and a local user, the data including: drawn graphic annotations that are associated with locations in a 3D model representing a physical scene adjacent the local user, and a transcript of a conversation between the remote and local users. Generating at least one candidate label for each location, each candidate label being textually descriptive of a physical entity that is located, in the physical scene, at a location corresponding to the respective location in the 3D model. The generation of each candidate label includes: for each graphic annotation, automatically analyzing the transcript to detect at least one potential entity name that was mentioned, by the remote and/or local user, temporally adjacent the drawing of the respective graphic annotation. Accepting or rejecting each candidate label, to define it as a true label of the respective physical entity.
Type: Application
Filed: September 15, 2020
Publication date: March 17, 2022
Inventors: Adi Raz Goldfarb, Erez Lev Meir Bilgory
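The core idea above — find entity names mentioned temporally adjacent to the drawing of an annotation — can be sketched as a windowed search over a timestamped transcript. This is an assumption-laden illustration: the publication does not specify the window size or how entity names are detected, so the fixed vocabulary here stands in for what would more plausibly be named-entity recognition.

```python
def candidate_labels(annotation_time, transcript, entity_vocab, window=5.0):
    """Collect entity names from utterances spoken within `window`
    seconds of an annotation being drawn. `transcript` is a list of
    (timestamp_seconds, text) pairs; `entity_vocab` is a toy stand-in
    for a real entity detector."""
    labels = []
    for timestamp, text in transcript:
        if abs(timestamp - annotation_time) <= window:
            for word in text.lower().split():
                if word in entity_vocab and word not in labels:
                    labels.append(word)   # keep first-mention order
    return labels

transcript = [(12.0, "see that compressor on the left"),
              (14.5, "yes the compressor next to the valve"),
              (40.0, "now check the filter")]
# Annotation drawn at t = 13.0 s; only nearby utterances contribute.
labels = candidate_labels(13.0, transcript,
                          entity_vocab={"compressor", "valve", "filter"})
```

Each candidate would then be accepted or rejected (e.g. by a human reviewer) before becoming the true label of the physical entity.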
-
Patent number: 11275946
Abstract: Receiving data recorded during a remotely-assisted augmented reality session held between a remote user and a local user, the data including: drawn graphic annotations that are associated with locations in a 3D model representing a physical scene adjacent the local user, and a transcript of a conversation between the remote and local users. Generating at least one candidate label for each location, each candidate label being textually descriptive of a physical entity that is located, in the physical scene, at a location corresponding to the respective location in the 3D model. The generation of each candidate label includes: for each graphic annotation, automatically analyzing the transcript to detect at least one potential entity name that was mentioned, by the remote and/or local user, temporally adjacent the drawing of the respective graphic annotation. Accepting or rejecting each candidate label, to define it as a true label of the respective physical entity.
Type: Grant
Filed: September 15, 2020
Date of Patent: March 15, 2022
Assignee: International Business Machines Corporation
Inventors: Adi Raz Goldfarb, Erez Lev Meir Bilgory
-
Patent number: 11145129
Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes, registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform, translating, using the transform, positions of points of interest (POIs) indicated on the object during the session, to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes, and generating a content package including the model, model POI positions, and POI annotations provided during the session.
Type: Grant
Filed: November 13, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
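The trajectory-matching registration above — aligning the session's camera positions with the model's camera positions to obtain a session-to-model transform — is a classic rigid-alignment problem. The patent does not publish its exact method, so the sketch below uses the well-known Kabsch algorithm as a plausible stand-in (the function name is invented, and any scale difference between the two frames is ignored):

```python
import numpy as np

def estimate_session_to_model_transform(session_traj, model_traj):
    """Estimate the rigid transform (R, t) mapping camera positions
    recorded during the AR session onto the corresponding camera
    positions in the reconstructed 3D model (Kabsch algorithm)."""
    cs = session_traj.mean(axis=0)
    cm = model_traj.mean(axis=0)
    H = (session_traj - cs).T @ (model_traj - cm)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cs
    return R, t

# Synthetic check: the "model" trajectory is the session trajectory
# rotated 90 degrees about z and shifted by (1, 2, 3).
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
session = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])
model = session @ R_true.T + t_true
R_est, t_est = estimate_session_to_model_transform(session, model)
# Translate a session POI into the model frame using the transform.
poi_model = R_est @ np.array([1.0, 0.0, 0.0]) + t_est
```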
-
Publication number: 20210142570
Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes, registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform, translating, using the transform, positions of points of interest (POIs) indicated on the object during the session, to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes, and generating a content package including the model, model POI positions, and POI annotations provided during the session.
Type: Application
Filed: November 13, 2019
Publication date: May 13, 2021
Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur