Patents by Inventor Oded Dubovsky
Oded Dubovsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11651538
Abstract: An approach for creating instructional 3D animated videos without requiring physical access to the object or to the object's CAD models is disclosed. The approach allows the user to submit images or a video of the object along with knowledge of the required procedure, including the instructions and text annotations to be added. The approach builds a 3D model based on the submitted images and/or video, and generates the instructional animated video from the 3D model and the required procedure.
Type: Grant
Filed: March 17, 2021
Date of Patent: May 16, 2023
Assignee: International Business Machines Corporation
Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
-
Patent number: 11620796
Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
Type: Grant
Filed: March 1, 2021
Date of Patent: April 4, 2023
Assignee: International Business Machines Corporation
Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
-
Patent number: 11501502
Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
Type: Grant
Filed: March 19, 2021
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
-
Publication number: 20220301266
Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
Type: Application
Filed: March 19, 2021
Publication date: September 22, 2022
Inventors: Nancy Anne Greco, Oded Dubovsky, Adi Raz Goldfarb, John L. Ward
-
Publication number: 20220301247
Abstract: An approach for creating instructional 3D animated videos without requiring physical access to the object or to the object's CAD models is disclosed. The approach allows the user to submit images or a video of the object along with knowledge of the required procedure, including the instructions and text annotations to be added. The approach builds a 3D model based on the submitted images and/or video, and generates the instructional animated video from the 3D model and the required procedure.
Type: Application
Filed: March 17, 2021
Publication date: September 22, 2022
Inventors: Adi Raz Goldfarb, Tal Drory, Oded Dubovsky
-
Publication number: 20220291981
Abstract: In an approach for deducing a root cause analysis model, a processor trains a classifier based on labeled data to identify entities. A processor trains the classifier with a first taxonomy and ontology. A processor uses the classifier to classify each component from one or more augmented reality peer assistance sessions into a class. A processor generates a root cause analysis model based on the identified entities and the classified components.
Type: Application
Filed: March 9, 2021
Publication date: September 15, 2022
Inventors: Adi Raz Goldfarb, Oded Dubovsky, Erez Lev Meir Bilgory
-
Publication number: 20220277524
Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
Type: Application
Filed: March 1, 2021
Publication date: September 1, 2022
Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
-
Patent number: 11145129
Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes; registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform; translating, using the transform, positions of points of interest (POIs) indicated on the object during the session to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes; and generating a content package including the model, model POI positions, and POI annotations provided during the session.
Type: Grant
Filed: November 13, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
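The registration and translation steps the abstract describes can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration: a real implementation would solve for a full rigid or similarity transform between the two trajectories, whereas this sketch estimates a translation only, via centroid difference. All function names and values are illustrative, not taken from the patent.

```python
def register_trajectories(session_traj, model_traj):
    """Simplified sketch of the registration step: derive a session-to-model
    transform by matching the camera trajectory recorded during the AR
    session against the camera trajectory stored in the 3D model.
    Here the transform is just a 3D translation (centroid difference)."""
    def centroid(pts):
        return tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))
    cs, cm = centroid(session_traj), centroid(model_traj)
    return tuple(cm[i] - cs[i] for i in range(3))  # session -> model offset

def translate_pois(pois, transform):
    """Apply the session-to-model transform to points of interest (POIs)
    marked during the session, yielding POI positions in the model's axes."""
    return [tuple(p[i] + transform[i] for i in range(3)) for p in pois]
```

For example, two camera trajectories that differ only by an offset yield that offset as the transform, which then maps each session POI into the model's coordinate frame.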
-
Publication number: 20210142570
Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of an object-including scene using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes; registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions for determining a session-to-model transform; translating, using the transform, positions of points of interest (POIs) indicated on the object during the session to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes; and generating a content package including the model, model POI positions, and POI annotations provided during the session.
Type: Application
Filed: November 13, 2019
Publication date: May 13, 2021
Inventors: Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
-
Patent number: 10984341
Abstract: A computer implemented method of detecting complex user activities, comprising using processor(s) in each of a plurality of consecutive time intervals for: obtaining sensory data from wearable inertial sensor(s) worn by a user; computing an action score for continuous physical action(s) performed by the user, where the continuous physical action(s), extending over multiple time intervals, are indicated by repetitive motion pattern(s) identified by analyzing the sensory data; computing a gesture score for brief gesture(s) performed by the user, where the brief gesture(s), bounded in a single basic time interval, are identified by analyzing the sensory data; aggregating the action and gesture scores to produce an interval activity score of predefined activity(s) for a current time interval; adding the interval activity score to a cumulative activity score accumulated during a predefined number of preceding time intervals; and identifying the predefined activity(s) when the cumulative activity score exceeds a predefined threshold.
Type: Grant
Filed: September 27, 2017
Date of Patent: April 20, 2021
Assignee: International Business Machines Corporation
Inventors: Oded Dubovsky, Alexander Zadorojniy, Sergey Zeltyn
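The cumulative scoring scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the window size, threshold, and equal weighting of action and gesture scores are hypothetical choices.

```python
from collections import deque

def detect_activity(interval_scores, window=5, threshold=3.0):
    """Sketch of the described scheme: per-interval action and gesture
    scores are aggregated into an interval activity score, the interval
    score is added to a cumulative score accumulated over a fixed number
    of preceding intervals, and the activity is identified whenever the
    cumulative score exceeds a predefined threshold."""
    recent = deque(maxlen=window)   # scores from the preceding intervals
    detections = []
    for i, (action_score, gesture_score) in enumerate(interval_scores):
        interval_activity = action_score + gesture_score  # aggregate scores
        recent.append(interval_activity)
        if sum(recent) > threshold:
            detections.append(i)    # activity identified at this interval
    return detections
```

With this shape, brief gestures contribute in a single interval while sustained actions accumulate across several, so only the combination crosses the threshold.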
-
Patent number: 10878297
Abstract: Embodiments may provide visual recognition techniques that provide improved recognition accuracy and reduced use of computing resources in cases where only a small set of examples is used to train an unlimited number of recognized categories. For example, in an embodiment, a computer-implemented method of visual recognition may comprise generating a plurality of personal embedding models, each personal embedding model including categories relating to a person, an object, or a subject, wherein at least some of the personal embedding models include at least some different categories; training the plurality of personal embedding models using image training data having a limited number of examples of each category, wherein the examples of each category are used to train more than one category in more than one of the personal embedding models; recognizing images from image data using the plurality of personal embedding models; and outputting information relating to the recognized images.
Type: Grant
Filed: August 29, 2018
Date of Patent: December 29, 2020
Assignee: International Business Machines Corporation
Inventors: Oded Dubovsky, Leonid Karlinsky, Joseph Shtok
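One common way to recognize categories from only a few examples, in the spirit of the "personal embedding model" the abstract describes, is nearest-centroid classification over embeddings. The sketch below assumes some upstream feature extractor has already produced embedding vectors (not shown); the function names and data are illustrative, not from the patent.

```python
import math

def build_personal_model(examples):
    """Hypothetical sketch of a per-person embedding model trained from a
    small set of examples: store the centroid of each category's example
    embeddings. `examples` maps category name -> list of embedding vectors."""
    return {cat: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for cat, vecs in examples.items()}

def recognize(model, embedding):
    """Classify an embedded image by its nearest category centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda cat: dist(model[cat], embedding))
```

Because each model stores only one centroid per category, adding a new category costs a handful of examples rather than retraining, which matches the small-example, many-category setting the abstract targets.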
-
Patent number: 10712930
Abstract: Electronic devices that include a force sensor input and input user interface elements are described. The force sensors may be located to detect force on the display of the electronic device. The force sensor, alone or in combination with one or more other sensors such as capacitive touch sensors, allows for interaction with the user interface input on the device. By using a hold detection logic with a pressure level sensitive sensor, user interface elements can be manipulated or values can be assigned to input elements.
Type: Grant
Filed: May 28, 2017
Date of Patent: July 14, 2020
Assignee: International Business Machines Corporation
Inventors: Oded Dubovsky, Yossi Mesika
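The "hold detection logic with a pressure level sensitive sensor" can be sketched as follows: a hold is recognized once the sensor reading stays at or above a pressure level for a minimum number of consecutive samples. The level and sample count below are illustrative assumptions, not values from the patent.

```python
def detect_hold(pressure_samples, level=0.6, hold_samples=3):
    """Hypothetical sketch of hold detection on a pressure-sensitive sensor:
    returns the sample index at which a hold is first recognized, i.e. the
    point where the reading has stayed >= `level` for `hold_samples`
    consecutive samples, or None if no hold occurs."""
    run = 0
    for i, p in enumerate(pressure_samples):
        run = run + 1 if p >= level else 0  # count consecutive high-pressure samples
        if run >= hold_samples:
            return i  # hold recognized; UI element can now be manipulated
    return None
```

Requiring several consecutive samples distinguishes a deliberate press-and-hold from a momentary tap or sensor noise.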
-
Publication number: 20200074247
Abstract: Embodiments may provide visual recognition techniques that provide improved recognition accuracy and reduced use of computing resources in cases where only a small set of examples is used to train an unlimited number of recognized categories. For example, in an embodiment, a computer-implemented method of visual recognition may comprise generating a plurality of personal embedding models, each personal embedding model including categories relating to a person, an object, or a subject, wherein at least some of the personal embedding models include at least some different categories; training the plurality of personal embedding models using image training data having a limited number of examples of each category, wherein the examples of each category are used to train more than one category in more than one of the personal embedding models; recognizing images from image data using the plurality of personal embedding models; and outputting information relating to the recognized images.
Type: Application
Filed: August 29, 2018
Publication date: March 5, 2020
Inventors: Oded Dubovsky, Leonid Karlinsky, Joseph Shtok
-
Patent number: 10353385
Abstract: A method enhances an emergency reporting system for controlling equipment. A message receiver receives an electronic message from a person. The electronic message is a report regarding an emergency event related to equipment. One or more processors identify a profile of the person who sent the electronic message, and determine a bias of the person regarding the emergency event based on the person's profile. One or more processors amend, based on the bias of the person, a content of the electronic message to create a modified electronic message regarding the emergency event. The message receiver detects that the modified electronic message came from an unauthorized source. A local controller on the equipment, in response to detecting that the modified electronic message came from the unauthorized source, automatically isolates the equipment from remote control signals for controlling the equipment.
Type: Grant
Filed: September 28, 2018
Date of Patent: July 16, 2019
Assignee: International Business Machines Corporation
Inventors: Aaron K. Baughman, Oded Dubovsky, James R. Kozloski, Boaz Mizrachi, Clifford A. Pickover
-
Publication number: 20190095814
Abstract: A computer implemented method of detecting complex user activities, comprising using processor(s) in each of a plurality of consecutive time intervals for: obtaining sensory data from wearable inertial sensor(s) worn by a user; computing an action score for continuous physical action(s) performed by the user, where the continuous physical action(s), extending over multiple time intervals, are indicated by repetitive motion pattern(s) identified by analyzing the sensory data; computing a gesture score for brief gesture(s) performed by the user, where the brief gesture(s), bounded in a single basic time interval, are identified by analyzing the sensory data; aggregating the action and gesture scores to produce an interval activity score of predefined activity(s) for a current time interval; adding the interval activity score to a cumulative activity score accumulated during a predefined number of preceding time intervals; and identifying the predefined activity(s) when the cumulative activity score exceeds a predefined threshold.
Type: Application
Filed: September 27, 2017
Publication date: March 28, 2019
Inventors: Oded Dubovsky, Alexander Zadorojniy, Sergey Zeltyn
-
Publication number: 20190033841
Abstract: A method enhances an emergency reporting system for controlling equipment. A message receiver receives an electronic message from a person. The electronic message is a report regarding an emergency event related to equipment. One or more processors identify a profile of the person who sent the electronic message, and determine a bias of the person regarding the emergency event based on the person's profile. One or more processors amend, based on the bias of the person, a content of the electronic message to create a modified electronic message regarding the emergency event. The message receiver detects that the modified electronic message came from an unauthorized source. A local controller on the equipment, in response to detecting that the modified electronic message came from the unauthorized source, automatically isolates the equipment from remote control signals for controlling the equipment.
Type: Application
Filed: September 28, 2018
Publication date: January 31, 2019
Inventors: Aaron K. Baughman, Oded Dubovsky, James R. Kozloski, Boaz Mizrachi, Clifford A. Pickover
-
Patent number: 10162345
Abstract: A method enhances an emergency reporting system for controlling equipment. A message receiver receives an electronic message from a person. The electronic message is a report regarding an emergency event. One or more processors identify a profile of the person who sent the electronic message, and determine a bias of the person regarding the emergency event based on the person's profile. One or more processors amend, based on the bias of the person, a content of the electronic message to create a modified electronic message regarding the emergency event. The modified electronic message is consolidated with other modified electronic messages into a bias-corrected report about the emergency event. One or more processors then automatically adjust equipment based on the bias-corrected report about the emergency event.
Type: Grant
Filed: April 21, 2015
Date of Patent: December 25, 2018
Assignee: International Business Machines Corporation
Inventors: Aaron K. Baughman, Oded Dubovsky, James R. Kozloski, Boaz Mizrachi, Clifford A. Pickover
-
Publication number: 20180341384
Abstract: Electronic devices that include a force sensor input and input user interface elements are described. The force sensors may be located to detect force on the display of the electronic device. The force sensor, alone or in combination with one or more other sensors such as capacitive touch sensors, allows for interaction with the user interface input on the device. By using a hold detection logic with a pressure level sensitive sensor, user interface elements can be manipulated or values can be assigned to input elements.
Type: Application
Filed: May 28, 2017
Publication date: November 29, 2018
Inventors: Oded Dubovsky, Yossi Mesika
-
Patent number: 10044816
Abstract: Systems and methods are provided for location-based Domain Name System (DNS) service discovery using a central DNS server in which network resources are aggregated by geographic location (e.g., subnets) and defined using DNS service discovery records that are mapped to corresponding geographic locations.
Type: Grant
Filed: February 14, 2017
Date of Patent: August 7, 2018
Assignee: International Business Machines Corporation
Inventors: Yoni Amishav, Eric J. Barkie, Oded Dubovsky, Benjamin L. Fletcher
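The core idea of mapping service-discovery records to geographic locations keyed by subnet can be sketched as follows. This is a hypothetical illustration of the lookup, not the patented system: the record strings and subnets are invented, and a real deployment would serve these as DNS-SD records rather than Python data.

```python
import ipaddress

def discover_services(client_ip, location_records):
    """Sketch of location-based service discovery: a central server
    aggregates service records by subnet (standing in for geographic
    location) and answers each client with the records mapped to the
    subnet its source address falls in."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, records in location_records.items():
        if addr in ipaddress.ip_network(subnet):
            return records      # services local to the client's location
    return []                   # no location-specific services known
```

A client on the 10.0.1.0/24 subnet would thus discover only the printers (or other services) registered for that location, without any per-client configuration.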
-
Patent number: 10044815
Abstract: Systems and methods are provided for location-based Domain Name System (DNS) service discovery using a central DNS server in which network resources are aggregated by geographic location (e.g., subnets) and defined using DNS service discovery records that are mapped to corresponding geographic locations.
Type: Grant
Filed: February 14, 2017
Date of Patent: August 7, 2018
Assignee: International Business Machines Corporation
Inventors: Yoni Amishav, Eric J. Barkie, Oded Dubovsky, Benjamin L. Fletcher