Patents by Inventor Raffi A. Bedikian

Raffi A. Bedikian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143871
    Abstract: The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
    Type: Application
    Filed: January 5, 2024
    Publication date: May 2, 2024
    Applicant: Ultrahaptics IP Two Limited
    Inventors: David S. HOLZ, Kevin HOROWITZ, Raffi BEDIKIAN, Hua YANG
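    The clustering-and-refinement loop in the abstract above can be sketched roughly as follows, assuming a greedy region-growing rule for "surface normal directions and adjacency" and a simple blend step for refining matched segments. The thresholds, the nearest-centroid matching, and the blend factor are illustrative choices, not the patented method:

```python
import numpy as np

def cluster_by_normals(points, normals, angle_deg=20.0, adjacency=0.02):
    """Greedy region growing: a point joins a cluster when it is adjacent
    (within `adjacency` metres) to a member and its surface normal lies
    within `angle_deg` of the cluster seed's normal."""
    cos_min = np.cos(np.radians(angle_deg))
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        members, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            for j in list(unassigned):
                if (np.linalg.norm(points[j] - points[i]) < adjacency
                        and normals[j] @ normals[seed] > cos_min):
                    unassigned.remove(j)
                    members.append(j)
                    frontier.append(j)
        clusters.append(np.array(members))
    return clusters

def refine_segments(segment_positions, points, clusters, blend=0.5):
    """Match each cluster to the nearest segment of the predictive hand
    model and pull that segment toward the cluster centroid."""
    refined = np.array(segment_positions, dtype=float)
    for idx in clusters:
        centroid = points[idx].mean(axis=0)
        k = int(np.argmin(np.linalg.norm(refined - centroid, axis=1)))
        refined[k] = (1 - blend) * refined[k] + blend * centroid
    return refined
```

    Re-clustering each frame and calling refine_segments again gives the observe-cluster-match-refine loop the abstract outlines.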
  • Publication number: 20240077950
    Abstract: During control of a user interface via free-space motions of a hand or other suitable control object, switching between control modes can be facilitated by tracking the control object's movements relative to, and its penetration of, a virtual control construct (such as a virtual surface construct). The technology disclosed includes determining from the motion information whether a motion of the control object with respect to the virtual control construct is an engagement gesture, such as a virtual mouse click or other control device operation. The position of the virtual control construct can be updated, continuously or from time to time, based on the control object's location.
    Type: Application
    Filed: November 9, 2023
    Publication date: March 7, 2024
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Raffi BEDIKIAN, Jonathan MARSDEN, Keith MERTENS, David HOLZ
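    The engagement mechanism described in the entry above can be pictured as a virtual plane kept at an offset behind the tracked fingertip: pushing through the plane reads as a click. The offset, the smoothing factor, and the convention that decreasing z means moving toward the display are assumptions for illustration:

```python
class VirtualPlane:
    """A virtual control construct kept a fixed offset behind a smoothed
    estimate of the fingertip position. Penetrating the plane is treated
    as an engagement gesture (a 'virtual mouse click')."""

    def __init__(self, offset=0.05, smoothing=0.9):
        self.offset = offset        # metres behind the smoothed tip
        self.smoothing = smoothing  # EMA factor for plane drift
        self.plane_z = None
        self.engaged = False

    def update(self, tip_z):
        """Feed one tracked fingertip depth; returns an event or None."""
        if self.plane_z is None:
            self.plane_z = tip_z - self.offset
        # Let the plane drift with the hand only while disengaged, so a
        # deliberate push is measured against a stable surface.
        if not self.engaged:
            self.plane_z = (self.smoothing * self.plane_z
                            + (1 - self.smoothing) * (tip_z - self.offset))
        was_engaged = self.engaged
        self.engaged = tip_z < self.plane_z   # tip pushed through the plane
        if self.engaged and not was_engaged:
            return "engage"      # e.g. mouse-down
        if was_engaged and not self.engaged:
            return "disengage"   # e.g. mouse-up
        return None
```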
  • Patent number: 11914792
    Abstract: The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: February 27, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Kevin A. Horowitz, Matias Perez, Raffi Bedikian, David S. Holz, Gabriel A. Hare
  • Publication number: 20240061511
    Abstract: Embodiments of display control based on dynamic user interactions generally include capturing a plurality of temporally sequential images of the user, or a body part or other control object manipulated by the user, and computationally analyzing the images to recognize a gesture performed by the user. In some embodiments, the gesture is identified as an engagement gesture, and compared with reference gestures from a library of reference gestures. In some embodiments, a degree of completion of the recognized engagement gesture is determined, and the display contents are modified in accordance therewith. In some embodiments, a dominant gesture is computationally determined from among a plurality of user gestures, and an action displayed on the device is based on the dominant gesture.
    Type: Application
    Filed: July 7, 2023
    Publication date: February 22, 2024
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Raffi BEDIKIAN, Jonathan MARSDEN, Keith MERTENS, David HOLZ, Maxwell SILLS, Matias PEREZ, Gabriel HARE, Ryan JULIAN
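    One way to make "degree of completion" and "dominant gesture" from the abstract above concrete: score an observed trajectory against each reference gesture by how many of its waypoints have been traversed in order, then take the best-scoring gesture as dominant. The waypoint representation and tolerance are illustrative assumptions:

```python
import numpy as np

def completion_degree(observed, template, tol=0.05):
    """Fraction of the reference gesture's waypoints the observed
    trajectory has traversed, in order, within a distance tolerance."""
    j = 0
    for p in observed:
        if j < len(template) and np.linalg.norm(p - template[j]) < tol:
            j += 1
    return j / len(template)

def dominant_gesture(observed, library):
    """Return the library gesture with the highest completion degree,
    plus all scores (e.g. to drive partial-completion display feedback)."""
    scores = {name: completion_degree(observed, tpl)
              for name, tpl in library.items()}
    return max(scores, key=scores.get), scores
```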
  • Patent number: 11874970
    Abstract: During control of a user interface via free-space motions of a hand or other suitable control object, switching between control modes can be facilitated by tracking the control object's movements relative to, and its penetration of, a virtual control construct (such as a virtual surface construct). The position of the virtual control construct can be updated, continuously or from time to time, based on the control object's location.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: January 16, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Raffi Bedikian, Jonathan Marsden, Keith Mertens, David Holz
  • Patent number: 11868687
    Abstract: The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: January 9, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Kevin Horowitz, Raffi Bedikian, Hua Yang
  • Patent number: 11861873
    Abstract: One implementation involves a device receiving a stream of pixel events output by an event camera. The device derives an input image by accumulating pixel events for multiple event camera pixels. The device generates a gaze characteristic using the derived input image as input to a neural network trained to determine the gaze characteristic. The neural network is configured in multiple stages. The first stage of the neural network is configured to determine an initial gaze characteristic, e.g., an initial pupil center, using reduced resolution input(s). The second stage of the neural network is configured to determine adjustments to the initial gaze characteristic using location-focused input(s), e.g., using only a small input image centered around the initial pupil center. The determinations at each stage are thus efficiently made using relatively compact neural network configurations. The device tracks a gaze of the eye based on the gaze characteristic.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: January 2, 2024
    Assignee: Apple Inc.
    Inventors: Thomas Gebauer, Raffi Bedikian
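    A rough PyTorch sketch of the two-stage arrangement in the abstract above: a compact coarse network predicts an initial pupil centre from a downsampled event-accumulation image, and a second compact network predicts a small correction from a crop centred on that estimate. All layer sizes, the 4x downsampling, and the crop size are invented for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseStage(nn.Module):
    """Stage 1: initial pupil centre, in normalised [0, 1] image
    coordinates, from a reduced-resolution input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )

    def forward(self, img):
        return torch.sigmoid(self.net(img))

class RefineStage(nn.Module):
    """Stage 2: a small, bounded correction to the stage-1 estimate,
    computed only from a location-focused crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )

    def forward(self, crop):
        return 0.1 * torch.tanh(self.net(crop))

def estimate_pupil(image, coarse, refine, crop_px=32):
    """image: (1, 1, H, W) frame accumulated from event-camera pixel events."""
    H, W = image.shape[-2:]
    small = F.interpolate(image, scale_factor=0.25, mode="bilinear",
                          align_corners=False)
    c = coarse(small)[0]                              # coarse (x, y)
    cx = int(c[0].item() * W)
    cy = int(c[1].item() * H)
    x0 = max(0, min(W - crop_px, cx - crop_px // 2))
    y0 = max(0, min(H - crop_px, cy - crop_px // 2))
    crop = image[..., y0:y0 + crop_px, x0:x0 + crop_px]
    return c + refine(crop)[0]                        # refined (x, y)
```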
  • Publication number: 20230419439
    Abstract: Various implementations disclosed herein include a method performed at an electronic device including one or more processors, a non-transitory memory, an image sensor, and a display device. The method includes obtaining, via the image sensor, an input image that includes an object. The method includes obtaining depth information characterizing the object, wherein the depth information characterizes a first distance between the image sensor and a portion of the object. The method includes determining a distance warp map for the input image based on a function of the depth information and a first offset value characterizing an estimated distance between eyes of a user and the display device. The method includes setting an operational parameter for the electronic device based on the distance warp map and generating, by the electronic device set to the operational parameter, a warped image from the input image.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Inventors: Tobias Eble, Pedro Manuel Da Silva Quelhas, Raffi A. Bedikian
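    The abstract above leaves the warp function unspecified; the sketch below invents a simple one, a per-pixel magnification of (depth + offset) / depth, purely to make the data flow from depth map and eye-display offset to warped image concrete:

```python
import numpy as np

def distance_warp_map(depth, eye_display_offset):
    """Hypothetical distance warp map: per-pixel magnification compensating
    for the extra optical path between the user's eyes and the display.
    The real function is not given in the abstract; this ratio is a
    stand-in."""
    return (depth + eye_display_offset) / np.maximum(depth, 1e-6)

def warp_image(image, warp):
    """Radially rescale each pixel's sampling position about the image
    centre by the per-pixel warp factor (nearest-neighbour resampling)."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(cy + (ys - cy) / warp, 0, h - 1).astype(int)
    src_x = np.clip(cx + (xs - cx) / warp, 0, w - 1).astype(int)
    return image[src_y, src_x]
```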
  • Patent number: 11854308
    Abstract: The technology disclosed initializes a new hand that enters the field of view of a gesture recognition system using a parallax detection module. The parallax detection module determines candidate regions of interest (ROI) for a given input hand image and computes depth, rotation and position information for the candidate ROI. Then, for each of the candidate ROI, an ImagePatch, which includes the hand, is extracted from the original input hand image to minimize processing of low-information pixels. Further, a hand classifier neural network is used to determine which ImagePatch most resembles a hand. For the qualified, most hand-like ImagePatch, a 3D virtual hand is initialized with depth, rotation and position matching that of the qualified ImagePatch.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: December 26, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Jonathan Marsden, Raffi Bedikian, David Samuel Holz
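    The initialization pipeline in the entry above reduces to: propose candidate ROIs, extract a patch per ROI, keep the patch a hand classifier scores highest, and seed a 3D hand from it. In this sketch, hand_classifier and pose_from_roi are hypothetical stand-ins for the classifier network and parallax computation named in the abstract:

```python
import numpy as np

def extract_patch(image, roi):
    """Crop an ImagePatch (x, y, w, h) from the full input image so later
    stages never touch low-information pixels."""
    x, y, w, h = roi
    return image[y:y + h, x:x + w]

def init_hand(image, candidate_rois, hand_classifier, pose_from_roi):
    """Score each candidate patch with a hand/not-hand classifier and
    initialise a 3D virtual hand from the best-scoring one."""
    patches = [extract_patch(image, roi) for roi in candidate_rois]
    scores = [hand_classifier(p) for p in patches]
    best = int(np.argmax(scores))
    depth, rotation, position = pose_from_roi(candidate_rois[best])
    return {"depth": depth, "rotation": rotation,
            "position": position, "patch": patches[best]}
```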
  • Patent number: 11854242
    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: December 26, 2023
    Assignee: Apple Inc.
    Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
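    A toy version of the update-and-share loop from the claim above: the device personalises a copy of the global saliency model from the user's reactions and returns only the weight delta to the server. The linear model, the +/-1 reaction labels, and the learning rate are illustrative assumptions:

```python
import numpy as np

class SaliencyModel:
    """Toy linear saliency model: score = w . features(image region).
    Stands in for whatever model the server actually distributes."""
    def __init__(self, weights):
        self.w = np.asarray(weights, dtype=float)

    def score(self, features):
        return float(self.w @ np.asarray(features, dtype=float))

def personalize(global_model, observations, lr=0.1):
    """One on-device personalisation pass: nudge weights toward regions
    the user reacted to (label +1) and away from ignored ones (label -1).
    Returns the personal model and the delta sent back to the server for
    inclusion in the global model."""
    w = global_model.w.copy()
    for features, reaction in observations:   # reaction in {+1, -1}
        w += lr * reaction * np.asarray(features, dtype=float)
    return SaliencyModel(w), w - global_model.w
```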
  • Patent number: 11841920
    Abstract: The technology disclosed introduces two types of neural networks: “master” or “generalists” networks and “expert” or “specialists” networks. Both, master networks and expert networks, are fully connected neural networks that take a feature vector of an input hand image and produce a prediction of the hand pose. Master networks and expert networks differ from each other based on the data on which they are trained. In particular, master networks are trained on the entire data set. In contrast, expert networks are trained only on a subset of the entire dataset. In regards to the hand poses, master networks are trained on the input image data representing all available hand poses comprising the training data (including both real and simulated hand images).
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: December 12, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Jonathan Marsden, Raffi Bedikian, David Samuel Holz
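    The master/expert split described above can be mimicked with any model class; the sketch below uses least-squares regressors purely to show the training-data routing (master on everything, one expert per subset) and one plausible way to combine their predictions, which the abstract does not specify:

```python
import numpy as np

def train_linear(X, y):
    """Least-squares stand-in for the fully connected networks in the
    abstract; the data routing, not the model class, is the point."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fit_master_and_experts(X, y, subset_labels):
    """X: (n, d) feature vectors of hand images; y: (n, k) pose targets."""
    master = train_linear(X, y)                   # trained on everything
    experts = {}
    for s in np.unique(subset_labels):            # e.g. one pose family each
        mask = subset_labels == s
        experts[s] = train_linear(X[mask], y[mask])
    return master, experts

def predict(x, subset, master, experts, trust_expert=0.7):
    """Blend the specialist for this pose family with the generalist.
    The blending rule is an assumption; the abstract only establishes
    that both kinds of network produce predictions."""
    e = experts.get(subset)
    if e is None:
        return x @ master
    return trust_expert * (x @ e) + (1 - trust_expert) * (x @ master)
```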
  • Patent number: 11783444
    Abstract: Various implementations disclosed herein include a method performed at an electronic device including one or more processors, a non-transitory memory, an image sensor, and a display device. The method includes obtaining, via the image sensor, an input image that includes an object. The method includes obtaining depth information characterizing the object, wherein the depth information characterizes a first distance between the image sensor and a portion of the object. The method includes determining a distance warp map for the input image based on a function of the depth information and a first offset value characterizing an estimated distance between eyes of a user and the display device. The method includes setting an operational parameter for the electronic device based on the distance warp map and generating, by the electronic device set to the operational parameter, a warped image from the input image.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Tobias Eble, Pedro Manuel Da Silva Quelhas, Raffi A. Bedikian
  • Publication number: 20230314798
    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
    Type: Application
    Filed: October 7, 2022
    Publication date: October 5, 2023
    Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
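    The first method in the abstract above pairs each glint with the light source driving it via the sources' known intensity modulation. A minimal demodulation sketch, with a deliberately crude "eye tracking characteristic" (the glint centroid) standing in for a proper cornea-model fit:

```python
import numpy as np

def identify_glint_source(timestamps, intensities, led_freqs):
    """Match an observed glint's intensity time series to the LED driving
    it by correlating against each LED's modulation frequency."""
    t = np.asarray(timestamps, dtype=float)
    x = np.asarray(intensities, dtype=float)
    x = x - x.mean()
    scores = [abs(np.sum(x * np.exp(-2j * np.pi * f * t)))
              for f in led_freqs]
    return int(np.argmax(scores))      # index of the responsible LED

def eye_characteristic_from_glints(glint_positions):
    """Placeholder for 'determining an eye tracking characteristic':
    the centroid of the identified glints."""
    return np.mean(np.asarray(glint_positions, dtype=float), axis=0)
```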
  • Patent number: 11740705
    Abstract: A method and system are provided for controlling a machine using gestures. The method includes sensing a variation of position of a control object using an imaging system, determining, from the variation, one or more primitives describing a characteristic of a control object moving in space, comparing the one or more primitives to one or more gesture templates in a library of gesture templates, selecting, based on a result of the comparing, one or more gesture templates corresponding to the one or more primitives, and providing at least one gesture template of the selected one or more gesture templates as an indication of a command to issue to a machine under control responsive to the variation.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: August 29, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Raffi Bedikian, Jonathan Marsden, Keith Mertens, David Holz, Maxwell Sills, Matias Perez, Gabriel Hare, Ryan Julian
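    A compact sketch of the sense-compare-select loop in the claim above, assuming direction-plus-speed primitives and mean primitive distance as the comparison metric (the patent leaves both open). Templates are stored as primitive sequences of the same form:

```python
import numpy as np

def primitives(positions):
    """Reduce a tracked control-object trajectory to simple motion
    primitives: per-step unit direction and speed."""
    deltas = np.diff(np.asarray(positions, dtype=float), axis=0)
    speed = np.linalg.norm(deltas, axis=1)
    direction = deltas / np.maximum(speed[:, None], 1e-9)
    return np.hstack([direction, speed[:, None]])

def match_gesture(positions, template_library, threshold=1.0):
    """Compare observed primitives against each gesture template
    (resampled to the same length) and return the best match as the
    command to issue, or None if nothing is close enough."""
    obs = primitives(positions)
    best_name, best_cost = None, np.inf
    for name, tpl in template_library.items():
        idx = np.linspace(0, len(tpl) - 1, len(obs)).astype(int)
        cost = np.mean(np.linalg.norm(obs - tpl[idx], axis=1))
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name if best_cost < threshold else None
```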
  • Patent number: 11714880
    Abstract: The technology disclosed performs hand pose estimation on a so-called “joint-by-joint” basis. So, when a plurality of estimates for the 28 hand joints are received from a plurality of expert networks (and from master experts in some high-confidence scenarios), the estimates are analyzed at a joint level and a final location for each joint is calculated based on the plurality of estimates for a particular joint. This is a novel solution discovered by the technology disclosed because nothing in the field of art determines hand pose estimates at such granularity and precision. Regarding granularity and precision, because hand pose estimates are computed on a joint-by-joint basis, this allows the technology disclosed to detect in real time even the minutest and most subtle hand movements, such as a bend/yaw/tilt/roll of a segment of a finger or a tilt of an occluded finger, as demonstrated supra in the Experimental Results section of this application.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: August 1, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Jonathan Marsden, Raffi Bedikian, David Samuel Holz
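    Joint-by-joint fusion, as opposed to picking one expert's whole pose, might look like the following. A confidence-weighted per-joint average stands in for the actual fusion rule; the abstract fixes the granularity (28 joints analysed independently), not the rule:

```python
import numpy as np

def fuse_joint_estimates(estimates, confidences=None):
    """Fuse hand-pose estimates joint by joint.

    estimates:   (n_experts, n_joints, 3) joint locations, e.g. n_joints=28
    confidences: (n_experts, n_joints) optional per-joint weights

    Each joint's final location is computed from all experts' estimates
    for that joint alone, so one expert can win the thumb while another
    wins an occluded pinky."""
    est = np.asarray(estimates, dtype=float)
    if confidences is None:
        confidences = np.ones(est.shape[:2])
    w = np.asarray(confidences, dtype=float)[..., None]  # (experts, joints, 1)
    return (est * w).sum(axis=0) / w.sum(axis=0)         # (n_joints, 3)
```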
  • Patent number: 11703944
    Abstract: In one implementation, a method includes: while presenting reference CGR content, obtaining a request from a user to invoke a target state for the user; generating, based on a user model and the reference CGR content, modified CGR content to invoke the target state for the user; presenting the modified CGR content; after presenting the modified CGR content, determining a resultant state of the user; in accordance with a determination that the resultant state of the user corresponds to the target state for the user, updating the user model to indicate that the modified CGR content successfully invoked the target state for the user; and in accordance with a determination that the resultant state of the user does not correspond to the target state for the user, updating the user model to indicate that the modified CGR content did not successfully invoke the target state for the user.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: July 18, 2023
    Assignee: Apple Inc.
    Inventors: Gutemberg B. Guerra Filho, Ian M. Richter, Raffi A. Bedikian
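    The update logic in the abstract above is essentially: present modified CGR content aimed at a target state, observe the user's resultant state, and record success or failure in the user model. A minimal tally-based sketch, with the success-rate selection rule added as an assumption beyond what the abstract states:

```python
from collections import defaultdict

class UserModel:
    """Per-user tally of which content modifications have invoked which
    target states; a stand-in for the richer model the abstract implies."""

    def __init__(self):
        # (modification, target_state) -> [successes, trials], mild prior
        self.stats = defaultdict(lambda: [1, 2])

    def choose_modification(self, target_state, candidates):
        """Pick the modification with the best observed success rate at
        invoking this target state."""
        def rate(m):
            s, n = self.stats[(m, target_state)]
            return s / n
        return max(candidates, key=rate)

    def record(self, modification, target_state, resultant_state):
        """Update the model per the abstract's two branches: success if
        the resultant state corresponds to the target state, else failure."""
        s, n = self.stats[(modification, target_state)]
        hit = resultant_state == target_state
        self.stats[(modification, target_state)] = [s + int(hit), n + 1]
        return hit
```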
  • Publication number: 20230214458
    Abstract: The technology disclosed performs hand pose estimation on a so-called “joint-by-joint” basis. So, when a plurality of estimates for the 28 hand joints are received from a plurality of expert networks (and from master experts in some high-confidence scenarios), the estimates are analyzed at a joint level and a final location for each joint is calculated based on the plurality of estimates for a particular joint. This is a novel solution discovered by the technology disclosed because nothing in the field of art determines hand pose estimates at such granularity and precision. Regarding granularity and precision, because hand pose estimates are computed on a joint-by-joint basis, this allows the technology disclosed to detect in real time even the minutest and most subtle hand movements, such as a bend/yaw/tilt/roll of a segment of a finger or a tilt of an occluded finger, as demonstrated supra in the Experimental Results section of this application.
    Type: Application
    Filed: July 10, 2019
    Publication date: July 6, 2023
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Jonathan MARSDEN, Raffi BEDIKIAN, David Samuel HOLZ
  • Publication number: 20230205321
    Abstract: The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.
    Type: Application
    Filed: February 17, 2023
    Publication date: June 29, 2023
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Kevin A. HOROWITZ, Matias PEREZ, Raffi BEDIKIAN, David S. HOLZ, Gabriel A. HARE
  • Publication number: 20230169236
    Abstract: The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 1, 2023
    Applicant: Ultrahaptics IP Two Limited
    Inventors: David S. HOLZ, Kevin HOROWITZ, Raffi BEDIKIAN, Hua YANG
  • Publication number: 20230136669
    Abstract: One implementation involves a device receiving a stream of pixel events output by an event camera. The device derives an input image by accumulating pixel events for multiple event camera pixels. The device generates a gaze characteristic using the derived input image as input to a neural network trained to determine the gaze characteristic. The neural network is configured in multiple stages. The first stage of the neural network is configured to determine an initial gaze characteristic, e.g., an initial pupil center, using reduced resolution input(s). The second stage of the neural network is configured to determine adjustments to the initial gaze characteristic using location-focused input(s), e.g., using only a small input image centered around the initial pupil center. The determinations at each stage are thus efficiently made using relatively compact neural network configurations. The device tracks a gaze of the eye based on the gaze characteristic.
    Type: Application
    Filed: August 5, 2022
    Publication date: May 4, 2023
    Inventors: Thomas GEBAUER, Raffi BEDIKIAN