Patents by Inventor Raffi A. Bedikian

Raffi A. Bedikian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11586292
    Abstract: The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: February 21, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Kevin A. Horowitz, Matias Perez, Raffi Bedikian, David S. Holz, Gabriel A. Hare
  • Patent number: 11568105
    Abstract: The technology disclosed relates to simplifying the updating of a predictive model by clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: January 31, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Kevin Horowitz, Raffi Bedikian, Hua Yang
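
The clustering step in this abstract can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the patented method: it assumes per-point unit normals are already estimated, and the k-nearest-neighbor adjacency and angle threshold are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_by_normals(points, normals, k=8, angle_thresh_deg=15.0):
    """Greedy region growing: group points whose unit surface normals agree
    within angle_thresh_deg and that are adjacent (k-nearest neighbors)."""
    _, neighbors = cKDTree(points).query(points, k=k + 1)  # [0] is the point itself
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = np.full(len(points), -1, dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in neighbors[i][1:]:
                # grow only across adjacent points with similar normal directions
                if labels[j] == -1 and np.dot(normals[i], normals[j]) >= cos_thresh:
                    labels[j] = cluster
                    stack.append(j)
        cluster += 1
    return labels  # one cluster id per observed point
```

Each resulting cluster can then be matched to a segment of the predictive hand model, per the abstract.
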
  • Patent number: 11532137
    Abstract: In some implementations, a method includes: determining a first set of usage patterns associated with a physical object within the physical environment; obtaining a first objective for an objective-effectuator (OE) instantiated in a computer-generated reality (CGR) environment, wherein the first objective is associated with a representation of the physical object; obtaining a first directive for the OE that limits actions for performance by the OE to achieve the first objective to the first set of usage patterns associated with the physical object; generating a first set of actions, for performance by the OE, in order to achieve the first objective as limited by the first directive, wherein the first set of actions corresponds to a first subset of the first set of usage patterns associated with the physical object; and presenting the OE performing the first set of actions on the representation of the physical object overlaid on the physical environment.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: December 20, 2022
    Assignee: Apple Inc.
    Inventors: Gutemberg B. Guerra Filho, Ian M. Richter, Raffi A. Bedikian
  • Patent number: 11474348
    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: October 18, 2022
    Assignee: Apple Inc.
    Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
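
The event-message model in this abstract lends itself to a small sketch. Everything here is assumed for illustration: the EventMessage fields, the sensor grid size, and the idea of attributing per-pixel event rates to a source modulated at a known frequency are stand-ins, not Apple's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EventMessage:
    x: int          # column of the light sensor that fired
    y: int          # row of the light sensor that fired
    polarity: bool  # True if intensity increased, False if it decreased
    t: float        # timestamp in seconds

def candidate_glints(events, freq_hz, window_s=0.05, rel_tol=0.2, grid=(260, 346)):
    """Count events per pixel over a short window and keep pixels whose
    event rate matches a source blinking at freq_hz (roughly two polarity
    events per modulation cycle). Returns candidate glint pixel coords."""
    counts = np.zeros(grid)
    for ev in events:
        if ev.t <= window_s:
            counts[ev.y, ev.x] += 1
    expected = 2 * freq_hz * window_s      # events expected from that source
    ys, xs = np.where(np.abs(counts - expected) <= rel_tol * expected)
    return list(zip(xs.tolist(), ys.tolist()))
```
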
  • Publication number: 20220300085
    Abstract: During control of a user interface via free-space motions of a hand or other suitable control object, switching between control modes can be facilitated by tracking the control object's movements relative to, and its penetration of, a virtual control construct (such as a virtual surface construct). The position of the virtual control construct can be updated, continuously or from time to time, based on the control object's location.
    Type: Application
    Filed: June 6, 2022
    Publication date: September 22, 2022
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Raffi BEDIKIAN, Jonathan MARSDEN, Keith MERTENS, David HOLZ
  • Patent number: 11442539
    Abstract: One implementation involves a device receiving a stream of pixel events output by an event camera. The device derives an input image by accumulating pixel events for multiple event camera pixels. The device generates a gaze characteristic using the derived input image as input to a neural network trained to determine the gaze characteristic. The neural network is configured in multiple stages. The first stage of the neural network is configured to determine an initial gaze characteristic, e.g., an initial pupil center, using reduced resolution input(s). The second stage of the neural network is configured to determine adjustments to the initial gaze characteristic using location-focused input(s), e.g., using only a small input image centered around the initial pupil center. The determinations at each stage are thus efficiently made using relatively compact neural network configurations. The device tracks a gaze of the eye based on the gaze characteristic.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: September 13, 2022
    Assignee: Apple Inc.
    Inventors: Thomas Gebauer, Raffi Bedikian
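
The two-stage, coarse-to-fine structure described above can be sketched in a few lines of PyTorch. The layer sizes, crop size, and output normalization are illustrative assumptions; the patent does not disclose this particular architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineGaze(nn.Module):
    """Sketch of a two-stage gaze network: stage 1 regresses an initial
    pupil center from a downsampled image; stage 2 refines it from a
    small crop centered on that initial estimate."""
    def __init__(self, crop=32):
        super().__init__()
        self.crop = crop
        self.stage1 = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
        self.stage2 = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, image):                      # image: (B, 1, H, W)
        B, _, H, W = image.shape
        small = F.interpolate(image, scale_factor=0.25, mode='bilinear',
                              align_corners=False)  # reduced-resolution input
        center = self.stage1(small).sigmoid()      # normalized (x, y) in [0, 1]
        # location-focused input: a fixed window around the initial center
        cx = (center[:, 0] * (W - self.crop)).long()
        cy = (center[:, 1] * (H - self.crop)).long()
        crops = torch.stack([image[b, :, cy[b]:cy[b] + self.crop,
                                   cx[b]:cx[b] + self.crop] for b in range(B)])
        delta = self.stage2(crops) * (self.crop / max(H, W))
        return center + delta                      # refined pupil center
```

Keeping both stages small reflects the abstract's point that each determination can be made with a relatively compact network.
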
  • Publication number: 20220236808
    Abstract: A method and system are provided for controlling a machine using gestures. The method includes sensing a variation of position of a control object using an imaging system, determining, from the variation, one or more primitives describing a characteristic of a control object moving in space, comparing the one or more primitives to one or more gesture templates in a library of gesture templates, selecting, based on a result of the comparing, one or more gesture templates corresponding to the one or more primitives, and providing at least one gesture template of the selected one or more gesture templates as an indication of a command to issue to a machine under control responsive to the variation.
    Type: Application
    Filed: February 7, 2022
    Publication date: July 28, 2022
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Raffi Bedikian, Jonathan Marsden, Keith Mertens, David Holz, Maxwell Sills, Matias Perez, Gabriel Hare, Ryan Julian
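
A minimal sketch of the compare-and-select loop this abstract describes, assuming primitives are numeric sequences (e.g., per-frame velocities of the control object). The resampling and Euclidean distance are illustrative stand-ins for the comparison step.

```python
import numpy as np

def match_gesture(primitives, templates, max_dist=0.5):
    """Compare an observed primitive sequence against a library of gesture
    templates and return the best-matching gesture name, or None."""
    def resample(seq, n=32):
        idx = np.linspace(0, len(seq) - 1, n)
        return np.array([seq[int(round(i))] for i in idx])

    obs = resample(np.asarray(primitives, dtype=float))
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        d = np.linalg.norm(obs - resample(np.asarray(template, dtype=float)))
        if d < best_dist:
            best_name, best_dist = name, d
    # the selected template indicates the command to issue to the machine
    return best_name if best_dist <= max_dist * len(obs) else None
```
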
  • Patent number: 11373271
    Abstract: A method includes obtaining an image via an image sensor, and identifying, within the image, a physical object represented by a portion of the image. The method includes determining, based on the image, a visual feature characterizing the physical object. The method includes warping, based on the visual feature satisfying a first feature criterion, the portion of the image according to a first warping function that is based on the first feature criterion and a distance between the electronic device and a reference point. The method includes warping, based on the visual feature satisfying a second feature criterion that is different from the first feature criterion, the portion of the image according to a second warping function that is based on the second feature criterion and the distance between the electronic device and the reference point.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: June 28, 2022
    Assignee: Apple Inc.
    Inventors: Pedro Manuel Da Silva Quelhas, Moinul Khan, Raffi A. Bedikian, Katharina Buckl, Mohamed Selim Ben Himane
  • Publication number: 20220197373
    Abstract: In one implementation, a method includes: while presenting reference CGR content, obtaining a request from a user to invoke a target state for the user; generating, based on a user model and the reference CGR content, modified CGR content to invoke the target state for the user; presenting the modified CGR content; after presenting the modified CGR content, determining a resultant state of the user; in accordance with a determination that the resultant state of the user corresponds to the target state for the user, updating the user model to indicate that the modified CGR content successfully invoked the target state for the user; and in accordance with a determination that the resultant state of the user does not correspond to the target state for the user, updating the user model to indicate that the modified CGR content did not successfully invoke the target state for the user.
    Type: Application
    Filed: March 9, 2022
    Publication date: June 23, 2022
    Inventors: Gutemberg B. Guerra Filho, Ian M. Richter, Raffi A. Bedikian
  • Patent number: 11353962
    Abstract: During control of a user interface via free-space motions of a hand or other suitable control object, switching between control modes can be facilitated by tracking the control object's movements relative to, and its penetration of, a virtual control construct (such as a virtual surface construct). The position of the virtual control construct can be updated, continuously or from time to time, based on the control object's location.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: June 7, 2022
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Raffi Bedikian, Jonathan Marsden, Keith Mertens, David Holz
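
The virtual control construct idea, which appears in both this patent and publication 20220300085 above, can be sketched as a plane that trails the control object and toggles an engaged mode when pierced. The offset, smoothing factor, and z-axis convention below are illustrative assumptions, not the claimed construct.

```python
class VirtualControlSurface:
    """Sketch: a virtual plane hovers a fixed offset behind the control
    object; mode switching happens when the object penetrates it."""
    def __init__(self, offset=0.05, smoothing=0.1):
        self.offset = offset        # plane sits 5 cm behind the fingertip
        self.smoothing = smoothing  # how quickly the plane follows the hand
        self.plane_z = None
        self.engaged = False

    def update(self, tip_z):
        """Call once per tracking frame with the fingertip z position
        (z decreasing toward the display). Returns the current mode."""
        if self.plane_z is None:
            self.plane_z = tip_z - self.offset
        if not self.engaged:
            # while disengaged, the plane is updated to trail the object
            target = tip_z - self.offset
            self.plane_z += self.smoothing * (target - self.plane_z)
        self.engaged = tip_z < self.plane_z  # penetration => "touch" mode
        return self.engaged
```

Freezing the plane while engaged, as above, is one simple way to keep a pressed state stable until the object clearly withdraws.
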
  • Publication number: 20220157083
    Abstract: Various implementations disclosed herein include devices, systems, and methods that identify a gesture based on event camera data and frame-based camera data (e.g., for a CGR environment). In some implementations at an electronic device having a processor, event camera data is obtained corresponding to light (e.g., IR light) reflected from a physical environment and received at an event camera. In some implementations, frame-based camera data is obtained corresponding to light (e.g., visible light) reflected from the physical environment and received at a frame-based camera. In some implementations, a subset of the event camera data is identified based on the frame-based camera data, and a gesture (e.g., of a person in the physical environment) is identified based on the subset of event camera data. In some implementations, a path (e.g., of a hand) is identified by tracking a grouping of blocks of event camera events in the subset of event camera data.
    Type: Application
    Filed: February 1, 2022
    Publication date: May 19, 2022
    Inventors: Sai Harsha Jandhyala, Raffi A. Bedikian
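
A toy sketch of the event-subset selection this publication describes: the frame-based camera supplies a hand bounding box, and only events inside it are kept. The (x, y, t) event tuples and bbox format are assumed for illustration.

```python
def events_in_hand_region(events, bbox):
    """Select the subset of event camera events that fall inside the hand
    bounding box detected in the frame-based image.
    events: iterable of (x, y, t); bbox: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    return [(x, y, t) for (x, y, t) in events if x0 <= x < x1 and y0 <= y < y1]
```
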
  • Patent number: 11307650
    Abstract: In one implementation, a method for generating computer-generated reality (CGR) content in order to invoke a target state of a user based on a user model is performed at an electronic device. The method includes: while presenting reference CGR content via the one or more displays, obtaining a request from a user to invoke a target state for the user; generating, based on a user model associated with the user and the reference CGR content, modified CGR content to invoke the target state for the user, wherein the user model provides projected reactions to CGR content; and presenting, via the one or more displays, the modified CGR content. In some implementations, obtaining the request from the user to invoke the target state for the user includes determining whether the user provided informed consent to store user information in the user model associated with the user of the device.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: April 19, 2022
    Assignee: Apple Inc.
    Inventors: Gutemberg B. Guerra Filho, Ian M. Richter, Raffi A. Bedikian
  • Publication number: 20220092331
    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
    Type: Application
    Filed: September 22, 2021
    Publication date: March 24, 2022
    Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
  • Publication number: 20220083880
    Abstract: The technology disclosed relates to manipulating a virtual object. In particular, it relates to detecting a hand in a three-dimensional (3D) sensory space and generating a predictive model of the hand, and using the predictive model to track motion of the hand. The predictive model includes positions of calculation points of fingers, thumb and palm of the hand. The technology disclosed relates to dynamically selecting at least one manipulation point proximate to a virtual object based on the motion tracked by the predictive model and positions of one or more of the calculation points, and manipulating the virtual object by interaction between at least some of the calculation points of the predictive model and the dynamically selected manipulation point.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 17, 2022
    Applicant: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Raffi Bedikian, Adrian Gasinski, Maxwell Sills, Hua Yang, Gabriel Hare
  • Patent number: 11270409
    Abstract: A method includes obtaining an image via an image sensor. The method includes determining a first perceptual quality value that is associated with a first portion of the image. The method includes determining a first image perceptual quality warping function that is based on the first perceptual quality value and an image warping map. The first image perceptual quality warping function is characterized by a first warping granularity level that is a function of the first perceptual quality value. The method includes warping the first portion of the image according to the first image perceptual quality warping function.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: March 8, 2022
    Assignee: Apple Inc.
    Inventors: Raffi A. Bedikian, Mohamed Selim Ben Himane, Pedro Manuel Da Silva Quelhas, Moinul Khan, Katharina Buckl, Jim C. Chou, Julien Monat Rodier
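
The quality-dependent warping granularity in this abstract can be sketched as a simple mapping plus a crude resolution-reducing warp. The linear mapping and the downsample-then-upsample warp below are illustrative, not the claimed warping functions.

```python
import numpy as np

def warp_granularity(quality, coarse=16, fine=2):
    """Map a perceptual quality value in [0, 1] to a warping granularity
    level: lower-quality (less perceptually important) regions get a
    coarser warp."""
    return int(round(coarse - quality * (coarse - fine)))

def warp_tile(tile, granularity):
    """Downsample-then-upsample an image tile by the chosen granularity,
    a crude stand-in for a resolution-reducing warping function."""
    h, w = tile.shape[:2]
    small = tile[::granularity, ::granularity]
    return np.repeat(np.repeat(small, granularity, axis=0),
                     granularity, axis=1)[:h, :w]
```
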
  • Patent number: 11243612
    Abstract: Embodiments of display control based on dynamic user interactions generally include capturing a plurality of temporally sequential images of the user, or a body part or other control object manipulated by the user, and computationally analyzing the images to recognize a gesture performed by the user. In some embodiments, a scale indicative of an actual gesture distance traversed in performance of the gesture is identified, and a movement or action is displayed on the device based, at least in part, on a ratio between the identified scale and the scale of the displayed movement. In some embodiments, a degree of completion of the recognized gesture is determined, and the display contents are modified in accordance therewith. In some embodiments, a dominant gesture is computationally determined from among a plurality of user gestures, and an action displayed on the device is based on the dominant gesture.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 8, 2022
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Raffi Bedikian, Jonathan Marsden, Keith Mertens, David Holz, Maxwell Sills, Matias Perez, Gabriel Hare, Ryan Julian
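
Two of the abstract's ideas, scale-ratio display mapping and degree of completion, reduce to small formulas. A sketch with illustrative units (meters in, pixels out):

```python
def display_movement(gesture_dist_m, gesture_scale_m, display_range_px):
    """Map an actual gesture distance onto the display using the ratio
    between the identified gesture scale and the display's scale."""
    return display_range_px * (gesture_dist_m / gesture_scale_m)

def gesture_progress(positions_m, template_length_m):
    """Degree of completion: fraction of the gesture's expected path
    length that the control object has traversed so far."""
    traveled = sum(abs(b - a) for a, b in zip(positions_m, positions_m[1:]))
    return min(traveled / template_length_m, 1.0)

# A 0.10 m swipe against a 0.25 m gesture scale moves 40% of a 500 px range:
assert display_movement(0.10, 0.25, 500) == 200.0
```
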
  • Publication number: 20220012951
    Abstract: In some implementations, a method includes: identifying a plurality of subsets associated with a physical environment; determining a set of spatial characteristics for each of the plurality of subsets, wherein a first set of spatial characteristics characterizes dimensions of a first subset and a second set of spatial characteristics characterizes dimensions of a second subset; generating an adapted first extended reality (XR) content portion for the first subset based at least in part on the first set of spatial characteristics; generating an adapted second XR content portion for the second subset based at least in part on the second set of spatial characteristics; and generating one or more navigation options that allow a user to traverse between the first and second subsets based on the first and second sets of spatial characteristics.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Gutemberg B. Guerra Filho, Raffi A. Bedikian, Ian M. Richter
  • Publication number: 20220003994
    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
    Type: Application
    Filed: September 21, 2021
    Publication date: January 6, 2022
    Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
  • Patent number: 11182685
    Abstract: The technology disclosed relates to manipulating a virtual object. In particular, it relates to detecting a hand in a three-dimensional (3D) sensory space and generating a predictive model of the hand, and using the predictive model to track motion of the hand. The predictive model includes positions of calculation points of fingers, thumb and palm of the hand. The technology disclosed relates to dynamically selecting at least one manipulation point proximate to a virtual object based on the motion tracked by the predictive model and positions of one or more of the calculation points, and manipulating the virtual object by interaction between at least some of the calculation points of the predictive model and the dynamically selected manipulation point.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: November 23, 2021
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Raffi Bedikian, Adrian Gasinski, Maxwell Sills, Hua Yang, Gabriel Hare
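
The dynamic selection of a manipulation point proximate to the virtual object can be sketched as choosing the centroid of nearby calculation points. This is an illustrative proxy, not the disclosed selection logic.

```python
import numpy as np

def select_manipulation_point(calc_points, object_center, radius):
    """Pick a manipulation point near the virtual object: the centroid of
    the hand's calculation points (fingers, thumb, palm) that fall within
    `radius` of the object's center, or None if none are close enough."""
    pts = np.asarray(calc_points, dtype=float)
    near = pts[np.linalg.norm(pts - np.asarray(object_center), axis=1) <= radius]
    return near.mean(axis=0) if len(near) else None
```
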
  • Patent number: 11150469
    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints and determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: October 19, 2021
    Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia