Patents by Inventor Javier Fernandez Rico

Javier Fernandez Rico has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11947719
    Abstract: A method for predicting eye movement in a head mounted display (HMD). The method including tracking movement of an eye of a user with a gaze tracking system disposed in the HMD at a plurality of sample points. The method including determining velocity of the movement based on the movement of the eye. The method including determining that the eye of the user is in a saccade upon the velocity reaching a threshold velocity. The method including predicting a landing point on the display of the HMD corresponding to a direction of the eye for the saccade.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: April 2, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew Young, Javier Fernandez Rico
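The abstract above describes a velocity-threshold saccade detector plus landing-point prediction. A minimal sketch of that idea, assuming a simple degrees-per-second threshold and linear extrapolation (the names, threshold value, and extrapolation model are illustrative assumptions, not the patented method):

```python
SACCADE_VELOCITY_THRESHOLD = 300.0  # degrees/second, an assumed figure

def detect_saccade(samples):
    """samples: list of (t, x, y) gaze positions in degrees over time.
    Returns True once inter-sample velocity crosses the threshold."""
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if velocity >= SACCADE_VELOCITY_THRESHOLD:
            return True
    return False

def predict_landing_point(samples, lookahead):
    """Extrapolate the latest gaze direction `lookahead` seconds forward
    to estimate where on the display the saccade will land."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * lookahead, y1 + vy * lookahead)
```

Real gaze predictors fit the saccade's velocity profile rather than extrapolating linearly; the linear model is only the simplest stand-in.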
  • Patent number: 11934244
    Abstract: A warning is generated when a computer simulation controller is determined to have insufficient charge to permit use through an upcoming simulation sequence. Thus, responsive to a computer simulation having a first context and a computer simulation controller having a first voltage, a human-perceptible indication of low voltage is presented, whereas if the computer simulation has a second context typically requiring less input than the first context, no indication is presented if the controller has the same first voltage.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: March 19, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn Black, Michael Taylor, Javier Fernandez Rico
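The key idea in the abstract above is that the same voltage warrants a warning in a demanding context but not in a light one. A toy sketch, where the context names, demand values, and voltage figures are all assumptions for illustration:

```python
# Estimated input demand per simulation context
# (higher = drains the controller faster).
CONTEXT_DEMAND = {"boss_fight": 1.0, "cutscene": 0.2}

def should_warn(voltage, context, base_threshold=3.5):
    """Warn only when voltage falls below a threshold scaled by how much
    input the upcoming context is expected to require."""
    return voltage < base_threshold * CONTEXT_DEMAND.get(context, 0.5)
```

With these numbers, a controller at 3.3 V triggers a warning before a boss fight but not before a cutscene, mirroring the two-context behavior the abstract describes.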
  • Patent number: 11911704
    Abstract: Methods and systems are provided for providing real world assistance by a robot utility and interface device (RUID). A method provides for identifying a position of a user in a physical environment and a surface within the physical environment for projecting an interactive interface. The method also provides for moving to a location within the physical environment based on the position of the user and the surface for projecting the interactive interface. Moreover, the method provides for capturing a plurality of images of the interactive interface while the interactive interface is being interacted with by the user and for determining a selection of an input option made by the user.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: February 27, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Javier Fernandez Rico, Erik Beran, Michael Taylor, Ruxin Chen
  • Patent number: 11869237
    Abstract: An autonomous personal companion utilizing a method of object identification that relies on a hierarchy of object classifiers for categorizing one or more objects in a scene. The classifier hierarchy is composed of a set of root classifiers trained to recognize objects based on separate generic classes. Each root acts as the parent of a tree of child nodes, where each child node contains a more specific variant of its parent object classifier. The method covers walking the tree in order to classify an object based on more and more specific object features. The system further comprises an algorithm designed to minimize the number of object comparisons while allowing the system to concurrently categorize multiple objects in a scene.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: January 9, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sergey Bashkirov, Michael Taylor, Javier Fernandez-Rico
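The tree walk described in the abstract above can be sketched as follows. The `Node` structure and the predicate-based match test are illustrative assumptions standing in for trained classifiers:

```python
class Node:
    def __init__(self, label, matches, children=()):
        self.label = label        # class this node recognizes
        self.matches = matches    # predicate: does the object fit this class?
        self.children = children  # more specific variants of this class

def classify(roots, obj):
    """Return the most specific label whose classifiers accept obj along a
    root-to-leaf path. Remaining siblings are skipped as soon as one branch
    matches, which keeps the number of comparisons small."""
    for node in roots:
        if node.matches(obj):
            deeper = classify(node.children, obj)
            return deeper if deeper is not None else node.label
    return None
```

For example, with a generic "animal" root whose child tests for "dog", an object matching both gets the specific label "dog", while one matching only the root falls back to "animal".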
  • Publication number: 20240004466
    Abstract: A method for updating information for a graphics pipeline including executing in a first frame period an application on a CPU to generate primitives of a scene for a first video frame. Gaze tracking information is received in a second frame period for an eye of a user. In the second frame period a landing point on an HMD display is predicted at the CPU based at least on the gaze tracking information. A late update of the predicted landing point to a buffer accessible by the GPU is performed in the second frame period. Shader operations are performed in the GPU in the second frame period to generate pixel data based on the primitives and based on the predicted landing point, wherein the pixel data is stored into a frame buffer. The pixel data is scanned out in a third frame period from the frame buffer to the HMD.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 4, 2024
    Inventors: Andrew Young, Javier Fernandez Rico
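The ordering of work across the three frame periods in the abstract above can be laid out schematically. The frame-period numbering follows the abstract; reducing each stage to a log entry is an illustrative simplification, not the actual pipeline:

```python
def pipeline_timeline():
    """Schematic three-frame-period timeline: (period, unit, work)."""
    log = []
    # Frame period 1: CPU runs the application and emits scene primitives.
    log.append((1, "cpu", "generate primitives"))
    # Frame period 2: gaze data arrives, the CPU predicts the landing point,
    # and a late update pushes it to a GPU-visible buffer before shading.
    log.append((2, "cpu", "predict landing point from gaze"))
    log.append((2, "cpu->gpu", "late update of landing point"))
    log.append((2, "gpu", "shade pixels into frame buffer"))
    # Frame period 3: the frame buffer is scanned out to the HMD.
    log.append((3, "scanout", "frame buffer -> HMD"))
    return log
```

The point of the late update is visible in the ordering: the landing point predicted in period 2 reaches the GPU before shading in that same period, rather than waiting a full frame.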
  • Patent number: 11801446
    Abstract: A method for training a character for a game is described. The method includes facilitating a display of one or more scenes of the game. The one or more scenes include the character and virtual objects. The method further includes receiving input data for controlling the character by a user to interact with the virtual objects and analyzing the input data to identify interaction patterns for the character in the one or more scenes. The interaction patterns define inputs to train an artificial intelligence (AI) model associated with a user account of the user. The method includes enabling the character to interact with a new scene based on the AI model. The method includes tracking the interaction with the new scene by the character to perform additional training of the AI model.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 31, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Javier Fernandez Rico, Glenn Black
  • Patent number: 11762461
    Abstract: A method for updating information for a graphics pipeline including executing in a first frame period an application on a CPU to generate primitives of a scene for a first video frame. Gaze tracking information is received in a second frame period for an eye of a user. In the second frame period a landing point on an HMD display is predicted at the CPU based at least on the gaze tracking information. A late update of the predicted landing point to a buffer accessible by the GPU is performed in the second frame period. Shader operations are performed in the GPU in the second frame period to generate pixel data based on the primitives and based on the predicted landing point, wherein the pixel data is stored into a frame buffer. The pixel data is scanned out in a third frame period from the frame buffer to the HMD.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: September 19, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew Young, Javier Fernandez Rico
  • Publication number: 20230192104
    Abstract: The present technology pertains to generating hybrid scenario information for use during closed course testing of an autonomous vehicle (AV). Such hybrid scenario information may be generated by combining sensor data from an AV, simulated environment information, simulated object information, and information obtained using perceivable marker information of a perceivable marker perceived by the AV in the closed course environment. The hybrid scenario information may be provided to the AV during the closed course testing. The AV may respond to the hybrid scenario information by performing one or more AV control actions.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 22, 2023
    Inventors: Nestor Grace, Javier Fernandez Rico, Dogan Gidon, Diego Plascencia-Vega
  • Publication number: 20230150529
    Abstract: Systems and methods for dynamic sensor data adaptation using a deep learning loop are provided. A method includes classifying, using a discriminator model, a first object from first sensor data associated with a first sensing condition, wherein the discriminator model is trained for a second sensing condition different from the first sensing condition; generating, using a generator model in response to the discriminator model failing to classify the first object, second sensor data representing a second object comprising at least a modified element of the first object; classifying, using the discriminator model, the second object from the second sensor data; and adapting, based at least in part on a difference between the first object and the second object in response to the discriminator model successfully classifying the second object, a machine learning model associated with object classification for the first sensing condition.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Applicant: GM Cruise Holdings LLC
    Inventors: Richard Stenson, Javier Fernandez Rico
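The adaptation loop in the abstract above can be sketched with toy stand-ins: a discriminator trained for one sensing condition fails on data from another, a generator repeatedly modifies the data until classification succeeds, and the accumulated modification is what would drive adaptation. All three functions here are assumed stand-ins for trained models:

```python
def discriminator(brightness):
    """Stand-in classifier trained for daytime imagery: succeeds only
    when the sample is bright enough."""
    return "car" if brightness >= 0.5 else None

def generator(shift, step=0.1):
    """Stand-in generator: grows the modification by one step."""
    return shift + step

def adapt(brightness, max_steps=20):
    """Return the total brightness shift needed before the discriminator
    succeeds; a real system would fold this difference into retraining
    the model for the new sensing condition."""
    shift = 0.0
    for _ in range(max_steps):
        if discriminator(brightness + shift) is not None:
            return shift
        shift = generator(shift)
    return None
```

A dim nighttime sample (brightness 0.2) needs roughly a 0.3 shift before the daytime discriminator accepts it, and that difference is the adaptation signal; a bright sample needs none.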
  • Patent number: 11628368
    Abstract: Gamer friend devices can be detected by a gaming console and the gaming console user can be prompted to add the friend devices to a friend list, which automatically logs the friends on to the console without requiring them to manually log in. The gaming console may also be automatically configured with profile information of the friends.
    Type: Grant
    Filed: January 3, 2021
    Date of Patent: April 18, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Javier Fernandez Rico
  • Patent number: 11590416
    Abstract: “Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a latter image from that camera is captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: February 28, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
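The compositing step described in the abstract above can be sketched as follows: two cameras report feature points in their own local frames, a feature visible to both gives the relative offset, and the offset lets all points be merged into one map. A 2D translation-only model is an illustrative simplification (real SLAM registration solves for full pose):

```python
def merge_feature_maps(map_a, map_b):
    """map_a, map_b: {feature_id: (x, y)} in each camera's local frame.
    Uses one overlapping feature to register map_b into map_a's frame."""
    shared = set(map_a) & set(map_b)
    if not shared:
        raise ValueError("no overlapping feature points; cannot close the loop")
    fid = next(iter(shared))
    ax, ay = map_a[fid]
    bx, by = map_b[fid]
    dx, dy = ax - bx, ay - by            # translation from B's frame to A's
    merged = dict(map_a)
    for f, (x, y) in map_b.items():
        merged.setdefault(f, (x + dx, y + dy))
    return merged
```

For instance, if both cameras see a door but only camera B sees a tree, the door fixes the offset between frames and the tree lands at its correct position in the merged map.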
  • Publication number: 20230044972
    Abstract: A method for predicting eye movement in a head mounted display (HMD). The method including tracking movement of an eye of a user with a gaze tracking system disposed in the HMD at a plurality of sample points. The method including determining velocity of the movement based on the movement of the eye. The method including determining that the eye of the user is in a saccade upon the velocity reaching a threshold velocity. The method including predicting a landing point on the display of the HMD corresponding to a direction of the eye for the saccade.
    Type: Application
    Filed: October 18, 2022
    Publication date: February 9, 2023
    Inventors: Andrew Young, Javier Fernandez Rico
  • Patent number: 11568265
    Abstract: An autonomous personal companion executing a method including capturing data related to user behavior. Patterns of user behavior are identified in the data and classified using predefined patterns associated with corresponding predefined tags to generate a collected set of one or more tags. The collected set is compared to sets of predefined tags of a plurality of scenarios, where each scenario corresponds to one or more predefined patterns of user behavior and a corresponding set of predefined tags. A weight is assigned to each of the sets of predefined tags, wherein each weight defines a corresponding match quality between the collected set of tags and a corresponding set of predefined tags. The sets of predefined tags are sorted by weight in descending order. A matched scenario is selected for the collected set of tags that is associated with the matched set of predefined tags whose weight has the highest match quality.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: January 31, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Javier Fernandez-Rico, Sergey Bashkirov, Jaekwon Yoo, Ruxin Chen
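The match-and-sort step in the abstract above can be sketched directly: score each scenario's predefined tag set against the collected tags, sort by weight in descending order, and select the best match. Jaccard overlap is an assumed stand-in for the patent's match-quality weight:

```python
def match_scenario(collected, scenarios):
    """collected: set of observed tags; scenarios: {name: set of predefined
    tags}. Returns (best_scenario_name, weight) after sorting scenarios by
    match-quality weight in descending order."""
    def weight(tags):
        union = collected | tags
        return len(collected & tags) / len(union) if union else 0.0
    ranked = sorted(scenarios.items(), key=lambda kv: weight(kv[1]),
                    reverse=True)
    best, tags = ranked[0]
    return best, weight(tags)
```

With collected tags {tired, evening, sitting}, a "wind_down" scenario tagged {tired, evening} scores 2/3 and beats a "workout" scenario with no overlap.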
  • Patent number: 11565175
    Abstract: Force feedback is applied to a computer simulation controller, such as to a joystick element or analog trigger of the controller, depending on the context of the simulation.
    Type: Grant
    Filed: May 15, 2021
    Date of Patent: January 31, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn Black, Michael Taylor, Javier Fernandez Rico
  • Patent number: 11474599
    Abstract: A method for predicting eye movement in a head mounted display (HMD). The method including tracking movement of an eye of a user with a gaze tracking system disposed in the HMD at a plurality of sample points. The method including determining velocity of the movement based on the movement of the eye. The method including determining that the eye of the user is in a saccade upon the velocity reaching a threshold velocity. The method including predicting a landing point on the display of the HMD corresponding to a direction of the eye for the saccade.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: October 18, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew Young, Javier Fernandez Rico
  • Patent number: 11474620
    Abstract: A computer controller when in a normal orientation causes a first context of simulation play to be implemented. When inverted, the controller causes a second, different context of simulation play to be implemented.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: October 18, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn Black, Michael Taylor, Javier Fernandez Rico
  • Patent number: 11417042
    Abstract: Methods and systems are provided for generating animation for non-player characters (NPCs) in a game. The method includes operations for examining a scene for an NPC that is providing voice output. The method further includes operations for examining the voice output to identify an intensity modulation of the voice output. In addition, the method further includes processing the intensity modulation to predict body language signals (BLS) for the NPC. Moreover, the BLS are used to cause features of the NPC to react consistently with the emotional content of the intensity modulation identified in the voice output.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 16, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Javier Fernandez Rico
  • Publication number: 20220203248
    Abstract: Methods and systems are provided for providing real world assistance by a robot utility and interface device (RUID). A method provides for identifying a position of a user in a physical environment and a surface within the physical environment for projecting an interactive interface. The method also provides for moving to a location within the physical environment based on the position of the user and the surface for projecting the interactive interface. Moreover, the method provides for capturing a plurality of images of the interactive interface while the interactive interface is being interacted with by the user and for determining a selection of an input option made by the user.
    Type: Application
    Filed: January 11, 2022
    Publication date: June 30, 2022
    Inventors: Javier Fernandez Rico, Erik Beran, Michael Taylor, Ruxin Chen
  • Publication number: 20220179489
    Abstract: A method for updating information for a graphics pipeline including executing in a first frame period an application on a CPU to generate primitives of a scene for a first video frame. Gaze tracking information is received in a second frame period for an eye of a user. In the second frame period a landing point on an HMD display is predicted at the CPU based at least on the gaze tracking information. A late update of the predicted landing point to a buffer accessible by the GPU is performed in the second frame period. Shader operations are performed in the GPU in the second frame period to generate pixel data based on the primitives and based on the predicted landing point, wherein the pixel data is stored into a frame buffer. The pixel data is scanned out in a third frame period from the frame buffer to the HMD.
    Type: Application
    Filed: February 23, 2022
    Publication date: June 9, 2022
    Inventors: Andrew Young, Javier Fernandez Rico
  • Publication number: 20220101300
    Abstract: Image data from two different devices is used to identify a physical interaction between two users to authenticate a digital interaction between the users.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico