Patents by Inventor Javier Fernandez Rico

Javier Fernandez Rico has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200265411
    Abstract: Image data from two different devices is used to identify a physical interaction between two users to authenticate a digital interaction between the users.
    Type: Application
    Filed: May 8, 2020
    Publication date: August 20, 2020
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
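
As a rough illustration of the idea in the abstract above, the sketch below pairs timestamped gesture detections from two devices and authenticates the interaction only when both report the same gesture within a short window. All names and the 1.5-second skew tolerance are hypothetical, not taken from the filing.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    device_id: str
    user_id: str
    gesture: str      # e.g. "handshake", detected from image data
    timestamp: float  # seconds since epoch

def authenticate_interaction(a: GestureEvent, b: GestureEvent,
                             max_skew: float = 1.5) -> bool:
    """Confirm a digital interaction only if both devices saw the
    same physical gesture at (nearly) the same moment."""
    return (a.device_id != b.device_id
            and a.gesture == b.gesture
            and abs(a.timestamp - b.timestamp) <= max_skew)

# Example: both phones detected a handshake ~0.4 s apart.
ev1 = GestureEvent("phone-A", "alice", "handshake", 1000.0)
ev2 = GestureEvent("phone-B", "bob", "handshake", 1000.4)
print(authenticate_interaction(ev1, ev2))  # True
```
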
  • Patent number: 10732710
    Abstract: Player-to-player eye contact is used to establish a private chat channel in an augmented reality (AR) or virtual reality (VR) setting. Since maintaining eye contact requires agreement from both parties, it allows both players an equal amount of control when performing the mutual action. Eye tracking may be used for determining whether mutual eye contact has been established. In the case of AR, “inside out” eye tracking can be used, whereas in a VR setting only inside eye tracking need be used. Techniques are described to confirm and establish a channel once eye contact has been held.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: August 4, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
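
The filing itself publishes no code; the following is a minimal sketch of one way mutual eye contact could be detected, assuming each tick supplies both players' head positions and gaze directions and requiring a dwell time before the channel opens. The 5-degree tolerance and 60 Hz tick rate are illustrative.

```python
import numpy as np

def looking_at(origin, gaze_dir, target, tol_deg=5.0):
    """True if the gaze ray from origin points at target within tol_deg."""
    to_target = np.asarray(target, float) - np.asarray(origin, float)
    to_target /= np.linalg.norm(to_target)
    d = np.asarray(gaze_dir, float)
    d /= np.linalg.norm(d)
    angle = np.degrees(np.arccos(np.clip(d @ to_target, -1.0, 1.0)))
    return angle <= tol_deg

def mutual_eye_contact(frames, dwell_s=1.0, dt=1 / 60):
    """frames: iterable of (head_a, gaze_a, head_b, gaze_b) per tick.
    Reports True once eye contact has been held for dwell_s seconds."""
    held = 0.0
    for head_a, gaze_a, head_b, gaze_b in frames:
        mutual = (looking_at(head_a, gaze_a, head_b)
                  and looking_at(head_b, gaze_b, head_a))
        held = held + dt if mutual else 0.0
        if held >= dwell_s:
            return True   # both held contact: open the private channel
    return False

# Two players two metres apart, staring straight at each other for 90 ticks.
frames = [((0, 0, 0), (1, 0, 0), (2, 0, 0), (-1, 0, 0))] * 90
print(mutual_eye_contact(frames))   # True after 1 s at 60 Hz
```
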
  • Publication number: 20200171380
    Abstract: “Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
    Type: Application
    Filed: February 3, 2020
    Publication date: June 4, 2020
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
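
One generic way to realize the "overlap" step described above is to estimate the rigid transform between two cameras from feature points they both observe (the Kabsch algorithm) and then merge the clouds in a common frame. This is a sketch of that standard technique, not the patented method; the function names are illustrative.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Kabsch algorithm: rotation R and translation t mapping the N x 3
    array points_a onto points_b (matching feature points expressed in
    two cameras' coordinate frames)."""
    A, B = np.asarray(points_a, float), np.asarray(points_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def merge_clouds(cloud_a, cloud_b, R, t):
    """cloud: {feature_id: xyz}. Bring A into B's frame and merge,
    averaging any feature point both cameras observed ('overlap')."""
    merged = {fid: np.asarray(p, float) for fid, p in cloud_b.items()}
    for fid, p in cloud_a.items():
        p_in_b = R @ np.asarray(p, float) + t
        merged[fid] = (merged[fid] + p_in_b) / 2 if fid in merged else p_in_b
    return merged

A = {"f1": (0, 0, 0), "f2": (1, 0, 0), "f3": (0, 1, 0), "f4": (0, 0, 1)}
B = {k: (x + 2, y, z) for k, (x, y, z) in A.items()}   # pure translation
R, t = rigid_transform(list(A.values()), list(B.values()))
print(np.round(t, 3))    # ~[2. 0. 0.]
```
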
  • Patent number: 10664825
    Abstract: Image data from two different devices is used to identify a physical interaction between two users to authenticate a digital interaction between the users.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: May 26, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
  • Patent number: 10657701
    Abstract: Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: May 19, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Steven Osman, Javier Fernandez Rico, Ruxin Chen
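
A minimal sketch of the pose composition this abstract implies: the joining user's real-world pose, measured relative to the first HMD, is rotated and translated into the host's scene coordinates. This is a 2-D, yaw-only simplification with illustrative values.

```python
import numpy as np

def second_perspective(first_scene_pos, first_yaw, rel_pos, rel_yaw):
    """Place the joining user in the VR scene by composing the host's
    scene pose with the second HMD's real-world pose measured relative
    to the first HMD (2-D, yaw-only sketch)."""
    c, s = np.cos(first_yaw), np.sin(first_yaw)
    rot = np.array([[c, -s], [s, c]])
    scene_pos = np.asarray(first_scene_pos, float) + rot @ np.asarray(rel_pos, float)
    scene_yaw = first_yaw + rel_yaw
    return scene_pos, scene_yaw

# Host stands at (10, 5) facing +x; guest is 1 m to the host's left.
pos, yaw = second_perspective([10, 5], 0.0, [0, 1], 0.0)
print(pos, yaw)  # [10.  6.] 0.0
```
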
  • Patent number: 10549186
    Abstract: “Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: February 4, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
  • Publication number: 20200034610
    Abstract: Image data from two different devices is used to identify a physical interaction between two users to authenticate a digital interaction between the users.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 30, 2020
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
  • Publication number: 20190391637
    Abstract: Player-to-player eye contact is used to establish a private chat channel in an augmented reality (AR) or virtual reality (VR) setting. Since maintaining eye contact requires agreement from both parties, it allows both players an equal amount of control when performing the mutual action. Eye tracking may be used for determining whether mutual eye contact has been established. In the case of AR, “inside out” eye tracking can be used, whereas in a VR setting only inside eye tracking need be used. Techniques are described to confirm and establish a channel once eye contact has been held.
    Type: Application
    Filed: June 26, 2018
    Publication date: December 26, 2019
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
  • Publication number: 20190388781
    Abstract: “Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
    Type: Application
    Filed: June 26, 2018
    Publication date: December 26, 2019
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
  • Publication number: 20190392641
    Abstract: An augmented reality (AR) system adapts sounds based on the physical properties of real-world materials found in a user's environment, using material classification to drive sound modification. The sounds may emulate virtual objects impacting real-world objects in the AR space.
    Type: Application
    Filed: June 26, 2018
    Publication date: December 26, 2019
    Inventors: Michael Taylor, Glenn Black, Javier Fernandez Rico
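
As a sketch of "material classification to sound modification", the snippet below maps a classified material to a gain and low-pass cutoff and applies them to an impact sound. The acoustic numbers are invented for illustration.

```python
import math

# Hypothetical acoustic profiles per classified material: how much of an
# impact sound's energy survives (gain) and how bright it stays (low-pass
# cutoff). The numbers are invented for illustration.
MATERIAL_ACOUSTICS = {
    "carpet": {"gain": 0.3, "cutoff_hz": 800},
    "wood":   {"gain": 0.7, "cutoff_hz": 3000},
    "metal":  {"gain": 0.9, "cutoff_hz": 8000},
}

def adapt_impact_sound(samples, material, sample_rate=44100):
    """Shape a virtual object's impact sound to match the classified
    real-world material it strikes (gain + one-pole low-pass)."""
    prof = MATERIAL_ACOUSTICS.get(material, {"gain": 0.5, "cutoff_hz": 2000})
    alpha = 1.0 - math.exp(-2 * math.pi * prof["cutoff_hz"] / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)            # one-pole low-pass
        out.append(prof["gain"] * y)    # material-dependent loudness
    return out

click = [1.0] + [0.0] * 9               # a crude impact impulse
print([round(v, 3) for v in adapt_impact_sound(click, "carpet")[:3]])
```
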
  • Publication number: 20190354173
    Abstract: A method for predicting eye movement in a head mounted display (HMD). The method including tracking movement of an eye of a user with a gaze tracking system disposed in the HMD at a plurality of sample points. The method including determining velocity of the movement based on the movement of the eye. The method including determining that the eye of the user is in a saccade upon the velocity reaching a threshold velocity. The method including predicting a landing point on the display of the HMD corresponding to a direction of the eye for the saccade.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Andrew Young, Javier Fernandez Rico
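
A toy version of the velocity-threshold step is sketched below: angular gaze velocity is computed between samples, a saccade is flagged at a threshold, and a landing point is extrapolated. The 300 deg/s threshold and 50 ms horizon are placeholders, not values from the filing, and a real predictor would fit the saccade's velocity profile rather than extrapolate linearly.

```python
def predict_landing_point(samples, v_threshold=300.0):
    """samples: list of (t_seconds, x_deg, y_deg) gaze points.
    Flags a saccade when angular velocity crosses v_threshold (deg/s)
    and extrapolates the gaze direction to a predicted landing point."""
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        speed = (vx**2 + vy**2) ** 0.5
        if speed >= v_threshold:
            remaining = 0.05  # assume ~50 ms of saccade remains
            return x1 + vx * remaining, y1 + vy * remaining
    return None  # no saccade detected

gaze = [(0.000, 0.0, 0.0), (0.005, 0.2, 0.0), (0.010, 3.0, 0.5)]
print(predict_landing_point(gaze))   # saccade detected on the last pair
```
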
  • Publication number: 20190354174
    Abstract: A method for updating information for a graphics pipeline, including executing, in a first frame period, an application on a CPU to generate primitives of a scene for a first video frame. Gaze tracking information is received in a second frame period for an eye of a user. In the second frame period a landing point on an HMD display is predicted at the CPU based at least on the gaze tracking information. A late update of the predicted landing point to a buffer accessible by the GPU is performed in the second frame period. Shader operations are performed in the GPU in the second frame period to generate pixel data based on the primitives and based on the predicted landing point, wherein the pixel data is stored into a frame buffer. The pixel data is scanned out in a third frame period from the frame buffer to the HMD.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Andrew Young, Javier Fernandez Rico
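
The abstract describes a pipeline spread over successive frame periods with a late-updated gaze buffer; the sketch below mimics that schedule, with stub functions standing in for the real CPU and GPU work. Everything named here is illustrative.

```python
from collections import deque

# Stubs standing in for the real CPU/GPU work; names are illustrative.
def generate_primitives(scene):    return f"prims({scene})"
def predict_landing(gaze_sample):  return gaze_sample      # see prior entry
def shade(prims, foveate_at):      return f"pixels({prims} @ {foveate_at})"

def run_pipeline(scenes, gaze_samples):
    """Per frame period: scan out frame N-2, late-update the predicted
    gaze landing point, shade frame N-1 against it, simulate frame N."""
    primitives, frame_buffer = deque(), None
    for scene, gaze in zip(scenes, gaze_samples):
        if frame_buffer is not None:
            print("scan out:", frame_buffer)            # -> HMD display
        landing = predict_landing(gaze)                 # late update buffer
        if primitives:
            frame_buffer = shade(primitives.popleft(), foveate_at=landing)
        primitives.append(generate_primitives(scene))   # CPU simulates N

run_pipeline(["sceneA", "sceneB", "sceneC"], [(0, 0), (5, 2), (9, 3)])
```
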
  • Patent number: 10456682
    Abstract: An autonomous personal companion executing a method of projection including tracking within a reference system a current position and orientation of a gaming controller used for controlling game play of a gaming application played by a user, wherein the personal companion is configured to provide services to the user. The method includes receiving information related to the gaming application from a gaming console supporting the game play. The method includes generating content based on the information related to the gaming application. The method includes moving the autonomous personal companion to a location having a direct line of sight to the controller. The method includes projecting the content from the autonomous personal companion to a surface of the gaming controller.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: October 29, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Jeffrey Roger Stafford, Javier Fernandez-Rico
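
A simplified sketch of the line-of-sight step: candidate floor positions are tested against circular obstacles on a 2-D plan, and the nearest unobstructed spot is chosen. The geometry and names are assumptions for illustration.

```python
import math

def has_line_of_sight(p, target, obstacles):
    """True if the segment p -> target misses every circular obstacle
    (center, radius) on the 2-D floor plan."""
    (px, py), (tx, ty) = p, target
    dx, dy = tx - px, ty - py
    L2 = dx * dx + dy * dy
    for (cx, cy), r in obstacles:
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / L2))
        qx, qy = px + t * dx, py + t * dy   # closest point on segment
        if math.hypot(cx - qx, cy - qy) < r:
            return False
    return True

def choose_projection_spot(candidates, controller_pos, obstacles):
    """Pick the nearest candidate position from which the companion can
    project directly onto the controller."""
    visible = [c for c in candidates
               if has_line_of_sight(c, controller_pos, obstacles)]
    return min(visible,
               key=lambda c: math.hypot(c[0] - controller_pos[0],
                                        c[1] - controller_pos[1]),
               default=None)

obstacles = [((2.0, 0.0), 0.5)]                    # a couch at (2, 0)
spots = [(4.0, 0.0), (0.0, 2.0)]
print(choose_projection_spot(spots, (0.0, 0.0), obstacles))  # (0.0, 2.0)
```
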
  • Patent number: 10430018
    Abstract: Systems and methods for providing user tagging of content within a virtual scene are described. One of the methods includes sending data for display of a virtual environment on a head-mounted display. The virtual environment includes a virtual item. The method further includes receiving an indication of a selection associated with the virtual item. The method includes sending option data for allowing entry of content regarding the virtual item upon receiving the indication of the selection, receiving the content, associating the content with the virtual item, and sending tagged data for displaying a tag associated with the virtual item.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: October 1, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn Black, Jeffrey Roger Stafford, Javier Fernandez Rico
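
A minimal sketch of the tagging flow described above, with an in-memory store standing in for whatever the system actually uses; the class and its methods are hypothetical.

```python
class TagService:
    """Sketch of the flow: a selection on a virtual item returns option
    data for entering content, and submitted content is stored and
    re-displayed as a tag on that item."""
    def __init__(self):
        self.tags = {}                       # item_id -> list of tag dicts

    def on_item_selected(self, item_id):
        return {"item": item_id, "prompt": "Enter your note"}  # option data

    def submit_tag(self, item_id, content, author):
        self.tags.setdefault(item_id, []).append({"text": content,
                                                  "by": author})

    def tags_for(self, item_id):             # data sent to the HMD display
        return self.tags.get(item_id, [])

svc = TagService()
option = svc.on_item_selected("sword_01")
svc.submit_tag("sword_01", "Great reach, slow swing", "glenn")
print(svc.tags_for("sword_01"))
```
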
  • Patent number: 10416669
    Abstract: A system can determine that a simulated (e.g., virtual) weapon has hit a UAV. This can include determining a simulated position of the simulated weapon and the relative position of the UAV. The system can then determine an area of effect for the weapon and determine if the UAV was hit. The system can determine what components of the UAV were hit and to what degree. The system can then determine a simulated condition such as a damaged engine or wing. The system can then receive input for controlling the UAV and combine the input with the simulated condition to generate a control signal for the UAV. The control signal can include decreasing the power to a motor.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: September 17, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Javier Fernandez Rico, Glenn Black, Michael Taylor, Andrew Stephen Young
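
A toy version of the hit-and-degrade logic: distance to the blast center scales motor damage, and pilot throttle is then attenuated by each motor's remaining health. All constants are invented for illustration.

```python
import math

def apply_simulated_hit(weapon_pos, blast_radius, uav):
    """uav: {'pos': (x, y, z), 'motor_health': [four floats in 0..1]}.
    If the UAV is inside the weapon's area of effect, degrade motors in
    proportion to proximity."""
    d = math.dist(weapon_pos, uav["pos"])
    if d < blast_radius:
        damage = 1.0 - d / blast_radius      # 1 at the center, 0 at the edge
        uav["motor_health"] = [max(0.0, h - damage * 0.5)
                               for h in uav["motor_health"]]
    return uav

def motor_commands(throttle_input, uav):
    """Combine pilot input with the simulated condition."""
    return [throttle_input * h for h in uav["motor_health"]]

uav = {"pos": (0, 0, 10), "motor_health": [1.0] * 4}
apply_simulated_hit((0, 3, 10), blast_radius=6.0, uav=uav)
print(motor_commands(0.8, uav))   # reduced power on damaged motors
```
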
  • Patent number: 10388071
    Abstract: A method, system, computer readable media and cloud systems are provided for adjusting image data presented in a head mounted display (HMD). One method includes executing a virtual reality (VR) session for an HMD user. The VR session is configured to present image data to a display of the HMD. The image data is for a VR environment that includes a VR user controlled by the HMD user. The method further includes adjusting the image data presented on the display of the HMD with a cadence profile when the VR user is moved in the VR environment by the HMD user. The adjusting causes a movement of a camera view for the image data that is for the VR environment as presented on the display of the HMD. In some examples, the cadence profile substantially replicates a rhythmic movement of a person while moving in a real world environment.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: August 20, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Javier Fernandez Rico
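
A sketch of what a cadence profile might look like in practice: a periodic vertical bob and lateral sway applied to the camera view while the VR user moves. The frequencies and amplitudes are placeholders.

```python
import math

def cadence_offset(t, profile):
    """Vertical and lateral camera-view offsets replicating a person's
    rhythmic head motion while walking. profile: {'step_hz', 'bob_m',
    'sway_m'} -- all illustrative numbers."""
    phase = 2 * math.pi * profile["step_hz"] * t
    bob = profile["bob_m"] * abs(math.sin(phase))   # two bobs per stride
    sway = profile["sway_m"] * math.sin(phase / 2)  # one sway per stride
    return bob, sway

walk = {"step_hz": 1.8, "bob_m": 0.03, "sway_m": 0.02}
for t in (0.0, 0.14, 0.28):
    print(cadence_offset(t, walk))
```
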
  • Patent number: 10377484
    Abstract: Systems and methods for unmanned aerial vehicle (UAV) positional anchors. Signals may be broadcast via a signal interface of an anchor in a defined space which also includes a UAV. The UAV is at one location within the defined space, and the anchor is at another location within the defined space. A virtual environment may be generated that corresponds to the defined space. The virtual environment may include at least one virtual element, and a location of the virtual element within the virtual environment may be based on the location of the anchor within the defined space. A visual indication may be generated when the UAV is detected within a predetermined distance from the location of the anchor. In some embodiments, a visual element may be generated to augment the anchor where a location of the visual element is based on a location of the anchor within the defined space. The visual element may be changed when the UAV is flown to the location of the anchor within the defined space.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: August 13, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Dennis Dale Castleman, Glenn Black, Javier Fernandez Rico
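
A minimal sketch of the proximity trigger: each broadcast anchor position becomes a virtual element whose state flips when the UAV comes within a trigger distance. The values are illustrative.

```python
import math

def check_anchors(uav_pos, anchors, trigger_m=2.0):
    """anchors: {name: (x, y, z)} broadcast positions in the defined
    space. Returns the virtual elements to (re)draw, flagging any anchor
    the UAV has come within trigger_m of."""
    elements = []
    for name, pos in anchors.items():
        reached = math.dist(uav_pos, pos) <= trigger_m
        elements.append({"anchor": name, "pos": pos,
                         "state": "reached" if reached else "idle"})
    return elements

anchors = {"gate_1": (0, 0, 5), "gate_2": (10, 0, 5)}
print(check_anchors((0.5, 0, 5.2), anchors))   # gate_1 reached, gate_2 idle
```
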
  • Publication number: 20190245812
    Abstract: Methods and systems for using a robot to interact in social media include receiving, from a user, a request to register the robot in the social media. In response to the request, the user profile of the user is retrieved. The user profile identifies privileges assigned to the user for interacting in the social media. The robot is paired to the user's account by generating a second user account for the robot in the social media and assigning, to the robot, a subset of the privileges associated with the user account. The privileges allow the robot to access the social interactions available in the user account and to generate social interactions on behalf of the user, which are then posted to the social media for the user.
    Type: Application
    Filed: February 2, 2018
    Publication date: August 8, 2019
    Inventor: Javier Fernandez Rico
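
The privilege-subset pairing could look roughly like the sketch below, where the robot's account receives the intersection of the user's privileges and a robot-safe set; the privilege names are invented.

```python
USER_PRIVILEGES = {"read_feed", "post", "comment", "send_messages",
                   "change_profile"}               # illustrative privilege set
ROBOT_ALLOWED = {"read_feed", "post", "comment"}   # subset delegated to robot

def register_robot(user_account):
    """Create the robot's own account and pair it to the user's, granting
    only the intersection of the user's privileges and the robot-safe
    subset."""
    return {
        "id": user_account["id"] + ":robot",
        "paired_to": user_account["id"],
        "privileges": user_account["privileges"] & ROBOT_ALLOWED,
    }

alice = {"id": "alice", "privileges": USER_PRIVILEGES}
print(register_robot(alice))  # robot can read, post, comment -- not message
```
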
  • Publication number: 20190099681
    Abstract: Methods and systems are provided for delivering real-world assistance by a robot utility and interface device (RUID). A method provides for identifying a position of a user in a physical environment and a surface within the physical environment for projecting an interactive interface. The method also provides for moving to a location within the physical environment based on the position of the user and the surface for projecting the interactive interface. Moreover, the method provides for capturing a plurality of images of the interactive interface while the interactive interface is being interacted with by the user and for determining a selection of an input option made by the user.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Javier Fernandez Rico, Erik Beran, Michael Taylor, Ruxin Chen
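
A sketch of the final selection step: after the captured images are rectified, the fingertip position is tested against the projected buttons' regions. The coordinates and labels are illustrative.

```python
def detect_selection(fingertip_xy, buttons):
    """buttons: {label: (x, y, w, h)} regions of the projected interface,
    in the camera frame after perspective correction. Returns the option
    whose region the fingertip rests in, if any."""
    fx, fy = fingertip_xy
    for label, (x, y, w, h) in buttons.items():
        if x <= fx <= x + w and y <= fy <= y + h:
            return label
    return None

ui = {"play": (10, 10, 80, 40), "settings": (10, 60, 80, 40)}
print(detect_selection((30, 75), ui))   # -> "settings"
```
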
  • Publication number: 20190102667
    Abstract: An autonomous personal companion utilizing a method of object identification that relies on a hierarchy of object classifiers for categorizing one or more objects in a scene. The classifier hierarchy is composed of a set of root classifiers trained to recognize objects based on separate generic classes. Each root acts as the parent of a tree of child nodes, where each child node contains a more specific variant of its parent's object classifier. The method covers walking the tree in order to classify an object based on progressively more specific object features. The system further comprises an algorithm designed to minimize the number of object comparisons while allowing the system to concurrently categorize multiple objects in a scene.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Sergey Bashkirov, Michael Taylor, Javier Fernandez-Rico
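
The tree walk the abstract describes can be sketched as below, with simple predicates standing in for trained classifiers: matching a root admits the object to that branch, and the walk descends to the most specific matching child. The toy features and labels are invented.

```python
class ClassifierNode:
    """One node of the hierarchy: a predicate standing in for a trained
    classifier, plus children holding more specific variants."""
    def __init__(self, label, match, children=()):
        self.label, self.match, self.children = label, match, list(children)

def classify(obj, roots):
    """Walk from generic roots toward the most specific matching node, so
    an object is only ever compared against one branch of the tree."""
    for node in roots:
        if node.match(obj):
            deeper = classify(obj, node.children)
            return deeper or node.label
    return None

# Toy features; real nodes would wrap trained vision classifiers.
tree = [ClassifierNode("furniture", lambda o: o["sits_on_floor"], [
            ClassifierNode("chair", lambda o: o.get("has_back", False)),
            ClassifierNode("table", lambda o: o.get("flat_top", False)),
       ])]
print(classify({"sits_on_floor": True, "flat_top": True}, tree))  # table
```
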