Patents by Inventor Ruxin Chen

Ruxin Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10376785
    Abstract: Consumer electronic devices have been developed with enormous information processing capabilities, high quality audio and video outputs, large amounts of memory, and may also include wired and/or wireless networking capabilities. Additionally, relatively unsophisticated and inexpensive sensors, such as microphones, video cameras, GPS or other position sensors, when coupled with devices having these enhanced capabilities, can be used to detect subtle features about users and their environments. A variety of audio, video, simulation and user interface paradigms have been developed to utilize the enhanced capabilities of these devices. These paradigms can be used separately or together in any combination. One paradigm automatically creates user identities using speaker identification. Another paradigm includes a control button with 3-axis pressure sensitivity for use with game controllers and other input devices.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: August 13, 2019
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Gustavo Hernandez-Abrego, Xavier Menendez-Pidal, Steven Osman, Ruxin Chen, Rishi Deshpande, Care Michaud-Wideman, Richard Marks, Eric J. Larsen, Xiaodong Mao
  • Publication number: 20190163977
    Abstract: Methods and systems for performing sequence level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicted in each frame. An environment state for each time step t corresponding to each frame is represented by the video information for time step t and predicted affective information from a previous time step t-1. An action A(t) is taken by an agent controlled by a machine learning algorithm for the frame at step t, wherein an output of the action A(t) represents the affective label prediction for the frame at time step t. A pool of predicted actions is transformed into a predicted affective history at a next time step t+1. The predicted affective history is included as part of the environment state for the next time step t+1. A reward R is generated for predicted actions up to the current time step t by comparing them against corresponding annotated movie scene affective labels.
    Type: Application
    Filed: October 25, 2018
    Publication date: May 30, 2019
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
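    The step-by-step loop described in this abstract can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: the function and variable names, the exact-match reward, and the tuple-based state encoding are all assumptions.

    ```python
    def predict_affect(frames, annotations, policy):
        """Per-frame affective prediction loop: the environment state at
        step t combines the frame's features with the affect predicted at
        t-1; the agent's action is the affective label for frame t; the
        reward counts predictions so far that match annotated labels."""
        history = []          # predicted affective history
        prev_affect = None    # no prediction exists before the first frame
        total_reward = 0
        for t, features in enumerate(frames):
            state = (features, prev_affect)   # environment state at step t
            action = policy(state)            # affective label prediction A(t)
            history.append(action)
            # reward R over all predictions up to t vs. annotated labels
            total_reward = sum(1 for p, a in zip(history, annotations) if p == a)
            prev_affect = action              # carried into the state at t+1
        return history, total_reward
    ```

    In a trained system the policy would be a learned model; here any callable mapping a state to a label works.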
  • Patent number: 10268438
    Abstract: A method for providing an image of an HMD user to a non-HMD user includes receiving a first image of a user, including the user's facial features, captured by an external camera when the user is not wearing a head mounted display (HMD). A second image, capturing a portion of the facial features of the user when the user is wearing the HMD, is received. Image overlay data is generated by mapping contours of facial features captured in the second image to contours of corresponding facial features captured in the first image. The image overlay data is forwarded to the HMD for rendering on a second display screen that is mounted on the front face of the HMD.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: April 23, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ruxin Chen
  • Publication number: 20190099681
    Abstract: Methods and systems are provided for delivering real world assistance by a robot utility and interface device (RUID). A method provides for identifying a position of a user in a physical environment and a surface within the physical environment for projecting an interactive interface. The method also provides for moving to a location within the physical environment based on the position of the user and the surface for projecting the interactive interface. Moreover, the method provides for capturing a plurality of images of the interactive interface while it is being interacted with by the user, and for determining a selection of an input option made by the user.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Javier Fernandez Rico, Erik Beran, Michael Taylor, Ruxin Chen
  • Publication number: 20190065960
    Abstract: An autonomous personal companion executes a method including capturing data related to user behavior. Patterns of user behavior are identified in the data and classified using predefined patterns associated with corresponding predefined tags to generate a collected set of one or more tags. The collected set is compared to the sets of predefined tags of a plurality of scenarios, each scenario corresponding to one or more predefined patterns of user behavior and a corresponding set of predefined tags. A weight is assigned to each of the sets of predefined tags, wherein each weight defines a corresponding match quality between the collected set of tags and a corresponding set of predefined tags. The sets of predefined tags are sorted by weight in descending order. A matched scenario is selected for the collected set of tags: the one associated with the matched set of predefined tags whose weight indicates the highest match quality.
    Type: Application
    Filed: August 23, 2017
    Publication date: February 28, 2019
    Inventors: Michael Taylor, Javier Fernandez-Rico, Sergey Bashkirov, Jaekwon Yoo, Ruxin Chen
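    The weight-and-sort matching step in this abstract can be sketched as follows. This is an illustrative Python sketch, not the patented method: the function names are assumptions, and Jaccard overlap is only one plausible choice for the "match quality" weight.

    ```python
    def match_scenario(collected, scenarios):
        """Weight each scenario's predefined tag set by its overlap with
        the collected tags, sort by weight in descending order, and
        select the scenario with the highest match quality."""
        collected = set(collected)

        def weight(tags):
            tags = set(tags)
            union = collected | tags
            return len(collected & tags) / len(union) if union else 0.0

        ranked = sorted(scenarios.items(), key=lambda kv: weight(kv[1]),
                        reverse=True)
        best_name = ranked[0][0]
        return best_name, [(name, weight(tags)) for name, tags in ranked]
    ```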
  • Patent number: 10210905
    Abstract: A flight path management system manages flight paths for an unmanned aerial vehicle (UAV). The flight path management system receives a sequence of controller inputs for the UAV, and stores the sequence of controller inputs in a memory. The flight path management system accesses the memory and selects a section of the sequence of controller inputs corresponding to a time period. The flight path management system outputs the selected section to a playback device in real time over the length of the time period.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: February 19, 2019
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Dennis Dale Castleman, Ruxin Chen, Frank Zhao, Glenn Black
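    The section-selection step this abstract describes can be sketched as follows. This is an illustrative Python sketch under assumed data layouts: representing recorded controller inputs as (timestamp, command) pairs and selecting by a half-open time window are assumptions, not the patented design.

    ```python
    def select_section(inputs, start, end):
        """Select the section of recorded controller inputs whose
        timestamps fall in the time period [start, end); the result can
        then be streamed to a playback device in real time."""
        return [(t, cmd) for t, cmd in inputs if start <= t < end]
    ```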
  • Patent number: 10191541
    Abstract: Methods, devices, and computer programs for augmenting a virtual reality scene with real world content are provided. One example method includes an operation for obtaining sensor data from an HMD of a user to determine that a criterion is met to overlay one or more real world objects into the virtual reality scene to provide an augmented virtual reality scene. In certain examples, the criterion corresponds to predetermined indicators suggestive of disorientation of a user wearing the HMD while being presented a virtual reality scene. In certain other examples, the one or more real world objects are selected based on their effectiveness at reorienting a disoriented user.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: January 29, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Ruxin Chen
  • Publication number: 20190013015
    Abstract: A method for improved initialization of a speech recognition system comprises mapping a trained hidden Markov model based recognition node network (HMM) to a Connectionist Temporal Classification (CTC) based node label scheme. The central state of each frame in the HMM is mapped to a CTC-labeled output node, and the non-central states of each frame are mapped to CTC-blank nodes, to generate a CTC-labeled HMM; each central state represents a phoneme from human speech detected and extracted by a computing device. Next, the CTC-labeled HMM is trained using a cost function that is not a CTC cost function. Finally, the CTC-labeled HMM is trained using a CTC cost function to produce a CTC node network. The CTC node network may be iteratively trained by repeating the initialization steps.
    Type: Application
    Filed: July 10, 2017
    Publication date: January 10, 2019
    Inventors: Xavier Menendez-Pidal, Ruxin Chen
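    The state-to-label mapping this abstract describes can be sketched as follows. This is an illustrative Python sketch assuming the common 3-state-per-phoneme HMM topology; the (phoneme, position) encoding and function name are assumptions, not the patented data layout.

    ```python
    BLANK = "<blank>"

    def hmm_to_ctc_labels(hmm_states):
        """Map an HMM state sequence to CTC node labels: the central
        state (position 1 of 0..2) of each phoneme model keeps its
        phoneme label; the flanking non-central states become CTC
        blanks, yielding the CTC-labeled network used for training."""
        labels = []
        for phoneme, position in hmm_states:   # position in {0, 1, 2}
            labels.append(phoneme if position == 1 else BLANK)
        return labels
    ```

    The resulting label sequence could then be trained first with a non-CTC cost function and afterwards with a CTC cost function, as the abstract outlines.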
  • Patent number: 10127927
    Abstract: A method for emotion or speaking style recognition and/or clustering comprises receiving one or more speech samples, generating a set of training data by extracting one or more acoustic features from every frame of the one or more speech samples, and generating a model from the set of training data, wherein the model identifies emotion or speaking style dependent information in the set of training data. The method may further comprise receiving one or more test speech samples, generating a set of test data by extracting one or more acoustic features from every frame of the test speech samples, transforming the set of test data using the model to better represent emotion/speaking style dependent information, and using the transformed data for clustering and/or classification to discover speech with similar emotion or speaking style.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: November 13, 2018
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Ozlem Kalinli-Akbacak, Ruxin Chen
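    The "one feature vector per frame" layout this abstract relies on can be sketched as follows. This is an illustrative Python sketch with toy features (frame energy and zero-crossing rate); real systems would use richer features such as MFCCs, and the window sizes here are assumptions, not the patented configuration.

    ```python
    def frame_features(samples, frame_len=400, hop=160):
        """Extract a per-frame acoustic feature vector (energy,
        zero-crossing rate) from a list of audio samples, producing the
        frame-level training/test data described in the abstract."""
        feats = []
        for start in range(0, len(samples) - frame_len + 1, hop):
            frame = samples[start:start + frame_len]
            energy = sum(x * x for x in frame) / frame_len
            zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
            feats.append((energy, zcr))
        return feats
    ```

    At a 16 kHz sample rate these defaults correspond to the common 25 ms window with a 10 ms hop.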
  • Patent number: 10076705
    Abstract: A system and method for conditioning execution of a control function on a determination of whether or not a person's attention is directed toward a predetermined device. The method involves acquiring data concerning the activity of a person who is in the proximity of the device, the data being in the form of one or more temporal samples. One or more of the temporal samples is then analyzed to determine if the person's activity during the time of the analyzed samples indicates that the person's attention is not directed toward the device. The results of the determination are used to ascertain whether or not the control function should be performed.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: September 18, 2018
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Hrishikesh R. Deshpande, Ruxin Chen
  • Publication number: 20180093768
    Abstract: The present disclosure is related to unmanned aerial vehicles, or drones, that have a capability of quickly swapping batteries. This may be accomplished even as the drone continues to fly. A drone consistent with the present disclosure may drop one battery and pick up another using an attachment mechanism. Attachment mechanisms of the present disclosure may include electro-magnets, mechanical actuators, pins, or hooks. Systems consistent with the present disclosure may also include locations where replacement batteries may be provided to aircraft via actuation devices coupled to a physical location.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 5, 2018
    Inventors: Dennis Castleman, Ruxin Chen, Frank Zhao, Glenn Black
  • Publication number: 20180095463
    Abstract: A flight path management system manages flight paths for an unmanned aerial vehicle (UAV). The flight path management system receives a sequence of controller inputs for the UAV, and stores the sequence of controller inputs in a memory. The flight path management system accesses the memory and selects a section of the sequence of controller inputs corresponding to a time period. The flight path management system outputs the selected section to a playback device in real time over the length of the time period.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 5, 2018
    Inventors: Dennis Dale Castleman, Ruxin Chen, Frank Zhao, Glenn Black
  • Publication number: 20180004478
    Abstract: A method for providing an image of an HMD user to a non-HMD user includes receiving a first image of a user, including the user's facial features, captured by an external camera when the user is not wearing a head mounted display (HMD). A second image, capturing a portion of the facial features of the user when the user is wearing the HMD, is received. Image overlay data is generated by mapping contours of facial features captured in the second image to contours of corresponding facial features captured in the first image. The image overlay data is forwarded to the HMD for rendering on a second display screen that is mounted on the front face of the HMD.
    Type: Application
    Filed: June 23, 2017
    Publication date: January 4, 2018
    Inventor: Ruxin Chen
  • Publication number: 20180005429
    Abstract: Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
    Type: Application
    Filed: January 11, 2017
    Publication date: January 4, 2018
    Inventors: Steven Osman, Javier Fernandez Rico, Ruxin Chen
  • Publication number: 20180004286
    Abstract: Methods, devices, and computer programs for augmenting a virtual reality scene with real world content are provided. One example method includes an operation for obtaining sensor data from an HMD of a user to determine that a criterion is met to overlay one or more real world objects into the virtual reality scene to provide an augmented virtual reality scene. In certain examples, the criterion corresponds to predetermined indicators suggestive of disorientation of a user wearing the HMD while being presented a virtual reality scene. In certain other examples, the one or more real world objects are selected based on their effectiveness at reorienting a disoriented user.
    Type: Application
    Filed: December 20, 2016
    Publication date: January 4, 2018
    Inventor: Ruxin Chen
  • Patent number: 9575594
    Abstract: A virtual object can be controlled using one or more touch interfaces. A location for a first touch input can be determined on a first touch interface. A location for a second touch input can be determined on a second touch interface. A three-dimensional segment can be generated using the location of the first touch input, the location of the second touch input, and a pre-determined spatial relationship between the first touch interface and the second touch interface. The virtual object can be manipulated using the three-dimensional segment as a control input.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: February 21, 2017
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Ruxin Chen
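    The two-touch-to-segment construction this abstract describes can be sketched as follows. This is an illustrative Python sketch: placing the front touch surface at z=0 and the rear surface at z=-thickness is only one plausible choice of the "pre-determined spatial relationship", and the function name is an assumption.

    ```python
    def touch_segment(front_xy, back_xy, thickness):
        """Build a three-dimensional control segment from a touch on the
        front interface and a touch on the rear interface of a device of
        known thickness; the segment's endpoints can then drive
        manipulation of a virtual object."""
        x1, y1 = front_xy
        x2, y2 = back_xy
        return ((x1, y1, 0.0), (x2, y2, -thickness))
    ```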
  • Publication number: 20160310847
    Abstract: Consumer electronic devices have been developed with enormous information processing capabilities, high quality audio and video outputs, large amounts of memory, and may also include wired and/or wireless networking capabilities. Additionally, relatively unsophisticated and inexpensive sensors, such as microphones, video cameras, GPS or other position sensors, when coupled with devices having these enhanced capabilities, can be used to detect subtle features about users and their environments. A variety of audio, video, simulation and user interface paradigms have been developed to utilize the enhanced capabilities of these devices. These paradigms can be used separately or together in any combination. One paradigm automatically creates user identities using speaker identification. Another paradigm includes a control button with 3-axis pressure sensitivity for use with game controllers and other input devices.
    Type: Application
    Filed: June 30, 2016
    Publication date: October 27, 2016
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Gustavo Hernandez-Abrego, Xavier Menendez-Pidal, Steven Osman, Ruxin Chen, Rishi Deshpande, Care Michaud-Wideman, Richard Marks, Eric J. Larsen, Xiaodong Mao
  • Publication number: 20160299624
    Abstract: A virtual object can be controlled using one or more touch interfaces. A location for a first touch input can be determined on a first touch interface. A location for a second touch input can be determined on a second touch interface. A three-dimensional segment can be generated using the location of the first touch input, the location of the second touch input, and a pre-determined spatial relationship between the first touch interface and the second touch interface. The virtual object can be manipulated using the three-dimensional segment as a control input.
    Type: Application
    Filed: June 21, 2016
    Publication date: October 13, 2016
    Inventor: Ruxin Chen
  • Patent number: 9405363
    Abstract: Consumer electronic devices have been developed with enormous information processing capabilities, high quality audio and video outputs, large amounts of memory, and may also include wired and/or wireless networking capabilities. Additionally, relatively unsophisticated and inexpensive sensors, such as microphones, video cameras, GPS or other position sensors, when coupled with devices having these enhanced capabilities, can be used to detect subtle features about users and their environments. A variety of audio, video, simulation and user interface paradigms have been developed to utilize the enhanced capabilities of these devices. These paradigms can be used separately or together in any combination. One paradigm automatically creates user identities using speaker identification. Another paradigm includes a control button with 3-axis pressure sensitivity for use with game controllers and other input devices.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: August 2, 2016
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC. (SIEI)
    Inventors: Gustavo Hernandez-Abrego, Xavier Menendez-Pidal, Steven Osman, Ruxin Chen, Rishi Deshpande, Care Michaud-Wideman, Richard Marks, Eric J. Larsen, Xiaodong Mao
  • Patent number: 9372624
    Abstract: A virtual object can be controlled using one or more touch interfaces. A location for a first touch input can be determined on a first touch interface. A location for a second touch input can be determined on a second touch interface. A three-dimensional segment can be generated using the location of the first touch input, the location of the second touch input, and a pre-determined spatial relationship between the first touch interface and the second touch interface. The virtual object can be manipulated using the three-dimensional segment as a control input.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: June 21, 2016
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Ruxin Chen