Abstract: A light emission control apparatus, an operation device, a light emission control method, and a program are provided that enable richer expression of an execution status of a program based on light emission from the operation device. A particular-light-emitting-area identifying section identifies, on the basis of information corresponding to an execution status of a program, a particular light emitting area that is a part of a light emitting area set on an operation device, the part corresponding to the information. A light emission control section causes at least part of the light emitting area to emit light so as to enable the particular light emitting area to be recognized.
Abstract: A first correspondence table indicates a correspondence relation between logical blocks of a first file and physical blocks of a physical storage. A second correspondence table indicates a correspondence relation between logical blocks of a second file and the logical blocks of the first file. An access request receiving section receives an access request for the second file. A block conversion section refers to the second correspondence table, identifies a logical block of the first file associated with the logical block of the second file that is subject to the access request, and then refers to the first correspondence table to identify a physical block of the physical storage associated with the identified logical block of the first file. An accessing section accesses the identified physical block.
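The two-level lookup this abstract describes can be sketched in a few lines; the table contents, names, and block numbers below are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch of the two-level block translation described above.
# second_table maps logical blocks of the second file to logical blocks of
# the first file; first_table maps logical blocks of the first file to
# physical blocks of the physical storage.
first_table = {0: 100, 1: 101, 2: 205}   # first-file logical -> physical
second_table = {0: 2, 1: 0}              # second-file logical -> first-file logical

def resolve_physical_block(second_logical: int) -> int:
    """Resolve a second-file logical block to a physical block via both tables."""
    first_logical = second_table[second_logical]   # second correspondence table
    return first_table[first_logical]              # first correspondence table

# Accessing logical block 0 of the second file reaches physical block 205.
```

An access request for the second file thus never touches physical storage directly; it is always routed through the first file's mapping.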
Abstract: A method of executing a multiplayer video game includes: connecting a game client to a game server via a communication network; for each player, executing an instance of a game on the game server to generate a game environment; synchronizing game data between multiple instances of the game to link the generated game environments; transmitting user inputs to the game server to initiate interactive gameplay; for each player, generating a video stream of the linked game environment from the player's perspective; and mixing the video streams for individual players into split screen view during a split screen mode.
Type:
Application
Filed:
May 22, 2023
Publication date:
November 30, 2023
Applicant:
Sony Interactive Entertainment Inc.
Inventors:
Christopher William Henderson, David Erwan Damien Uberti, Michael Richard Reynolds
Abstract: A wearable data processing apparatus includes one or more attachment members for attaching the wearable data processing apparatus to a part of a limb of a user, one or more sensors to generate user input data in response to one or more user inputs, wireless communication circuitry to transmit the user input data to an external device and to receive control data based on the user input data from the external device, processing circuitry to generate one or more output signals in dependence upon the control data and an output unit to output one or more of the output signals.
Type:
Application
Filed:
May 17, 2023
Publication date:
November 30, 2023
Applicant:
Sony Interactive Entertainment Inc.
Inventors:
Maria Chiara Monti, Matthew Sanders, Pedro Federico Quijada Leyton
Abstract: Proposed is an information processing apparatus including a control unit that controls display of a virtual space, in which the control unit performs control to acquire communication information of one or more other users in another virtual space and present the acquired communication information by a virtual object disposed in the virtual space.
Type:
Application
Filed:
October 18, 2021
Publication date:
November 30, 2023
Applicants:
SONY GROUP CORPORATION, SONY INTERACTIVE ENTERTAINMENT INC.
Abstract: In sequence level prediction of a sequence of frames of high dimensional data one or more affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents affective label prediction for the frame at the current time step. A pool of actions taken up until the current time step including the action taken with the agent is transformed into a predicted affective history for a subsequent time step. A reward is generated on predicted actions up to the current time step by comparing the predicted actions against corresponding annotated affective labels.
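The reward step of this sequence-level scheme can be illustrated with a toy comparison of the agent's pooled predictions against the annotated labels; the scoring rule and label names are assumptions for illustration only:

```python
# Illustrative sketch: the agent's per-frame predicted affective labels
# (its actions up to the current step) are compared against the annotated
# labels that pertain to the sequence, yielding a reward.
def sequence_reward(predicted_actions, annotated_labels):
    """Fraction of per-frame predictions matching the annotated labels."""
    matches = sum(p == a for p, a in zip(predicted_actions, annotated_labels))
    return matches / len(annotated_labels)

labels = ["calm", "calm", "tense", "tense"]     # annotated affective labels
actions = ["calm", "tense", "tense", "tense"]   # agent's per-frame predictions
reward = sequence_reward(actions, labels)        # 3 of 4 correct -> 0.75
```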
Abstract: An information processing apparatus acquires, using an image obtained by capturing a device including a light-emitting marker with exposure times shorter than a time corresponding to one frame, information regarding a position and a posture of the device. The light-emitting marker is caused to emit light for light emission times equal to or shorter than the exposure times. The information processing apparatus causes the light-emitting marker to emit light in a predetermined flashing pattern, and identifies the exposure times on a time axis of the device on the basis of whether or not the light-emitting marker appears in the captured image, to thereby synchronize the exposure and the light emission.
Type:
Grant
Filed:
November 22, 2019
Date of Patent:
November 28, 2023
Assignees:
Sony Interactive Entertainment Inc., Sony Interactive Entertainment Europe LTD.
Abstract: A head-mounted display includes a main body having a housing defining at least a portion of an exterior of the head-mounted display; a built-in display disposed in the housing; a wearing band extending from the main body to a rear side and having a shape enclosing a head of a user as a whole; a right-side extending section configured to make up a right-side part of the wearing band; a left-side extending section configured to make up a left-side part of the wearing band; and a frame which is separate from the housing and constitutes at least a portion of a rear part of the main body, where the frame has left and right openings for receiving left and right lenses, respectively, and a recessed section for accommodating a user's nose when the head-mounted display is worn.
Abstract: An audio converter system is provided. The system comprises an audio input configured to receive a source audio, an audio output configured to couple to a hybrid speaker comprising at least two nondirectional speakers and a directional speaker, and a processor configured to generate an output audio for the hybrid speaker based on the source audio by: identifying a specific sound in the source audio, isolating the specific sound from the source audio, generating a directional speaker output for the directional speaker of the hybrid speaker based on the specific sound, and generating at least two channels of nondirectional speaker output for the at least two nondirectional speakers of the hybrid speaker.
Abstract: Methods for providing guidance to a user wearing a head mounted display (HMD) are provided. One example method includes using a camera of the HMD to track the user wearing the HMD in a real-world space, and identifying a safe zone within the real-world space for interacting with a virtual reality space via the HMD. The method further includes detecting movements of the user in the real-world space. The method includes integrating content into the virtual reality space. The content is configured to provide guidance in the direction of movement of the user toward the safe zone of the real-world space. If the user continues to move away from the safe zone, the method includes pausing presentation of the virtual reality space and resuming the presentation when the user is in the safe zone.
Type:
Grant
Filed:
April 14, 2020
Date of Patent:
November 28, 2023
Assignee:
Sony Interactive Entertainment Inc.
Inventors:
Glenn T. Black, Michael G. Taylor, Todd Tokubo
Abstract: An information processing apparatus acquires a first image obtained by imaging a real space. The information processing apparatus generates a display image by merging the first image into a second image representing a virtual space to be presented to a user wearing a head-mounted display (HMD), the display image being configured such that the first image has an inconspicuous border. The information processing apparatus displays the generated display image on the HMD.
Abstract: A server may determine a client device type from an identifier and generate a user interface configuration profile for the client device using the client device type. The client device can use the profile to configure a first user interface to provide inputs for a software title emulated by the server. The software title is configured for use with a device having a second user interface. The server sends the profile to the client device and emulates the software title with inputs received from the client device.
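The server-side flow this abstract describes can be sketched as follows; the identifier table, device types, and binding names are all hypothetical:

```python
# Hypothetical sketch: the server maps a client identifier to a device type
# and builds a UI configuration profile so that, e.g., a touch device can
# drive an emulated title that expects a gamepad.
DEVICE_TYPES = {"ua-ios-17": "touchscreen", "ua-tv-9": "remote"}

def make_ui_profile(client_identifier: str) -> dict:
    device_type = DEVICE_TYPES.get(client_identifier, "gamepad")
    if device_type == "touchscreen":
        # Map on-screen touch regions to the emulated title's expected inputs.
        return {"device": device_type,
                "bindings": {"tap_left": "dpad_left", "tap_right": "dpad_right"}}
    return {"device": device_type, "bindings": {}}

profile = make_ui_profile("ua-ios-17")  # profile sent to the client
```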
Abstract: A method of cloud gaming is disclosed. The method including receiving an encoded video frame at a client, wherein a server executes an application to generate a rendered video frame which is then encoded at an encoder at the server as the encoded video frame, wherein the encoded video frame includes one or more encoded slices that are compressed. The method including decoding the one or more encoded slices at a decoder of the client to generate one or more decoded slices. The method including rendering the one or more decoded slices for display at the client. The method including beginning to display the one or more decoded slices that are rendered before the one or more encoded slices are fully received at the client.
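The latency-hiding idea here is that decode and display are pipelined per slice rather than per frame. A minimal sketch, with stand-ins for the real decoder and renderer:

```python
# Sketch of the client-side idea: decode each encoded slice as it arrives
# and display decoded slices immediately, before the whole frame is received.
def decode(encoded_slice: bytes) -> str:
    return encoded_slice.decode("ascii")    # placeholder for video decoding

def receive_and_display(slice_stream):
    displayed = []
    for encoded_slice in slice_stream:      # slices arrive incrementally
        decoded = decode(encoded_slice)     # decode this slice now...
        displayed.append(decoded)           # ...and display it right away,
    return displayed                        # not waiting for the full frame

shown = receive_and_display([b"slice0", b"slice1", b"slice2"])
```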
Abstract: Methods and systems are provided for processing operations of a game to be played via a mobile device. The method includes establishing a connection with the mobile device to play the game. The method includes receiving controller input from the mobile device, the controller input being used to perform an action in the game. The method includes determining a correction value required to perform the action. The method includes applying the correction value to the controller input to increase an accuracy of the controller input when performing the action. In this way, when correction values are applied to the controller input from the mobile device of a user playing a game, the accuracy of the controller input is increased so that the user can achieve the intended action in the game.
Type:
Grant
Filed:
October 5, 2021
Date of Patent:
November 28, 2023
Assignee:
Sony Interactive Entertainment Inc.
Inventors:
Warren M. Benedetto, Alvin Daniel, Daniel Hiatt, Jon Webb
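The correction step in the abstract above can be illustrated with a toy scalar input; the numbers and function names are assumptions, not the patent's method:

```python
# Illustrative sketch: a correction value is determined for a raw controller
# input from the mobile device and applied so the intended action lands.
def determine_correction(intended: float, raw: float) -> float:
    """Correction needed so the raw input produces the intended action."""
    return intended - raw

def apply_correction(raw: float, correction: float) -> float:
    return raw + correction

raw_aim = 0.92        # raw touch input from the mobile device
intended_aim = 1.00   # value the intended in-game action requires
corrected = apply_correction(raw_aim, determine_correction(intended_aim, raw_aim))
```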
Abstract: A game selection system includes an emotion processor configured to obtain a current emotional state of a user; a descriptor processor configured to obtain one or more emotion descriptors associated with one or more games; an evaluation processor configured to predict an emotion outcome for the user for the or each game, based upon the user's current emotional state and the one or more emotion descriptors of the respective games; and a selection processor configured to select one or more games in response to whether their respective emotion outcomes meet at least a first predetermined criterion.
Type:
Application
Filed:
May 15, 2023
Publication date:
November 23, 2023
Applicant:
Sony Interactive Entertainment Inc.
Inventors:
Philip Cockram, Christopher William Henderson, Michael Eder, Daniele Bernabei
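The selection pipeline in the abstract above (emotional state plus per-game emotion descriptors, predicted outcome, criterion check) can be sketched with a toy prediction rule; the rule, labels, and game data are purely illustrative:

```python
# Hypothetical sketch of the game selection flow: predict an emotion outcome
# per game from the user's current state and the game's emotion descriptors,
# then select games whose outcome meets a predetermined criterion.
def predict_outcome(current_state: str, descriptors: set) -> str:
    # Toy rule: a "stressed" user plus "relaxing" descriptors -> "calm".
    if current_state == "stressed" and "relaxing" in descriptors:
        return "calm"
    return "neutral"

def select_games(current_state, games, wanted_outcome="calm"):
    return [name for name, desc in games.items()
            if predict_outcome(current_state, desc) == wanted_outcome]

games = {"puzzler": {"relaxing", "slow"}, "shooter": {"intense", "fast"}}
picks = select_games("stressed", games)   # only "puzzler" meets the criterion
```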
Abstract: A data processing apparatus includes: an input unit configured to receive data corresponding to at least part of a mesh, where the at least part of the mesh includes a plurality of vertices, where each vertex corresponds to a location within a virtual space, and a plurality of polygons, where each polygon includes a perimeter comprising three or more lines, where each of the three or more lines intersects two of the plurality of vertices; a generating unit configured to, in a first phase, generate, based on the received data, two or more seed points, where each seed point corresponds to a location within the virtual space, and in a second phase, where the second phase is different from the first phase, generate two or more meshlets, where each meshlet includes a subset of the at least part of the mesh, where each meshlet is generated in dependence upon the location of a respective one of the generated seed points; and an output unit configured to output data corresponding to one or more of the generated meshlets.
Type:
Application
Filed:
May 15, 2023
Publication date:
November 23, 2023
Applicant:
Sony Interactive Entertainment Inc.
Inventors:
Sahin Serdar Kocdemir, Daniel Goldman, Anthony William Dann
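The two-phase meshlet generation above can be illustrated by assigning each polygon to its nearest seed point; the distance metric and data layout are assumptions for illustration, not the patented method:

```python
# Sketch of the second phase: build meshlets by grouping each polygon with
# the nearest generated seed point (first phase's output).
def centroid(polygon_vertices):
    n = len(polygon_vertices)
    return tuple(sum(v[i] for v in polygon_vertices) / n for i in range(3))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_meshlets(polygons, seed_points):
    """Each meshlet holds the subset of polygons nearest one seed point."""
    meshlets = {i: [] for i in range(len(seed_points))}
    for poly in polygons:
        c = centroid(poly)
        nearest = min(range(len(seed_points)),
                      key=lambda i: sq_dist(c, seed_points[i]))
        meshlets[nearest].append(poly)
    return meshlets

tri_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]     # triangle near seed 0
tri_b = [(10, 0, 0), (11, 0, 0), (10, 1, 0)]  # triangle near seed 1
meshlets = build_meshlets([tri_a, tri_b], [(0, 0, 0), (10, 0, 0)])
```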
Abstract: An aroma chemical presentation apparatus includes an encapsulation body supporting unit that supports an aroma chemical encapsulation body in which an aroma chemical material is encapsulated, an action body that is brought into contact with the aroma chemical encapsulation body to cause the aroma chemical material in the aroma chemical encapsulation body to be emitted, and a delivery fan that forms an air flow in a predetermined direction for delivering the emitted aroma chemical material.
Abstract: Provided is a gas sensor device including a plurality of sensitive members and a measuring instrument. The plurality of sensitive members have respective sensitive materials that react to molecules present in the air and targeted for measurement. The measuring instrument independently measures the respective reactions of the plurality of sensitive members to the molecules.
Abstract: A DC/DC converter includes N inductors and N power modules which correspond to N phases. The N inductors each include a plurality of inductor chips that are electrically connected in parallel to each other. The plurality of inductor chips are mounted separately on a main mounting surface and a sub-mounting surface of a printed circuit board. The sub-mounting surface is opposite to the main mounting surface.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
November 21, 2023
Assignee:
Sony Interactive Entertainment Inc.
Inventors:
Kazuki Sasao, Masanori Hayashibara, Hideki Ito
Abstract: An encoding apparatus is provided. The apparatus comprises an input unit operable to obtain a plurality of training images, said training images being for use in training a machine learning model. The apparatus also comprises a label unit operable to obtain a class label associated with the training images; and a key unit operable to obtain a secret key for use in encoding the training images. The apparatus further comprises an image noise generator operable to generate, based on the obtained secret key, noise for introducing into the training images. The image noise generator is configured to generate noise that correlates with the class label associated with the training images such that a machine learning model subsequently trained with the modified training images learns to associate the introduced noise with the class label for those images. A corresponding decoding apparatus is also provided.
Type:
Grant
Filed:
October 20, 2020
Date of Patent:
November 21, 2023
Assignee:
Sony Interactive Entertainment Inc.
Inventors:
Mark Jacobus Breugelmans, Oliver Hume, Fabio Cappello, Nigel John Williams
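One plausible reading of the encoding apparatus above is noise seeded jointly by the secret key and the class label, so the injected pattern correlates with the label; the PRNG choice, pixel values, and shapes below are toy assumptions:

```python
# Sketch: derive deterministic, label-correlated noise from a secret key and
# a class label, and add it to a training image's pixels.
import random

def label_noise(secret_key: str, class_label: str, n_pixels: int):
    """Deterministic noise derived from the key and the class label."""
    rng = random.Random(f"{secret_key}:{class_label}")
    return [rng.randint(-2, 2) for _ in range(n_pixels)]

def encode_image(pixels, secret_key, class_label):
    noise = label_noise(secret_key, class_label, len(pixels))
    return [p + n for p, n in zip(pixels, noise)]

image = [128] * 8                              # toy 8-pixel "image"
encoded = encode_image(image, "k3y", "cat")
same = encode_image(image, "k3y", "cat")       # same key+label -> same noise
```

A model trained on many such encoded images could then learn to associate the label-specific noise pattern with the class label, as the abstract describes.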