Patents by Inventor Sepideh Karimi
Sepideh Karimi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12296261
Abstract: Eye tracking of the wearer of a virtual reality headset is used to customize/personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different gaze directions. As another example, a VR scene can be augmented with additional objects when the gaze is directed at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
Type: Grant
Filed: October 11, 2022
Date of Patent: May 13, 2025
Assignee: Sony Interactive Entertainment Inc.
Inventor: Sepideh Karimi
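As a rough illustration of the gaze-dependent personalization this abstract describes, the sketch below maps gaze angles to a scene variant and adds related objects when the gaze lingers on a particular object. All names, angles, and object mappings (GazeSample, pick_tree_variant, the "campfire"/"lake" table) are hypothetical assumptions, not the patented implementation.

```python
# Minimal sketch of gaze-based VR scene personalization (illustrative only).
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw: float    # horizontal gaze angle in degrees, 0 = straight ahead
    pitch: float  # vertical gaze angle in degrees, 0 = level

def pick_tree_variant(gaze: GazeSample) -> str:
    """Choose a scene variant based on where the wearer tends to look."""
    if gaze.pitch > 15.0:
        return "tall_pines"      # frequent upward gaze -> taller trees
    if abs(gaze.yaw) > 30.0:
        return "wide_canopy"     # frequent sideways gaze -> broader scenery
    return "default_forest"

def augment_scene(scene_objects: list[str], gazed_object: str) -> list[str]:
    """Add related objects when the gaze is directed at a particular object."""
    related = {"campfire": ["log_bench"], "lake": ["rowboat"]}
    return scene_objects + related.get(gazed_object, [])

if __name__ == "__main__":
    sample = GazeSample(yaw=5.0, pitch=22.0)
    print(pick_tree_variant(sample))                       # tall_pines
    print(augment_scene(["campfire", "tent"], "campfire"))  # adds log_bench
```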
-
Patent number: 12274940
Abstract: A method and system for providing gaze-based generation of virtual effects indicators correlated with directional sounds is disclosed. Gaze data is tracked via a camera associated with a client device to identify a point of focus within a three-dimensional virtual environment towards which one or both eyes of the player are focused. When the point of focus indicated by the gaze data does not move towards the source location within the three-dimensional virtual environment when the directional sound is received, a virtual effect indicator associated with the type of the indicated directional sound is generated.
Type: Grant
Filed: April 17, 2024
Date of Patent: April 15, 2025
Assignees: Sony Interactive Entertainment LLC, Sony Interactive Entertainment Inc.
Inventors: Kristie Ramirez, Elizabeth Juenger, Katie Egeland, Sepideh Karimi, Lachmin Singh, Olga Rudi
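One way to read the decision described in this abstract is as a simple geometric check: if the point of focus does not move toward the sound's source location when the directional sound plays, generate the indicator. The sketch below assumes 3D coordinates for the focus point and source; the function names and example values are illustrative, not the claimed method.

```python
# Minimal sketch of the gaze-vs-directional-sound check (illustrative only).
import math

def moved_toward(focus_before, focus_after, source) -> bool:
    """True if the point of focus moved closer to the sound's source location."""
    return math.dist(focus_after, source) < math.dist(focus_before, source)

def should_show_indicator(focus_before, focus_after, source) -> bool:
    # Show a virtual effect indicator only when the player's gaze did not
    # shift toward the source of the directional sound.
    return not moved_toward(focus_before, focus_after, source)

if __name__ == "__main__":
    source = (10.0, 0.0, 5.0)   # 3D position of the directional sound
    before = (0.0, 0.0, 0.0)    # point of focus when the sound played
    after = (-1.0, 0.0, 0.0)    # gaze drifted away from the source
    print(should_show_indicator(before, after, source))  # True -> render indicator
```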
-
Patent number: 12274932
Abstract: Methods and systems for processing audio for a user include identifying an interactive zone of the user as the user is interacting with a video game in a real-world environment. The real-world environment is monitored to detect any changes that can affect the interactive zone. Responsive to detecting changes, the volume of the audio directed to one side or both sides of a headphone providing the audio to one or both ears of the user is dynamically adjusted. The audio is adjusted to prevent the user from being distracted while interacting with the video game.
Type: Grant
Filed: May 27, 2022
Date of Patent: April 15, 2025
Assignees: Sony Interactive Entertainment LLC, Sony Interactive Entertainment Inc.
Inventors: Victoria Dorn, Celeste Bean, Sepideh Karimi, Mahdi Azmandian
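A minimal sketch of the per-ear volume adjustment idea follows. The abstract does not say whether the volume is raised or lowered, so this version simply attenuates the side of the headphone nearest the detected change; the zone radius, attenuation factor, and data shapes are assumptions.

```python
# Minimal sketch of interactive-zone-aware headphone volume adjustment
# (illustrative assumptions throughout).
from dataclasses import dataclass

@dataclass
class Intrusion:
    side: str        # "left", "right", or "both"
    distance: float  # metres from the player

def adjust_headphone_volume(left: float, right: float, intrusion: Intrusion | None):
    """Return (left, right) volumes, attenuating the side nearest the change."""
    if intrusion is None or intrusion.distance > 1.5:  # outside the interactive zone
        return left, right
    duck = 0.4                                          # assumed attenuation factor
    if intrusion.side in ("left", "both"):
        left *= duck
    if intrusion.side in ("right", "both"):
        right *= duck
    return left, right

if __name__ == "__main__":
    print(adjust_headphone_volume(1.0, 1.0, Intrusion(side="left", distance=0.8)))
    # (0.4, 1.0) -> only the affected ear is adjusted
```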
-
Publication number: 20250058222
Abstract: Methods and systems for cooperative or coached gameplay in virtual environments are disclosed. Memory may store a content control profile regarding a set of control inputs associated with an action in a virtual environment of a digital content title. A request may be received from a set of one or more users associated with different source devices regarding cooperative gameplay of the digital content title. At least one virtual avatar may be generated for an interactive session of the digital content title in response to the request. A plurality of control inputs may be received from the plurality of different source devices and combined into a combination set of control inputs. Generating the combination set of control inputs may be based on the content control profile. Virtual actions associated with the virtual avatar may be controlled within the virtual environment in accordance with the combination set of control inputs.
Type: Application
Filed: October 31, 2024
Publication date: February 20, 2025
Inventors: Olga Rudi, Kristine Young, Mahdi Azmandian, Jin Zhang, Sarah Karp, Sepideh Karimi, Elizabeth Juenger
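The combination of control inputs could look roughly like the sketch below, where a hypothetical content control profile maps each control to a merge rule (averaging analog axes, OR-ing buttons). The profile format and merge rules are assumptions for illustration only, not the claimed design.

```python
# Minimal sketch of combining control inputs from several source devices into
# one combination set for a shared avatar, driven by a content control profile.
from statistics import mean

def combine_inputs(profile: dict, inputs_by_device: dict[str, dict]) -> dict:
    """Merge per-device inputs into one combination set per the control profile."""
    combined = {}
    for control, rule in profile.items():
        values = [inp[control] for inp in inputs_by_device.values() if control in inp]
        if not values:
            continue
        if rule == "average":      # e.g. analog stick axes
            combined[control] = mean(values)
        elif rule == "any":        # e.g. a jump button any player may press
            combined[control] = any(values)
    return combined

if __name__ == "__main__":
    profile = {"move_x": "average", "jump": "any"}
    inputs = {"device_a": {"move_x": 0.8, "jump": False},
              "device_b": {"move_x": 0.2, "jump": True}}
    print(combine_inputs(profile, inputs))  # {'move_x': 0.5, 'jump': True}
```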
-
Patent number: 12230288
Abstract: Systems and methods for audio processing are described. An audio processing system receives audio content that includes a voice sample. The audio processing system analyzes the voice sample to identify a sound type in the voice sample. The sound type corresponds to pronunciation of at least one specified character in the voice sample. The audio processing system generates a filtered voice sample at least in part by filtering the voice sample to modify the sound type. The audio processing system outputs the filtered voice sample.
Type: Grant
Filed: May 31, 2022
Date of Patent: February 18, 2025
Assignees: SONY INTERACTIVE ENTERTAINMENT LLC, SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Jin Zhang, Celeste Bean, Sepideh Karimi, Sudha Krishnamurthy
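As one possible interpretation of the filtering step, the sketch below treats the "sound type corresponding to pronunciation of a specified character" as sibilant energy (e.g. the letter "s") and attenuates frames dominated by that band, in the spirit of a de-esser. The frame size, band edges, and gain are assumptions, and the patent does not specify this particular technique.

```python
# Minimal de-essing-style sketch: attenuate frames of a voice sample whose
# energy is dominated by a sibilant frequency band (illustrative only).
import numpy as np

def filter_sibilance(samples: np.ndarray, rate: int = 16000, frame: int = 512,
                     band=(5000, 8000), gain: float = 0.3) -> np.ndarray:
    out = samples.astype(np.float64)
    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
    band_mask = (freqs >= band[0]) & (freqs <= band[1])
    for start in range(0, len(out) - frame + 1, frame):
        chunk = out[start:start + frame]
        spectrum = np.abs(np.fft.rfft(chunk))
        # If most of the frame's energy sits in the sibilant band, duck the frame.
        if spectrum[band_mask].sum() / (spectrum.sum() + 1e-12) > 0.5:
            out[start:start + frame] *= gain
    return out

if __name__ == "__main__":
    rate = 16000
    t = np.arange(512 * 32) / rate                 # ~1 s, a whole number of frames
    harsh_s = np.sin(2 * np.pi * 7000 * t)         # stand-in for a harsh "s" sound
    print(filter_sibilance(harsh_s, rate).max())   # ~0.3, attenuated from 1.0
```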
-
Patent number: 12151167
Abstract: Methods of the present disclosure may collect data as a user plays one or more different types of games, and determinations are made as to whether the difficulty of a game should be changed. The collected data may be evaluated to identify whether the user's gaming performance level corresponds to an expected level of performance. When the user's gaming performance level does not correspond to the expected level of performance, parameters that change the difficulty of the game may be changed automatically. Parameters that relate to movement speed, delay or hesitation, character strengths, numbers of competitors, or other metrics may be changed incrementally until the current user performance level corresponds to the expectation level of the particular user currently playing the game. At this point, the user expectation level may be changed, and the process may be repeated as the user's skills develop over time.
Type: Grant
Filed: May 31, 2022
Date of Patent: November 26, 2024
Assignees: SONY INTERACTIVE ENTERTAINMENT LLC, SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Victoria Dorn, Celeste Bean, Elizabeth Juenger, Kristie Ramirez, Sepideh Karimi, Mahdi Azmandian
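A minimal sketch of the incremental adjustment loop might look like the following, nudging two of the parameters the abstract names (movement speed, number of competitors) toward the expected performance level. The step size, tolerance, and parameter names are illustrative assumptions.

```python
# Minimal sketch of incremental difficulty adjustment toward an expected
# performance level (illustrative parameters and thresholds).
def adjust_difficulty(params: dict, observed: float, expected: float,
                      step: float = 0.05, tolerance: float = 0.1) -> dict:
    """Nudge difficulty parameters until observed performance nears expectation."""
    updated = dict(params)
    if observed < expected - tolerance:      # player struggling -> ease the game
        updated["enemy_speed"] = max(0.1, updated["enemy_speed"] - step)
        updated["num_competitors"] = max(1, updated["num_competitors"] - 1)
    elif observed > expected + tolerance:    # player cruising -> raise the challenge
        updated["enemy_speed"] += step
        updated["num_competitors"] += 1
    return updated

if __name__ == "__main__":
    params = {"enemy_speed": 1.0, "num_competitors": 4}
    print(adjust_difficulty(params, observed=0.55, expected=0.75))  # eased parameters
```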
-
Patent number: 12145065
Abstract: Methods and systems for cooperative or coached gameplay in virtual environments are disclosed. Memory may store a content control profile regarding a set of control inputs associated with an action in a virtual environment of a digital content title. A request may be received from a set of one or more users associated with different source devices regarding cooperative gameplay of the digital content title. At least one virtual avatar may be generated for an interactive session of the digital content title in response to the request. A plurality of control inputs may be received from the plurality of different source devices and combined into a combination set of control inputs. Generating the combination set of control inputs may be based on the content control profile. Virtual actions associated with the virtual avatar may be controlled within the virtual environment in accordance with the combination set of control inputs.
Type: Grant
Filed: May 31, 2022
Date of Patent: November 19, 2024
Assignees: SONY INTERACTIVE ENTERTAINMENT LLC, SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Olga Rudi, Kristine Young, Mahdi Azmandian, Jin Zhang, Sarah Karp, Sepideh Karimi, Elizabeth Juenger
-
Publication number: 20240269559
Abstract: A method and system for providing gaze-based generation of virtual effects indicators correlated with directional sounds is disclosed. Gaze data is tracked via a camera associated with a client device to identify a point of focus within a three-dimensional virtual environment towards which one or both eyes of the player are focused. When the point of focus indicated by the gaze data does not move towards the source location within the three-dimensional virtual environment when the directional sound is received, a virtual effect indicator associated with the type of the indicated directional sound is generated.
Type: Application
Filed: April 17, 2024
Publication date: August 15, 2024
Inventors: Kristie Ramirez, Elizabeth Juenger, Katie Egeland, Sepideh Karimi, Lachmin Singh, Olga Rudi
-
Patent number: 11975264
Abstract: A method and system for providing gaze-based generation of virtual effects indicators correlated with directional sounds is disclosed. Gaze data is tracked via a camera associated with a client device to identify a point of focus within a three-dimensional virtual environment towards which one or both eyes of the player are focused. When the point of focus indicated by the gaze data does not move towards the source location within the three-dimensional virtual environment when the directional sound is received, a virtual effect indicator associated with the type of the indicated directional sound is generated.
Type: Grant
Filed: May 31, 2022
Date of Patent: May 7, 2024
Assignees: SONY INTERACTIVE ENTERTAINMENT LLC, SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Kristie Ramirez, Elizabeth Juenger, Katie Egeland, Sepideh Karimi, Lachmin Singh, Olga Rudi
-
Publication number: 20240115942
Abstract: Eye tracking of the wearer of a virtual reality headset is used to customize/personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different gaze directions. As another example, a VR scene can be augmented with additional objects when the gaze is directed at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
Type: Application
Filed: October 11, 2022
Publication date: April 11, 2024
Inventor: SEPIDEH KARIMI
-
Publication number: 20230410824
Abstract: Systems and methods for audio processing are described. An audio processing system receives audio content that includes a voice sample. The audio processing system analyzes the voice sample to identify a sound type in the voice sample. The sound type corresponds to pronunciation of at least one specified character in the voice sample. The audio processing system generates a filtered voice sample at least in part by filtering the voice sample to modify the sound type. The audio processing system outputs the filtered voice sample.
Type: Application
Filed: May 31, 2022
Publication date: December 21, 2023
Inventors: Jin Zhang, Celeste Bean, Sepideh Karimi, Sudha Krishnamurthy
-
Publication number: 20230405461
Abstract: A method for providing assistance during gameplay is described. The method includes accessing a profile model associated with a user account of a user. The profile model is used to generate one or more predictive indicators based on a plurality of game contexts of one or more games. The method further includes receiving a request for accessing a game for a game session via the user account and generating assistance input for the user responsive to the one or more predictive indicators indicating that the user will be unable to complete a task in the game. The task is associated with a game context. The assistance input is provided before the user performs the task.
Type: Application
Filed: May 27, 2022
Publication date: December 21, 2023
Inventors: Victoria Dorn, Sarah Karp, Sepideh Karimi, Celeste Bean, Liz Juenger
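As one possible reading of the predictive step, the sketch below models the profile as a per-context success-rate lookup and generates assistance before the task when the predicted rate falls below a threshold. The model shape, threshold, and hint format are assumptions; the publication does not prescribe them.

```python
# Minimal sketch of profile-driven, pre-task assistance (illustrative only).
def predict_failure(profile_model: dict[str, float], game_context: str,
                    threshold: float = 0.4) -> bool:
    """Predict that the user will be unable to complete the task for this context."""
    return profile_model.get(game_context, 0.5) < threshold

def maybe_assist(profile_model: dict[str, float], game_context: str) -> str | None:
    # Assistance input is generated before the user attempts the task.
    if predict_failure(profile_model, game_context):
        return f"hint_overlay_for:{game_context}"
    return None

if __name__ == "__main__":
    model = {"boss_fight_3": 0.25, "tutorial_jump": 0.95}
    print(maybe_assist(model, "boss_fight_3"))   # hint_overlay_for:boss_fight_3
    print(maybe_assist(model, "tutorial_jump"))  # None
```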
-
Publication number: 20230398435
Abstract: Methods and systems for processing audio for a user include identifying an interactive zone of the user as the user is interacting with a video game in a real-world environment. The real-world environment is monitored to detect any changes that can affect the interactive zone. Responsive to detecting changes, the volume of the audio directed to one side or both sides of a headphone providing the audio to one or both ears of the user is dynamically adjusted. The audio is adjusted to prevent the user from being distracted while interacting with the video game.
Type: Application
Filed: May 27, 2022
Publication date: December 14, 2023
Inventors: Victoria Dorn, Celeste Bean, Sepideh Karimi, Mahdi Azmandian
-
Publication number: 20230381651
Abstract: A method and system for providing gaze-based generation of virtual effects indicators correlated with directional sounds is disclosed. Gaze data is tracked via a camera associated with a client device to identify a point of focus within a three-dimensional virtual environment towards which one or both eyes of the player are focused. When the point of focus indicated by the gaze data does not move towards the source location within the three-dimensional virtual environment when the directional sound is received, a virtual effect indicator associated with the type of the indicated directional sound is generated.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Kristie Ramirez, Elizabeth Juenger, Katie Egeland, Sepideh Karimi, Lachmin Singh, Olga Rudi
-
Publication number: 20230381652
Abstract: Methods and systems for cooperative or coached gameplay in virtual environments are disclosed. Memory may store a content control profile regarding a set of control inputs associated with an action in a virtual environment of a digital content title. A request may be received from a set of one or more users associated with different source devices regarding cooperative gameplay of the digital content title. At least one virtual avatar may be generated for an interactive session of the digital content title in response to the request. A plurality of control inputs may be received from the plurality of different source devices and combined into a combination set of control inputs. Generating the combination set of control inputs may be based on the content control profile. Virtual actions associated with the virtual avatar may be controlled within the virtual environment in accordance with the combination set of control inputs.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Olga Rudi, Kristine Young, Mahdi Azmandian, Jin Zhang, Sarah Karp, Sepideh Karimi, Elizabeth Juenger
-
Publication number: 20230381662
Abstract: Methods of the present disclosure may collect data as a user plays one or more different types of games, and determinations are made as to whether the difficulty of a game should be changed. The collected data may be evaluated to identify whether the user's gaming performance level corresponds to an expected level of performance. When the user's gaming performance level does not correspond to the expected level of performance, parameters that change the difficulty of the game may be changed automatically. Parameters that relate to movement speed, delay or hesitation, character strengths, numbers of competitors, or other metrics may be changed incrementally until the current user performance level corresponds to the expectation level of the particular user currently playing the game. At this point, the user expectation level may be changed, and the process may be repeated as the user's skills develop over time.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Victoria Dorn, Celeste Bean, Elizabeth Juenger, Kristie Ramirez, Sepideh Karimi, Mahdi Azmandian
-
Patent number: 11806630
Abstract: Methods of the present disclosure may also identify when certain users perform well at certain activities and poorly at others. A particular user could perform very well at a swimming game yet perform poorly at a first-person shooter game. This performance difference may be based on a physical impairment, a lack of training, or other factors. When a potential performance deficiency is identified, the user may be provided with a set of selections that allow the user to participate in a training session or to change functions of a gaming controller to account for the apparent performance deficiency. This additional training or change in controller functions may allow the user to have a more enjoyable experience or to perform at a higher level.
Type: Grant
Filed: May 31, 2022
Date of Patent: November 7, 2023
Assignees: SONY INTERACTIVE ENTERTAINMENT LLC, SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Victoria Dorn, Celeste Bean, Elizabeth Juenger, Kristie Ramirez, Sepideh Karimi, Mahdi Azmandian
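A simple way to picture the deficiency check is to compare each activity's score against the user's best activity and, where the gap is large, surface the two options the abstract mentions (a training session or a controller remap). The activity names, scores, gap threshold, and option labels below are hypothetical.

```python
# Minimal sketch of per-activity performance-deficiency detection with the two
# selectable options named in the abstract (illustrative assumptions throughout).
def deficiency_options(scores: dict[str, float], gap: float = 0.3) -> dict[str, list[str]]:
    """Compare each activity's score to the user's best and suggest options."""
    if not scores:
        return {}
    best = max(scores.values())
    suggestions = {}
    for activity, score in scores.items():
        if best - score > gap:   # performs well elsewhere but poorly here
            suggestions[activity] = ["offer_training_session", "offer_controller_remap"]
    return suggestions

if __name__ == "__main__":
    scores = {"swimming_game": 0.9, "first_person_shooter": 0.4}
    print(deficiency_options(scores))
    # {'first_person_shooter': ['offer_training_session', 'offer_controller_remap']}
```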