Abstract: Aspects of systems and methods for collaborating physical-virtual interfaces are disclosed. In an example, a method may include transmitting digital content to display in a virtual environment that includes one or more human inhabited characters, the digital content corresponding to content displayed on one or more interactive devices. The method may also include receiving first input data representing content markup of the digital content from a first user of a first interactive device of the one or more interactive devices. The method may also include determining an action state or an appearance state for a human inhabited character in response to the first input data. The method may also include transmitting the first input data to display in the virtual environment and transmitting the human inhabited character to display in the virtual environment according to the action state or the appearance state.
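The flow described in this abstract can be sketched as a minimal event handler. This is an illustrative sketch only, not the patented method; the function names, the markup fields, and the mapping rules are all assumptions.

```python
# Hypothetical sketch: route a content-markup event from an interactive
# device into a shared virtual environment and update a human inhabited
# character's action state accordingly. All names are illustrative.

def determine_character_state(markup: dict) -> str:
    """Map a markup event to an action/appearance state (assumed rules)."""
    if markup.get("tool") == "highlight":
        return "pointing_at_content"
    if markup.get("tool") == "erase":
        return "observing"
    return "idle"

def handle_markup_event(markup: dict, environment: list) -> str:
    """Record the markup and the character's new state for display."""
    state = determine_character_state(markup)
    environment.append({"markup": markup, "character_state": state})
    return state
```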
Abstract: Systems and methods for automated control of human inhabited characters. In an example, control of a human inhabited character may be achieved via a plurality of input devices, including, but not limited to, a microphone, a camera, or a hand-held controller, that can modify and trigger changes in the appearance and/or the behavioral response of a character during live interactions with humans. In an example, a computing device may include a neural network that receives the input from the microphone and/or the camera and changes the appearance and/or the behavioral response of the character according to the input. Further, input from the hand-held controller may be used to adjust a mood of the character or, in other words, emphasize or deemphasize the changes to the appearance and/or the behavioral response of the character.
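The mood adjustment this abstract describes, emphasizing or deemphasizing a network-driven change, can be sketched as a simple scaling step. This is an assumed scheme for illustration; the patent does not specify this formula.

```python
# Hypothetical sketch: blend a neural-network-suggested expression
# intensity with a hand-held-controller "mood" value that emphasizes
# or deemphasizes the change. Names and the formula are illustrative.

def apply_mood(base_intensity: float, mood: float) -> float:
    """Scale an expression intensity in [0, 1] by a mood factor in [-1, 1].

    mood > 0 emphasizes the change, mood < 0 deemphasizes it, and
    mood == 0 passes the network's suggestion through unchanged.
    """
    scaled = base_intensity * (1.0 + mood)
    return max(0.0, min(1.0, scaled))  # clamp to a valid intensity
```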
Abstract: A peer-to-peer communication system and method are provided to enable interfacing with an application running on a gaming engine for an avatar simulation or video conference. The system and method establish a real-time peer-to-peer communication link between remotely located users for real-time transmission of audio, video, and data communications. The system and method capture incoming audio and video transmissions from input devices operable by the users while controlling one or more avatars, and transmit, in real time, synchronized audio, video, and data communications to the users over the communication link.
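One common way to keep audio, video, and data synchronized over such a link is to stamp each captured sample with a shared clock and order by that stamp on the receiving side. The sketch below assumes that approach; the packet fields and functions are illustrative, not the patented protocol.

```python
# Hypothetical sketch: tag captured audio/video/data with a shared
# capture timestamp so the receiving peer can present them in sync.
# Field names are illustrative, not from the patent.

import time

def make_packet(audio: bytes, video: bytes, data: dict) -> dict:
    """Bundle one synchronized audio/video/data sample for transmission."""
    return {"ts": time.monotonic(), "audio": audio, "video": video, "data": data}

def in_presentation_order(packets: list) -> list:
    """Receiver side: order packets by capture timestamp before playback."""
    return sorted(packets, key=lambda p: p["ts"])
```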
Abstract: A rating interface system and method are provided that allow human users to continuously rate the impact they or other human users and/or their avatars are having on themselves or others during interpersonal interactions, such as conversations or group discussions. The system and method provide time stamping of users' ratings data and audio and video data of an interaction, and correlate the ratings data with the audio and video data at selected time intervals for subsequent analysis.
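The correlation step this abstract describes, aligning time-stamped ratings with the audio/video timeline at selected intervals, can be sketched as simple time binning. The patent does not specify this exact scheme; the function and its parameters are assumptions.

```python
# Hypothetical sketch: average time-stamped rating values into
# consecutive time bins so they can be lined up against an
# audio/video recording of the interaction. Names are illustrative.

def correlate(ratings, interval: float, duration: float):
    """Average ratings into consecutive bins of `interval` seconds.

    `ratings` is a list of (timestamp, value) pairs; the result maps
    each bin's start time to the mean rating in that bin (or None
    when no rating fell inside the bin).
    """
    bins = {}
    t = 0.0
    while t < duration:
        in_bin = [v for (ts, v) in ratings if t <= ts < t + interval]
        bins[t] = sum(in_bin) / len(in_bin) if in_bin else None
        t += interval
    return bins
```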
Type:
Grant
Filed:
February 19, 2019
Date of Patent:
November 1, 2022
Assignee:
MURSION, INC.
Inventors:
Arjun Nagendran, Scott Compton, William C. Follette
Abstract: A control system provides an interface for virtual characters, or avatars, during live avatar-human interactions. A human interactor can select facial expressions, poses, and behaviors of the virtual character using an input device mapped to menus on a display device.
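The interface described here, an input device mapped to on-screen menus of expressions and poses, can be sketched as a lookup from controller inputs to avatar behaviors. The specific buttons and menu entries below are illustrative assumptions, not the patented mapping.

```python
# Hypothetical sketch: resolve hand-held-controller inputs to the
# avatar facial expressions and poses they are mapped to on the
# display device's menus. The mappings are illustrative.

EXPRESSION_MENU = {"A": "smile", "B": "frown", "X": "surprise", "Y": "neutral"}
POSE_MENU = {"up": "lean_forward", "down": "lean_back"}

def select_from_menu(button: str) -> str:
    """Return the behavior mapped to a button press, or "no_op"."""
    return EXPRESSION_MENU.get(button) or POSE_MENU.get(button, "no_op")
```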
Type:
Grant
Filed:
October 16, 2019
Date of Patent:
February 23, 2021
Assignee:
MURSION, INC.
Inventors:
Alex Zelenin, Brian D. Kelly, Arjun Nagendran
Abstract: A control system provides an interface for virtual characters, or avatars, during live avatar-human interactions. A human interactor can select facial expressions, poses, and behaviors of the virtual character using an input device mapped to menus on a display device.
Type:
Grant
Filed:
November 7, 2016
Date of Patent:
November 26, 2019
Assignee:
MURSION, INC.
Inventors:
Alex Zelenin, Brian D. Kelly, Arjun Nagendran