Abstract: In a method for performing gaze redirection, a video stream of a user is received, each video frame from the video stream including a depiction of a gaze of the user. For each video frame from the video stream, an estimated gaze direction of the gaze of the user in that video frame is determined. A field of view (FOV) for a virtual representation associated with the user in a virtual environment is determined. An updated video frame including a modified gaze direction of the user different from the estimated gaze direction of the gaze of the user is generated.
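The gaze-redirection flow above (estimate gaze per frame, determine the FOV of the user's virtual representation, then emit an updated frame with a modified gaze direction) can be sketched roughly as follows. This is a minimal illustration, not the claimed implementation: `Frame`, `estimate_gaze`, and the idea of representing gaze as a (yaw, pitch) pair are all hypothetical stand-ins; a real system would run a gaze-estimation model over pixel data and warp the eye region of the frame.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    # Stand-in for a video frame: gaze as (yaw, pitch) in degrees
    gaze_direction: tuple

def estimate_gaze(frame: Frame) -> tuple:
    # Placeholder for a learned gaze-estimation model
    return frame.gaze_direction

def redirect_gaze(frame: Frame, fov_center: tuple) -> Frame:
    """Return an updated frame whose depicted gaze points at the
    center of the virtual representation's field of view."""
    estimated = estimate_gaze(frame)
    if estimated != fov_center:
        # Modified gaze direction differs from the estimated one
        return Frame(gaze_direction=fov_center)
    return frame

# For each video frame from the stream, generate an updated frame
stream = [Frame((10.0, -5.0)), Frame((0.0, 0.0))]
fov_center = (0.0, 0.0)
updated = [redirect_gaze(f, fov_center) for f in stream]
```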
Abstract: An attention-based audio adjustment method includes identifying, at a processor and at a first time, a first estimated gaze direction of a first participant within a virtual environment. First audio data is received at the processor from a compute device of the first participant. A second estimated gaze direction of the first participant within the virtual environment is determined by the processor at a second time. Second audio data, different from the first audio data and associated with a virtual representation of a second participant and/or a virtual object, is automatically generated by the processor, based on the first audio data and the second estimated gaze direction. A signal is sent from the processor to the compute device of the first participant, at a third time, to cause an adjustment to an audio output of the compute device of the first participant based on the second audio data.
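One plausible reading of the attention-based adjustment above is that the gain applied to a sound source depends on the angular distance between the participant's estimated gaze direction and that source's position in the virtual environment. The sketch below illustrates only that idea; the `attenuation` function, its linear falloff, and the 90° cutoff are assumptions for illustration, not the method claimed.

```python
def attenuation(gaze_deg: float, source_deg: float, falloff: float = 90.0) -> float:
    """Gain in [0, 1]: 1.0 when gazing straight at the source,
    tapering linearly to 0 at `falloff` degrees away (assumed model)."""
    # Shortest angular difference on a circle, in [0, 180]
    diff = abs((gaze_deg - source_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - diff / falloff)

def adjust_audio(samples, gaze_deg, source_deg):
    # Second audio data generated from the first audio data
    # and the second estimated gaze direction
    g = attenuation(gaze_deg, source_deg)
    return [s * g for s in samples]

first_audio = [0.5, -0.5, 0.25]
# Gaze shifts from 0 deg (toward a source at 0 deg) to 45 deg away,
# so the generated second audio data is attenuated:
second_audio = adjust_audio(first_audio, 45.0, 0.0)
```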
Abstract: A non-immersive virtual reality (NIVR) method includes receiving sets of images of a first user and a second user, each image from the sets of images being an image of the associated user taken at a different angle from a set of angles. Video of the first user and the second user is received and processed. A first location and a first field of view are determined for a first virtual representation of the first user, and a second location and a second field of view are determined for a second virtual representation of the second user. Frames are generated for video planes of each of the first virtual representation of the first user and the second virtual representation of the second user based on the processed video, the sets of images, the first and second locations, and the first and second fields of view.
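A core step in the NIVR method above is choosing, from a user's set of per-angle images, the one that matches the angle at which the other virtual representation views them, given the two locations. The sketch below shows that selection step only, in a 2D plane; the angle set, nearest-angle selection rule, and function names are illustrative assumptions rather than the patented procedure.

```python
import math

def viewing_angle(viewer_xy, target_xy):
    """Angle (degrees, 0-360) from the viewer's location to the target's."""
    dx = target_xy[0] - viewer_xy[0]
    dy = target_xy[1] - viewer_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def select_image(angle_to_image, view_deg):
    """Pick the captured angle nearest (on the circle) to the
    required viewing angle."""
    best = min(angle_to_image,
               key=lambda a: abs((a - view_deg + 180.0) % 360.0 - 180.0))
    return angle_to_image[best]

# Set of images of the second user, each taken at a different angle
images_user2 = {0: "front.png", 90: "left.png",
                180: "back.png", 270: "right.png"}

# First user's virtual representation at (0, 0) viewing the second at (1.0, 0.3)
view = viewing_angle((0.0, 0.0), (1.0, 0.3))
frame_texture = select_image(images_user2, view)  # used on the video plane
```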