Patents by Inventor Tan Xu

Tan Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230179942
    Abstract: The technology described herein is generally directed towards the use of spatial audio to provide directional cues that assist a user in looking in a desired direction in a user-perceived environment, which can be a real-world or a virtual environment. Location data for a user in three-dimensional space can be obtained. A direction of view of the user within the user-perceived environment is determined. Spatial audio can be output that is perceived as coming from a position within the user-perceived environment, such as to provide directional cues for changing the user's direction of view to a different view. The spatial audio can, for example, provide prompts to the user to adjust his or her gaze towards the desired direction. The user's location and direction of view can be mapped to audio that is relevant to the direction of view at that location.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
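The position-to-audio-cue mapping described in the abstract above can be illustrated with a minimal sketch. This is not the patented method, only one way to compute a target's azimuth relative to the listener's heading and derive a simple stereo pan toward it; the function name and the constant-power panning rule are illustrative assumptions:

```python
import math

def directional_cue(listener_xy, listener_heading_deg, target_xy):
    """Compute the azimuth of a target relative to the listener's heading,
    plus left/right stereo gains that pan a cue toward the target
    (constant-power pan; purely illustrative)."""
    dx = target_xy[0] - listener_xy[0]
    dy = target_xy[1] - listener_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # world-space bearing to target
    # Normalize relative azimuth into (-180, 180]: 0 = straight ahead.
    azimuth = (bearing - listener_heading_deg + 180.0) % 360.0 - 180.0
    # Map azimuth in [-90, 90] degrees to a pan angle in [0, pi/2].
    pan = (max(-90.0, min(90.0, azimuth)) + 90.0) / 180.0 * (math.pi / 2)
    left_gain, right_gain = math.cos(pan), math.sin(pan)
    return azimuth, left_gain, right_gain
```

A target directly ahead yields equal gains; a target 90 degrees to the right pans the cue fully to the right channel.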
  • Publication number: 20230179814
    Abstract: The technology described herein is generally directed towards allowing a user to switch among media streams of live events, including generally concurrent events, and among different media streams that are available for the same event. The media streams can be virtual reality streams, video streams and audio streams, and can be from different viewing and/or listening perspectives and quality. The user can see (and hear) previews of a stream, and see metadata associated with each available stream, such as viewer rating, cost to use, video quality data, and a count of current viewers. Via a social media service, a user can see whether any friends are also viewing. An event that is not streamed in virtual reality can be viewed via a virtual reality device by presenting the event via a virtual element, such as a virtual television set, within the virtual environment.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230172534
    Abstract: The technology described herein is generally directed towards collecting digital twin datasets from individual users to represent their individual physical, emotional, chemical and/or environmental conditions. A user's digital twin is matched to one or more other digital twins with similar physical, emotional, chemical, and/or environmental conditions. The matched digital twins can share data and learnings via a virtual anonymous relationship. Multiple digital twins that represent users with similar conditions may be found and treated collectively as a group; a user via his or her digital twin can poll the group to receive and process responses from each respondent. A digital twin can also be that of a therapist or researcher who emulates a patient and uses the emulated digital twin as a proxy to monitor and process the results of other digital twins.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
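The twin-matching step above can be imagined, for illustration only, as a similarity search over numeric condition vectors. The cosine-similarity rule and the threshold below are assumptions, not the claimed matching method:

```python
import math

def match_twins(user_vec, others, threshold=0.9):
    """Return ids of digital twins whose condition vectors are similar to the
    user's, by cosine similarity (hypothetical matching rule; vectors might
    encode physical, emotional, chemical and environmental conditions)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    return [twin_id for twin_id, vec in others.items()
            if cosine(user_vec, vec) >= threshold]
```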
  • Publication number: 20230177258
    Abstract: The disclosed technology is directed towards determining a sub-content element within electronically presented media content based on where a user is currently gazing within the content presentation. In the case of an electronic book, for example, the sub-content elements have been previously mapped to their respective pages and coordinates on each page. As a more particular example, a user can be gazing at a certain paragraph on a displayed page of an electronic book, and that information can be detected and used to associate an annotation with that paragraph for personal output and/or sharing with others. A user can input the annotation data in various ways, including verbally for speech recognition or for audio replay.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
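The gaze-to-sub-content lookup can be sketched minimally, assuming a pre-built index of paragraph bounding boxes per page (the data layout and names are hypothetical):

```python
def paragraph_at_gaze(page_map, page_number, gaze_x, gaze_y):
    """Return the id of the sub-content element (paragraph) containing the
    gaze point. `page_map` maps page numbers to lists of
    (paragraph_id, x0, y0, x1, y1) bounding boxes -- a hypothetical index."""
    for para_id, x0, y0, x1, y1 in page_map.get(page_number, []):
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return para_id
    return None  # gaze fell between mapped elements
```

An annotation could then be attached to the returned paragraph id.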
  • Publication number: 20230177416
    Abstract: Disclosed is managing (creating and maintaining) attendance/participation data describing remote attendance of attendees of an event, or a replay of the event. The event can be a virtual reality event. When a remote user attends an event, subsequent viewers of the event replay can see a digital representation (e.g., an avatar) of the remote user within the replay as having attended the event in-person. Subsequent replays include digital representations of the remote user and the user(s) that viewed previous replay(s) to emulate their attendance. Users can manage their own attendance data, including to obtain proof of attendance. A user can delete his or her presence at an event, such that replays after the deletion do not include that user's representation. A user can go anonymous with respect to an event, such that any replays after the anonymity choice include only a generic representation of the attendee.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230179673
    Abstract: The disclosed technology is directed towards determining that information of interest, corresponding to a meaningful event, is available to be captured and saved, and capturing the information. When an event is determined to satisfy a defined meaningful event likelihood criterion, sensor data (which can include media data), time data and location data are collected and associated with the meaningful event, e.g., in a data store. A presentation/package is generated from the various data, and maintained for subsequent access, e.g., for sending to a recipient. The presentation can include annotation data. The presentation can be conditionally marked for future presentation if and when certain condition data is satisfied. The presentation can be associated with at least one conditional gift.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230179840
    Abstract: The disclosed technology is directed towards collecting data, including multimedia content, for creating a multimedia presentation associated with an event being experienced by a user. Location data of a user's mobile device can be used to determine availability of sensor(s) within a proximity of the mobile device, along with one or more multimedia sensors of the mobile device. The user can select a source group of the available sensors to obtain data from them, including multimedia content such as camera video and microphone audio feeds. The sensor data received from the selected sensor source group is used, optionally along with previously recorded multimedia content, to create and output the multimedia presentation associated with the event. The user can add annotations including user input and the previously recorded multimedia content for inclusion in the multimedia presentation.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Patent number: 11671575
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: June 6, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
  • Patent number: 11665309
    Abstract: A processing system having at least one processor may establish a communication session between a first communication system of a first user and a second communication system of a second user, the communication session including first video content of a first physical environment of the first user and second video content of a second physical environment of the second user, determine a first visualization action for a first physical object in the first physical environment in accordance with a first configuration setting of the first user for the communication session, obtain the first video content from a first camera of the first communication system, detect the first physical object in the first video content, and perform the first visualization action to modify the first video content. The processing system may then transmit first visualization information including the first video content that is modified to the second communication system.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: May 30, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Zhu Liu, David Crawford Gibbon, Bernard S. Renger, Behzad Shahraray, Tan Xu
  • Patent number: 11663725
    Abstract: One example of a method includes receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints, identifying a target that is present in the scene, wherein the target is identified based on a determination of a likelihood of being of interest to a viewer of the scene, determining a trajectory of the target through the plurality of video streams, wherein the determining is based in part on an automated visual analysis of the plurality of video streams, rendering a volumetric video traversal that follows the target through the scene, wherein the rendering comprises compositing the plurality of video streams, receiving viewer feedback regarding the volumetric video traversal, and adjusting the rendering in response to the viewer feedback.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: May 30, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Tan Xu, Lee Begeja, Bernard S. Renger, Behzad Shahraray, Raghuraman Gopalan, Eric Zavesky
  • Patent number: 11654372
    Abstract: Aspects of the subject disclosure may include, for example, obtaining portions of video content from a video game from video game server(s) associated with a video game provider, selecting a first portion of video content from the portions of the video content, and providing the first portion to device(s) associated with viewer(s). Each device presents the first portion of the video content. Further embodiments include obtaining popularity information from the device(s) according to feedback based on presenting the first portion of the video content to the device(s), determining that the popularity information satisfies a popularity threshold associated with the video content, determining a subject matter corresponding to the first portion of the video content, and identifying a second portion of the video content from the video game to be recorded according to the subject matter. Other embodiments are disclosed.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: May 23, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Jean-Francois Paiement, Lee Begeja, Jianxiong Dong, Tan Xu, Eric Zavesky
  • Patent number: 11651546
    Abstract: Aspects of the subject disclosure may include, for example, predicting a field of view of a viewer to obtain a predicted field of view based on information about the viewer and a scoring of a point of interest in media content. A viewer line of sight is obtained between the viewer and a presentation of the media content, and the scoring of the point of interest in the media content is updated to obtain an updated scoring based on the viewer line of sight, the predicted field of view being updated according to the updated scoring. Other embodiments are disclosed.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 16, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Tan Xu, Eric Zavesky, Behzad Shahraray, David Crawford Gibbon
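One way to picture the line-of-sight-driven score update above is an exponential moving average that rises while the point of interest is in the viewer's gaze and decays otherwise. The patent does not specify an update rule, so the following is purely illustrative:

```python
def update_poi_score(score, gaze_angle_to_poi_deg, alpha=0.1, fov_deg=30.0):
    """Hypothetical point-of-interest score update: blend the current score
    toward 1.0 when the POI lies within `fov_deg` of the viewer's line of
    sight, and toward 0.0 when it does not."""
    looked = 1.0 if abs(gaze_angle_to_poi_deg) <= fov_deg / 2 else 0.0
    return (1.0 - alpha) * score + alpha * looked
```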
  • Publication number: 20230128178
    Abstract: A method may include receiving current environment condition information associated with an extended reality device; receiving historical environment condition information associated with the extended reality device; based on the current environment condition information and the historical environment condition information, determining one or more adjustments to meet a performance threshold for rendering objects on the extended reality device or using the extended reality device; and sending instructions to implement the one or more adjustments to meet the performance threshold for rendering objects on the extended reality device or using the extended reality device.
    Type: Application
    Filed: October 21, 2021
    Publication date: April 27, 2023
    Inventors: Eric Zavesky, Wen-Ling Hsu, Tan Xu
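A toy sketch of selecting adjustments from current and historical conditions follows; the condition keys, blending weights, thresholds, and adjustment names are all hypothetical assumptions, not the claimed method:

```python
def rendering_adjustments(current, historical, fps_target=60.0):
    """Pick adjustments so rendering meets a performance threshold.
    `current` and `historical` are dicts of observed conditions
    (hypothetical keys: 'fps', 'ambient_lux', 'battery_pct')."""
    adjustments = []
    # Blend current and historical frame rates to smooth momentary dips.
    expected_fps = 0.7 * current["fps"] + 0.3 * historical["fps"]
    if expected_fps < fps_target:
        adjustments.append("reduce_render_resolution")
    if current["ambient_lux"] > 2 * historical["ambient_lux"]:
        adjustments.append("increase_display_brightness")
    if current["battery_pct"] < 20:
        adjustments.append("limit_object_count")
    return adjustments
```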
  • Publication number: 20230120772
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first group of volumetric content, generating first metadata for the first group of volumetric content, and storing the first group of volumetric content. Further embodiments include obtaining a second group of volumetric content, generating second metadata for the second group of volumetric content, and storing the second group of volumetric content.
    Type: Application
    Filed: October 14, 2021
    Publication date: April 20, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, David Crawford Gibbon, Tan Xu, Wen-Ling Hsu, Richard Palazzo
  • Publication number: 20230113222
    Abstract: A method for automatic selection of viewpoint characteristics and trajectories in volumetric video presentations includes receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints, identifying a set of desired viewpoint characteristics for a volumetric video traversal of the scene, determining a trajectory through the plurality of video streams that is consistent with the set of desired viewpoint characteristics, rendering a volumetric video traversal that follows the trajectory, wherein the rendering comprises compositing the plurality of video streams, and publishing the volumetric video traversal for viewing on a user endpoint device.
    Type: Application
    Filed: October 10, 2022
    Publication date: April 13, 2023
    Inventors: David Crawford Gibbon, Tan Xu, Zhu Liu, Behzad Shahraray, Eric Zavesky
  • Patent number: 11605402
    Abstract: Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: March 14, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Tan Xu, Behzad Shahraray, Eric Zavesky, Lee Begeja, Paul Triantafyllou, Zhu Liu, Bernard S. Renger
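The goal- and audience-aware composition step above might be pictured as a simple tag-overlap ranking over candidate clips. This is a hypothetical scoring; the patent does not disclose a specific ranking function:

```python
def compose_video(clips, goal_tags, audience_tags, max_clips=3):
    """Rank clips by how many tags they share with the user's goal and the
    audience's preferences, then keep the top few (illustrative scoring)."""
    def score(clip):
        tags = set(clip["tags"])
        return len(tags & set(goal_tags)) + len(tags & set(audience_tags))
    ranked = sorted(clips, key=score, reverse=True)
    return [clip["id"] for clip in ranked[:max_clips]]
```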
  • Publication number: 20230070050
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
    Type: Application
    Filed: September 9, 2021
    Publication date: March 9, 2023
    Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
  • Publication number: 20230063510
    Abstract: A method for streaming a 360 degree video over a communications network, wherein the video is streamed in a plurality of chunks, includes selecting a prediction window during which to predict a field of view within the video, wherein the field of view is expected to be visible to a viewer at a time of playback of a next chunk of the video, wherein a duration of the prediction window is based on at least one condition within the communications network, selecting a machine learning algorithm to predict the field of view based on a head movement of the viewer, wherein the machine learning algorithm is selected based on the duration of the prediction window, predicting the field of view based on the head movement of the viewer and the machine learning algorithm, identifying a tile of the next chunk that corresponds to the field of view, and downloading the tile.
    Type: Application
    Filed: October 10, 2022
    Publication date: March 2, 2023
    Inventors: Bo Han, Vijay Gopalakrishnan, Tan Xu
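The network-conditioned choice of prediction window and predictor can be sketched as a small policy function. The thresholds, window durations, and model names here are illustrative assumptions, not values from the patent:

```python
def choose_prediction(rtt_ms, throughput_mbps):
    """Hypothetical policy: on a fast network, tiles can be fetched late, so
    a short prediction window and a simple extrapolator suffice; on a slow
    network, predict further ahead with a heavier sequence model."""
    window_s = 0.5 if (rtt_ms < 50 and throughput_mbps > 25) else 2.0
    model = "linear_regression" if window_s <= 1.0 else "lstm"
    return window_s, model
```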
  • Publication number: 20230063988
    Abstract: A processing system including at least one processor may capture data from a sensor comprising a microphone of a wearable device, the data comprising external audio data captured via the microphone, determine first audio data of a first audio source in the external audio data, apply the first audio data to a first situational detection model, and detect a first situation via the first situational detection model. The processing system may then modify, in response to detecting the first situation via the first situational detection model, the external audio data via a change to the first audio data in the external audio data to generate a modified audio data, in accordance with at least a first audio adjustment corresponding to the first situational detection model, where the modifying comprises increasing or decreasing a volume of the first audio data, and present the modified audio data via an earphone of the wearable device.
    Type: Application
    Filed: October 10, 2022
    Publication date: March 2, 2023
    Inventors: Jean-Francois Paiement, David Crawford Gibbon, Tan Xu, Eric Zavesky
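The situation-conditioned volume change can be illustrated minimally, assuming a hypothetical table that maps detected situations to gain multipliers (the situation labels and gains below are made up for illustration):

```python
def apply_situation_gain(samples, situation, gain_table=None):
    """Scale audio samples according to a detected situation, then clip to
    the valid [-1, 1] range. `gain_table` maps situation labels to volume
    multipliers -- e.g., boost sirens so they stay audible, duck crowd noise."""
    if gain_table is None:
        gain_table = {"siren": 1.5, "crowd_noise": 0.3, "speech": 1.0}
    gain = gain_table.get(situation, 1.0)
    return [max(-1.0, min(1.0, s * gain)) for s in samples]
```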
  • Patent number: 11593725
    Abstract: A method for gaze-based workflow adaptation includes identifying a workflow in which a user is engaged, where the workflow comprises a plurality of tasks that collectively achieves a desired goal, creating a trigger for a given task of the plurality of tasks, wherein the trigger specifies an action to be automatically taken in response to a gaze of the user meeting a defined criterion, monitoring a progress of the workflow, monitoring the gaze of the user, and sending a signal to a remote device in response to the gaze of the user meeting the defined criterion, wherein the signal instructs the remote device to take the action.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: February 28, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Richard Palazzo, Lee Begeja, David Crawford Gibbon, Zhu Liu, Tan Xu, Eric Zavesky
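The gaze-trigger criterion above might, for illustration, be a dwell-time test over gaze samples inside a screen region. The dwell-based criterion, parameters, and function names are assumptions, not the patent's defined criterion:

```python
def check_gaze_trigger(gaze_samples, region, dwell_s=2.0, sample_period_s=0.1):
    """Return True once the gaze has dwelt inside `region` (x0, y0, x1, y1)
    for `dwell_s` seconds of consecutive samples; the caller would then
    signal the remote device to take the triggered action."""
    x0, y0, x1, y1 = region
    needed = round(dwell_s / sample_period_s)  # consecutive samples required
    run = 0
    for gx, gy in gaze_samples:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0  # gaze left the region; restart the dwell count
    return False
```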