Patents by Inventor Tan Xu

Tan Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954152
    Abstract: The present specification discloses video matching. In a computer-implemented method, a plurality of feature vectors of a target video is obtained. A candidate video similar to the target video is retrieved from a video database based on the plurality of feature vectors. A time domain similarity matrix feature map between the target video and the candidate video is constructed. The feature map is used as input to a deep learning detection model, which outputs a video segment of the candidate video that matches the target video, together with a corresponding similarity score.
    Type: Grant
    Filed: January 3, 2023
    Date of Patent: April 9, 2024
    Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
    Inventors: Chen Jiang, Wei Zhang, Qing Wang, Yuan Cheng, Furong Xu, Kaiming Huang, Xiaobo Zhang, Feng Qian, Xudong Yang, Tan Pan
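A minimal sketch of the time domain similarity matrix described in this abstract, assuming each video is reduced to one feature vector per frame (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def time_domain_similarity_matrix(target_feats, candidate_feats):
    """Cosine similarity between every target frame and every candidate frame.

    target_feats: (T, D) array, one feature vector per target-video frame.
    candidate_feats: (C, D) array, one feature vector per candidate-video frame.
    Returns a (T, C) feature map a detection model could consume.
    """
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    return t @ c.T

# A matching segment shows up as a high-similarity diagonal streak in the map.
target = np.random.rand(8, 128)
candidate = np.vstack([np.random.rand(4, 128), target])  # frames 4-11 copy the target
sim = time_domain_similarity_matrix(target, candidate)
```

A detection model trained on such maps would then localize the diagonal streak and report the matching segment plus its similarity.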
  • Patent number: 11890757
    Abstract: The present disclosure describes a device, computer-readable medium, and method for providing logistical support for robots. In one example, the method includes receiving, at a centralized support center that is in communication with a plurality of robots, a query from a first robot of the plurality of robots that has been deployed to perform a task, wherein the query indicates an error encountered by the first robot and evidence of the error collected by the first robot, formulating, at the centralized support center, a proposed solution to resolve the error, wherein the formulating comprises soliciting an analysis of the evidence by a party other than the first robot, and delivering, by the centralized support center, the proposed solution to the first robot.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: February 6, 2024
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Eric Zavesky, David Crawford Gibbon, Bernard S. Renger, Tan Xu
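The query/solution flow in this abstract could be pictured roughly as follows; the class names, fields, and the trivial "analyst" are assumptions for illustration, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class RobotQuery:
    robot_id: str
    error: str
    evidence: list  # e.g. sensor logs or images collected by the robot

class SupportCenter:
    """Centralized support center in communication with a fleet of robots."""

    def __init__(self, analysts):
        # Parties other than the robot itself, solicited to analyze evidence.
        self.analysts = analysts

    def formulate_solution(self, query: RobotQuery) -> dict:
        analyses = [analyze(query.evidence) for analyze in self.analysts]
        # Deliver the proposed solution back to the querying robot.
        return {"robot_id": query.robot_id, "proposed_solution": analyses[0]}

center = SupportCenter(analysts=[lambda ev: f"recalibrate ({len(ev)} items reviewed)"])
reply = center.formulate_solution(RobotQuery("r1", "grip-failure", ["img1", "log1"]))
```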
  • Publication number: 20230410159
    Abstract: Aspects of the subject disclosure may include, for example, obtaining contextual information associated with a user, wherein the user is engaged in an immersive environment using a target user device, and wherein the contextual information comprises user profile data, data regarding a location of the user, data regarding one or more inputs provided by the user, or a combination thereof, receiving data regarding a metaverse object in the immersive environment, determining a relevance of the metaverse object to the user based on the contextual information and the data regarding the metaverse object, responsive to the determining the relevance of the metaverse object to the user, generating a personalized recommendation or review of the metaverse object for the user, and causing the personalized recommendation or review to be provided to the user in the immersive environment for user consumption. Other embodiments are disclosed.
    Type: Application
    Filed: June 15, 2022
    Publication date: December 21, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Jean-Francois Paiement, Aritra Guha, Qiong Wu, Wen-Ling Hsu, Jianxiong Dong, Tan Xu
  • Publication number: 20230368811
    Abstract: In one example, a method performed by a processing system including at least one processor includes establishing a communication group including at least three users of an extended reality environment as members, tracking locations and directional positions of the members of the communication group within the extended reality environment and within physical environments of the members, determining that a second user of the at least three users is an intended recipient of a first utterance made by a first user of the at least three users, and presenting the first utterance to the second user, where a directionality associated with a presentation of the first utterance is based on a location and a directional position of the first user relative to the second user.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Eric Zavesky, Zhengyi Zhou, Mohammed Abdel-Wahab, Tan Xu, David Gibbon
  • Publication number: 20230353715
    Abstract: Aspects of the subject disclosure may include, for example, obtaining sensor data that includes an image of a projection environment, determining physical objects portrayed within the image, and characterizing physical properties of the physical objects according to the sensor data to obtain a characterization. A first target object of the physical objects having a first projection surface is identified according to the characterization, and a source image is modified according to the first projection surface. The modified image is provided to a projector adapted to project the modified image onto the first projection surface. Other embodiments are disclosed.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Brian Novack, Rashmi Palamadai, Tan Xu
  • Publication number: 20230343036
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a virtual item to be inserted into a target environment to create an extended reality environment, detecting conditions within the target environment, merging the virtual item and the target environment to create the extended reality environment, wherein the merging includes modifying at least one of: the virtual item or the target environment based on the conditions within the target environment, and presenting the extended reality environment to a user.
    Type: Application
    Filed: April 20, 2022
    Publication date: October 26, 2023
    Inventors: Tan Xu, Brian Novack, Eric Zavesky, Rashmi Palamadai
  • Publication number: 20230309164
    Abstract: In one example, a method includes instructing a mobile device of a user to establish a connection with an application server via a first network slice, where the first network slice is configured based on an initial resource need of a network connected application executing on the mobile device and an initial set of network conditions, sending, to the mobile device, a prediction of at least one network condition for the first network slice at a time in the future, receiving, from the mobile device, an indication of an updated resource need of the network connected application, and providing, to the mobile device, instructions for establishing a connection to the application server over a second network slice, where the second network slice is configured based on the updated resource need and a current set of network conditions.
    Type: Application
    Filed: March 23, 2022
    Publication date: September 28, 2023
    Inventors: Tan Xu, Rashmi Palamadai, Brian Novack, Eric Zavesky
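One way to picture the slice-reselection logic this abstract describes, where a predicted network condition plus an updated resource need triggers a move to a different slice (the capacity/cost fields and the load-factor model are made-up simplifications):

```python
def select_slice(slices, resource_need_mbps, conditions):
    """Pick the cheapest slice whose effective capacity, under the given
    network conditions, still covers the application's resource need."""
    load = conditions.get("load_factor", 1.0)  # 1.0 = uncongested
    usable = [s for s in slices if s["capacity_mbps"] * load >= resource_need_mbps]
    if not usable:
        raise RuntimeError("no slice satisfies the resource need")
    return min(usable, key=lambda s: s["cost"])

slices = [
    {"name": "slice-a", "capacity_mbps": 50, "cost": 1},
    {"name": "slice-b", "capacity_mbps": 200, "cost": 3},
]
initial = select_slice(slices, resource_need_mbps=40, conditions={"load_factor": 1.0})
# Predicted congestion halves effective capacity, forcing a move to the larger slice.
updated = select_slice(slices, resource_need_mbps=40, conditions={"load_factor": 0.5})
```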
  • Publication number: 20230298233
    Abstract: The present disclosure provides systems and methods for material decomposition. The systems may obtain scan projection data of a target object. The systems may determine corrected projection data by correcting, based on one or more pixel parameters, the scan projection data. The systems may also determine a reconstructed image by performing, based on the corrected projection data, image reconstruction. The systems may further determine density distribution images of at least two target materials of the target object by decomposing the reconstructed image.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 21, 2023
    Applicant: WUHAN UNITED IMAGING LIFE SCIENCE INSTRUMENT CO., LTD.
    Inventors: Tan XU, Jinglu MA, Wenting XU
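The final step of the pipeline above, decomposing a reconstructed image into density images of two basis materials, can be sketched as solving a small linear system per pixel. The attenuation coefficients below are invented for illustration; real values depend on the scanner, energies, and materials:

```python
import numpy as np

# Hypothetical attenuation coefficients (1/cm) of two basis materials
# at two X-ray energies: rows = energies, columns = materials.
A = np.array([[0.20, 0.18],
              [0.15, 0.30]])

def decompose(mu_low, mu_high):
    """Solve, per pixel, for the two material densities that explain the
    measured attenuation at both energies."""
    measurements = np.stack([mu_low.ravel(), mu_high.ravel()])
    densities = np.linalg.solve(A, measurements)
    return densities.reshape((2,) + mu_low.shape)

# Forward-simulate one pixel with known densities, then recover them.
true_d = np.array([1.0, 2.0])
mu = A @ true_d
recovered = decompose(np.array([[mu[0]]]), np.array([[mu[1]]]))
```

The pixel-parameter correction and image reconstruction steps named in the abstract would run before this, producing the corrected attenuation images that feed the solve.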
  • Publication number: 20230244196
    Abstract: Aspects of the subject disclosure may include, for example, obtaining an image of an object over a communication network from a first communication device associated with a user, obtaining first information associated with the object, and receiving user-generated input over the communication network from the first communication device. The user-generated input indicates to generate a group of complementary objects associated with the object. Further embodiments can include, in response to receiving the user-generated input, providing instructions to generate the group of complementary objects over the communication network to a second communication device associated with the user based on the image of the object and the first information. The second communication device generates the group of complementary objects. Other embodiments are disclosed.
    Type: Application
    Filed: January 31, 2022
    Publication date: August 3, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Rashmi Palamadai, Brian Novack, Eric Zavesky, Tan Xu
  • Publication number: 20230229685
    Abstract: A device includes a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations of loading a user profile for a user consuming undigitized, static media content; identifying an area of interest in the undigitized, static media content; analyzing the area of interest; responsive to the user profile, creating immersive content to enhance the area of interest; and providing the immersive content for presentation to the user.
    Type: Application
    Filed: January 18, 2022
    Publication date: July 20, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Rashmi Palamadai, Eric Zavesky, Tan Xu, Brian Novack
  • Patent number: 11700407
    Abstract: The disclosed technology is directed towards inserting user-personalized or other user-related supplementary media content into primary media content being presented to the user. The personalized media content can be inserted into available insertion slots associated with the primary media content. The inserted content is based on the context of the primary media, e.g., a location or theme of a movie scene. For example, upon obtaining primary media content that is video, supplementary media content related to a group of frames of the primary media content can be determined. Supplementary media content is combined with the primary media content at a presentation position associated with the group of frames to output modified media content. For a video, for example, the supplementary content can be inserted between scenes, overlaid onto a scene, or presented proximate a scene.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: July 11, 2023
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230215472
    Abstract: Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
    Type: Application
    Filed: March 13, 2023
    Publication date: July 6, 2023
    Inventors: Tan Xu, Behzad Shahraray, Eric Zavesky, Lee Begeja, Paul Triantafyllou, Zhu Liu, Bernard S. Renger
  • Patent number: 11695914
    Abstract: Aspects of the subject disclosure may include, for example, transmitting viewpoint information associated with a first portion of a three-dimensional (3D)/volumetric video to a device, wherein the viewpoint information comprises a first coordinate in 3D space associated with a first viewing direction in a playback of the first portion and a first timestamp associated with the first portion, receiving, from the device, a multiplane image (MPI) representation of a second portion of the 3D video responsive to the transmitting of the viewpoint information, and providing an image of the MPI representation to a display device. Other embodiments are disclosed.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: July 4, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Tan Xu, Bo Han, Eric Zavesky
  • Publication number: 20230206146
    Abstract: A method for gaze-based workflow adaptation includes identifying a workflow in which a user is engaged, where the workflow comprises a plurality of tasks that collectively achieves a desired goal, creating a trigger for a given task of the plurality of tasks, wherein the trigger specifies an action to be automatically taken in response to a gaze of the user meeting a defined criterion, monitoring a progress of the workflow, monitoring the gaze of the user, and sending a signal to a remote device in response to the gaze of the user meeting the defined criterion, wherein the signal instructs the remote device to take the action.
    Type: Application
    Filed: February 27, 2023
    Publication date: June 29, 2023
    Inventors: Richard Palazzo, Lee Begeja, David Crawford Gibbon, Zhu Liu, Tan Xu, Eric Zavesky
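The trigger mechanism in this abstract, an action fired when the user's gaze meets a defined criterion, could be sketched like this; the dwell criterion, region name, and signal string are all hypothetical:

```python
def make_gaze_trigger(criterion, action):
    """Return a monitor that fires `action` once a stream of gaze samples
    meets `criterion` (e.g. dwelling on a region long enough)."""
    def monitor(gaze_samples):
        return action() if criterion(gaze_samples) else None
    return monitor

# Criterion: the last 3 gaze samples all land on the region of interest.
def dwell_on_tray(samples):
    return len(samples) >= 3 and all(s == "tool_tray" for s in samples[-3:])

trigger = make_gaze_trigger(dwell_on_tray, action=lambda: "signal:advance_workflow")

result_idle = trigger(["screen", "tool_tray", "screen"])
result_fired = trigger(["tool_tray", "tool_tray", "tool_tray"])
```

In the patent's terms, the returned signal would be sent to a remote device, which performs the specified action.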
  • Publication number: 20230209003
    Abstract: In one example, a method performed by a processing system including at least one processor includes identifying a background for a scene of video content, generating a three-dimensional model and visual effects for an object appearing in the background for the scene of video content, displaying a three-dimensional simulation of the background for the scene of video content, including the three-dimensional model and visual effects for the object, modifying the three-dimensional simulation of the background for the scene of video content based on user feedback, capturing video footage of a live action subject appearing together with the background for the scene of video content, where the live action subject appearing together with the background for the scene of video content creates the scene of video content, and saving the scene of video content.
    Type: Application
    Filed: December 28, 2021
    Publication date: June 29, 2023
    Inventors: Eric Zavesky, Tan Xu, Zhengyi Zhou
  • Publication number: 20230188716
    Abstract: In one example, a processing system including at least one processor may obtain a predicted viewport of a mobile computing device for an immersive visual stream, identify a first plurality of blocks of a frame of the immersive visual stream that are associated with the predicted viewport, encode the first plurality of blocks at a first encoding quality level, and encode a second plurality of blocks of the frame at a second encoding quality level, where the second encoding quality level is associated with a lesser visual quality as compared to the first encoding quality level and where the second plurality of blocks are outside of the predicted viewport. The processing system may then transmit the frame having the first plurality of blocks encoded at the first encoding quality level and the second plurality of blocks encoded at the second encoding quality level to the mobile computing device.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Inventors: Bo Han, Tan Xu, Zhengye Liu
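The two-tier encoding described above reduces, per frame, to mapping each block to a quality level based on whether it falls inside the predicted viewport. A toy sketch (block indexing and quality values are assumptions; real encoders would use e.g. per-tile QP):

```python
def assign_block_quality(num_blocks, viewport_blocks, high_q=1, low_q=2):
    """Map each block index of a frame to an encoding quality level:
    blocks inside the predicted viewport get the higher-quality level,
    all remaining blocks get the lower-quality (lesser visual) level."""
    viewport = set(viewport_blocks)
    return [high_q if b in viewport else low_q for b in range(num_blocks)]

# A 4x4 block grid where the predicted viewport covers the top-left 2x2.
qualities = assign_block_quality(16, viewport_blocks={0, 1, 4, 5})
```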
  • Patent number: 11675419
    Abstract: A method includes obtaining a set of components that, when collectively rendered, presents an immersive experience, extracting a narrative from the set of components, learning a plurality of details of the immersive experience that exhibit variance, based on an analysis of the set of components and an analysis of the narrative, presenting a device of a creator of the immersive experience with an identification of the plurality of the details, receiving from the device of the creator, an input, wherein the input defines a variant for a default segment of one component of the set of components, and wherein the variant presents an altered form of at least one detail of the plurality of details that is presented in the default segment, and storing the set of components, the variant, and information indicating how and when to present the variant to a user device in place of the default segment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 13, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Richard Palazzo, Eric Zavesky, Tan Xu, Jean-Francois Paiement
  • Publication number: 20230179816
    Abstract: The disclosed technology is directed towards inserting user-personalized or other user-related supplementary media content into primary media content being presented to the user. The personalized media content can be inserted into available insertion slots associated with the primary media content. The inserted content is based on the context of the primary media, e.g., a location or theme of a movie scene. For example, upon obtaining primary media content that is video, supplementary media content related to a group of frames of the primary media content can be determined. Supplementary media content is combined with the primary media content at a presentation position associated with the group of frames to output modified media content. For a video, for example, the supplementary content can be inserted between scenes, overlaid onto a scene, or presented proximate a scene.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230172534
    Abstract: The technology described herein is generally directed towards collecting digital twin datasets from individual users to represent their individual physical, emotional, chemical and/or environmental conditions. A user's digital twin is matched to one or more other digital twins with similar physical, emotional, chemical, and/or environmental conditions. The matched digital twins can share data and learnings via a virtual anonymous relationship. Multiple digital twins that represent users with similar conditions may be found and treated collectively as a group; a user via his or her digital twin can poll the group to receive and process responses from each respondent. A digital twin can be a therapist or a researcher who emulates a patient and uses the emulated digital twin as a proxy to monitor and process the results of other digital twins.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230179942
    Abstract: The technology described herein is generally directed towards the use of spatial audio to provide directional cues to assist a user in looking in a desired direction in a user perceived environment, which can be a real-world or a virtual environment. Location data for a user in three-dimensional space can be obtained. A direction of view of the user within the user-perceived environment is determined. Spatial audio can be output that is perceived as coming from a position within the user-perceived environment, such as to provide directional cues directed to changing the direction of view of the user to a different view. The spatial audio can provide prompts to the user to adjust his or her vision towards the desired direction, for example. The user's location and direction of view can be mapped to audio that is relevant to the user's direction of view when at that location.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
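A directional audio cue like the one described above can be sketched with constant-power stereo panning: compute the bearing from the user's position and heading to the cue's source, then derive left/right gains. This is a simplified 2-D illustration, not the patented method:

```python
import math

def stereo_gains(user_pos, user_heading_deg, source_pos):
    """Left/right gains that make a cue sound as if it comes from source_pos,
    relative to the direction the user is facing (constant-power panning)."""
    dx = source_pos[0] - user_pos[0]
    dy = source_pos[1] - user_pos[1]
    # Bearing of the source relative to the user's heading (0 = straight ahead).
    bearing = math.degrees(math.atan2(dx, dy)) - user_heading_deg
    # Clamp to [-90, 90] degrees and map onto a pan angle in [0, pi/2].
    pan = (max(-90.0, min(90.0, bearing)) + 90.0) / 180.0 * (math.pi / 2)
    return math.cos(pan), math.sin(pan)  # (left_gain, right_gain)

# A source directly to the user's right is heard almost entirely on the right.
left, right = stereo_gains((0, 0), user_heading_deg=0, source_pos=(10, 0))
```

A prompt rendered with these gains nudges the user to turn toward the desired direction; as the user turns, the bearing and gains update.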