Patents by Inventor Xin Tong

Xin Tong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250119115
    Abstract: The present application discloses a bulk acoustic wave resonator and a method for manufacturing the same. The bulk acoustic wave resonator includes a substrate and a plurality of resonance assemblies arranged on the substrate, each of the plurality of resonance assemblies includes a bottom electrode, a piezoelectric layer, and a top electrode which are arranged on the substrate in sequence; the plurality of resonance assemblies are connected in sequence to form a connecting ring; the top electrode of one resonance assembly in two adjacent resonance assemblies is connected to the bottom electrode of the other resonance assembly in the two adjacent resonance assemblies, and the top electrodes of two target resonance assemblies spaced apart by one resonance assembly are connected to each other to transmit an input signal; and the bottom electrodes of the two target resonance assemblies are connected to each other to transmit an output signal.
    Type: Application
    Filed: May 20, 2024
    Publication date: April 10, 2025
    Inventors: Jinhao DAI, Jinxian ZHANG, Tingting YANG, Si CHEN, Hanlong YUAN, Xin TONG, Liangyu LU, Guoqiang WU, Jian WANG, Bowoon SOON, Chengliang SUN
  • Publication number: 20250093636
    Abstract: Hyper-Heisenberg scaling quantum imaging techniques that pass an idler photon of each entangled photon pair three times through an idler objective pair, pass a signal photon of each entangled photon pair at least once through a signal objective pair, and use measurements of coincidence detection to yield a coincidence image with a spatial resolution of about four times that of classical imaging.
    Type: Application
    Filed: September 20, 2024
    Publication date: March 20, 2025
    Inventors: Lihong Wang, Xin Tong, Zhe He, Yide Zhang
  • Publication number: 20250047806
    Abstract: Methods and systems for real-time video enhancement are provided herein. During a video conference, a current frame of a video stream generated by a client device of a plurality of client devices participating in the video conference is identified. An enhanced previous frame, corresponding to an enhanced version of a previous frame in the video stream, is identified. At least the current frame and the enhanced previous frame are provided as input to a machine learning model. An output of the machine learning model is obtained, indicating an enhanced current frame corresponding to an enhanced version of the current frame. The current frame is replaced with the enhanced current frame in the video stream.
    Type: Application
    Filed: August 2, 2023
    Publication date: February 6, 2025
    Inventors: Anne Menini, Jeya Maria Jose Valanarasu, Rahul Garg, Andeep Singh Toor, Xin Tong, Weijuan Xi
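The recurrent enhancement scheme described in the abstract above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; `toy_model` is a hypothetical stand-in for the learned network, which in the real system receives the current frame together with the enhanced previous frame:

```python
import numpy as np

def enhance_stream(frames, model):
    """Enhance frames sequentially, feeding each call the previous
    enhanced frame so the model can exploit temporal consistency."""
    enhanced = []
    prev = frames[0]  # bootstrap: no enhanced frame exists yet
    for frame in frames:
        out = model(frame, prev)   # model sees current + enhanced previous
        enhanced.append(out)
        prev = out                 # output becomes the next call's reference
    return enhanced

# Toy "model": average the two inputs (stands in for a learned network).
toy_model = lambda cur, prev: 0.5 * (cur + prev)

stream = [np.full((2, 2), float(i)) for i in range(4)]
result = enhance_stream(stream, toy_model)
```

The key design point is that each output is fed back as input, so enhancement decisions stay consistent across frames instead of being made per frame in isolation.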
  • Patent number: 12208284
    Abstract: A controller (600) for a radiotherapy device (320) is provided; the radiotherapy device (320) being configured to provide therapeutic radiation to a patient (308) via a source (300) of therapeutic radiation, wherein the radiotherapy device (320) comprises a first rotatable member (304), the rotation of which can alter a physical attribute of the therapeutic radiation provided, and a patient support member (310), which is linearly moveable in at least one of a longitudinal direction and a lateral direction. The controller (600) comprises a first rotatable actuator (608) for controlling a movement of the first rotatable member (304) and a second actuator (620) for controlling a movement of the patient support member (310).
    Type: Grant
    Filed: November 26, 2020
    Date of Patent: January 28, 2025
    Assignee: ELEKTA BEIJING MEDICAL SYSTEMS CO., LTD
    Inventors: Xin Tong, Andrew Jones, Tong Yang, Weicheng Zhao
  • Publication number: 20250030816
    Abstract: According to implementations of the subject matter described herein, there is provided a solution for an immersive video conference. In the solution, a conference mode for the video conference is determined at first, the conference mode indicating a layout of a virtual conference space for the video conference, and viewpoint information associated with the second participant in the video conference is determined based on the layout. Furthermore, a first view of the first participant is determined based on the viewpoint information and then sent to a conference device associated with the second participant to display a conference image to the second participant. Thereby, video conference participants can obtain a more authentic and immersive video conference experience, and a desired virtual conference space layout can be obtained more flexibly according to needs.
    Type: Application
    Filed: November 10, 2022
    Publication date: January 23, 2025
    Inventors: Jiaolong YANG, Yizhong Zhang, Xin TONG, Baining GUO
  • Publication number: 20240374681
    Abstract: Provided herein are stable hypotonic or isotonic formulations containing active ingredients, such as antiviral compositions, or anti-retroviral compositions for intrarectal delivery to provide prophylaxis against viral infections.
    Type: Application
    Filed: September 2, 2022
    Publication date: November 14, 2024
    Inventors: Lisa Cencia Rohan, Xin Tong, Lin Wang
  • Publication number: 20240312187
    Abstract: In various examples, feature tracking for autonomous or semi-autonomous systems and applications is described herein. Systems and methods are disclosed that merge, using one or more processes, features detected using a feature tracker(s) and features detected using a feature detector(s) in order to track features between images. In some examples, the number of merged features and/or the locations of the merged features within the images are limited. This way, the systems and methods are able to identify merged features that are of greater importance for tracking while refraining from tracking merged features that are of less importance. For example, if the systems and methods are being used to identify features for autonomous driving, a greater number of merged features that are associated with objects located proximate to the driving surface may be tracked as compared to merged features that are associated with the sky.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 19, 2024
    Inventors: Yue Wu, Cheng-Chieh Yang, Xin Tong, Minwoo Park
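The merging-and-prioritization idea in the abstract above can be sketched roughly as follows. This is an illustrative assumption of how such a merge might work, not the patented method: `merge_features` deduplicates detector output against tracker output, then keeps features low in the image (near the driving surface) over sky features, up to a cap:

```python
def merge_features(tracked, detected, max_features, min_dist=4.0):
    """Merge tracker and detector outputs: deduplicate nearby points,
    then keep the features closest to the driving surface (large y)."""
    merged = list(tracked)
    for d in detected:
        # skip detections that duplicate an already-tracked feature
        if all((d[0] - t[0])**2 + (d[1] - t[1])**2 >= min_dist**2
               for t in merged):
            merged.append(d)
    # prioritise points low in the image (near the road) over sky points
    merged.sort(key=lambda p: -p[1])
    return merged[:max_features]

tracked = [(10, 90), (50, 80)]
detected = [(11, 91), (30, 10), (60, 85)]  # (11, 91) duplicates (10, 90)
kept = merge_features(tracked, detected, max_features=3)
```

With the cap at 3, the sky point (30, 10) is the one dropped, matching the abstract's emphasis on tracking features proximate to the driving surface.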
  • Patent number: 12079936
    Abstract: In accordance with implementations of the present disclosure, there is provided a solution for portrait editing and synthesis. In this solution, a first image about a head of a user is obtained. A three-dimensional head model representing the head of the user is generated based on the first image. In response to receiving a command of changing a head feature of the user, the three-dimensional head model is transformed to reflect the changed head feature. A second image about the head of the user is generated based on the transformed three-dimensional head model, and reflects the changed head feature of the user. In this way, the solution can realize editing of features like a head pose and/or a facial expression based on a single portrait image without manual intervention and automatically synthesize a corresponding image.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: September 3, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jiaolong Yang, Fang Wen, Dong Chen, Xin Tong
  • Publication number: 20240256738
    Abstract: Provided is an environmental sensing method based on model evolution. The method performs a channel estimation task under the current communication system to obtain channel response data, thereby realizing environmental sensing. First, the interaction mechanism between electromagnetic waves and environmental objects is divided into reflection and transmission. Subsequently, a mathematical model relating the channel response to the environmental objects is constructed, and the environmental sensing problem is modeled as a compressed sensing optimization problem. Lastly, the present disclosure starts from a baseline model and enables iteration and evolution of the model to solve the compressed sensing optimization problem, ultimately achieving environmental sensing.
    Type: Application
    Filed: April 8, 2024
    Publication date: August 1, 2024
    Inventors: Zhaoyang ZHANG, Yihan ZHANG, Xin TONG
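For readers unfamiliar with the compressed sensing formulation mentioned above, a generic solver loop looks like the sketch below. This uses iterative shrinkage-thresholding (ISTA) purely as a stand-in; the patent's actual model-evolution procedure is not specified here, and `A`, `y`, and all parameters are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, step=None, iters=200):
    """ISTA for min ||Ax - y||^2 + lam*||x||_1: start from a zero
    baseline and let the estimate evolve over iterations."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])                     # baseline model
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))       # measurement matrix (channel model)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]   # sparse "environment"
y = A @ x_true                           # observed channel response
x_hat = ista(A, y)
```

The common structure is the iterative refinement from a baseline estimate, which is the aspect the abstract's "iteration and evolution of the model" refers to at a high level.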
  • Patent number: 12019181
    Abstract: Provided is an iterative focused millimeter wave integrated communication and sensing method, which converts an environmental sensing problem into a compressed sensing reconstruction problem and realizes initial coarse sensing of the environment based on an approximate message passing algorithm. According to a background determining method, the present disclosure distinguishes and determines the target object, removes the influence of background scatterers on the received signal, and removes the background scatterers repeatedly and iteratively, so as to obtain a more accurate focused sensing result for the target object.
    Type: Grant
    Filed: July 26, 2023
    Date of Patent: June 25, 2024
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Zhaoyang Zhang, Xin Tong, Yihan Zhang
  • Publication number: 20240161382
    Abstract: According to implementations of the present disclosure, there is provided a solution for completing textures of an object. In this solution, a complete texture map of an object is generated from a partial texture map of the object according to a texture generation model. A first prediction on whether a texture of at least one block in the complete texture map is an inferred texture is determined according to a texture discrimination model. A second image of the object is generated based on the complete texture map. A second prediction on whether the first image and the second image are generated images is determined according to an image discrimination model. The texture generation model and the texture and image discrimination models are trained based on the first and second predictions.
    Type: Application
    Filed: April 26, 2021
    Publication date: May 16, 2024
    Inventors: Jongyoo KIM, Jiaolong YANG, Xin TONG
  • Publication number: 20240135576
    Abstract: According to implementations of the subject matter described herein, a solution is proposed for three-dimensional (3D) object detection. In this solution, feature representations of a plurality of points are extracted from point cloud data related to a 3D object. Initial feature representations of a set of candidate 3D objects are determined based on the feature representations of the plurality of points. Based on the feature representations of the plurality of points and the initial feature representations of the set of candidate 3D objects, a detection result for the 3D object is generated by determining self-correlations between the set of candidate 3D objects and cross-correlations between the plurality of points and the set of candidate 3D objects. In this way, without grouping points into candidate 3D objects, the 3D object in a 3D scene can be localized and recognized based on the self-correlations and cross-correlations.
    Type: Application
    Filed: February 8, 2022
    Publication date: April 25, 2024
    Inventors: Zheng Zhang, Han Hu, Yue Cao, Xin TONG, Ze Liu
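The self-correlations and cross-correlations in the abstract above are attention-style operations; a minimal numerical sketch is given below. This is an assumption about the general shape of such a computation, not the patented network (`refine_candidates` and all dimensions are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_candidates(cand, points):
    """One refinement round: self-attention among candidate object
    features, then cross-attention from candidates to point features."""
    # self-correlations between the K candidates (K x D)
    self_w = softmax(cand @ cand.T)          # (K, K)
    cand = self_w @ cand
    # cross-correlations between candidates and the N points (N x D)
    cross_w = softmax(cand @ points.T)       # (K, N)
    return cross_w @ points                  # aggregated point evidence

rng = np.random.default_rng(0)
cand = rng.standard_normal((4, 8))           # 4 candidate objects, 8-dim
points = rng.standard_normal((16, 8))        # 16 point features, 8-dim
refined = refine_candidates(cand, points)
```

The point of this structure, as the abstract notes, is that candidates gather evidence directly from all points via attention weights, so no explicit point-to-object grouping step is needed.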
  • Publication number: 20240062657
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Application
    Filed: October 20, 2023
    Publication date: February 22, 2024
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
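Of the predicted quantities above, time-to-collision has a simple closed form under a constant-velocity assumption; the sketch below shows that baseline computation (the patent's DNN predicts such quantities from images, it does not use this formula directly):

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Constant-velocity TTC: time until the range reaches zero,
    or None when the object is not closing on the ego-vehicle."""
    if closing_speed_mps <= 0.0:
        return None
    return distance_m / closing_speed_mps

ttc = time_to_collision(30.0, 10.0)  # 30 m gap closing at 10 m/s -> 3.0 s
```

A learned predictor is useful precisely because distance and closing speed are not directly observable from a monocular image stream, whereas this formula assumes both are known.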
  • Publication number: 20240045026
    Abstract: Provided is an iterative focused millimeter wave integrated communication and sensing method, which converts an environmental sensing problem into a compressed sensing reconstruction problem and realizes initial coarse sensing of the environment based on an approximate message passing algorithm. According to a background determining method, the present disclosure distinguishes and determines the target object, removes the influence of background scatterers on the received signal, and removes the background scatterers repeatedly and iteratively, so as to obtain a more accurate focused sensing result for the target object.
    Type: Application
    Filed: July 26, 2023
    Publication date: February 8, 2024
    Inventors: Zhaoyang ZHANG, Xin TONG, Yihan ZHANG
  • Patent number: 11854401
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: December 26, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
  • Patent number: 11845970
    Abstract: The present invention provides for recombinant Endo-S2 mutants (named Endo-S2 glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies, thereby providing for certain biological activities, such as prolonged half-life in vivo, reduced immunogenicity, enhanced in vivo activity, increased targeting ability, and/or the ability to deliver a therapeutic agent.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: December 19, 2023
    Assignee: University of Maryland, College Park
    Inventors: Lai-Xi Wang, Qiang Yang, Tiezheng Li, Xin Tong
  • Publication number: 20230348947
    Abstract: The present invention provides for the use of recombinant Endo-S2 mutants (named Endo-S2 glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies, thereby providing for certain biological activities, such as prolonged half-life in vivo, reduced immunogenicity, enhanced in vivo activity, increased targeting ability, and/or the ability to deliver a therapeutic agent.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 2, 2023
    Applicant: University of Maryland, College Park
    Inventors: Lai-Xi Wang, Qiang Yang, Tiezheng Li, Xin Tong
  • Publication number: 20230351769
    Abstract: In various examples, systems and methods for machine learning based hazard detection for autonomous machine applications using stereo disparity are presented. Disparity between a stereo pair of images is used to generate a path disparity model. Using the path disparity model, a machine learning model can recognize when a pixel in the first image corresponds to a pixel in the second image even though the pixel in the two images does not have identical characteristics. Similarities in extracted feature vectors can be computed and represented by a vector similarity metric that is input to a machine learning classifier, along with feature information extracted from the stereo image pair, to differentiate hazard pixels from non-hazard pixels. In some embodiments, a V-space disparity map, where a first axis corresponds to disparity values and the second axis corresponds to pixel rows, may be used to simplify estimation of the path disparity model.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Inventors: Yue WU, Liwen Lin, Xin Tong, Gang Pan
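The "vector similarity metric" between extracted feature vectors mentioned above is typically something like cosine similarity; a minimal sketch is shown below. This is an assumed instantiation for illustration, not the metric the patent necessarily uses:

```python
import numpy as np

def vector_similarity(f_left, f_right):
    """Cosine similarity between feature vectors extracted from the
    two images of a stereo pair; such a scalar can be fed to a
    classifier alongside the raw extracted features."""
    num = float(np.dot(f_left, f_right))
    den = float(np.linalg.norm(f_left) * np.linalg.norm(f_right)) + 1e-12
    return num / den

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 0.0])
same = vector_similarity(a, b)      # identical features -> similarity ~1
diff = vector_similarity(a, c)      # orthogonal features -> similarity 0
```

A similarity score of this kind is robust to the situation the abstract highlights: corresponding pixels in the two images need not have identical appearance, only similar feature descriptors.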
  • Patent number: 11727654
    Abstract: Implementations of the subject matter described herein relate to mixed reality object rendering based on ambient light conditions. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device acquires light conditions of the real world, thereby increasing the realism of the rendered object. In particular, the wearable computing device is configured to acquire an image of an environment where the wearable computing device is located. The image is adjusted based on a camera parameter used when the image is captured. Subsequently, ambient light information is determined based on the adjusted image. In this way, the wearable computing device can obtain more real and accurate ambient light information, so as to render to the user an object with enhanced realism. Accordingly, the user can have a better interaction experience.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: August 15, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Guojun Chen, Yue Dong, Xin Tong, Yingnan Ju, Pingchao Yu
  • Patent number: D1029263
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: May 28, 2024
    Assignee: Elekta Instrument AB
    Inventors: Peter Martins von Zweigbergk, Andrew Jones, Xin Tong