Patents by Inventor Xin Tong

Xin Tong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250047806
    Abstract: Methods and systems for real-time video enhancement are provided herein. During a video conference, a current frame of a video stream generated by a client device of a plurality of client devices participating in the video conference is identified. An enhanced previous frame corresponding to an enhanced version of a previous frame in the video stream is identified. At least the current frame and the enhanced previous frame are provided as input to a machine-learning model. An output of the machine-learning model is obtained. The output of the machine-learning model indicates an enhanced current frame corresponding to an enhanced version of the current frame. The current frame is replaced with the enhanced current frame in the video stream.
    Type: Application
    Filed: August 2, 2023
    Publication date: February 6, 2025
    Inventors: Anne Menini, Jeya Maria Jose Valanarasu, Rahul Garg, Andeep Singh Toor, Xin Tong, Weijuan Xi
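The recurrence described in this abstract (the model consumes the current frame plus the previously *enhanced* frame, and its output feeds the next step) can be sketched as follows. This is a minimal illustration, not the patented system; `toy_model` is a hypothetical stand-in for the actual machine-learning model.

```python
import numpy as np

def enhance_frame(current, enhanced_prev, model):
    """Feed the current frame and the previously enhanced frame to the model."""
    return model(current, enhanced_prev)

def enhance_stream(frames, model):
    """Replace each frame of the stream with its enhanced version, recurrently."""
    enhanced_prev = frames[0]  # bootstrap: the first frame has no enhanced predecessor
    out = []
    for frame in frames:
        enhanced = enhance_frame(frame, enhanced_prev, model)
        out.append(enhanced)
        enhanced_prev = enhanced  # the enhanced output becomes the next step's input
    return out

# Stand-in "model": a simple average of its two inputs (illustration only).
toy_model = lambda cur, prev: (cur + prev) / 2.0
frames = [np.full((2, 2), float(i)) for i in range(3)]
result = enhance_stream(frames, toy_model)
```

The key property of the recurrence is that enhancement state propagates forward: each output depends on the entire enhanced history, not just the current raw frame.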
  • Patent number: 12208284
    Abstract: A controller (600) for a radiotherapy device (320) is provided; the radiotherapy device (320) being configured to provide therapeutic radiation to a patient (308) via a source (300) of therapeutic radiation, wherein the radiotherapy device (320) comprises a first rotatable member (304), the rotation of which can alter a physical attribute of the therapeutic radiation provided, and a patient support member (310), which is linearly moveable in at least one of a longitudinal direction and a lateral direction. The controller (600) comprises a first rotatable actuator (608) for controlling a movement of the first rotatable member (304) and a second actuator (620) for controlling a movement of the patient support member (310).
    Type: Grant
    Filed: November 26, 2020
    Date of Patent: January 28, 2025
    Assignee: ELEKTA BEIJING MEDICAL SYSTEMS CO., LTD
    Inventors: Xin Tong, Andrew Jones, Tong Yang, Weicheng Zhao
  • Patent number: 12212911
    Abstract: The present disclosure provides an earphone comprising a sound production component and an ear hook. In a wearing state, the ear hook is configured to place the sound production component at a position near an ear canal but not blocking the ear canal. An inner contour of the ear hook's projection on a user's sagittal plane includes a first curve having an extremum point in a first direction. The first direction is perpendicular to a long-axis direction of a projection of the sound production component on the sagittal plane. The extremum point is located behind a projection point of an upper vertex of the ear hook on the sagittal plane, and the upper vertex is a highest point of an inner contour of the ear hook along the user's vertical axis. An inclination angle of the long-axis direction relative to a horizontal direction is within a range of 13°-21°.
    Type: Grant
    Filed: April 7, 2024
    Date of Patent: January 28, 2025
    Assignee: SHENZHEN SHOKZ CO., LTD.
    Inventors: Lei Zhang, Peigeng Tong, Guolin Xie, Yongjian Li, Jiang Xu, Tao Zhao, Duoduo Wu, Ao Ji, Xin Qi, Zeying Zheng, Haofeng Zhang
  • Patent number: 12212908
    Abstract: The present disclosure discloses an acoustic output device. The acoustic output device may include a speaker assembly, configured to convert audio signals into vibration signals; a functional assembly electrically connected to the speaker assembly; and a supporting structure, configured to be connected to the speaker assembly and the functional assembly, wherein the supporting structure includes a metal body therein, and the metal body may be electrically connected to the functional assembly.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: January 28, 2025
    Assignee: SHENZHEN SHOKZ CO., LTD.
    Inventors: Lei Zhang, Zhen Wang, Liwei Wang, Peigeng Tong, Fengyun Liao, Xin Qi, Xianwei Shi, Shuailin Xie, Yunbin Chen
  • Patent number: 12212916
    Abstract: The present disclosure provides a headphone including a sound production component and an ear hook. The ear hook and the sound production component form a first projection on a user's sagittal plane. In a non-wearing state, an inner contour, a first end contour, a second end contour of the first projection, and a tangent segment connecting the first end contour and the second end contour jointly define a first closed curve. A first area enclosed by the first closed curve ranges from 300 mm² to 500 mm². A portion of the inner contour corresponding to the ear hook includes a first curve. The first curve has an extremum point in a first direction perpendicular to a long-axis direction of a projection of the sound production component, and the extremum point is located behind a projection point of an upper vertex of the ear hook on the sagittal plane.
    Type: Grant
    Filed: April 2, 2024
    Date of Patent: January 28, 2025
    Assignee: SHENZHEN SHOKZ CO., LTD.
    Inventors: Jiang Xu, Haofeng Zhang, Zeying Zheng, Lei Zhang, Shanyong Gu, Hongqiang Zhao, Peigeng Tong, Guolin Xie, Yongjian Li, Tao Zhao, Duoduo Wu, Ao Ji, Xin Qi, Liwei Wang, Zhen Wang
  • Publication number: 20250030816
    Abstract: According to implementations of the subject matter described herein, there is provided a solution for an immersive video conference. In the solution, a conference mode for the video conference is first determined, the conference mode indicating a layout of a virtual conference space for the video conference, and viewpoint information associated with a second participant in the video conference is determined based on the layout. Furthermore, a first view of a first participant is determined based on the viewpoint information and then sent to a conference device associated with the second participant to display a conference image to the second participant. In this way, video conference participants can obtain a more authentic and immersive conference experience, and a desired virtual conference space layout can be obtained more flexibly according to need.
    Type: Application
    Filed: November 10, 2022
    Publication date: January 23, 2025
    Inventors: Jiaolong Yang, Yizhong Zhang, Xin Tong, Baining Guo
  • Publication number: 20250024188
    Abstract: Provided are a core module and an electronic device. The core module may include a core housing, a speaker, and a bracket. The bracket and the speaker may form an acoustic cavity, the core housing may include an acoustic hole, the bracket may include an acoustic channel, and the speaker may include first accommodation space in flow communication with the acoustic cavity. The speaker, the bracket, and the core housing may form second accommodation space that is outside the speaker and isolated from the acoustic cavity. The speaker may include a coil, a frame, and two metal members disposed on the frame. Each of the two metal members may include a first pad, a second pad, and a transition portion. The first pad and the second pad may be exposed from the frame, the first pad may be located within the first accommodation space and connected to the coil.
    Type: Application
    Filed: September 29, 2024
    Publication date: January 16, 2025
    Applicants: SHENZHEN SHOKZ CO., LTD., KING TONE INNOVATION (BEIJING) TECHNOLOGY CO. LTD.
    Inventors: Lei ZHANG, Peigeng TONG, Guolin XIE, Shanyong GU, Hongqiang ZHAO, Xin QI
  • Publication number: 20240374681
    Abstract: Provided herein are stable hypotonic or isotonic formulations containing active ingredients, such as antiviral compositions, or anti-retroviral compositions for intrarectal delivery to provide prophylaxis against viral infections.
    Type: Application
    Filed: September 2, 2022
    Publication date: November 14, 2024
    Inventors: Lisa Cencia Rohan, Xin Tong, Lin Wang
  • Publication number: 20240312187
    Abstract: In various examples, feature tracking for autonomous or semi-autonomous systems and applications is described herein. Systems and methods are disclosed that merge, using one or more processes, features detected using a feature tracker(s) and features detected using a feature detector(s) in order to track features between images. In some examples, the number of merged features and/or the locations of the merged features within the images are limited. This way, the systems and methods are able to identify merged features that are of greater importance for tracking while refraining from tracking merged features that are of less importance. For example, if the systems and methods are being used to identify features for autonomous driving, a greater number of merged features that are associated with objects located proximate to the driving surface may be tracked as compared to merged features that are associated with the sky.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 19, 2024
    Inventors: Yue Wu, Cheng-Chieh Yang, Xin Tong, Minwoo Park
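The merge-and-prioritize scheme this abstract describes (combine tracker and detector outputs, drop near-duplicates, and cap the tracked set while favoring features near the driving surface) can be sketched roughly as below. This is an illustrative simplification, not the patented method; the duplicate threshold and the `near_road` priority function are assumptions.

```python
def merge_features(tracked, detected, max_features, priority):
    """Merge tracked and detected (x, y) features, dropping detections that
    duplicate a tracked feature, then keep at most max_features, preferring
    those the priority function ranks as more important (lower score)."""
    merged = list(tracked)
    for d in detected:
        # Drop detections within Manhattan distance 2 of an existing feature.
        if all(abs(d[0] - t[0]) + abs(d[1] - t[1]) > 2 for t in merged):
            merged.append(d)
    merged.sort(key=priority)
    return merged[:max_features]

# Hypothetical priority: distance from the bottom of a 100-px-tall image,
# so features near the driving surface (large y) outrank sky features.
near_road = lambda p: 100 - p[1]
feats = merge_features(
    tracked=[(10, 90), (50, 95)],
    detected=[(10, 91), (30, 20)],  # (10, 91) duplicates (10, 90); (30, 20) is sky
    max_features=2,
    priority=near_road,
)
```

The cap plus priority ordering is what lets the system spend its tracking budget on road-proximate features rather than, say, the sky.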
  • Patent number: 12079936
    Abstract: In accordance with implementations of the present disclosure, there is provided a solution for portrait editing and synthesis. In this solution, a first image about a head of a user is obtained. A three-dimensional head model representing the head of the user is generated based on the first image. In response to receiving a command of changing a head feature of the user, the three-dimensional head model is transformed to reflect the changed head feature. A second image about the head of the user is generated based on the transformed three-dimensional head model, and reflects the changed head feature of the user. In this way, the solution can realize editing of features like a head pose and/or a facial expression based on a single portrait image without manual intervention and automatically synthesize a corresponding image.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: September 3, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jiaolong Yang, Fang Wen, Dong Chen, Xin Tong
  • Publication number: 20240256738
    Abstract: Provided is an environmental sensing method based on model evolution, which performs a channel estimation task under the current communication system to obtain channel response data, thereby realizing environmental sensing. Firstly, the interaction mechanism between electromagnetic waves and environmental objects is divided into reflection and transmission. Subsequently, a mathematical model relating the channel response to the environmental objects is constructed, and the environmental sensing problem is modeled as a compressed sensing optimization problem. Lastly, the present disclosure starts from a baseline model and enables iteration and evolution of the model by solving the compressed sensing optimization problem, ultimately achieving environmental sensing.
    Type: Application
    Filed: April 8, 2024
    Publication date: August 1, 2024
    Inventors: Zhaoyang ZHANG, Yihan ZHANG, Xin TONG
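The abstract casts environmental sensing as a compressed sensing optimization problem. A generic sparse-recovery solver for that class of problem is ISTA (iterative shrinkage-thresholding); the sketch below recovers a sparse scene vector from underdetermined linear measurements. This is standard compressed sensing machinery, not the patented model-evolution method, and the problem sizes are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, iters=200):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)  # 40 measurements, 80 unknowns
x_true = np.zeros(80)
x_true[[5, 30]] = [1.0, -2.0]                    # a 2-sparse "scene"
x_hat = ista(A, A @ x_true)
```

Because the scene is sparse, the L1-regularized reconstruction recovers it from far fewer measurements than unknowns, which is what makes the compressed-sensing formulation attractive for channel-based sensing.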
  • Patent number: 12019181
    Abstract: Provided is an iterative focused millimeter wave integrated communication and sensing method, which converts an environmental sensing problem into a compressed sensing reconstruction problem and realizes initial coarse sensing of the environment based on an approximate message passing algorithm. According to a background determination method, the present disclosure identifies the target object, removes the influence of background scatterers on the received signal, and removes the background scatterers repeatedly and iteratively, so as to obtain a more accurate focused sensing result of the target object.
    Type: Grant
    Filed: July 26, 2023
    Date of Patent: June 25, 2024
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Zhaoyang Zhang, Xin Tong, Yihan Zhang
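The iterate-and-remove loop in this abstract (coarse estimate, subtract the background scatterers' contribution from the received signal, re-estimate the target) can be outlined as below. This is a structural sketch only: least squares stands in for the approximate message passing solver, and the background mask is assumed known rather than determined by the patented background-determination method.

```python
import numpy as np

def remove_background(A, y, x_est, bg_mask):
    """Subtract the echoes of the flagged background scatterers from the signal."""
    return y - A[:, bg_mask] @ x_est[bg_mask]

def focused_sensing(A, y, bg_mask, rounds=2):
    """Coarse estimate of the whole scene, then iteratively strip the background
    contribution and re-estimate only the target (foreground) scatterers."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # coarse sensing (stand-in for AMP)
    for _ in range(rounds):
        y_fg = remove_background(A, y, x, bg_mask)
        x[~bg_mask] = np.linalg.lstsq(A[:, ~bg_mask], y_fg, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
x_true = np.array([1.0, 2.0, 3.0, 4.0])
bg_mask = np.array([True, True, False, False])  # first two scatterers = background
x_hat = focused_sensing(A, A @ x_true, bg_mask)
```

Each round shrinks the residual attributable to the background, so the foreground solve focuses progressively on the target object.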
  • Publication number: 20240161382
    Abstract: According to implementations of the present disclosure, there is provided a solution for completing textures of an object. In this solution, a complete texture map of an object is generated from a partial texture map of the object according to a texture generation model. A first prediction on whether a texture of at least one block in the complete texture map is an inferred texture is determined according to a texture discrimination model. A second image of the object is generated based on the complete texture map. A second prediction on whether the first image and the second image are generated images is determined according to an image discrimination model. The texture generation model and the texture and image discrimination models are trained based on the first and second predictions.
    Type: Application
    Filed: April 26, 2021
    Publication date: May 16, 2024
    Inventors: Jongyoo KIM, Jiaolong YANG, Xin TONG
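The training setup in this abstract is adversarial, with two discriminators: one classifying texture blocks as inferred vs. original, one classifying rendered images as generated vs. real. A minimal sketch of the corresponding loss terms is below, assuming standard binary cross-entropy GAN losses; the actual models and loss weighting in the patent application may differ.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on sigmoid-activated predictions."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def training_losses(block_preds, inferred_mask, image_preds):
    """Loss terms for the three models in the abstract:
    - texture discriminator: label each block inferred (1) vs. original (0);
    - image discriminator: label rendered images generated (1) vs. real (0);
    - generator: fool both discriminators (same predictions, flipped targets)."""
    d_tex = bce(block_preds, inferred_mask)
    d_img = bce(image_preds, np.ones_like(image_preds))
    g = bce(block_preds, np.zeros_like(block_preds)) + \
        bce(image_preds, np.zeros_like(image_preds))
    return d_tex, d_img, g

# Discriminators that are nearly correct yield small d losses and a large g loss.
losses = training_losses(np.array([0.99, 0.01]), np.array([1.0, 0.0]), np.array([0.99]))
```

The block-level discriminator is what pushes the generator to make inferred texture regions indistinguishable from observed ones, while the image-level discriminator enforces global plausibility of the rendered result.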
  • Publication number: 20240135576
    Abstract: According to implementations of the subject matter described herein, a solution is proposed for three-dimensional (3D) object detection. In this solution, feature representations of a plurality of points are extracted from point cloud data related to a 3D object. Initial feature representations of a set of candidate 3D objects are determined based on the feature representations of the plurality of points. Based on the feature representations of the plurality of points and the initial feature representations of the set of candidate 3D objects, a detection result for the 3D object is generated by determining self-correlations between the set of candidate 3D objects and cross-correlations between the plurality of points and the set of candidate 3D objects. In this way, without grouping points into candidate 3D objects, the 3D object in a 3D scene can be localized and recognized based on the self-correlations and cross-correlations.
    Type: Application
    Filed: February 8, 2022
    Publication date: April 25, 2024
    Inventors: Zheng Zhang, Han Hu, Yue Cao, Xin Tong, Ze Liu
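The self-correlations between candidate objects and cross-correlations between points and candidates that this abstract describes map naturally onto attention operations. The sketch below shows one such refinement round using plain scaled dot-product attention; it is an illustrative analogy, not the network in the application, and the feature dimensions are arbitrary.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # rows are convex weights
    return w @ V

def refine_candidates(cand_feats, point_feats):
    """One refinement round: self-attention among candidate objects
    (self-correlations), then cross-attention from candidates to the
    point features (cross-correlations)."""
    cand = attention(cand_feats, cand_feats, cand_feats)
    return attention(cand, point_feats, point_feats)

rng = np.random.default_rng(2)
cand_feats = rng.standard_normal((3, 4))    # 3 candidate objects, dim 4
point_feats = rng.standard_normal((10, 4))  # 10 scene points, dim 4
refined = refine_candidates(cand_feats, point_feats)
```

Because each candidate attends directly to every point, no explicit point-to-candidate grouping step is needed, which is the property the abstract highlights.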
  • Publication number: 20240062657
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Application
    Filed: October 20, 2023
    Publication date: February 22, 2024
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
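Of the predicted quantities, time-to-collision has a simple closed form once distance and velocities are known: the gap divided by the closing speed. The sketch below illustrates that relationship only; it is not the DNN-based prediction described in the application.

```python
def time_to_collision(distance_m, ego_speed_mps, object_speed_mps):
    """TTC along the ego path: remaining gap divided by the closing speed.
    Returns infinity when the gap is not closing (no collision predicted)."""
    closing = ego_speed_mps - object_speed_mps  # > 0 means the ego is catching up
    if closing <= 0:
        return float('inf')
    return distance_m / closing

# A lead vehicle 30 m ahead: ego at 20 m/s, object at 14 m/s closes at 6 m/s.
ttc = time_to_collision(30.0, 20.0, 14.0)
```

A perception system would compare such a TTC against a reaction-time threshold when deciding whether to brake or alert.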
  • Publication number: 20240045026
    Abstract: Provided is an iterative focused millimeter wave integrated communication and sensing method, which converts an environmental sensing problem into a compressed sensing reconstruction problem and realizes initial coarse sensing of the environment based on an approximate message passing algorithm. According to a background determination method, the present disclosure identifies the target object, removes the influence of background scatterers on the received signal, and removes the background scatterers repeatedly and iteratively, so as to obtain a more accurate focused sensing result of the target object.
    Type: Application
    Filed: July 26, 2023
    Publication date: February 8, 2024
    Inventors: Zhaoyang ZHANG, Xin TONG, Yihan ZHANG
  • Patent number: 11854401
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: December 26, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
  • Patent number: 11845970
    Abstract: The present invention provides for recombinant Endo-S2 mutants (named Endo-S2 glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins, wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies, thereby providing for certain biological activities, such as prolonged half-life in vivo, less immunogenicity, enhanced in vivo activity, increased targeting ability, and/or ability to deliver a therapeutic agent.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: December 19, 2023
    Assignee: University of Maryland, College Park
    Inventors: Lai-Xi Wang, Qiang Yang, Tiezheng Li, Xin Tong
  • Publication number: 20230348947
    Abstract: The present invention provides for the use of recombinant Endo-S2 mutants (named Endo-S2 glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins, wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies, thereby providing for certain biological activities, such as prolonged half-life in vivo, less immunogenicity, enhanced in vivo activity, increased targeting ability, and/or ability to deliver a therapeutic agent.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 2, 2023
    Applicant: University of Maryland, College Park
    Inventors: Lai-Xi Wang, Qiang Yang, Tiezheng Li, Xin Tong
  • Patent number: D1029263
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: May 28, 2024
    Assignee: Elekta Instrument AB
    Inventors: Peter Martins von Zweigbergk, Andrew Jones, Xin Tong