Patents by Inventor Xin Tong

Xin Tong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210149536
    Abstract: A method for capturing an image is disclosed in the present disclosure. The method includes: detecting first information associated with a first application that is running, wherein the first information is configured to characterize an attribute of the first application; capturing an image output by the first application, in a capturing fashion associated with that image, based on a first capturing policy in the case that the first information associated with the first application exists in a first database; and capturing the image output by the first application based on a second capturing policy in the case that the first information associated with the first application does not exist in the first database. The present disclosure also discloses a terminal and a storage medium.
    Type: Application
    Filed: December 27, 2018
    Publication date: May 20, 2021
    Inventors: Sheng GAO, Li LIU, Yue MA, Jiaxiong CHENG, Xin TONG, Jiebo MA
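The two-policy lookup this abstract describes can be sketched as a simple dispatch: check whether the running application's attribute is known, and choose the capture policy accordingly. Everything here (the database contents, attribute strings, and policy names) is a hypothetical illustration, not taken from the patent.

```python
# Hypothetical sketch of the two-policy capture flow: look the running
# application's attribute up in a database of known applications and pick
# a capture policy accordingly.
FIRST_DATABASE = {"video_player": "gpu_surface_grab"}  # attribute -> capture fashion

def capture_image(app_attribute: str) -> str:
    """Return the capture policy chosen for the given application attribute."""
    if app_attribute in FIRST_DATABASE:
        # First capturing policy: use the fashion associated with this app.
        return FIRST_DATABASE[app_attribute]
    # Second (fallback) capturing policy for unknown applications.
    return "framebuffer_copy"
```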
  • Patent number: 11008601
    Abstract: The present invention provides for recombinant Endo-S2 mutants (named Endo-S2 glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies thereby providing for certain biological activities, such as, prolonged half-life time in vivo, less immunogenicity, enhanced in vivo activity, increased targeting ability, and/or ability to deliver a therapeutic agent.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: May 18, 2021
    Assignee: UNIVERSITY OF MARYLAND
    Inventors: Lai-Xi Wang, Qiang Yang, Tiezheng Li, Xin Tong
  • Publication number: 20210134043
    Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
    Type: Application
    Filed: January 11, 2021
    Publication date: May 6, 2021
    Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
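The scan-edit-animate process in this abstract can be summarized as a small pipeline: an initial shape is seeded from a scanned pose, then each subsequent gesture is applied as an edit. This is an illustrative sketch only; the shape representation and edit functions are stand-ins, not the patent's method.

```python
# Illustrative sketch of the create-then-edit flow: seed a shape from a
# scanned pose, then apply each gesture as an edit to the shape.
def run_pipeline(initial_pose, gesture_edits):
    shape = {"vertices": list(initial_pose)}           # scan pose -> initial shape
    for edit in gesture_edits:                          # each gesture maps to an edit
        shape["vertices"] = [edit(v) for v in shape["vertices"]]
    return shape
```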
  • Patent number: 10984222
    Abstract: The present disclosure provides method, apparatus and system for 3-dimension (3D) face tracking. The method for 3D face tracking may comprise: obtaining a 2-dimension (2D) face image; performing a local feature regression on the 2D face image to determine 3D face representation parameters corresponding to the 2D face image; and generating a 3D facial mesh and corresponding 2D facial landmarks based on the determined 3D face representation parameters. The present disclosure may improve tracking accuracy and reduce memory cost, and accordingly may be effectively applied in broader application scenarios.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: April 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hsiang-Tao Wu, Xin Tong, Yangang Wang, Fang Wen
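The local feature regression step can be sketched as a cascade: each learned regressor maps local image features (extracted at the current parameter estimate) to an update of the 3D face representation parameters. The feature extractor and regressor matrices below are placeholders under that assumption, not the patent's trained models.

```python
import numpy as np

# Minimal sketch of cascaded local feature regression: iteratively refine
# 3D face representation parameters from features extracted at the current
# estimate. Regressors and the feature function are illustrative stand-ins.
def regress_parameters(image_features, regressors, init_params):
    params = np.array(init_params, dtype=float)
    for R in regressors:                     # one learned regressor per stage
        params = params + R @ image_features(params)
    return params
```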
  • Publication number: 20210090338
    Abstract: Implementations of the subject matter described herein relate to mixed reality object rendering based on ambient light conditions. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device acquires light conditions of the real world, thereby increasing the reality of the rendered object. In particular, the wearable computing device is configured to acquire an image of an environment where the wearable computing device is located. The image is adjusted based on a camera parameter used when the image is captured. Subsequently, ambient light information is determined based on the adjusted image. In this way, the wearable computing device can obtain more real and accurate ambient light information, so as to render to the user an object with enhanced reality. Accordingly, the user can have a better interaction experience.
    Type: Application
    Filed: June 21, 2018
    Publication date: March 25, 2021
    Inventors: Guojun Chen, Yue Dong, Xin Tong, Yingnan Ju, Chaos Yu
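The camera-parameter adjustment step can be illustrated as exposure normalization: divide the captured image by a factor built from exposure time and ISO so that frames taken with different settings become comparable, then derive a crude ambient estimate. The normalization formula is an assumption for illustration, not the patent's.

```python
import numpy as np

# Hedged sketch: normalise a captured image by its exposure settings so
# frames with different camera parameters are comparable, then take a
# crude ambient-light estimate (mean adjusted intensity).
def estimate_ambient(image, exposure_time, iso):
    adjusted = image / (exposure_time * iso / 100.0)   # undo exposure/gain
    return float(adjusted.mean())                      # scalar ambient estimate
```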
  • Patent number: 10916047
    Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: February 9, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
  • Publication number: 20210027526
    Abstract: In accordance with implementations of the subject matter described herein, there is provided a solution of lighting estimation. In the solution, an input image about a real object and a depth map corresponding to the input image are obtained. A geometric structure of the scene in the input image is determined based on the depth map. Shading and shadow information on the real object caused by a light source in the scene is determined based on the determined geometric structure of the scene. Then, a lighting condition in the scene caused by the light source is determined based on the input image and the shading and shadow information. The virtual object rendered using the lighting condition obtained according to the solution can exhibit a realistic effect consistent with the real object.
    Type: Application
    Filed: May 13, 2019
    Publication date: January 28, 2021
    Inventors: Yue DONG, Guojun CHEN, Xin TONG
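One way to picture the final step (recovering the lighting condition from geometry plus shading) is a Lambertian least-squares fit: given per-pixel surface normals derived from the depth map and observed shading, solve shading = n · l for the light direction l. This is a simplified stand-in under a directional-light assumption, not the patent's solution.

```python
import numpy as np

# Illustrative sketch: fit a directional light l to observed shading under a
# Lambertian model, using normals recovered from the scene geometry.
def estimate_light(normals, shading):
    N = np.asarray(normals, float)   # (n, 3) per-pixel surface normals
    s = np.asarray(shading, float)   # (n,) observed shading intensities
    l, *_ = np.linalg.lstsq(N, s, rcond=None)
    return l
```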
  • Patent number: 10882754
    Abstract: The present invention provides a method for preparing a transparent free-standing titanium dioxide nanotube array film. In the method, with a titanium foil as a substrate, the titanium dioxide nanotube array film is obtained by anodic oxidation on the surface of the titanium foil. Upon high temperature annealing, the titanium dioxide nanotube array film naturally falls off to yield the transparent free-standing film. The method according to the present invention features simple operations and saves time and cost. With the method, a completely strippable titanium dioxide nanotube array film may be prepared without damaging the morphology of the nanotubes. The free-standing, intact titanium dioxide nanotube array film facilitates transfer and post-treatment, is transparent, and may favor applications in studies such as photocatalysis.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: January 5, 2021
    Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
    Inventors: Wenhao Shen, Xin Tong, Xiaoquan Chen
  • Patent number: 10846887
    Abstract: Techniques and constructs can determine an albedo map and a shading map from a digital image. The albedo and shading maps can be determined based at least in part on a color-difference threshold. A color shading map can be determined based at least in part on the albedo map, and lighting coefficients determined based on the color shading map. The digital image can be adjusted based at least in part on the lighting coefficients. In some examples, respective shading maps can be produced for individual color channels of the digital image. The color shading map can be produced based at least in part on the shading maps. In some examples, a plurality of regions of the digital image can be determined, as can proximity relationships between individual regions. The albedo and shading maps can be determined based at least in part on the proximity relationships.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yue Dong, Xin Tong, Lin Liang, Jian Shi, Stephen S. Lin, Simon Stachniak
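The albedo/shading split rests on the intrinsic-image identity I = A × S: once the albedo map is known, the shading map is recovered by dividing the image by the albedo per channel. The sketch below shows only that identity; the color-difference grouping and region proximity steps from the abstract are omitted.

```python
import numpy as np

# Minimal sketch of the intrinsic identity I = A * S: per-channel shading is
# recovered by dividing the image by the albedo (guarded against zeros).
def shading_map(image, albedo, eps=1e-6):
    return image / np.maximum(albedo, eps)   # shading = I / A, elementwise
```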
  • Patent number: 10836815
    Abstract: The present invention provides for recombinant Endo-S mutants (named Endo-S glycosynthases) that exhibit reduced hydrolysis activity and increased transglycosylation activity for the synthesis of glycoproteins wherein a desired sugar chain is added to a fucosylated or nonfucosylated GlcNAc-IgG acceptor. As such, the present invention allows for the synthesis and remodeling of therapeutic antibodies thereby providing for certain biological activities, such as, prolonged half-life time in vivo, less immunogenicity, enhanced in vivo activity, increased targeting ability, and/or ability to deliver a therapeutic agent.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: November 17, 2020
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Lai-Xi Wang, Xin Tong, Tiezheng Li
  • Publication number: 20200293064
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Application
    Filed: July 17, 2019
    Publication date: September 17, 2020
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
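The time-to-collision (TTC) quantity the network is trained to predict can be written down directly: distance to the object divided by its closing speed, under a constant-velocity assumption. This sketch shows only that target quantity, not the DNN itself.

```python
# Sketch of the TTC target under a constant-velocity assumption:
# distance to the object divided by the closing speed.
def time_to_collision(distance_m, closing_speed_mps):
    if closing_speed_mps <= 0:      # object not approaching
        return float("inf")
    return distance_m / closing_speed_mps
```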
  • Patent number: 10762657
    Abstract: In this disclosure, a solution for denoising a curve mesh is proposed. For a curve mesh including a polygonal facet, a noisy normal and a ground-truth normal of a first facet in the mesh are obtained. Then, based on the noisy normal, a first geometric feature of the first facet is determined from a plurality of neighboring facets of the first facet in the mesh. Next, based on the first geometric feature and the ground-truth normal, a mapping from the first geometric feature to the ground-truth normal of the first facet is determined for denoising the mesh.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: September 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Xin Tong, Yang Liu
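The patent learns a mapping from a facet's neighborhood feature to its ground-truth normal; as a stand-in for that learned mapping, the sketch below simply averages the noisy normals of a facet and its neighbors and re-normalizes. This is an illustrative simplification, not the patent's method.

```python
import numpy as np

# Stand-in sketch: average a facet's noisy normal with its neighbours'
# normals and re-normalise, playing the role of the learned
# feature-to-ground-truth-normal mapping.
def denoise_normal(noisy_normal, neighbor_normals):
    stacked = np.vstack([noisy_normal] + list(neighbor_normals))
    mean = stacked.mean(axis=0)
    return mean / np.linalg.norm(mean)   # unit-length denoised normal
```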
  • Publication number: 20200235254
    Abstract: A luminescent solar concentrator (LSC) comprising a metal-free emitter. The emitter may for example be carbon-based. In particular, the emitter may comprise colloidal carbon quantum dots, also called C-dots or C-QDs. In embodiments of the invention, the surface of the C-dots is modified.
    Type: Application
    Filed: February 16, 2018
    Publication date: July 23, 2020
    Applicants: INSTITUT NATIONAL DE LA RECHERCHE SCIENTIFIQUE, UNIVERSITY OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA
    Inventors: Yufeng ZHOU, Daniele BENETTI, Xin TONG, Lei JIN, Zhiming M. WANG, Dongling MA, Haiguang ZHAO, Federico ROSEI
  • Publication number: 20200151935
    Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
    Type: Application
    Filed: January 16, 2020
    Publication date: May 14, 2020
    Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
  • Patent number: 10573049
    Abstract: An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: February 25, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Xiang Cao, Yang Liu, Teng Han, Takaaki Shiratori, Nobuyuki Umetani, Yupeng Zhang, Xin Tong, Zhimin Ren
  • Publication number: 20200042863
    Abstract: The implementations of the subject matter described herein relate to an octree-based convolutional neural network. In some implementations, there is provided a computer-implemented method for processing a three-dimensional shape. The method comprises obtaining an octree for representing the three-dimensional shape. Nodes of the octree include empty nodes and non-empty nodes. The empty nodes exclude the three-dimensional shape and are leaf nodes of the octree, and the non-empty nodes include at least a part of the three-dimensional shape. The method further comprises for nodes in the octree with a depth associated with a convolutional layer of a convolutional neural network, performing a convolutional operation of the convolutional layer to obtain an output of the convolutional layer.
    Type: Application
    Filed: April 20, 2018
    Publication date: February 6, 2020
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Pengshuai WANG, Yang LIU, Xin TONG
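The key structural idea here, convolving only where the octree has content, can be sketched as follows: at the depth associated with a given convolutional layer, apply the convolution to non-empty nodes and skip empty leaves. The dict-based node representation and the convolution callable are assumptions for illustration.

```python
# Illustrative sketch: apply a convolution only at non-empty octree nodes
# at the depth tied to a given convolutional layer, skipping empty leaves.
def conv_at_depth(nodes, depth, conv):
    out = []
    for node in nodes:
        if node["depth"] == depth and not node["empty"]:
            out.append(conv(node["feature"]))   # convolve occupied nodes only
    return out
```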
  • Publication number: 20200004226
    Abstract: The claimed subject matter includes techniques for printing three-dimensional (3D) objects. An example method includes obtaining a 3D model and processing the 3D model to generate layers of tool path information. The processing includes automatically optimizing the orientation of the 3D model to reduce an amount of support material used in the printing. The method also includes printing the 3D object using layers.
    Type: Application
    Filed: September 11, 2019
    Publication date: January 2, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Emmett Lalish, Yulin Jin, Kristofer N. Iverson, Gheorghe Marius Gheorghescu, Xin Tong, Yang Liu
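The orientation-optimization step this abstract mentions can be framed as a search: evaluate a support-material cost for each candidate orientation of the model and keep the cheapest. The cost function and candidate set below are caller-supplied placeholders; the actual optimization in the patent is not reproduced here.

```python
# Sketch of the orientation search: score each candidate orientation by its
# support-material cost and return the minimiser. The cost function is an
# illustrative stand-in supplied by the caller.
def best_orientation(candidates, support_cost):
    return min(candidates, key=support_cost)
```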
  • Publication number: 20190362540
    Abstract: Implementations of the subject matter described herein relate to mixed reality rendering of objects. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device takes lighting conditions in the real world into account, thereby increasing the reality of the rendered object. In particular, the wearable computing device acquires environment lighting information of an object to be rendered and renders the object to a user based on the environment lighting information. In this way, the object rendered by the wearable computing device can be more real and accurate. The user will thus have a better interaction experience.
    Type: Application
    Filed: January 16, 2018
    Publication date: November 28, 2019
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Guojun CHEN, Yue DONG, Xin TONG
  • Publication number: 20190332846
    Abstract: The present disclosure provides method, apparatus and system for 3-dimension (3D) face tracking. The method for 3D face tracking may comprise: obtaining a 2-dimension (2D) face image; performing a local feature regression on the 2D face image to determine 3D face representation parameters corresponding to the 2D face image; and generating a 3D facial mesh and corresponding 2D facial landmarks based on the determined 3D face representation parameters. The present disclosure may improve tracking accuracy and reduce memory cost, and accordingly may be effectively applied in broader application scenarios.
    Type: Application
    Filed: July 12, 2016
    Publication date: October 31, 2019
    Inventors: Hsiang Tao Wu, Xin Tong, Yangang Wang, Fang Wen
  • Patent number: 10452053
    Abstract: The claimed subject matter includes techniques for printing three-dimensional (3D) objects. An example method includes obtaining a 3D model and processing the 3D model to generate layers of tool path information. The processing includes automatically optimizing the orientation of the 3D model to reduce an amount of support material used in the printing. The method also includes printing the 3D object using layers.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Emmett Lalish, Yulin Jin, Kristofer N. Iverson, Gheorghe Marius Gheorghescu, Xin Tong, Yang Liu