Patents by Inventor Ning Xu

Ning Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220262009
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a progressive refinement network to refine alpha mattes generated utilizing a mask-guided matting neural network. In particular, the disclosed systems can use the matting neural network to process a digital image and a coarse guidance mask to generate alpha mattes at discrete neural network layers. In turn, the disclosed systems can use the progressive refinement network to combine alpha mattes and refine areas of uncertainty. For example, the progressive refinement network can combine a core alpha matte corresponding to more certain core regions of a first alpha matte and a boundary alpha matte corresponding to uncertain boundary regions of a second, higher resolution alpha matte. Based on the combination of the core alpha matte and the boundary alpha matte, the disclosed systems can generate a final alpha matte for use in image matting processes.
    Type: Application
    Filed: February 17, 2021
    Publication date: August 18, 2022
    Inventors: Qihang Yu, Jianming Zhang, He Zhang, Yilin Wang, Zhe Lin, Ning Xu
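The combination step this abstract describes — keeping confident "core" alpha values and filling uncertain regions from a finer boundary matte — can be sketched in a few lines. This is an illustrative simplification with a hypothetical uncertainty threshold, not the patented refinement network:

```python
import numpy as np

def refine_alpha(core_alpha, boundary_alpha, uncertainty_threshold=0.1):
    """Blend a coarse core matte with a higher-resolution boundary matte.

    Pixels whose core alpha is confidently background (near 0) or foreground
    (near 1) keep the core value; uncertain pixels take the boundary value.
    """
    certain = (core_alpha < uncertainty_threshold) | (core_alpha > 1 - uncertainty_threshold)
    return np.where(certain, core_alpha, boundary_alpha)

core = np.array([[0.0, 0.5, 1.0]])
boundary = np.array([[0.2, 0.7, 0.9]])
final = refine_alpha(core, boundary)  # → [[0.0, 0.7, 1.0]]
```

In the disclosed systems the two mattes come from different layers of a matting neural network; here they are plain arrays so the blending rule itself is visible.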
  • Patent number: 11410056
    Abstract: A system in which a network of physical sensors is configured to detect and track the performance of aircraft engines. The physical sensors are placed in specific locations to detect an exhaust gas temperature, vibration, speed, oil pressure, and fuel flow for each aircraft engine. The performance of each aircraft engine is then viewed in combination with that engine's oil consumption and routine maintenance program to route the aircraft and, in accordance with the routing, move it to a specific location. The sensors efficiently track the performance and physical condition of the engines. Moreover, a listing of identified "at-risk" engines is displayed on a screen of a GUI in a manner that allows for easy navigation and display. Data point(s) that triggered the identification of each "at-risk" engine are easily accessible and viewable.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: August 9, 2022
    Assignee: AMERICAN AIRLINES, INC.
    Inventors: Ning Xu, Jose Antonio Ramirez-Hernandez, Steven James Oakley, Mei Zhang, Ou Bai, Supreet Reddy Mandala
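The "at-risk" flagging described above, where the triggering data points remain viewable, can be sketched as threshold checks over sensor readings. The limits and field names below are entirely hypothetical; the actual system's parameters and logic are not disclosed in the abstract:

```python
# Hypothetical per-engine limits (illustrative only).
ENGINE_LIMITS = {"egt_c": 900.0, "vibration": 4.0, "oil_pressure_min": 25.0}

def at_risk(reading):
    """Return the list of data points that would trigger an at-risk flag."""
    triggers = []
    if reading["egt_c"] > ENGINE_LIMITS["egt_c"]:
        triggers.append("egt_c")
    if reading["vibration"] > ENGINE_LIMITS["vibration"]:
        triggers.append("vibration")
    if reading["oil_pressure"] < ENGINE_LIMITS["oil_pressure_min"]:
        triggers.append("oil_pressure")
    return triggers

print(at_risk({"egt_c": 950.0, "vibration": 2.1, "oil_pressure": 30.0}))  # ['egt_c']
```

Returning the triggering fields, rather than a bare yes/no, mirrors the requirement that the data points behind each flag be accessible from the GUI.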
  • Patent number: 11410439
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
  • Publication number: 20220230277
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 21, 2022
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
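The precision-conversion aspect of this abstract — same weights, different numeric parameters per client environment — can be sketched with NumPy. This shows only the dtype conversion; the remote distribution machinery and the actual model formats are not specified in the abstract:

```python
import numpy as np

def convert_model(native_weights, target_dtype=np.float32):
    """Convert a 'native' model's parameters to a target precision.

    The native network might store compact float16 weights for efficiency,
    while a target device uses less efficient but more precise float32.
    """
    return {name: w.astype(target_dtype) for name, w in native_weights.items()}

native = {"conv1": np.ones((3, 3), dtype=np.float16)}
target = convert_model(native)
print(target["conv1"].dtype)  # float32
```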
  • Patent number: 11379987
    Abstract: A temporal object segmentation system determines a location of an object depicted in a video. In some cases, the temporal object segmentation system determines the object's location in a particular frame of the video based on information indicating a previous location of the object in a previous video frame. For example, an encoder neural network in the temporal object segmentation system extracts features describing image attributes of a video frame. A convolutional long-short term memory neural network determines the location of the object in the frame, based on the extracted image attributes and information indicating a previous location in a previous frame. A decoder neural network generates an image mask indicating the object's location in the frame. In some cases, a video editing system receives multiple generated masks for a video, and modifies one or more video frames based on the locations indicated by the masks.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: July 5, 2022
    Assignee: ADOBE INC.
    Inventors: Ning Xu, Brian Price, Scott Cohen
  • Publication number: 20220207751
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system is trained to generate a matte for an input image utilizing contextual information within the image. Patches from the image and a corresponding trimap are extracted, and alpha values for each individual image patch are predicted based on correlations of features in different regions within the image patch. Predicting alpha values for an image patch may also be based on contextual information from other patches extracted from the same image. This contextual information may be determined by determining correlations between features in the query patch and context patches. The predicted alpha values for an image patch form a matte patch, and all matte patches generated for the patches are stitched together to form an overall matte for the input image.
    Type: Application
    Filed: March 16, 2022
    Publication date: June 30, 2022
    Inventor: Ning Xu
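The extract-predict-stitch pipeline in this abstract can be sketched with non-overlapping patches. The intensity-based predictor below is a stand-in for the trained neural network, and overlapping patches, trimaps, and cross-patch context are omitted for brevity:

```python
import numpy as np

def matte_by_patches(image, patch, predict_alpha):
    """Split a (H, W) image into patches, predict an alpha patch for each,
    and stitch the matte patches into an overall matte for the image."""
    h, w = image.shape
    matte = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            matte[y:y+patch, x:x+patch] = predict_alpha(image[y:y+patch, x:x+patch])
    return matte

# Stand-in predictor: normalized intensity (the real predictor is a network
# that also uses correlations with context patches from the same image).
image = np.arange(16.0).reshape(4, 4)
matte = matte_by_patches(image, 2, lambda p: p / 15.0)
```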
  • Publication number: 20220198753
    Abstract: An image data annotation system automatically annotates a physical object within individual images frames of an image sequence with relevant object annotations based on a three-dimensional (3D) model of the physical object. Annotating the individual image frames with object annotations includes updating individual image frames within image input data to generate annotated image data that is suitable for reliably training a DNN object detection architecture. Exemplary object annotations that the image data annotation system can automatically apply to individual image frames include, inter alia, object pose, image pose, object masks, 3D bounding boxes composited over the physical object, 2D bounding boxes composited over the physical object, and/or depth map information.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: Harpreet Singh Sawhney, Ning Xu, Amol Ashok Ambardekar, Moses Obadeji Olafenwa
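One annotation named above, a 2D bounding box derived from a 3D model of the object, follows mechanically from projecting the 3D box corners through a camera model. This pinhole-projection sketch with made-up intrinsics illustrates the geometry only, not the patented annotation system:

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Project 3D points in camera coordinates to 2D pixels (pinhole model)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# Two opposite corners of an object's 3D bounding box, 4 m from the camera.
corners = np.array([[-1.0, -1.0, 4.0], [1.0, 1.0, 4.0]])
px = project_points(corners, fx=100, fy=100, cx=64, cy=64)
# The 2D bounding-box annotation is the min/max of the projected pixels.
bbox_2d = px.min(axis=0).tolist() + px.max(axis=0).tolist()
print(bbox_2d)  # [39.0, 39.0, 89.0, 89.0]
```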
  • Patent number: 11361186
    Abstract: The present disclosure discloses a visual relationship detection method based on adaptive clustering learning, including: detecting visual objects from an input image and recognizing the visual objects to obtain context representations; embedding the context representations of pair-wise visual objects into a low-dimensional joint subspace to obtain a visual relationship sharing representation; embedding the context representations into a plurality of low-dimensional clustering subspaces, respectively, to obtain a plurality of preliminary visual relationship enhancing representations, and then performing regularization by a clustering-driven attention mechanism; and fusing the visual relationship sharing representation and the regularized visual relationship enhancing representations with a prior distribution over the category labels of visual relationship predicates, to predict visual relationship predicates by synthetic relational reasoning.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: June 14, 2022
    Assignee: TIANJIN UNIVERSITY
    Inventors: Anan Liu, Yanhui Wang, Ning Xu, Weizhi Nie
  • Publication number: 20220178700
    Abstract: Among other things, techniques are described for identifying sensor data from a sensor of a first vehicle that includes information related to a pose of at least two other vehicles on a road. The technique further includes determining a geometry of a portion of the road based at least in part on the information about the pose of the at least two other vehicles. The technique further includes comparing the geometry of the portion of the road with map data to identify a match between the portion of the road and a portion of the map data. The technique further includes determining a pose of the first vehicle relative to the map data based at least in part on the match.
    Type: Application
    Filed: December 3, 2020
    Publication date: June 9, 2022
    Inventors: Yimu Wang, Ning Xu, Ajay Charan, Yih-Jye Hsu
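The matching step in this abstract — comparing road geometry inferred from observed vehicle poses against map data — can be sketched as shape comparison between point sequences. This toy version assumes equal-length, ordered point sets and only removes translation; it is a geometric illustration, not the claimed localization technique:

```python
import numpy as np

def match_road_geometry(observed, candidates):
    """Return the index of the candidate map segment whose centered shape
    best matches the geometry implied by observed vehicle positions."""
    obs = observed - observed.mean(axis=0)
    errors = []
    for seg in candidates:
        c = seg - seg.mean(axis=0)
        errors.append(np.sum((obs - c) ** 2))
    return int(np.argmin(errors))

observed = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.4]])  # poses of other vehicles
straight = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # candidate map segments
curved = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.4]])
print(match_road_geometry(observed, [straight, curved]))  # 1
```

Once the best-matching map segment is found, the first vehicle's pose relative to the map follows from the alignment between the observed points and that segment.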
  • Publication number: 20220172003
    Abstract: Disclosed herein are arrangements that facilitate the transfer of knowledge from models for a source data-processing domain to models for a target data-processing domain. A convolutional neural network space for a source domain is factored into a first classification space and a first reconstruction space. The first classification space stores class information and the first reconstruction space stores domain-specific information. A convolutional neural network space for a target domain is factored into a second classification space and a second reconstruction space. The second classification space stores class information and the second reconstruction space stores domain-specific information. Distribution of the first classification space and the second classification space is aligned.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 2, 2022
    Inventors: Jianchao Yang, Ning Xu, Jian Ren
  • Publication number: 20220156956
    Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
  • Publication number: 20220148325
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 12, 2022
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Publication number: 20220148183
    Abstract: Introduced here are computer programs and associated computer-implemented techniques for training and then applying computer-implemented models designed for segmentation of an object in the frames of video. By training and then applying the segmentation model in a cyclical manner, the errors encountered when performing segmentation can be eliminated rather than propagated. In particular, the approach to segmentation described herein allows the relationship between a reference mask and each target frame for which a mask is to be produced to be explicitly bridged or established. Such an approach ensures that masks are accurate, which in turn means that the segmentation model is less prone to distractions.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 12, 2022
    Inventor: Ning Xu
  • Publication number: 20220136733
    Abstract: The present invention discloses a film type liquid heater and a uniform heating method thereof. End fixation plates are arranged at two ends of a barrel. A heating pipe is arranged in the barrel. Pipe ports run through the two end fixation plates, respectively. Connecting pipes are arranged in the pipe ports. Sealing connection components are arranged at inner ends of the two connecting pipes. A heating film layer is coated outside the heating pipe. Electrode layers are connected to left and right sides of the heating film layer, and the electrode layers are connected to an external power supply. An insulating layer is coated outside the heating film layer. A flow splitting column is fixedly connected in the heating pipe. A heating chamber is formed between an inner side of the heating pipe and the flow splitting column. Flow splitting grooves are formed at left and right ends and on the inner side of the flow splitting column.
    Type: Application
    Filed: December 31, 2020
    Publication date: May 5, 2022
    Inventors: Xin Fu, Zheng Luo, Yiwen Zhao, Sen Du, Ning Xu
  • Patent number: 11314982
    Abstract: Systems and methods are disclosed for selecting target objects within digital images. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user indicators to select targeted objects in digital images. Specifically, the disclosed systems and methods can transform user indicators into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: April 26, 2022
    Assignee: Adobe Inc.
    Inventors: Brian Price, Scott Cohen, Ning Xu
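The transformation of user indicators into distance maps, as described in this abstract, is essentially a distance transform over click positions. This sketch computes a Euclidean distance map directly in NumPy; the network that consumes such maps alongside color channels is not shown:

```python
import numpy as np

def click_distance_map(shape, clicks):
    """Build a map of the Euclidean distance from every pixel to the nearest
    user click; such maps can be stacked with color channels as extra input."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.full(shape, np.inf)
    for cy, cx in clicks:
        dist = np.minimum(dist, np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2))
    return dist

dmap = click_distance_map((3, 3), [(1, 1)])
print(dmap[1, 1], dmap[0, 0])  # 0.0 1.4142135623730951
```

In interactive use, positive and negative clicks would typically each get their own distance map, so the network can distinguish "include this" from "exclude this" indicators.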
  • Patent number: 11315219
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
  • Publication number: 20220122357
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Application
    Filed: December 28, 2021
    Publication date: April 21, 2022
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
  • Patent number: 11308706
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: April 19, 2022
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
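The switch between local and global tracking in this abstract hinges on whether the target is inside a boundary area of the frame. The check itself can be sketched as below; the margin value and mode names are hypothetical, and the template-matching trackers themselves are omitted:

```python
def choose_tracking_mode(target_xy, frame_size, margin=10):
    """Select local tracking when the target lies inside the boundary area
    of the frame, and fall back to global tracking when it does not."""
    x, y = target_xy
    w, h = frame_size
    inside = margin <= x <= w - margin and margin <= y <= h - margin
    return "local" if inside else "global"

print(choose_tracking_mode((320, 240), (640, 480)))  # local
print(choose_tracking_mode((5, 240), (640, 480)))    # global
```

Per the abstract, local tracking (and display of the AR sticker) resumes once global tracking determines the target has re-entered the boundary area.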
  • Patent number: 11308628
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system is trained to generate a matte for an input image utilizing contextual information within the image. Patches from the image and a corresponding trimap are extracted, and alpha values for each individual image patch are predicted based on correlations of features in different regions within the image patch. Predicting alpha values for an image patch may also be based on contextual information from other patches extracted from the same image. This contextual information may be determined by determining correlations between features in the query patch and context patches. The predicted alpha values for an image patch form a matte patch, and all matte patches generated for the patches are stitched together to form an overall matte for the input image.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: April 19, 2022
    Assignee: ADOBE INC.
    Inventor: Ning Xu
  • Patent number: 11301725
    Abstract: The present invention discloses a visual relationship detection method based on a region-aware learning mechanism, comprising: acquiring a triplet graph structure, combining each node's features with those of its neighboring nodes after aggregation, using the combined features as nodes in a second graph structure, and connecting them with equiprobable edges to form the second graph structure; combining node features of the second graph structure with the features of the corresponding entity object nodes in the triplet, using the combined features as a visual attention mechanism to merge the internal region visual features extracted from the two entity objects, and using the merged region visual features as the visual features for the next message propagation by the corresponding entity object nodes in the triplet; and, after a certain number of message propagations, combining the output triplet node features with the node features of the second graph structure to infer predicates between object sets.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: April 12, 2022
    Assignee: TIANJIN UNIVERSITY
    Inventors: Anan Liu, Hongshuo Tian, Ning Xu, Weizhi Nie, Dan Song