Patents by Inventor Ning Xu

Ning Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210264236
    Abstract: Embodiments of the present disclosure are directed towards improved models trained using unsupervised domain adaptation. In particular, a style-content adaptation system provides improved translation during unsupervised domain adaptation by controlling the alignment of conditional distributions of a model during training such that content (e.g., a class) from a target domain is correctly mapped to content (e.g., the same class) in a source domain. The style-content adaptation system improves unsupervised domain adaptation using independent control over content (e.g., related to a class) as well as style (e.g., related to a domain) to control alignment when translating between the source and target domain. This independent control over content and style can also allow for images to be generated using the style-content adaptation system that contain desired content and/or style.
    Type: Application
    Filed: February 26, 2020
    Publication date: August 26, 2021
    Inventors: Ning XU, Bayram Safa CICEK, Hailin JIN, Zhaowen WANG
  • Publication number: 20210256708
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Application
    Filed: May 6, 2021
    Publication date: August 19, 2021
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
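
A minimal sketch of the three-stage cascade described in publication 20210256708 (granted as patent 11004208 below): the image plus a user-click map feed a mask subnetwork, whose output feeds a trimap subnetwork, whose output feeds an alpha-matte subnetwork. The layer widths, activations, and click-map encoding are illustrative assumptions, not the patented architecture.

```python
# Three chained subnetworks: image + clicks -> binary mask -> trimap -> alpha matte.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class InteractiveMattingSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Subnetwork 1: image (3 ch) + click map (1 ch) -> binary foreground mask (1 ch)
        self.mask_net = nn.Sequential(conv_block(4, 16), nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        # Subnetwork 2: image + clicks + mask -> trimap (3 classes: bg / fg / uncertain)
        self.trimap_net = nn.Sequential(conv_block(5, 16), nn.Conv2d(16, 3, 3, padding=1))
        # Subnetwork 3: image + clicks + trimap -> alpha matte in [0, 1]
        self.alpha_net = nn.Sequential(conv_block(7, 16), nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, clicks):
        x = torch.cat([image, clicks], dim=1)
        mask = self.mask_net(x)
        trimap = torch.softmax(self.trimap_net(torch.cat([x, mask], dim=1)), dim=1)
        alpha = self.alpha_net(torch.cat([x, trimap], dim=1))
        return mask, trimap, alpha

# Usage: a 64x64 RGB image and a sparse user-click map.
model = InteractiveMattingSketch()
mask, trimap, alpha = model(torch.rand(1, 3, 64, 64), torch.zeros(1, 1, 64, 64))
print(mask.shape, trimap.shape, alpha.shape)
```
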
  • Publication number: 20210248376
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
  • Patent number: 11088977
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment a server computer system receives a content message from a first content source, and analyzes the content message to determine one or more quality scores and one or more content values associated with the content message. The server computer system analyzes the content message with a plurality of content collections of the database to identify a match between at least one of the one or more content values and a topic associated with at least a first content collection of the one or more content collections and automatically adds the content message to the first content collection based at least in part on the match. In various embodiments, different content values, image processing operations, and content selection operations are used to curate content collections.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: August 10, 2021
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
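
As a rough illustration of the matching step in patent 11088977, the sketch below adds a content message to every collection whose topic set overlaps the message's content values, provided its quality score clears a threshold. The scoring rule, the threshold, and the data model are assumptions for illustration only.

```python
# Match a message's content values against collection topics, gated by quality.
def curate(message, collections, min_quality=0.5):
    """message: {'quality': float, 'content_values': set}; collections: name -> set of topics."""
    if message["quality"] < min_quality:
        return []
    return [name for name, topics in collections.items()
            if message["content_values"] & topics]

collections = {"beach_day": {"beach", "surf"}, "city_nights": {"skyline", "neon"}}
message = {"quality": 0.8, "content_values": {"surf", "sunset"}}
print(curate(message, collections))   # ['beach_day']
```
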
  • Patent number: 11080351
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment a server computer system receives a plurality of content communications from a plurality of client devices, each content communication comprising an associated piece of content and a corresponding metadata. Each content communication is processed to determine associated context values for each piece of content, each associated context value comprising at least one content value generated by machine vision processing of the associated piece of content. A first content collection is automatically generated based on context values, and a set of user accounts are associated with the collection. An identifier associated with the first content collection is published to user devices associated with user accounts. In various additional embodiments, different content values, image processing operations, and content selection operations are used to curate content collections.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: August 3, 2021
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
  • Patent number: 11081141
    Abstract: Systems and methods are described for determining a first media item related to an event, of a plurality of stored media items each comprising video content related to the event, that was captured in a device orientation corresponding to a first device orientation detected for the first computing device; providing, to the first computing device, the first media item to be displayed on the first computing device; in response to a detected change to a second device orientation for the first computing device, determining a second media item that was captured in a device orientation corresponding to the second device orientation detected for the first computing device; and providing, to the first computing device, the second media item to be displayed on the first computing device.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: August 3, 2021
    Assignee: Snap Inc.
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
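
The selection logic in patent 11081141 can be pictured with a small example. This is a minimal sketch assuming a simple media-item model carrying an event identifier and a capture orientation; the actual storage and matching scheme is not specified at this level in the abstract.

```python
# Pick the stored media item for the event whose capture orientation matches the device.
from dataclasses import dataclass

@dataclass
class MediaItem:
    event_id: str
    orientation: str   # "portrait" or "landscape"
    uri: str

def select_media(items, event_id, device_orientation):
    for item in items:
        if item.event_id == event_id and item.orientation == device_orientation:
            return item
    return None

items = [MediaItem("concert", "portrait", "clip_p.mp4"),
         MediaItem("concert", "landscape", "clip_l.mp4")]
print(select_media(items, "concert", "landscape").uri)   # clip_l.mp4
```
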
  • Publication number: 20210216830
    Abstract: Systems, methods, devices, media, and computer readable instructions are described for local image tagging in a resource constrained environment. One embodiment involves processing image data using a deep convolutional neural network (DCNN) comprising at least a first subgraph and a second subgraph, the first subgraph comprising at least a first layer and a second layer; processing the image data using at least the first layer of the first subgraph to generate first intermediate output data; processing, by the mobile device, the first intermediate output data using at least the second layer of the first subgraph to generate first subgraph output data; and in response to a determination that each layer reliant on the first intermediate data has completed processing, deleting the first intermediate data from the mobile device. Additional embodiments involve convolving entire pixel resolutions of the image data against kernels in different layers of the DCNN.
    Type: Application
    Filed: January 22, 2021
    Publication date: July 15, 2021
    Inventors: Xiaoyu Wang, Ning XU, Ning ZHANG, Victor R. CARVALHO, Jia LI
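
The memory-management idea in publication 20210216830, deleting intermediate data once every layer that depends on it has finished, can be sketched with a toy graph executor that reference-counts consumers. The graph, the stand-in layer functions, and the counting scheme are illustrative assumptions, not the patented subgraph design.

```python
# Free each intermediate result as soon as its last consumer has run.
import numpy as np

def run_graph(x, layers, consumers):
    """layers: name -> (fn, input names), in dependency order; consumers: name -> consumer count."""
    cache = {"input": x}
    refs = dict(consumers)
    for name, (fn, inputs) in layers.items():
        cache[name] = fn(*[cache[i] for i in inputs])
        for i in inputs:
            refs[i] -= 1
            if refs[i] == 0:           # no remaining consumer: delete the intermediate
                del cache[i]
    return cache

layers = {
    "conv1": (lambda a: np.maximum(a, 0.0), ["input"]),
    "conv2": (lambda a: a * 2.0,            ["conv1"]),
    "tags":  (lambda a: a.mean(),           ["conv2"]),
}
consumers = {"input": 1, "conv1": 1, "conv2": 1, "tags": 0}
out = run_graph(np.random.rand(8, 8), layers, consumers)
print(list(out.keys()))   # ['tags'] -- every intermediate was freed along the way
```
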
  • Patent number: 11055828
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: July 6, 2021
    Assignee: ADOBE INC.
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
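
The cost function named in patent 11055828 combines several terms; the sketch below shows one plausible way a reconstruction term, a flow-generation term, and a consistency term could be summed. The loss weights, the L1 choices, and the frame-difference stand-in for flow-based consistency are assumptions, not the patented formulation.

```python
# Combine reconstruction, flow-generation, and consistency terms into one loss.
import torch
import torch.nn.functional as F

def inpainting_loss(pred_frames, target_frames, pred_flows, target_flows, known_mask):
    recon = F.l1_loss(pred_frames * known_mask, target_frames * known_mask)  # known pixels only
    flow_gen = F.l1_loss(pred_flows, target_flows)                           # flow generation loss
    consistency = F.l1_loss(pred_frames[:, 1:], pred_frames[:, :-1])         # crude temporal consistency
    return recon + 0.5 * flow_gen + 0.1 * consistency

T, C, H, W = 4, 3, 32, 32
loss = inpainting_loss(torch.rand(1, T, C, H, W), torch.rand(1, T, C, H, W),
                       torch.rand(1, T - 1, 2, H, W), torch.rand(1, T - 1, 2, H, W),
                       torch.ones(1, T, C, H, W))
print(loss.item())
```
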
  • Publication number: 20210192274
    Abstract: The present disclosure discloses a visual relationship detection method based on adaptive clustering learning, including: detecting visual objects from an input image and recognizing the visual objects to obtain context representations; embedding the context representations of pair-wise visual objects into a low-dimensional joint subspace to obtain a visual relationship sharing representation; embedding the context representations into a plurality of low-dimensional clustering subspaces, respectively, to obtain a plurality of preliminary visual relationship enhancing representations, and then performing regularization by a clustering-driven attention mechanism; and fusing the visual relationship sharing representations and the regularized visual relationship enhancing representations with a prior distribution over the category labels of visual relationship predicates, to predict visual relationship predicates by synthetic relational reasoning.
    Type: Application
    Filed: August 31, 2020
    Publication date: June 24, 2021
    Inventors: Anan LIU, Yanhui WANG, Ning XU, Weizhi NIE
  • Patent number: 11023615
    Abstract: Hosted services provided by service provider tenants to their users are an increasingly common software usage model. The usage of such services and handling of data may be subject to regulatory, legal, and industry-based rules, where different rules may be applicable depending on the particular service, handled data, and organization type, for example. Embodiments are directed to providing intelligence and analysis driven security and compliance suggestions for hosted services to reduce the burden on tenant administrators to determine and implement applicable policies and rules. Claims are directed to determination of a suggestion based on an analysis of a tenant's service environment, presentation of the suggestion along with analysis results and a prompt to confirm implementation of the suggestion, and upon receiving confirmation, presentation of an option to customize the suggestion by modifying settings suggested based on analysis results.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Karissa C. Larson, Churli Su, Wenjie Liang, Binyan Chen, Ben Appleby, Anupama Janardhan, Ning Xu
  • Patent number: 11003638
    Abstract: A method and system for constructing an evolving ontology database. The method includes: receiving a plurality of data entries; calculating semantic similarity scores between any two of the data entries; clustering the data entries into multiple current themes based on the semantic similarity scores; selecting new concepts from the current themes by comparing the current themes with a plurality of previous themes prepared using previous data entries; and updating the evolving ontology database using the new concepts. The semantic similarity score between any two of the data entries is calculated as: semantic similarity score = \sum_{i=0}^{n} s_i e^{\sum_{j=0}^{k} w_j \times f_j}, where s_i is the weight of a feature source, f_j is a feature similarity between the two data entries, w_j is the weight of f_j, and j, k and n are positive integers.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: May 11, 2021
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., JD.com American Technologies Corporation
    Inventors: Shizhu Liu, Kailin Huang, Li Chen, Jianxun Sun, Ning Xu, Chengchong Zhang, Hui Zhou
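
Read literally, the formula in the abstract above is a sum of feature-source weights, each scaled by the exponential of a weighted sum of feature similarities. The sketch below implements that literal reading; the grouping of the exponent and the sample values are my assumptions.

```python
# score = sum_i s_i * exp(sum_j w_j * f_j), following the abstract's formula as reconstructed.
import math

def semantic_similarity(source_weights, feature_sims, feature_weights):
    inner = sum(w * f for w, f in zip(feature_weights, feature_sims))
    return sum(s * math.exp(inner) for s in source_weights)

print(semantic_similarity(source_weights=[0.6, 0.4],
                          feature_sims=[0.8, 0.3, 0.5],
                          feature_weights=[0.5, 0.2, 0.3]))
```
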
  • Patent number: 11004208
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: May 11, 2021
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
  • Publication number: 20210132718
    Abstract: The present disclosure provides a flexible circuit board. The flexible circuit board includes a substrate; a conductive layer, disposed on the substrate; and a cover layer, disposed on a side of the conductive layer facing away from the substrate. The flexible circuit board is provided with a through hole penetrating through the flexible circuit board in the thickness direction. The cover layer includes a hollowed-out region located at least at an edge of one side of the through hole. The conductive layer includes an electrostatic discharge section exposed in the hollowed-out region.
    Type: Application
    Filed: April 1, 2020
    Publication date: May 6, 2021
    Inventors: Ning XU, Zhihua YU, Tao PENG
  • Patent number: 10979166
    Abstract: A method for avoiding transmission of side information by a Partial Transmit Sequence, comprising the following steps: Step 1: determining an indication sequence of a data sub-carrier and a pilot sub-carrier; Step 2: grouping the frequency domain data blocks including data and pilots to reduce the peak-to-average power ratio (PAPR) of the OFDM signal by phase rotation according to the PTS method; Step 3: processing the pilot of the received signal through channel estimation based on fast Fourier transform interpolation to obtain a frequency domain channel response, and extracting a phase rotation sequence; Step 4: equalizing the received data through the obtained frequency domain channel response; Step 5: performing inverse rotation of phase on the equalized data through the phase rotation information extracted in Step 3 to obtain transmitted data symbols.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: April 13, 2021
    Assignee: THE 28TH RESEARCH INSTITUTE OF CHINA ELECTRONICS TECHNOLOGY GROUP CORPORATION
    Inventors: Xiaoping Shen, Yang Zhou, Xin Ding, Ning Xu
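
The core PTS step in patent 10979166 (partition the frequency-domain block into groups, search per-group phase rotations, keep the lowest-PAPR combination) is a standard construction and can be sketched as follows. The group count, the phase alphabet, and the exhaustive search are illustrative choices; the pilot-based channel estimation that avoids transmitting side information is not reproduced here.

```python
# Partial Transmit Sequence: choose per-group phase rotations that minimize PAPR.
import numpy as np
from itertools import product

def papr(x):
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def pts(freq_block, n_groups=4, phases=(1, -1, 1j, -1j)):
    N = len(freq_block)
    groups = np.zeros((n_groups, N), dtype=complex)
    for g in range(n_groups):                       # contiguous sub-block partition
        lo, hi = g * N // n_groups, (g + 1) * N // n_groups
        groups[g, lo:hi] = freq_block[lo:hi]
    time_groups = np.fft.ifft(groups, axis=1)
    best, best_rot = None, None
    for rot in product(phases, repeat=n_groups):    # exhaustive phase search
        candidate = np.tensordot(np.array(rot), time_groups, axes=1)
        if best is None or papr(candidate) < papr(best):
            best, best_rot = candidate, rot
    return best, best_rot

data = np.exp(2j * np.pi * np.random.rand(64))      # unit-power random-phase symbols
signal, rotations = pts(data)
print(papr(np.fft.ifft(data)), papr(signal), rotations)
```
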
  • Patent number: 10977849
    Abstract: Systems and methods for overlaying a second image/video data onto a first image/video data are described herein. The first image/video data may be intended to be rendered on a display with certain characteristics—e.g., HDR, EDR, VDR or UHD capabilities. The second image/video data may comprise graphics, closed captioning, text, advertisement—or any data that may be desired to be overlaid and/or composited onto the first image/video data. The second image/video data may be appearance mapped according to the image statistics and/or characteristics of the first image/video data. In addition, such appearance mapping may be made according to the characteristics of the display that the composite data is to be rendered. Such appearance mapping is desired to render a composite data that is visually pleasing to a viewer, rendered upon a desired display.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: April 13, 2021
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Timo Kunkel, Ning Xu, Tao Chen, Bongsun Lee, Samir N. Hulyalkar
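
One way to picture the appearance mapping in patent 10977849 is a compositor that scales overlay luminance from simple statistics of the underlying HDR content and the target display's peak before blending. The mapping rule, the percentile, and the parameters below are assumptions for illustration only.

```python
# Appearance-map an overlay to the HDR content's level, then alpha-blend it.
import numpy as np

def composite(hdr_frame, overlay_rgb, overlay_alpha, display_peak_nits=1000.0):
    content_peak = np.percentile(hdr_frame, 99)              # robust content peak (nits)
    target = min(content_peak, 0.6 * display_peak_nits)      # keep graphics below highlights
    mapped_overlay = overlay_rgb * target                    # overlay given in [0, 1]
    return overlay_alpha * mapped_overlay + (1 - overlay_alpha) * hdr_frame

frame = np.random.rand(4, 4, 3) * 4000.0      # toy HDR frame, values in nits
overlay = np.ones((4, 4, 3)) * 0.9            # bright white caption
alpha = np.zeros((4, 4, 1)); alpha[1:3, 1:3] = 1.0
print(composite(frame, overlay, alpha).max())
```
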
  • Publication number: 20210103783
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Application
    Filed: November 23, 2020
    Publication date: April 8, 2021
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Patent number: 10956793
    Abstract: Systems, methods, devices, media, and computer readable instructions are described for local image tagging in a resource constrained environment. One embodiment involves processing image data using a deep convolutional neural network (DCNN) comprising at least a first subgraph and a second subgraph, the first subgraph comprising at least a first layer and a second layer; processing the image data using at least the first layer of the first subgraph to generate first intermediate output data; processing, by the mobile device, the first intermediate output data using at least the second layer of the first subgraph to generate first subgraph output data; and in response to a determination that each layer reliant on the first intermediate data has completed processing, deleting the first intermediate data from the mobile device. Additional embodiments involve convolving entire pixel resolutions of the image data against kernels in different layers of the DCNN.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: March 23, 2021
    Assignee: Snap Inc.
    Inventors: Xiaoyu Wang, Ning Xu, Ning Zhang, Vitor R. Carvalho, Jia Li
  • Publication number: 20210082453
    Abstract: An acoustic environment identification system is disclosed that can use neural networks to accurately identify environments. The acoustic environment identification system can use one or more convolutional neural networks to generate audio feature data. A recursive neural network can process the audio feature data to generate characterization data. The characterization data can be modified using a weighting system that weights signature data items. Classification neural networks can be used to generate a classification of an environment.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Jinxi Guo, Jia Li, Ning Xu
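
The pipeline named in publication 20210082453 (convolutional audio features, a recurrent characterization stage, a weighting of signature items, then classification) can be sketched compactly. The layer sizes, the softmax weighting, and the pooling below are illustrative assumptions.

```python
# CNN features -> GRU characterization -> learned weighting -> environment classifier.
import torch
import torch.nn as nn

class EnvironmentClassifierSketch(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(nn.Conv1d(1, 16, 9, stride=4), nn.ReLU())
        self.rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        self.weighting = nn.Linear(32, 1)          # scores each time step ("signature item")
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, audio):                      # audio: (batch, samples)
        f = self.features(audio.unsqueeze(1))      # (batch, 16, frames)
        h, _ = self.rnn(f.transpose(1, 2))         # (batch, frames, 32)
        w = torch.softmax(self.weighting(h), dim=1)
        pooled = (w * h).sum(dim=1)                # weighted characterization vector
        return self.classifier(pooled)

logits = EnvironmentClassifierSketch()(torch.randn(2, 16000))
print(logits.shape)                                # torch.Size([2, 5])
```
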
  • Publication number: 20210073613
    Abstract: A compact neural network system can generate multiple individual filters from a compound filter. Each convolutional layer of a convolutional neural network can include a compound filter used to generate the individual filters for that layer. The individual filters overlap in the compound filter and can be extracted using a sampling operation. The extracted individual filters can share weights with nearby filters, thereby reducing the overall size of the convolutional neural network.
    Type: Application
    Filed: November 23, 2020
    Publication date: March 11, 2021
    Inventors: Yingzhen Yang, Jianchao Yang, Ning Xu
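
The compound-filter idea in publication 20210073613 can be illustrated by sampling overlapping spatial windows from one larger weight tensor, so that neighboring filters share parameters. The window size, stride, and layout below are assumptions.

```python
# Extract overlapping individual filters from a single compound filter bank.
import torch
import torch.nn.functional as F

compound = torch.randn(16, 3, 7, 7, requires_grad=True)       # one compound bank for the layer

def extract_filters(compound, k=5, stride=2):
    filters = []
    for i in range(0, compound.shape[2] - k + 1, stride):
        for j in range(0, compound.shape[3] - k + 1, stride):
            filters.append(compound[:, :, i:i + k, j:j + k])   # overlapping k x k window
    return torch.cat(filters, dim=0)                           # stack windows into one bank

weights = extract_filters(compound)    # 4 windows x 16 channels -> 64 filters of size 3x5x5
x = torch.rand(1, 3, 32, 32)
y = F.conv2d(x, weights, padding=2)
print(weights.shape, y.shape)          # torch.Size([64, 3, 5, 5]) torch.Size([1, 64, 32, 32])
```
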
  • Patent number: 10944938
    Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture, to restore a judder feeling which is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes base frame rate, judder control rate and display parameters, and can be used to control judder for different applications.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: March 9, 2021
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ning Xu, James E. Crenshaw, Scott Daly, Samir N. Hulyalkar, Raymond Yeung
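
As a rough picture of the metadata-driven control in patent 10944938, the sketch below carries a base frame rate, a judder control rate, and one display parameter, and blends the smooth high-frame-rate frame with a held base-rate frame according to the control rate. The metadata fields and the blend rule are assumptions, not the patented method.

```python
# Reintroduce film-like judder by blending toward frames held at the base rate.
import numpy as np
from dataclasses import dataclass

@dataclass
class JudderMetadata:
    base_frame_rate: float      # e.g. 24 fps film cadence
    judder_control_rate: float  # 0.0 = fully smooth, 1.0 = full film judder
    display_peak_nits: float    # example display parameter

def render_frame(frames_hfr, t, display_fps, meta):
    """Blend the smooth high-frame-rate frame with the held base-rate frame at time index t."""
    hold = int(display_fps / meta.base_frame_rate)             # display frames per held film frame
    smooth, held = frames_hfr[t], frames_hfr[(t // hold) * hold]
    r = meta.judder_control_rate
    return (1 - r) * smooth + r * held

frames = np.random.rand(120, 4, 4)                 # one second of 120 fps toy grayscale frames
meta = JudderMetadata(24.0, 0.5, 600.0)
print(render_frame(frames, 7, 120.0, meta).shape)  # (4, 4)
```
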