Patents by Inventor Jiuxiang Gu

Jiuxiang Gu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995394
    Abstract: Systems and methods for document editing are provided. One aspect of the systems and methods includes obtaining a document and a natural language edit request. Another aspect of the systems and methods includes generating a structured edit command using a machine learning model based on the document and the natural language edit request. Yet another aspect of the systems and methods includes generating a modified document based on the document and the structured edit command, where the modified document includes a revision of the document that incorporates the natural language edit request.
    Type: Grant
    Filed: February 7, 2023
    Date of Patent: May 28, 2024
    Assignee: ADOBE INC.
    Inventors: Vlad Ion Morariu, Puneet Mathur, Rajiv Bhawanji Jain, Jiuxiang Gu, Franck Dernoncourt
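The abstract above describes mapping a natural language edit request to a structured edit command that is then applied deterministically to produce the modified document. A minimal sketch, with a toy rule-based parser standing in for the machine learning model (all function names and the command schema are hypothetical):

```python
import re

def parse_edit_request(request):
    """Toy stand-in for the learned model: map a natural language edit
    request to a structured edit command (action, target, value)."""
    m = re.match(r"replace '(.+)' with '(.+)'", request, re.IGNORECASE)
    if m:
        return {"action": "replace", "target": m.group(1), "value": m.group(2)}
    m = re.match(r"delete '(.+)'", request, re.IGNORECASE)
    if m:
        return {"action": "delete", "target": m.group(1)}
    return {"action": "noop"}

def apply_command(document, command):
    """Deterministically apply a structured edit command to the document."""
    if command["action"] == "replace":
        return document.replace(command["target"], command["value"])
    if command["action"] == "delete":
        return document.replace(command["target"], "")
    return document

doc = "The quick brown fox."
cmd = parse_edit_request("replace 'brown' with 'red'")
print(apply_command(doc, cmd))  # The quick red fox.
```

Separating the learned step (request to command) from the deterministic step (command to revision) keeps the edit auditable before it is applied.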
  • Publication number: 20240161529
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a digital document hierarchy comprising layers of parent-child element relationships from visual elements of a digital document image. For example, for a layer of the layers, the disclosed systems determine, from the visual elements, candidate parent visual elements and child visual elements. In addition, for the layer, the disclosed systems generate, from feature embeddings of the visual elements utilizing a neural network, element classifications for the candidate parent visual elements and parent-child element link probabilities for the candidate parent visual elements and the child visual elements. Moreover, for the layer, the disclosed systems select parent visual elements from the candidate parent visual elements based on the parent-child element link probabilities. Further, the disclosed systems utilize the digital document hierarchy to generate an interactive digital document from the digital document image.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Vlad Morariu, Puneet Mathur, Rajiv Jain, Ashutosh Mehra, Jiuxiang Gu, Franck Dernoncourt, Anandhavelu N, Quan Tran, Verena Kaynig-Fittkau, Nedim Lipka, Ani Nenkova
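The layer-wise parent-selection step described above can be sketched with toy data; the link probabilities below are hypothetical stand-ins for the neural network's outputs:

```python
def select_parents(link_probs):
    """Given link_probs[child][parent] (scores standing in for the network's
    parent-child link probabilities), pick for each child visual element the
    candidate parent with the highest probability."""
    assignment = {}
    for child, probs in link_probs.items():
        assignment[child] = max(probs, key=probs.get)
    return assignment

# Hypothetical scores for one layer of the hierarchy.
link_probs = {
    "paragraph_1": {"section_A": 0.9, "section_B": 0.1},
    "paragraph_2": {"section_A": 0.3, "section_B": 0.7},
}
print(select_parents(link_probs))
# {'paragraph_1': 'section_A', 'paragraph_2': 'section_B'}
```

Repeating this assignment layer by layer yields the nested parent-child hierarchy used to build the interactive document.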
  • Publication number: 20240135103
    Abstract: In implementations of systems for training language models and preserving privacy, a computing device implements a privacy system to predict a next word after a last word in a sequence of words by processing input data using a machine learning model trained on training data to predict next words after last words in sequences of words. The training data describes a corpus of text associated with clients and including sensitive samples and non-sensitive samples. The machine learning model is trained by sampling a client of the clients and using a subset of the sensitive samples associated with the client and a subset of the non-sensitive samples associated with the client to update parameters of the machine learning model. The privacy system generates an indication of the next word after the last word in the sequence of words for display in a user interface.
    Type: Application
    Filed: February 23, 2023
    Publication date: April 25, 2024
    Applicant: Adobe Inc.
    Inventors: Franck Dernoncourt, Tong Sun, Thi Kim Phung Lai, Rajiv Bhawanji Jain, Nikolaos Barmpalios, Jiuxiang Gu
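The per-client parameter update described above resembles differentially private SGD. A minimal pure-Python sketch, assuming per-sample gradient clipping followed by Gaussian noise; the learning rate, clip norm, and noise scale are illustrative choices, not values from the patent:

```python
import random

def private_update(params, grads, clip_norm=1.0, noise_scale=0.1, rng=None):
    """One hedged sketch of a privacy-preserving update step: clip each
    per-sample gradient to clip_norm, average, add Gaussian noise, then
    apply a small gradient step to the model parameters."""
    rng = rng or random.Random(0)
    clipped = []
    for g in grads:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append([x * scale for x in g])
    avg = [sum(col) / len(clipped) for col in zip(*clipped)]
    noisy = [a + rng.gauss(0.0, noise_scale) for a in avg]
    return [p - 0.1 * n for p, n in zip(params, noisy)]

params = [0.5, -0.2]
grads = [[3.0, 4.0], [0.1, 0.0]]   # per-sample gradients from one sampled client
new_params = private_update(params, grads)
print(new_params)
```

Clipping bounds any single sample's influence on the update, and the added noise masks what remains, which is the usual mechanism behind this kind of privacy guarantee.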
  • Publication number: 20240135096
    Abstract: Systems and methods for document classification are described. Embodiments of the present disclosure generate classification data for a plurality of samples using a neural network trained to identify a plurality of known classes; select a set of samples for annotation from the plurality of samples using an open-set metric based on the classification data, wherein the annotation includes an unknown class; and train the neural network to identify the unknown class based on the annotation of the set of samples.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Inventors: Rajiv Bhawanji Jain, Michelle Yuan, Vlad Ion Morariu, Ani Nenkova, Smitha Bangalore Naresh, Nikolaos Barmpalios, Ruchi Deshpande, Ruiyi Zhang, Jiuxiang Gu, Varun Manjunatha, Nedim Lipka, Andrew Marc Greene
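One way to realize an open-set metric over classification data is predictive entropy: samples whose class distribution is near-uniform are plausible members of an unknown class. This sketch is a hypothetical illustration of that idea, not the specific metric claimed:

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(classification_data, k=1):
    """Rank (sample_id, class_probs) pairs by predictive entropy over the
    known classes and pick the k most uncertain samples, on the assumption
    that high-entropy samples may belong to an unknown class."""
    ranked = sorted(classification_data, key=lambda item: entropy(item[1]), reverse=True)
    return [sample_id for sample_id, _ in ranked[:k]]

data = [
    ("doc_1", [0.98, 0.01, 0.01]),   # confidently a known class
    ("doc_2", [0.34, 0.33, 0.33]),   # near-uniform: open-set candidate
]
print(select_for_annotation(data))  # ['doc_2']
```

The selected samples are then annotated (possibly with the unknown class) and used to retrain the network.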
  • Publication number: 20240104951
    Abstract: In various examples, a table recognition machine learning model receives an image of a table; generates, using a first encoder of the model, an image feature vector including features extracted from the image of the table; generates, using a first decoder of the model and the image feature vector, a set of coordinates within the image representing rows and columns associated with the table; generates, using a second decoder of the model and the image feature vector, a set of bounding boxes and semantic features associated with cells of the table; and then determines, using a third decoder of the model, a table structure associated with the table using the image feature vector, the set of coordinates, the set of bounding boxes, and the semantic features.
    Type: Application
    Filed: September 19, 2022
    Publication date: March 28, 2024
    Inventors: Jiuxiang Gu, Vlad Morariu, Tong Sun, Jason Wen Yong Kuen, Ani Nenkova
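The third decoder's job, reconciling predicted row/column coordinates with cell bounding boxes into a grid structure, can be illustrated with a simple geometric assignment (a hypothetical stand-in for the learned decoder):

```python
def cell_grid_position(box, row_seps, col_seps):
    """Place a cell bounding box (x0, y0, x1, y1) into the table grid implied
    by predicted row/column separator coordinates: count how many separators
    lie above / left of the box center."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    row = sum(1 for y in row_seps if cy > y)
    col = sum(1 for x in col_seps if cx > x)
    return row, col

row_seps = [10, 20]          # y-coordinates of predicted row boundaries
col_seps = [50]              # x-coordinate of the predicted column boundary
print(cell_grid_position((0, 0, 40, 8), row_seps, col_seps))    # (0, 0)
print(cell_grid_position((60, 12, 90, 18), row_seps, col_seps)) # (1, 1)
```

A learned decoder can additionally handle spanning cells and noisy coordinates, which this geometric rule cannot.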
  • Patent number: 11886815
    Abstract: One example method involves operations for a processing device that include receiving, by a machine learning model trained to generate a search result, a search query for a text input. The machine learning model is trained by receiving pre-training data that includes multiple documents and pre-training the model by generating, using an encoder, feature embeddings for each of the documents included in the pre-training data. The feature embeddings are generated by applying a masking function to visual and textual features in the documents. Training the machine learning model also includes generating, using the feature embeddings, output features for the documents by concatenating the feature embeddings and applying a non-linear mapping to the feature embeddings. Training the machine learning model further includes applying a linear classifier to the output features. Additionally, the operations include generating, for display, a search result using the machine learning model based on the text input.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: January 30, 2024
    Assignee: ADOBE INC.
    Inventors: Jiuxiang Gu, Vlad Morariu, Varun Manjunatha, Tong Sun, Rajiv Jain, Peizhao Li, Jason Kuen, Handong Zhao
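The pre-training pipeline above (mask features, concatenate embeddings, apply a non-linear mapping) can be sketched in a few lines; the masking probability and the choice of tanh as the non-linearity are illustrative assumptions, not the patented configuration:

```python
import math
import random

def mask_features(features, mask_prob=0.3, rng=None):
    """Masking function: zero out each feature with probability mask_prob."""
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < mask_prob else f for f in features]

def fuse(visual, textual):
    """Concatenate visual and textual feature embeddings, then apply a
    simple non-linear mapping (tanh, a hypothetical choice) to produce
    the output features fed to the linear classifier."""
    concatenated = visual + textual
    return [math.tanh(f) for f in concatenated]

visual = mask_features([0.5, -1.2, 2.0])
textual = mask_features([0.1, 0.9])
out = fuse(visual, textual)
print(len(out))  # 5
```

In the real model the masked features would come from an encoder over document images and text; here plain lists stand in for those embeddings.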
  • Publication number: 20230401827
    Abstract: Systems and methods for image segmentation are described. Embodiments of the present disclosure receive a training image and a caption for the training image, wherein the caption includes text describing an object in the training image; generate a pseudo mask for the object using a teacher network based on the text describing the object; generate a mask for the object using a student network; compute noise information for the training image using a noise estimation network; and update parameters of the student network based on the mask, the pseudo mask, and the noise information.
    Type: Application
    Filed: June 9, 2022
    Publication date: December 14, 2023
    Inventors: Jason Wen Yong Kuen, Dat Ba Huynh, Zhe Lin, Jiuxiang Gu
  • Publication number: 20230376687
    Abstract: Embodiments are provided for facilitating multimodal extraction across multiple granularities. In one implementation, a set of features of a document for a plurality of granularities of the document is obtained. Via a machine learning model, the set of features of the document are modified to generate a set of modified features using a set of self-attention values to determine relationships within a first type of feature and a set of cross-attention values to determine relationships between the first type of feature and a second type of feature. Thereafter, the set of modified features are provided to a second machine learning model to perform a classification task.
    Type: Application
    Filed: May 17, 2022
    Publication date: November 23, 2023
    Inventors: Vlad Ion Morariu, Tong Sun, Nikolaos Barmpalios, Zilong Wang, Jiuxiang Gu, Ani Nenkova, Christopher Tensmeyer
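Self-attention within one feature type and cross-attention between two feature types both reduce to scaled dot-product attention with different query/key/value sources. A minimal sketch with hypothetical toy features:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention. With queries == keys == values this is
    self-attention (relationships within one feature type); with queries
    from one modality and keys/values from another it is cross-attention
    (relationships between feature types)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

text_feats = [[1.0, 0.0], [0.0, 1.0]]
layout_feats = [[0.5, 0.5]]
self_att = attention(text_feats, text_feats, text_feats)     # within one feature type
cross_att = attention(layout_feats, text_feats, text_feats)  # across feature types
print(len(self_att), len(cross_att))  # 2 1
```

The modified features produced this way are what the second model consumes for the downstream classification task.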
  • Publication number: 20230376828
    Abstract: Systems and methods for product retrieval are described. One or more aspects of the systems and methods include receiving a query that includes a text description of a product associated with a brand; identifying the product based on the query by comparing the text description to a product embedding of the product, wherein the product embedding is based on a brand embedding of the brand; and displaying product information for the product in response to the query, wherein the product information includes the brand.
    Type: Application
    Filed: May 19, 2022
    Publication date: November 23, 2023
    Inventors: Handong Zhao, Haoyu Ma, Zhe Lin, Ajinkya Gorakhnath Kale, Tong Yu, Jiuxiang Gu, Sunav Choudhary, Venkata Naveen Kumar Yadav Marri
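Basing the product embedding on a brand embedding and then retrieving by similarity might look like the following sketch; the blending scheme, all vectors, and the product names are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def product_embedding(item_vec, brand_vec, alpha=0.5):
    """Hypothetical composition: condition the product embedding on the
    brand embedding by blending the two vectors."""
    return [(1 - alpha) * i + alpha * b for i, b in zip(item_vec, brand_vec)]

catalog = {
    "acme running shoe": product_embedding([1.0, 0.0], [0.2, 0.8]),
    "zenith laptop": product_embedding([0.0, 1.0], [0.9, 0.1]),
}

query_vec = [0.7, 0.5]  # stand-in for a text encoder's output for the query
best = max(catalog, key=lambda name: cosine(query_vec, catalog[name]))
print(best)  # acme running shoe
```

Folding the brand embedding into the product embedding lets brand-heavy queries match the right items even when the item text alone is ambiguous.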
  • Publication number: 20230368003
    Abstract: The technology described herein is directed to an adaptive sparse attention pattern that is learned during fine-tuning and deployed in a machine-learning model. In aspects, a row or a column in an attention matrix with an importance score for a task that is above a threshold importance score is identified. The important row or column is included in an adaptive attention pattern used with a machine-learning model having a self-attention operation. In response to an input, a task-specific inference is generated for the input using the machine-learning model with the adaptive attention pattern.
    Type: Application
    Filed: May 10, 2022
    Publication date: November 16, 2023
    Inventors: Jiuxiang Gu, Zihan Wang, Jason Wen Yong Kuen, Handong Zhao, Vlad Ion Morariu, Ruiyi Zhang, Ani Nenkova, Tong Sun
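The adaptive pattern, keeping any attention-matrix row or column whose importance score exceeds a threshold, can be sketched as a boolean mask (the scores and threshold below are illustrative):

```python
def adaptive_pattern(importance, threshold):
    """Build a sparse attention pattern: a position (i, j) is kept if row i
    or column j has an importance score above the threshold."""
    keep_rows = [i for i, s in enumerate(importance["rows"]) if s > threshold]
    keep_cols = [j for j, s in enumerate(importance["cols"]) if s > threshold]
    n = len(importance["rows"])
    return [[(i in keep_rows or j in keep_cols) for j in range(n)]
            for i in range(n)]

# Hypothetical task-specific importance scores learned during fine-tuning.
importance = {"rows": [0.9, 0.1, 0.2], "cols": [0.05, 0.8, 0.1]}
mask = adaptive_pattern(importance, threshold=0.5)
for row in mask:
    print(row)
```

At inference time the self-attention operation only computes the unmasked entries, which is where the sparsity saving comes from.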
  • Patent number: 11816243
    Abstract: Systems, methods, and non-transitory computer-readable media can generate a natural language model that provides user-entity differential privacy. For example, in one or more embodiments, a system samples sensitive data points from a natural language dataset. Using the sampled sensitive data points, the system determines gradient values corresponding to the natural language model. Further, the system generates noise for the natural language model. The system generates parameters for the natural language model using the gradient values and the noise, facilitating simultaneous protection of the users and sensitive entities associated with the natural language dataset. In some implementations, the system generates the natural language model through an iterative process (e.g., by iteratively modifying the parameters).
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: November 14, 2023
    Assignee: Adobe Inc.
    Inventors: Thi Kim Phung Lai, Tong Sun, Rajiv Jain, Nikolaos Barmpalios, Jiuxiang Gu, Franck Dernoncourt
  • Publication number: 20230252774
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive a training image and a caption for the training image, wherein the caption includes text describing an object in the training image; generate a pseudo mask for the object using a teacher network based on the text describing the object; generate a mask for the object using a student network; and update parameters of the student network based on the mask and the pseudo mask.
    Type: Application
    Filed: February 9, 2022
    Publication date: August 10, 2023
    Inventors: Jason Wen Yong Kuen, Dat Ba Huynh, Zhe Lin, Jiuxiang Gu
  • Publication number: 20230230406
    Abstract: Methods and systems are provided for facilitating identification of fillable regions and/or data associated therewith. In embodiments, a candidate fillable region indicating a region in a form that is a candidate for being fillable is obtained. Textual context indicating text from the form and spatial context indicating positions of the text within the form are also obtained. Fillable region data associated with the candidate fillable region is generated, via a machine learning model, using the candidate fillable region, the textual context, and the spatial context. Thereafter, a fillable form is generated using the fillable region data, the fillable form having one or more fillable regions for accepting input.
    Type: Application
    Filed: January 18, 2022
    Publication date: July 20, 2023
    Inventors: Ashutosh Mehra, Christopher Alan Tensmeyer, Vlad Ion Morariu, Jiuxiang Gu
  • Publication number: 20230230198
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a neural network framework for interactive multi-round image generation from natural language inputs. Specifically, the disclosed systems provide an intelligent framework (i.e., a text-based interactive image generation model) that facilitates a multi-round image generation and editing workflow that comports with arbitrary input text and synchronous interaction. In particular embodiments, the disclosed systems utilize natural language feedback for conditioning a generative neural network that performs text-to-image generation and text-guided image modification. For example, the disclosed systems utilize a trained model to inject textual features from natural language feedback into a unified joint embedding space for generating text-informed style vectors. In turn, the disclosed systems can generate an image with semantically meaningful features that map to the natural language feedback.
    Type: Application
    Filed: January 14, 2022
    Publication date: July 20, 2023
    Inventors: Ruiyi Zhang, Yufan Zhou, Christopher Tensmeyer, Jiuxiang Gu, Tong Yu, Tong Sun
  • Publication number: 20230153943
    Abstract: Systems and methods for image processing are described. The systems and methods include receiving a low-resolution image; generating a feature map based on the low-resolution image using an encoder of a student network, wherein the encoder of the student network is trained based on comparing a predicted feature map from the encoder of the student network and a fused feature map from a teacher network, and wherein the fused feature map represents a combination of a first feature map from a high-resolution encoder of the teacher network and a second feature map from a low-resolution encoder of the teacher network; and decoding the feature map to obtain prediction information for the low-resolution image.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Jason Kuen, Jiuxiang Gu, Zhe Lin
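The fused distillation target, combining the teacher's high- and low-resolution feature maps, can be sketched with flattened feature maps; the weighted-sum fusion and MSE objective are plausible stand-ins, not the patented formulation:

```python
def fuse_feature_maps(high_res_map, low_res_map, weight=0.5):
    """Combine the teacher's high-resolution and low-resolution feature
    maps into a fused target (a simple weighted sum, one plausible fusion)."""
    return [weight * h + (1 - weight) * l
            for h, l in zip(high_res_map, low_res_map)]

def distillation_loss(student_map, fused_map):
    """Mean squared error between the student encoder's predicted feature
    map and the fused teacher target."""
    return sum((s - f) ** 2 for s, f in zip(student_map, fused_map)) / len(fused_map)

fused = fuse_feature_maps([1.0, 2.0], [0.0, 1.0])
print(fused)                                  # [0.5, 1.5]
print(distillation_loss([0.5, 1.5], fused))   # 0.0
```

Training the student against the fused target lets a low-resolution-only encoder inherit cues the teacher could only see at high resolution.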
  • Publication number: 20230153531
    Abstract: Systems and methods for performing Document Visual Question Answering tasks are described. A document and query are received. The document encodes document tokens and the query encodes query tokens. The document is segmented into nested document sections, lines, and tokens. A nested structure of tokens is generated based on the segmented document. A feature vector for each token is generated. A graph structure is generated based on the nested structure of tokens. Each graph node corresponds to the query, a document section, a line, or a token. The node connections correspond to the nested structure. Each node is associated with the feature vector for the corresponding object. A graph attention network is employed to generate another embedding for each node. These embeddings are employed to identify a portion of the document that includes a response to the query. An indication of the identified portion of the document is provided.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 18, 2023
    Inventors: Shijie Geng, Christopher Tensmeyer, Curtis Michael Wigington, Jiuxiang Gu
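The graph construction over the nested structure (query connected to sections, sections to lines, lines to tokens) can be sketched directly; the node and edge naming here is hypothetical:

```python
def build_graph(query, sections):
    """Build the node list and edge list for a nested document structure:
    the query node connects to every section; each section connects to its
    lines; each line connects to its tokens."""
    nodes = [query]
    edges = []
    for section, lines in sections.items():
        nodes.append(section)
        edges.append((query, section))
        for line, tokens in lines.items():
            nodes.append(line)
            edges.append((section, line))
            for token in tokens:
                nodes.append(token)
                edges.append((line, token))
    return nodes, edges

sections = {"sec1": {"line1": ["tok1", "tok2"]}}
nodes, edges = build_graph("query", sections)
print(edges)
# [('query', 'sec1'), ('sec1', 'line1'), ('line1', 'tok1'), ('line1', 'tok2')]
```

Each node would then carry its feature vector, and a graph attention network would propagate information along these edges to produce the final embeddings.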
  • Publication number: 20230154221
    Abstract: The technology described includes methods for pretraining a document encoder model based on multimodal self cross-attention. One method includes receiving image data that encodes a set of pretraining documents. A set of sentences is extracted from the image data. A bounding box for each sentence is generated. For each sentence, a set of predicted features is generated by using an encoder machine-learning model. The encoder model performs cross-attention between a set of masked-textual features for the sentence and a set of masked-visual features for the sentence. The set of masked-textual features is based on a masking function and the sentence. The set of masked-visual features is based on the masking function and the corresponding bounding box. A document-encoder model is pretrained based on the set of predicted features for each sentence and pretraining tasks. The pretraining tasks include masked sentence modeling, visual contrastive learning, or visual-language alignment.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Jiuxiang Gu, Ani Nenkova, Nikolaos Barmpalios, Vlad Ion Morariu, Tong Sun, Rajiv Bhawanji Jain, Jason Wen Yong Kuen, Handong Zhao
  • Patent number: 11610393
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently learning parameters of a distilled neural network from parameters of a source neural network utilizing multiple augmentation strategies. For example, the disclosed systems can generate lightly augmented digital images and heavily augmented digital images. The disclosed systems can further learn parameters for a source neural network from the lightly augmented digital images. Moreover, the disclosed systems can learn parameters for a distilled neural network from the parameters learned for the source neural network. For example, the disclosed systems can compare classifications of heavily augmented digital images generated by the source neural network and the distilled neural network to transfer learned parameters from the source neural network to the distilled neural network via a knowledge distillation loss function.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Jason Wen Yong Kuen, Zhe Lin, Jiuxiang Gu
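A knowledge distillation loss that transfers the source network's behavior on heavily augmented images to the distilled network is commonly a KL divergence between temperature-softened class distributions; this sketch assumes that standard form rather than the exact claimed loss:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Knowledge distillation loss: KL divergence between the source
    (teacher) and distilled (student) networks' softened class
    distributions for the same heavily augmented image."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, 0.1]   # source network on a heavily augmented image
student = [1.8, 0.6, 0.2]   # distilled network on the same image
print(round(kd_loss(teacher, student), 6))
```

The temperature flattens both distributions so the student also learns the teacher's relative preferences among wrong classes, not just its top prediction.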
  • Publication number: 20230059367
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a natural language model that provides user-entity differential privacy. For example, in one or more embodiments, the disclosed systems sample sensitive data points from a natural language dataset. Using the sampled sensitive data points, the disclosed systems determine gradient values corresponding to the natural language model. Further, the disclosed systems generate noise for the natural language model. The disclosed systems generate parameters for the natural language model using the gradient values and the noise, facilitating simultaneous protection of the users and sensitive entities associated with the natural language dataset. In some implementations, the disclosed systems generate the natural language model through an iterative process (e.g., by iteratively modifying the parameters).
    Type: Application
    Filed: August 9, 2021
    Publication date: February 23, 2023
    Inventors: Thi Kim Phung Lai, Tong Sun, Rajiv Jain, Nikolaos Barmpalios, Jiuxiang Gu, Franck Dernoncourt
  • Publication number: 20220382975
    Abstract: One example method involves operations for a processing device that include receiving, by a machine learning model trained to generate a search result, a search query for a text input. The machine learning model is trained by receiving pre-training data that includes multiple documents and pre-training the model by generating, using an encoder, feature embeddings for each of the documents included in the pre-training data. The feature embeddings are generated by applying a masking function to visual and textual features in the documents. Training the machine learning model also includes generating, using the feature embeddings, output features for the documents by concatenating the feature embeddings and applying a non-linear mapping to the feature embeddings. Training the machine learning model further includes applying a linear classifier to the output features. Additionally, the operations include generating, for display, a search result using the machine learning model based on the text input.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Inventors: Jiuxiang Gu, Vlad Morariu, Varun Manjunatha, Tong Sun, Rajiv Jain, Peizhao Li, Jason Kuen, Handong Zhao