Patents by Inventor Walter Chang
Walter Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240160663
  Abstract: Methods, systems, and apparatuses are described herein for providing a user matching system that matches users based on interests represented in images. A user might upload a first image. Based on determining that the image does not contain human faces, the computing device may determine first keywords associated with the first image. The user might be presented a profile of a second user based on a comparison of the first keywords and second keywords associated with the profile of the second user. Those second keywords might also have been determined by processing second images of the profile of the second user. Each user's profile may be represented as a collage of interest images without containing images of the user, thereby preserving privacy and focusing users on non-superficial topics.
  Type: Application
  Filed: September 27, 2023
  Publication date: May 16, 2024
  Inventors: See Gwan Ho, Eric Walter Chang
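Purely as an illustration of the matching step this entry describes, the sketch below scores candidate profiles by keyword overlap, assuming keyword sets have already been extracted from each user's interest images; the users, keywords, and Jaccard scoring are hypothetical rather than taken from the application.

```python
# Hypothetical sketch of the keyword-overlap matching step described above.
# The image-to-keyword extractor is assumed to exist; all data is illustrative.

def jaccard(a, b):
    """Overlap between two keyword sets (0.0 - 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Keywords hypothetically extracted from each user's non-face interest images.
uploader_keywords = ["hiking", "mountains", "camping"]
candidate_profiles = {
    "user_2": ["hiking", "trail running", "camping"],
    "user_3": ["painting", "museums"],
}

# Rank candidate profiles by keyword overlap and surface the best match.
ranked = sorted(candidate_profiles.items(),
                key=lambda kv: jaccard(uploader_keywords, kv[1]),
                reverse=True)
print(ranked[0][0])  # best-matching profile to present
```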
- Patent number: 11967128
  Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes as input a query vector based on the object features, and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
  Type: Grant
  Filed: May 28, 2021
  Date of Patent: April 23, 2024
  Assignee: Adobe Inc.
  Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
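The attention step in this abstract maps naturally onto a short worked example. The numpy sketch below uses scaled dot-product attention with one key and one value vector per color; the dimensions, random weights, and color list are assumptions, not the patented model.

```python
# Minimal numpy sketch: a query derived from object features attends over
# per-color key/value vectors to produce an attention vector for the object.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # feature dimension (assumed)
colors = ["red", "green", "blue", "black"]

object_features = rng.normal(size=d)    # stand-in for detector output
query = object_features                 # query vector based on object features
keys = rng.normal(size=(len(colors), d))    # one key vector per color
values = rng.normal(size=(len(colors), d))  # one value vector per color

scores = keys @ query / np.sqrt(d)      # scaled dot-product attention scores
weights = np.exp(scores) / np.exp(scores).sum()
attention_vector = weights @ values     # attention vector for the object

# A (hypothetical) classification head would map this vector to per-color
# probabilities, e.g. sigmoid(W @ attention_vector + b).
print(attention_vector.shape)           # (8,)
```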
- Patent number: 11941508
  Abstract: The present disclosure describes systems and methods for dialog processing and information retrieval. Embodiments of the present disclosure provide a dialog system (e.g., a task-oriented dialog system) with adaptive recurrent hopping and dual context encoding to receive and understand a natural language query from a user, manage dialog based on natural language conversation, and generate natural language responses. For example, a memory network can employ a memory recurrent neural net layer and a decision meta network (e.g., a subnet) to determine an adaptive number of memory hops for obtaining readouts from a knowledge base. Further, in some embodiments, a memory network uses a dual context encoder to encode information from original context and canonical context using parallel encoding layers.
  Type: Grant
  Filed: February 26, 2021
  Date of Patent: March 26, 2024
  Assignee: Adobe Inc.
  Inventors: Quan Tran, Franck Dernoncourt, Walter Chang
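A rough sketch of adaptive memory hopping as described here, with a stand-in halting function playing the role of the decision meta network; all weights are random placeholders and the hop limit is arbitrary.

```python
# Illustrative sketch: per query, a small "decision" function chooses how many
# read-out hops to take over a memory of encoded knowledge-base entries.
import numpy as np

rng = np.random.default_rng(1)
d, n_slots, max_hops = 16, 32, 5
memory = rng.normal(size=(n_slots, d))          # encoded knowledge-base entries
w_halt = rng.normal(size=d)                      # decision subnet (stand-in)

def attend(q):
    scores = memory @ q / np.sqrt(d)
    p = np.exp(scores) / np.exp(scores).sum()
    return p @ memory                            # memory read-out vector

q = rng.normal(size=d)                           # encoded user query
for hop in range(max_hops):
    q = q + attend(q)                            # recurrent update with read-out
    halt_prob = 1 / (1 + np.exp(-(w_halt @ q)))  # decision meta-network output
    if halt_prob > 0.5:                          # adaptive number of hops
        break
print(f"stopped after {hop + 1} hops")
```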
- Patent number: 11893345
  Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word; generate word representation vectors for the words; generate a plurality of document structures, including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences; generate a relationship representation vector based on the document structures; and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
  Type: Grant
  Filed: April 6, 2021
  Date of Patent: February 6, 2024
  Assignee: Adobe Inc.
  Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
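To make the three document structures concrete, the toy sketch below represents each as an adjacency matrix, fuses them with an assumed weighted sum, and builds a relationship vector for a trigger/argument pair; the fusion weights and example sentence are illustrative only.

```python
# Hedged sketch of combining semantic, syntactic, and discourse structures into
# one relationship representation. Graph contents and weights are toy values.
import numpy as np

words = ["attack", "occurred", "in", "Paris"]
n = len(words)

semantic = np.eye(n)                 # similarity of word representation vectors
syntax = np.zeros((n, n))            # dependency edges, e.g. occurred -> attack
syntax[1, 0] = syntax[1, 3] = 1.0
discourse = np.ones((n, n)) / n      # same-sentence / discourse links (toy)

# One simple fusion: weighted sum of structures, then one round of
# neighbourhood averaging over (assumed) word representation vectors.
word_vecs = np.random.default_rng(2).normal(size=(n, 8))
fused = 0.4 * semantic + 0.4 * syntax + 0.2 * discourse
structure_aware = fused @ word_vecs

# Relationship representation for (trigger="occurred", argument="Paris"):
rel_vec = np.concatenate([structure_aware[1], structure_aware[3]])
print(rel_vec.shape)                 # feeds a classifier that predicts the relation
```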
- Patent number: 11755570
  Abstract: The present disclosure provides a memory-based neural network for question answering. Embodiments of the disclosure identify meta-evidence nodes in an embedding space, where the meta-evidence nodes represent salient features of a training set. Each element of the training set may include a question appended to a ground truth answer. The training set may also include questions with wrong answers that are indicated as such. In some examples, a neural Turing machine (NTM) reads a dataset and summarizes the dataset into a few meta-evidence nodes. A subsequent question may be appended to multiple candidate answers to form an input phrase, which may also be embedded in the embedding space. Then, corresponding weights may be identified for each of the meta-evidence nodes. The embedded input phrase and the weighted meta-evidence nodes may be used to identify the most appropriate answer.
  Type: Grant
  Filed: December 9, 2020
  Date of Patent: September 12, 2023
  Assignee: Adobe Inc.
  Inventors: Quan Tran, Walter Chang, Franck Dernoncourt
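A toy version of the scoring scheme sketched in this abstract: each candidate answer is appended to the question, embedded, and scored against weighted meta-evidence nodes. The encoder and node vectors are random stand-ins for learned representations.

```python
# Toy sketch of scoring candidate answers against learned meta-evidence nodes.
import numpy as np

rng = np.random.default_rng(3)
d, n_nodes = 12, 4
meta_evidence = rng.normal(size=(n_nodes, d))   # summary of the training set

def embed(text):                                 # stand-in for a real encoder
    rng_t = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng_t.normal(size=d)

def score(question, answer):
    phrase = embed(question + " " + answer)      # question appended to candidate
    weights = meta_evidence @ phrase             # weight per meta-evidence node
    weights = np.exp(weights) / np.exp(weights).sum()
    readout = weights @ meta_evidence            # weighted meta-evidence summary
    return float(readout @ phrase)               # higher = more appropriate answer

candidates = ["answer A", "answer B"]
best = max(candidates, key=lambda a: score("the question", a))
print(best)
```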
- Patent number: 11657802
  Abstract: The present disclosure relates to generating digital responses based on digital dialog states generated by a neural network having a dynamic memory network architecture. For example, in one or more embodiments, the disclosed system provides a digital dialog having one or more segments to a dialog state tracking neural network having a dynamic memory network architecture that includes a set of multiple memory slots. In some embodiments, the dialog state tracking neural network further includes update gates and reset gates used in modifying the values stored in the memory slots. For instance, the disclosed system can utilize cross-slot interaction update/reset gates to accurately generate a digital dialog state for each of the segments of digital dialog. Subsequently, the system generates a digital response for each segment of digital dialog based on the digital dialog state.
  Type: Grant
  Filed: December 28, 2020
  Date of Patent: May 23, 2023
  Assignee: Adobe Inc.
  Inventors: Seokhwan Kim, Walter Chang
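The slot-wise gated update can be sketched in a few lines. The version below uses sigmoid update/reset gates and reduces the cross-slot interaction to a mean over all slots; weight matrices and dimensions are assumptions, not the patented architecture.

```python
# Rough sketch of per-slot memory updates with update/reset gates, in the
# spirit of a dynamic-memory dialog state tracker. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)
d, n_slots = 8, 3                         # e.g. slots for cuisine / area / price
memory = rng.normal(size=(n_slots, d))
W_u, W_r = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def step(memory, segment_vec):
    cross_slot = memory.mean(axis=0)                    # cross-slot interaction signal
    new_memory = np.empty_like(memory)
    for i, slot in enumerate(memory):
        u = sigmoid(W_u @ (segment_vec + slot) + cross_slot)  # update gate
        r = sigmoid(W_r @ (segment_vec + slot) + cross_slot)  # reset gate
        candidate = np.tanh(segment_vec + r * slot)
        new_memory[i] = (1 - u) * slot + u * candidate
    return new_memory

segment = rng.normal(size=d)              # encoded dialog segment
memory = step(memory, segment)            # updated dialog state per slot
print(memory.shape)
```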
- Patent number: 11630952
  Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can classify term sequences within a source text based on textual features analyzed by both an implicit-class-recognition model and an explicit-class-recognition model. For example, by applying machine-learning models for both implicit and explicit class recognition, the disclosed systems can determine a class corresponding to a particular term sequence within a source text and identify the particular term sequence reflecting the class. The dual-model architecture can equip the disclosed systems to apply (i) the implicit-class-recognition model to recognize implicit references to a class in source texts and (ii) the explicit-class-recognition model to recognize explicit references to the same class in source texts.
  Type: Grant
  Filed: July 22, 2019
  Date of Patent: April 18, 2023
  Assignee: Adobe Inc.
  Inventors: Sean MacAvaney, Franck Dernoncourt, Walter Chang, Seokhwan Kim, Doo Soon Kim, Chen Fang
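One possible, heavily simplified reading of the dual-model idea: an explicit matcher looks for literal class terms while a stubbed implicit scorer stands in for a trained classifier, and the stronger signal wins. Terms, labels, and the threshold are hypothetical.

```python
# Hypothetical illustration of combining explicit and implicit class recognition.
EXPLICIT_TERMS = {"refund": ["refund", "money back"]}

def explicit_score(text, label):
    # Literal mention of the class in the term sequence.
    return 1.0 if any(t in text.lower() for t in EXPLICIT_TERMS[label]) else 0.0

def implicit_score(text, label):
    # Stand-in for a trained classifier over contextual embeddings.
    return 0.8 if label == "refund" and "return" in text.lower() else 0.1

def classify(text, label="refund", threshold=0.5):
    score = max(explicit_score(text, label), implicit_score(text, label))
    return score >= threshold

print(classify("I want my money back"))        # explicit reference -> True
print(classify("Can I return this item?"))     # implicit reference -> True
```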
- Patent number: 11620457
  Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence; apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information; and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
  Type: Grant
  Filed: February 17, 2021
  Date of Patent: April 4, 2023
  Assignee: Adobe Inc.
  Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
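The entity constraint itself is easy to show as a mask over attention logits. In the numpy sketch below, positions may only attend to tokens tagged with the same coreference entity; the example tokens, entity ids, and random logits are illustrative.

```python
# Small numpy sketch of an entity constraint: an attention head's weights are
# masked so each position only attends to tokens tied to the same entity.
import numpy as np

tokens = ["Ada", "wrote", "the", "paper", ".", "She", "published", "it", "."]
entity = [1, 0, 0, 2, 0, 1, 0, 2, 0]            # 0 = no entity; 1, 2 = entity ids

n = len(tokens)
rng = np.random.default_rng(5)
raw_scores = rng.normal(size=(n, n))            # unconstrained attention logits

mask = np.full((n, n), -np.inf)
for i in range(n):
    for j in range(n):
        if entity[i] != 0 and entity[i] == entity[j]:
            mask[i, j] = 0.0                    # keep same-entity pairs only
constrained = raw_scores + mask                 # -inf elsewhere -> zero weight
weights = np.where(np.isfinite(constrained), np.exp(constrained), 0.0)
weights = weights / np.clip(weights.sum(axis=1, keepdims=True), 1e-9, None)
print(np.round(weights[0], 2))                  # "Ada" attends only to "Ada"/"She"
```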
- Patent number: 11594077
  Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images based on verbal and/or gesture input by utilizing a natural language processing neural network and one or more computer vision neural networks. The disclosed systems can receive verbal input together with gesture input. The disclosed systems can further utilize a natural language processing neural network to generate a verbal command based on verbal input. The disclosed systems can select a particular computer vision neural network based on the verbal input and/or the gesture input. The disclosed systems can apply the selected computer vision neural network to identify pixels within a digital image that correspond to an object indicated by the verbal input and/or gesture input. Utilizing the identified pixels, the disclosed systems can generate a modified digital image by performing one or more editing actions indicated by the verbal input and/or gesture input.
  Type: Grant
  Filed: September 18, 2020
  Date of Patent: February 28, 2023
  Assignee: Adobe Inc.
  Inventors: Trung Bui, Zhe Lin, Walter Chang, Nham Le, Franck Dernoncourt
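A minimal dispatch sketch, assuming the verbal command has already been parsed: a gesture point routes to a point-based segmentation model, otherwise a phrase-grounding model is chosen. The model names and routing rules are hypothetical stand-ins for the selection step described above.

```python
# Illustrative routing only: pick a vision model from a parsed verbal command
# plus an optional gesture point. Model names are stubs, not real components.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Command:
    action: str                                # e.g. "remove", "brighten"
    target: str                                # e.g. "the dog"
    point: Optional[Tuple[int, int]] = None    # gesture location, if any

def select_vision_model(cmd: Command) -> str:
    # A gesture point suggests point-based segmentation; otherwise ground the
    # verbal target phrase; fall back to a salient-object model.
    if cmd.point is not None:
        return "point_segmentation_model"
    if cmd.target:
        return "phrase_grounding_model"
    return "salient_object_model"

cmd = Command(action="remove", target="the dog", point=(120, 88))
print(select_vision_model(cmd))   # pixels from this model then drive the edit
```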
- Patent number: 11544456
  Abstract: Systems and methods for parsing natural language sentences using an artificial neural network (ANN) are described. Embodiments of the described systems and methods may generate a plurality of word representation matrices for an input sentence, wherein each of the word representation matrices is based on an input matrix of word vectors, a query vector, a matrix of key vectors, and a matrix of value vectors, and wherein a number of the word representation matrices is based on a number of syntactic categories; compress each of the plurality of word representation matrices to produce a plurality of compressed word representation matrices; concatenate the plurality of compressed word representation matrices to produce an output matrix of word vectors; and identify at least one word from the input sentence corresponding to a syntactic category based on the output matrix of word vectors.
  Type: Grant
  Filed: March 5, 2020
  Date of Patent: January 3, 2023
  Assignee: Adobe Inc.
  Inventors: Khalil Mrini, Walter Chang, Trung Bui, Quan Tran, Franck Dernoncourt
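A compact numpy sketch of the per-category representation idea: one attention "head" per syntactic category produces a word representation matrix, each matrix is compressed, and the results are concatenated. Sizes and random weights are arbitrary assumptions.

```python
# Numpy sketch: one word-representation matrix per syntactic category,
# each compressed and then concatenated into the output matrix.
import numpy as np

rng = np.random.default_rng(6)
n_words, d, n_categories, d_small = 6, 16, 4, 4

X = rng.normal(size=(n_words, d))                    # input word vectors
outputs = []
for _ in range(n_categories):                        # one "head" per category
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    q, k, v = X @ Wq, X @ Wk, X @ Wv
    scores = q @ k.T / np.sqrt(d)
    att = np.exp(scores - scores.max(axis=1, keepdims=True))   # stable softmax
    att = att / att.sum(axis=1, keepdims=True)
    rep = att @ v                                    # word representation matrix
    compress = rng.normal(size=(d, d_small))
    outputs.append(rep @ compress)                   # compressed matrix

output_matrix = np.concatenate(outputs, axis=1)      # (n_words, n_categories*d_small)
print(output_matrix.shape)
```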
- Publication number: 20220414338
  Abstract: Systems and methods for a text summarization system are described. In one example, a text summarization system receives an input utterance and determines whether the utterance should be included in a summary of the text. The text summarization system includes an embedding network, a convolution network, an encoding component, and a summary component. The embedding network generates a semantic embedding of an utterance. The convolution network generates a plurality of feature vectors based on the semantic embedding. The encoding component identifies a plurality of latent codes respectively corresponding to the plurality of feature vectors. The summary component identifies a prominent code among the latent codes and selects the utterance as a summary utterance based on the prominent code.
  Type: Application
  Filed: June 29, 2021
  Publication date: December 29, 2022
  Inventors: Sangwoo Cho, Franck Dernoncourt, Timothy Jeewun Ganter, Trung Huu Bui, Nedim Lipka, Varun Manjunatha, Walter Chang, Hailin Jin, Jonathan Brandt
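The latent-code step resembles vector quantization, so the sketch below assigns each feature vector to its nearest codebook entry and picks the most frequent code as the prominent one. The codebook, feature vectors, and the keep/drop rule are stand-ins.

```python
# Toy sketch: quantize feature vectors to nearest codebook entries, then treat
# the most frequent ("prominent") code as the summary-inclusion signal.
import numpy as np

rng = np.random.default_rng(7)
codebook = rng.normal(size=(16, 8))                  # learned latent codes (assumed)
feature_vectors = rng.normal(size=(5, 8))            # from the convolution network

dists = ((feature_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = dists.argmin(axis=1)                         # latent code per feature vector

values, counts = np.unique(codes, return_counts=True)
prominent_code = int(values[counts.argmax()])        # most frequent latent code
SUMMARY_CODES = {prominent_code}                     # assumed mapping to "keep"
print(prominent_code, prominent_code in SUMMARY_CODES)
```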
- Publication number: 20220383031
  Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes as input a query vector based on the object features, and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
  Type: Application
  Filed: May 28, 2021
  Publication date: December 1, 2022
  Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
- Publication number: 20220318505
  Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word; generate word representation vectors for the words; generate a plurality of document structures, including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences; generate a relationship representation vector based on the document structures; and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
  Type: Application
  Filed: April 6, 2021
  Publication date: October 6, 2022
  Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
- Publication number: 20220277186
  Abstract: The present disclosure describes systems and methods for dialog processing and information retrieval. Embodiments of the present disclosure provide a dialog system (e.g., a task-oriented dialog system) with adaptive recurrent hopping and dual context encoding to receive and understand a natural language query from a user, manage dialog based on natural language conversation, and generate natural language responses. For example, a memory network can employ a memory recurrent neural net layer and a decision meta network (e.g., a subnet) to determine an adaptive number of memory hops for obtaining readouts from a knowledge base. Further, in some embodiments, a memory network uses a dual context encoder to encode information from original context and canonical context using parallel encoding layers.
  Type: Application
  Filed: February 26, 2021
  Publication date: September 1, 2022
  Inventors: Quan Tran, Franck Dernoncourt, Walter Chang
- Publication number: 20220261555
  Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence; apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information; and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
  Type: Application
  Filed: February 17, 2021
  Publication date: August 18, 2022
  Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
- Publication number: 20220179848
  Abstract: The present disclosure provides a memory-based neural network for question answering. Embodiments of the disclosure identify meta-evidence nodes in an embedding space, where the meta-evidence nodes represent salient features of a training set. Each element of the training set may include a question appended to a ground truth answer. The training set may also include questions with wrong answers that are indicated as such. In some examples, a neural Turing machine (NTM) reads a dataset and summarizes the dataset into a few meta-evidence nodes. A subsequent question may be appended to multiple candidate answers to form an input phrase, which may also be embedded in the embedding space. Then, corresponding weights may be identified for each of the meta-evidence nodes. The embedded input phrase and the weighted meta-evidence nodes may be used to identify the most appropriate answer.
  Type: Application
  Filed: December 9, 2020
  Publication date: June 9, 2022
  Inventors: Quan Tran, Walter Chang, Franck Dernoncourt
- Publication number: 20220138185
  Abstract: Systems and methods for natural language processing are described. Embodiments are configured to receive a structured representation of a search query, wherein the structured representation comprises a plurality of nodes and at least one edge connecting two of the nodes; receive a modification expression for the search query, wherein the modification expression comprises a natural language expression; generate a modified structured representation based on the structured representation and the modification expression using a neural network configured to combine structured representation features and natural language expression features; and perform a search based on the modified structured representation.
  Type: Application
  Filed: November 3, 2020
  Publication date: May 5, 2022
  Inventors: Quan Tran, Zhe Lin, Xuanli He, Walter Chang, Trung Bui, Franck Dernoncourt
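Only the data shapes are sketched below: a query graph of nodes and edges, plus a toy rule-based stand-in for the neural model that applies a "replace X with Y" modification expression. The dataclass fields and the pattern handled are assumptions.

```python
# Hedged sketch of modifying a structured query representation from a
# natural-language instruction. The real system uses a neural combiner;
# this stub only shows the shapes involved.
from dataclasses import dataclass, field

@dataclass
class QueryGraph:
    nodes: list = field(default_factory=list)          # e.g. ["dog", "beach"]
    edges: list = field(default_factory=list)          # e.g. [("dog", "on", "beach")]

def apply_modification(graph: QueryGraph, expression: str) -> QueryGraph:
    # Stand-in for the neural model: handle one toy pattern, "replace X with Y".
    words = expression.lower().split()
    if "replace" in words and "with" in words:
        old, new = words[words.index("replace") + 1], words[words.index("with") + 1]
        graph.nodes = [new if n == old else n for n in graph.nodes]
        graph.edges = [tuple(new if x == old else x for x in e) for e in graph.edges]
    return graph

g = QueryGraph(nodes=["dog", "beach"], edges=[("dog", "on", "beach")])
print(apply_modification(g, "replace dog with cat"))    # search runs on the result
```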
- Publication number: 20220050967
  Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that extract a definition for a term from a source document by utilizing a single machine-learning framework to classify a word sequence from the source document as including a term definition and to label words from the word sequence. To illustrate, the disclosed system can receive a source document including a word sequence arranged in one or more sentences. The disclosed systems can utilize a machine-learning model to classify the word sequence as comprising a definition for a term and generate labels for the words from the word sequence corresponding to the term and the definition. Based on classifying the word sequence and the generated labels, the disclosed system can extract the definition for the term from the source document.
  Type: Application
  Filed: August 11, 2020
  Publication date: February 17, 2022
  Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Yiming Yang, Lidan Wang, Rajiv Jain, Vlad Morariu, Walter Chang
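A stubbed illustration of the joint output this framework produces: a sentence-level decision plus BIO-style word labels for the term and its definition. The label sequence here is hard-coded where a trained model would predict it.

```python
# Illustrative output format only: one sentence-level decision (does this word
# sequence define a term?) plus per-word labels marking the term and definition.
def extract_definition(words):
    # Stand-in for the shared machine-learning framework; a trained model would
    # emit both the sentence label and the word labels jointly.
    labels = ["B-TERM", "O", "B-DEF", "I-DEF", "I-DEF", "I-DEF"]
    contains_definition = "B-DEF" in labels
    term = [w for w, l in zip(words, labels) if l.endswith("TERM")]
    definition = [w for w, l in zip(words, labels) if l.endswith("DEF")]
    return contains_definition, " ".join(term), " ".join(definition)

sentence = "Entropy is a measure of uncertainty".split()
print(extract_definition(sentence))
# (True, 'Entropy', 'a measure of uncertainty')
```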
- Patent number: 11232255
  Abstract: Systems, methods, and non-transitory computer-readable media are disclosed that collect and analyze annotation performance data to generate digital annotations for evaluating and training automatic electronic document annotation models. In particular, in one or more embodiments, the disclosed systems provide electronic documents to annotators based on annotator topic preferences. The disclosed systems then identify digital annotations and annotation performance data such as a time period spent by an annotator in generating digital annotations and annotator responses to digital annotation questions. Furthermore, in one or more embodiments, the disclosed systems utilize the identified digital annotations and the annotation performance data to generate a final set of reliable digital annotations. Additionally, in one or more embodiments, the disclosed systems provide the final set of digital annotations for utilization in training a machine learning model to generate annotations for electronic documents.
  Type: Grant
  Filed: June 13, 2018
  Date of Patent: January 25, 2022
  Assignee: Adobe Inc.
  Inventors: Franck Dernoncourt, Walter Chang, Trung Bui, Sean Fitzgerald, Sasha Spala, Kishore Aradhya, Carl Dockhorn
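One plausible reading of the reliability filter, sketched with assumed field names and thresholds: annotations are kept only if the recorded time spent and the answers to annotation questions clear simple cut-offs.

```python
# Small sketch of filtering annotations by their performance data.
# Field names and thresholds are assumptions, not values from the patent.
annotations = [
    {"annotator": "a1", "label": "positive", "seconds_spent": 45, "checks_passed": 3},
    {"annotator": "a2", "label": "negative", "seconds_spent": 3,  "checks_passed": 1},
    {"annotator": "a3", "label": "positive", "seconds_spent": 60, "checks_passed": 3},
]

MIN_SECONDS, MIN_CHECKS = 10, 2
reliable = [a for a in annotations
            if a["seconds_spent"] >= MIN_SECONDS and a["checks_passed"] >= MIN_CHECKS]

# The surviving annotations form the final set used to train the annotation model.
print([a["annotator"] for a in reliable])   # ['a1', 'a3']
```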
- Patent number: 11080295
  Abstract: Techniques for organizing knowledge about a dataset storing data from or about multiple sources may be provided. For example, the data can be accessed from the multiple sources and categorized based on the data type. For each data type, a triple extraction technique specific to that data type may be invoked. One set of techniques can allow the extraction of triples from the data based on natural language-based rules. Another set of techniques can allow a similar extraction based on logical or structural-based rules. A triple may store a relationship between elements of the data. The extracted triples can be stored with corresponding identifiers in a list. Further, dictionaries storing associations between elements of the data and the triples can be updated. The list and the dictionaries can be used to return triples in response to a query that specifies one or more elements.
  Type: Grant
  Filed: November 11, 2014
  Date of Patent: August 3, 2021
  Assignee: Adobe Inc.
  Inventors: Walter Chang, Nicholas Digiuseppe
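A small sketch of the dispatch-and-index idea: each data type has its own extractor, triples get identifiers in a list, and dictionaries map elements back to the triples that mention them. The extraction rules and sample data are toy stand-ins.

```python
# Hedged sketch: per-data-type triple extractors, a triple list with ids, and
# element-to-triple dictionaries for answering element queries.
from collections import defaultdict

def extract_from_text(doc):                 # natural-language rule (toy: "X is Y")
    subj, _, obj = doc.partition(" is ")
    return [(subj.strip(), "is", obj.strip().rstrip("."))] if obj else []

def extract_from_record(rec):               # structural rule for key/value records
    return [(rec["id"], key, value) for key, value in rec.items() if key != "id"]

EXTRACTORS = {"text": extract_from_text, "record": extract_from_record}

sources = [("text", "Acrobat is a PDF editor."),
           ("record", {"id": "doc42", "author": "Chang", "year": "2014"})]

triples, index = [], defaultdict(list)
for data_type, payload in sources:
    for triple in EXTRACTORS[data_type](payload):
        triple_id = len(triples)
        triples.append((triple_id, triple))
        for element in triple:              # dictionary: element -> triple ids
            index[element].append(triple_id)

# Query by element: return every triple that mentions "Chang".
print([t for tid, t in triples if tid in index["Chang"]])
```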