Patents by Inventor Walter Chang

Walter Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967128
    Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes as input a query vector based on the object features and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: April 23, 2024
    Assignee: Adobe Inc.
    Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
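
The attention step described in the abstract of patent 11967128 above can be illustrated with a minimal sketch. Everything here is assumed for illustration: the dimensions, random weight matrices, and the single-layer readout stand in for the trained object detection, attention, and color classification networks.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def color_attention(object_features, color_keys, color_values, w_query, w_out):
    """Attend over per-color key/value vectors with a query built from object features."""
    query = object_features @ w_query                  # project object features to a query
    scores = color_keys @ query / np.sqrt(query.size)  # one score per candidate color
    weights = softmax(scores)                          # attention over the color set
    attention_vector = weights @ color_values          # weighted sum of value vectors
    return sigmoid(attention_vector @ w_out)           # per-color probabilities

# Toy dimensions: 8-dim object features, 11 candidate colors, 4-dim keys and values.
rng = np.random.default_rng(0)
obj = rng.normal(size=8)
probs = color_attention(
    obj,
    color_keys=rng.normal(size=(11, 4)),
    color_values=rng.normal(size=(11, 4)),
    w_query=rng.normal(size=(8, 4)),
    w_out=rng.normal(size=(4, 11)),
)
print(probs.round(2))  # probability that the object includes each color
```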
  • Patent number: 11941508
    Abstract: The present disclosure describes systems and methods for dialog processing and information retrieval. Embodiments of the present disclosure provide a dialog system (e.g., a task-oriented dialog system) with adaptive recurrent hopping and dual context encoding to receive and understand a natural language query from a user, manage dialog based on natural language conversation, and generate natural language responses. For example, a memory network can employ a memory recurrent neural net layer and a decision meta network (e.g., a subnet) to determine an adaptive number of memory hops for obtaining readouts from a knowledge base. Further, in some embodiments, a memory network uses a dual context encoder to encode information from original context and canonical context using parallel encoding layers.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: March 26, 2024
    Assignee: Adobe Inc.
    Inventors: Quan Tran, Franck Dernoncourt, Walter Chang
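
A minimal sketch of the adaptive recurrent hopping idea in patent 11941508 above, with all names and shapes assumed: a small decision subnet inspects the running state after each memory readout and decides whether to hop again.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_hops(query, memory_keys, memory_values, halt_weights, max_hops=5):
    """Repeated memory readouts; a tiny decision net chooses when to stop hopping."""
    state = query
    for hop in range(max_hops):
        weights = softmax(memory_keys @ state)        # attention over knowledge-base slots
        readout = weights @ memory_values             # one readout per hop
        state = state + readout                       # fold the readout into the query state
        p_halt = 1.0 / (1.0 + np.exp(-(halt_weights @ state)))  # decision subnet (assumed form)
        if p_halt > 0.5:
            break
    return state, hop + 1

rng = np.random.default_rng(1)
d = 6
state, hops = adaptive_hops(
    query=rng.normal(size=d),
    memory_keys=rng.normal(size=(10, d)),
    memory_values=rng.normal(size=(10, d)),
    halt_weights=rng.normal(size=d),
)
print(f"stopped after {hops} hop(s)")
```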
  • Patent number: 11893345
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
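
The abstract of patent 11893345 above describes fusing semantic, syntax, and discourse structures before predicting a trigger/argument relationship. The sketch below is illustrative only; the adjacency matrices, the single aggregation step, and the readout are stand-ins for the trained networks.

```python
import numpy as np

def relation_representation(word_vecs, semantic_adj, syntax_adj, discourse_adj,
                            trigger_idx, argument_idx):
    """Fuse three document structures and read out a trigger/argument pair representation."""
    adj = (semantic_adj + syntax_adj + discourse_adj) / 3.0   # combined document structure
    adj = adj / adj.sum(axis=1, keepdims=True)                # row-normalize
    context = adj @ word_vecs                                 # one graph aggregation step
    return np.concatenate([context[trigger_idx], context[argument_idx]])

rng = np.random.default_rng(2)
n, d = 12, 8                                     # 12 words, 8-dim representation vectors
vecs = rng.normal(size=(n, d))
sem = np.abs(vecs @ vecs.T)                      # semantic structure from vector similarity
syn = rng.integers(0, 2, (n, n)).astype(float)   # placeholder dependency edges
disc = np.ones((n, n))                           # placeholder discourse (same-document) links
rep = relation_representation(vecs, sem, syn, disc, trigger_idx=3, argument_idx=9)
print(rep.shape)  # (16,) -> fed to a classifier that predicts the relationship
```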
  • Patent number: 11755570
    Abstract: The present disclosure provides a memory-based neural network for question answering. Embodiments of the disclosure identify meta-evidence nodes in an embedding space, where the meta-evidence nodes represent salient features of a training set. Each element of the training set may include a question appended to a ground truth answer. The training set may also include questions with wrong answers that are indicated as such. In some examples, a neural Turing machine (NTM) reads a dataset and summarizes the dataset into a few meta-evidence nodes. A subsequent question may be appended to multiple candidate answers to form an input phrase, which may also be embedded in the embedding space. Then, corresponding weights may be identified for each of the meta-evidence nodes. The embedded input phrase and the weighted meta-evidence nodes may be used to identify the most appropriate answer.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Quan Tran, Walter Chang, Franck Dernoncourt
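
An illustrative sketch of the meta-evidence readout in patent 11755570 above, under assumed shapes: each question-plus-candidate phrase weights a handful of meta-evidence nodes, and the candidate that best agrees with the weighted evidence wins. The NTM that produces the nodes is not modeled here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_candidates(question_vec, candidate_vecs, meta_evidence):
    """Score each candidate answer against weighted meta-evidence nodes."""
    scores = []
    for cand in candidate_vecs:
        phrase = question_vec + cand                  # embedded question+answer input phrase
        weights = softmax(meta_evidence @ phrase)     # weight for each meta-evidence node
        summary = weights @ meta_evidence             # weighted evidence readout
        scores.append(float(phrase @ summary))        # agreement with the evidence
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(3)
d = 16
best, scores = score_candidates(
    question_vec=rng.normal(size=d),
    candidate_vecs=rng.normal(size=(4, d)),   # four candidate answers
    meta_evidence=rng.normal(size=(5, d)),    # a few nodes summarizing the training set
)
print("best candidate:", best)
```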
  • Patent number: 11657802
    Abstract: The present disclosure relates to generating digital responses based on digital dialog states generated by a neural network having a dynamic memory network architecture. For example, in one or more embodiments, the disclosed system provides a digital dialog having one or more segments to a dialog state tracking neural network having a dynamic memory network architecture that includes a set of multiple memory slots. In some embodiments, the dialog state tracking neural network further includes update gates and reset gates used in modifying the values stored in the memory slots. For instance, the disclosed system can utilize cross-slot interaction update/reset gates to accurately generate a digital dialog state for each of the segments of digital dialog. Subsequently, the system generates a digital response for each segment of digital dialog based on the digital dialog state.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: May 23, 2023
    Assignee: Adobe Inc.
    Inventors: Seokhwan Kim, Walter Chang
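
A minimal sketch of the slot-based dialog state update in patent 11657802 above. The gating form and the mean-pooled cross-slot summary are assumptions used to make the data flow concrete; the patented network learns these interactions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_slots(slots, segment_vec, w_update, w_reset, w_cand):
    """One dialog segment updates every memory slot with gated, cross-slot-aware writes."""
    cross = slots.mean(axis=0)                                  # simple cross-slot summary
    new_slots = np.empty_like(slots)
    for i, slot in enumerate(slots):
        x = np.concatenate([slot, segment_vec, cross])
        update = sigmoid(w_update @ x)                          # how much to overwrite
        reset = sigmoid(w_reset @ x)                            # how much old content to keep
        candidate = np.tanh(w_cand @ np.concatenate([reset * slot, segment_vec, cross]))
        new_slots[i] = (1 - update) * slot + update * candidate
    return new_slots

rng = np.random.default_rng(4)
n_slots, d = 4, 6
slots = rng.normal(size=(n_slots, d))
w = lambda: rng.normal(size=(d, 3 * d)) * 0.1    # assumed random gate parameters
slots = update_slots(slots, rng.normal(size=d), w(), w(), w())
print(slots.shape)  # updated dialog state, one vector per memory slot
```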
  • Patent number: 11630952
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can classify term sequences within a source text based on textual features analyzed by both an implicit-class-recognition model and an explicit-class-recognition model. For example, by applying machine-learning models for both implicit and explicit class recognition, the disclosed systems can determine a class corresponding to a particular term sequence within a source text and identify the particular term sequence reflecting the class. The dual-model architecture can equip the disclosed systems to apply (i) the implicit-class-recognition model to recognize implicit references to a class in source texts and (ii) the explicit-class-recognition model to recognize explicit references to the same class in source texts.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: April 18, 2023
    Assignee: Adobe Inc.
    Inventors: Sean MacAvaney, Franck Dernoncourt, Walter Chang, Seokhwan Kim, Doo Soon Kim, Chen Fang
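
Patent 11630952 above combines implicit and explicit class recognition. The sketch below is a toy stand-in: a vocabulary lookup plays the explicit model and a cosine similarity plays the implicit model, with the class names, vectors, and equal weighting invented for illustration.

```python
import numpy as np

# Assumed class vocabulary for the "explicit" recognizer.
EXPLICIT_TERMS = {"warranty": ["warranty", "guarantee"], "privacy": ["privacy", "personal data"]}

def explicit_score(sentence, label):
    """Explicit recognition: direct mentions of the class vocabulary."""
    text = sentence.lower()
    return float(any(term in text for term in EXPLICIT_TERMS[label]))

def implicit_score(sentence_vec, class_vec):
    """Implicit recognition: similarity of sentence/class embeddings (stand-in for the model)."""
    return float(sentence_vec @ class_vec /
                 (np.linalg.norm(sentence_vec) * np.linalg.norm(class_vec)))

def classify(sentence, sentence_vec, class_vecs):
    """Combine both recognizers and return the best-scoring class for the term sequence."""
    scores = {label: 0.5 * explicit_score(sentence, label) +
                     0.5 * implicit_score(sentence_vec, vec)
              for label, vec in class_vecs.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(5)
class_vecs = {"warranty": rng.normal(size=8), "privacy": rng.normal(size=8)}
label, scores = classify("We may share personal data with partners.",
                         rng.normal(size=8), class_vecs)
print(label, scores)
```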
  • Patent number: 11620457
    Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence, apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information, and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: April 4, 2023
    Assignee: Adobe Inc.
    Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
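
The entity constraint in patent 11620457 above can be shown as a masked attention head. The sketch assumes toy dimensions and a per-token entity id array derived from the coreference information; tokens without an entity are simply allowed to attend to themselves.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entity_constrained_attention(queries, keys, values, entity_ids):
    """One attention head whose weights are restricted to tokens sharing an entity id."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    same_entity = (entity_ids[:, None] == entity_ids[None, :]) & (entity_ids[:, None] >= 0)
    allowed = same_entity | np.eye(len(entity_ids), dtype=bool)   # every token may attend to self
    scores = np.where(allowed, scores, -1e9)                      # mask cross-entity positions
    return softmax(scores) @ values

rng = np.random.default_rng(6)
n_tokens, d = 10, 8
x = rng.normal(size=(n_tokens, d))
# Coreference info: tokens 1 and 7 mention entity 0, tokens 4 and 8 mention entity 1, rest none.
entity_ids = np.array([-1, 0, -1, -1, 1, -1, -1, 0, 1, -1])
out = entity_constrained_attention(x, x, x, entity_ids)
print(out.shape)  # entity-aware token representations fed to the fusion decoder
```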
  • Patent number: 11594077
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images based on verbal and/or gesture input by utilizing a natural language processing neural network and one or more computer vision neural networks. The disclosed systems can receive verbal input together with gesture input. The disclosed systems can further utilize a natural language processing neural network to generate a verbal command based on verbal input. The disclosed systems can select a particular computer vision neural network based on the verbal input and/or the gesture input. The disclosed systems can apply the selected computer vision neural network to identify pixels within a digital image that correspond to an object indicated by the verbal input and/or gesture input. Utilizing the identified pixels, the disclosed systems can generate a modified digital image by performing one or more editing actions indicated by the verbal input and/or gesture input.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: February 28, 2023
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Zhe Lin, Walter Chang, Nham Le, Franck Dernoncourt
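
A rough sketch of the dispatch described in patent 11594077 above: the verbal command and the presence of a gesture select a vision routine, and the edit is applied to the returned pixel mask. The routines, registry, and keyword parsing are placeholders, not the patented neural models.

```python
import numpy as np

# Placeholder segmentation routines standing in for the computer vision neural networks.
def salient_object_mask(image, point=None):
    return np.zeros(image.shape[:2], dtype=bool)   # placeholder: no pixels selected

def point_guided_mask(image, point):
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[point[1], point[0]] = True                # placeholder: seed from the gesture point
    return mask

VISION_MODELS = {"pointed": point_guided_mask, "named": salient_object_mask}

def edit_image(image, verbal_command, gesture_point=None):
    """Pick a vision routine from the parsed command/gesture, then edit the masked pixels."""
    action = "brighten" if "bright" in verbal_command.lower() else "darken"
    model = VISION_MODELS["pointed" if gesture_point is not None else "named"]
    mask = model(image, gesture_point)
    out = image.astype(float)
    out[mask] *= 1.2 if action == "brighten" else 0.8
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((64, 64, 3), 128, dtype=np.uint8)
result = edit_image(img, "make this brighter", gesture_point=(10, 20))
print(result[20, 10], result[0, 0])   # edited pixel vs. untouched pixel
```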
  • Patent number: 11544456
    Abstract: Systems and methods for parsing natural language sentences using an artificial neural network (ANN) are described. Embodiments of the described systems and methods may generate a plurality of word representation matrices for an input sentence, wherein each of the word representation matrices is based on an input matrix of word vectors, a query vector, a matrix of key vectors, and a matrix of value vectors, and wherein a number of the word representation matrices is based on a number of syntactic categories, compress each of the plurality of word representation matrices to produce a plurality of compressed word representation matrices, concatenate the plurality of compressed word representation matrices to produce an output matrix of word vectors, and identify at least one word from the input sentence corresponding to a syntactic category based on the output matrix of word vectors.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Khalil Mrini, Walter Chang, Trung Bui, Quan Tran, Franck Dernoncourt
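
Patent 11544456 above describes one word representation matrix per syntactic category, each compressed and then concatenated. The sketch below assumes small toy dimensions and random weights in place of the trained ANN.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def category_attention(word_vecs, params):
    """One attention head per syntactic category; outputs are compressed then concatenated."""
    compressed = []
    for w_q, w_k, w_v, w_c in params:                        # one parameter set per category
        q, k, v = word_vecs @ w_q, word_vecs @ w_k, word_vecs @ w_v
        rep = softmax(q @ k.T / np.sqrt(k.shape[1])) @ v     # word representation matrix
        compressed.append(rep @ w_c)                         # compress to a smaller width
    return np.concatenate(compressed, axis=1)                # output matrix of word vectors

rng = np.random.default_rng(7)
n_words, d, d_head, d_small, n_categories = 9, 16, 8, 4, 3
params = [tuple(rng.normal(size=(d, d_head)) for _ in range(3))
          + (rng.normal(size=(d_head, d_small)),)
          for _ in range(n_categories)]
out = category_attention(rng.normal(size=(n_words, d)), params)
print(out.shape)  # (9, 12): per-word vectors used to assign words to syntactic categories
```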
  • Publication number: 20220414338
    Abstract: Systems and methods for a text summarization system are described. In one example, a text summarization system receives an input utterance and determines whether the utterance should be included in a summary of the text. The text summarization system includes an embedding network, a convolution network, an encoding component, and a summary component. The embedding network generates a semantic embedding of an utterance. The convolution network generates a plurality of feature vectors based on the semantic embedding. The encoding component identifies a plurality of latent codes respectively corresponding to the plurality of feature vectors. The summary component identifies a prominent code among the latent codes and selects the utterance as a summary utterance based on the prominent code.
    Type: Application
    Filed: June 29, 2021
    Publication date: December 29, 2022
    Inventors: Sangwoo Cho, Franck Dernoncourt, Timothy Jeewun Ganter, Trung Huu Bui, Nedim Lipka, Varun Manjunatha, Walter Chang, Hailin Jin, Jonathan Brandt
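
A minimal sketch of the pipeline in publication 20220414338 above, with the convolution network reduced to a bank of assumed linear filters and the latent codes to a small nearest-neighbor codebook: the most frequent code across the transcript is treated as prominent, and utterances carrying it are selected.

```python
import numpy as np

def summarize(utterance_embeddings, filters, codebook):
    """Select utterances whose features map to the transcript's most prominent latent code."""
    per_utterance_codes = []
    for emb in utterance_embeddings:                      # one semantic embedding per utterance
        feats = filters @ emb                             # feature vectors (convolution stand-in)
        dists = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=-1)
        per_utterance_codes.append(dists.argmin(axis=1))  # nearest latent code per feature vector
    counts = np.bincount(np.concatenate(per_utterance_codes), minlength=len(codebook))
    prominent = counts.argmax()                           # most frequent code in the document
    return [i for i, codes in enumerate(per_utterance_codes) if prominent in codes]

rng = np.random.default_rng(8)
d, d_out, n_filters, n_codes = 12, 5, 4, 6
utterances = [rng.normal(size=d) for _ in range(8)]       # eight utterances in the transcript
picked = summarize(utterances, rng.normal(size=(n_filters, d_out, d)),
                   rng.normal(size=(n_codes, d_out)))
print("summary utterance indices:", picked)
```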
  • Publication number: 20220383031
    Abstract: The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes as input a query vector based on the object features and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Inventors: Qiuyu Chen, Quan Hung Tran, Kushal Kafle, Trung Huu Bui, Franck Dernoncourt, Walter Chang
  • Publication number: 20220318505
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Application
    Filed: April 6, 2021
    Publication date: October 6, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
  • Publication number: 20220277186
    Abstract: The present disclosure describes systems and methods for dialog processing and information retrieval. Embodiments of the present disclosure provide a dialog system (e.g., a task-oriented dialog system) with adaptive recurrent hopping and dual context encoding to receive and understand a natural language query from a user, manage dialog based on natural language conversation, and generate natural language responses. For example, a memory network can employ a memory recurrent neural net layer and a decision meta network (e.g., a subnet) to determine an adaptive number of memory hops for obtaining readouts from a knowledge base. Further, in some embodiments, a memory network uses a dual context encoder to encode information from original context and canonical context using parallel encoding layers.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Quan Tran, Franck Dernoncourt, Walter Chang
  • Publication number: 20220261555
    Abstract: Systems and methods for sentence fusion are described. Embodiments receive coreference information for a first sentence and a second sentence, wherein the coreference information identifies entities associated with both a term of the first sentence and a term of the second sentence, apply an entity constraint to an attention head of a sentence fusion network, wherein the entity constraint limits attention weights of the attention head to terms that correspond to a same entity of the coreference information, and predict a fused sentence using the sentence fusion network based on the entity constraint, wherein the fused sentence combines information from the first sentence and the second sentence.
    Type: Application
    Filed: February 17, 2021
    Publication date: August 18, 2022
    Inventors: Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang
  • Publication number: 20220179848
    Abstract: The present disclosure provides a memory-based neural network for question answering. Embodiments of the disclosure identify meta-evidence nodes in an embedding space, where the meta-evidence nodes represent salient features of a training set. Each element of the training set may include a question appended to a ground truth answer. The training set may also include questions with wrong answers that are indicated as such. In some examples, a neural Turing machine (NTM) reads a dataset and summarizes the dataset into a few meta-evidence nodes. A subsequent question may be appended to multiple candidate answers to form an input phrase, which may also be embedded in the embedding space. Then, corresponding weights may be identified for each of the meta-evidence nodes. The embedded input phrase and the weighted meta-evidence nodes may be used to identify the most appropriate answer.
    Type: Application
    Filed: December 9, 2020
    Publication date: June 9, 2022
    Inventors: Quan Tran, Walter Chang, Franck Dernoncourt
  • Publication number: 20220138185
    Abstract: Systems and methods for natural language processing are described. Embodiments are configured to receive a structured representation of a search query, wherein the structured representation comprises a plurality of nodes and at least one edge connecting two of the nodes, receive a modification expression for the search query, wherein the modification expression comprises a natural language expression, generate a modified structured representation based on the structured representation and the modification expression using a neural network configured to combine structured representation features and natural language expression features, and perform a search based on the modified structured representation.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Quan Tran, Zhe Lin, Xuanli He, Walter Chang, Trung Bui, Franck Dernoncourt
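
Publication 20220138185 above edits a node-and-edge query representation from a natural language expression. The sketch below keeps the data structures but replaces the neural editor with two string rules, purely to make the input and output shapes concrete.

```python
# Toy stand-in: the real system predicts the modification with a neural network that combines
# structured-representation features and expression features; here simple rules play that role.
def modify_query(structured, expression):
    """Apply a natural-language modification to a {nodes, edges} search-query graph."""
    words = expression.lower().split()
    new = {"nodes": list(structured["nodes"]), "edges": list(structured["edges"])}
    target = words[-1]
    if words[0] == "add":
        new["nodes"].append(target)
        new["edges"].append((new["nodes"][0], target))     # attach to the root concept
    elif words[0] in ("remove", "drop"):
        new["nodes"] = [n for n in new["nodes"] if n != target]
        new["edges"] = [e for e in new["edges"] if target not in e]
    return new

query = {"nodes": ["dog", "park"], "edges": [("dog", "park")]}
print(modify_query(query, "add a ball"))        # extend the graph, then rerun the search
print(modify_query(query, "remove the park"))
```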
  • Publication number: 20220050967
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that extract a definition for a term from a source document by utilizing a single machine-learning framework to classify a word sequence from the source document as including a term definition and to label words from the word sequence. To illustrate, the disclosed system can receive a source document including a word sequence arranged in one or more sentences. The disclosed systems can utilize a machine-learning model to classify the word sequence as comprising a definition for a term and generate labels for the words from the word sequence corresponding to the term and the definition. Based on classifying the word sequence and the generated labels, the disclosed system can extract the definition for the term from the source document.
    Type: Application
    Filed: August 11, 2020
    Publication date: February 17, 2022
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Yiming Yang, Lidan Wang, Rajiv Jain, Vlad Morariu, Walter Chang
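
Publication 20220050967 above uses a single model to both classify a word sequence as containing a definition and label its words. The sketch replaces the learned model with a cue-phrase rule so the joint output (a sentence-level label plus word-level labels) is visible; the cue list and tag names are assumptions.

```python
# Toy stand-in for the single joint machine-learning framework described in the abstract.
CUES = [" is defined as ", " refers to ", " means ", " is a "]

def extract_definition(sentence):
    """Classify a sentence as containing a definition and label term/definition words."""
    for cue in CUES:
        if cue in sentence:
            term, definition = sentence.split(cue, 1)
            labels = ([("TERM", w) for w in term.split()] +
                      [("O", w) for w in cue.split()] +
                      [("DEF", w) for w in definition.rstrip(".").split()])
            return True, labels
    return False, [("O", w) for w in sentence.split()]

has_def, labels = extract_definition("A transformer is defined as a model built on self-attention.")
print(has_def)
print(labels)   # term words tagged TERM, definition words tagged DEF
```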
  • Patent number: 11232255
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed that collect and analyze annotation performance data to generate digital annotations for evaluating and training automatic electronic document annotation models. In particular, in one or more embodiments, the disclosed systems provide electronic documents to annotators based on annotator topic preferences. The disclosed systems then identify digital annotations and annotation performance data such as a time period spent by an annotator in generating digital annotations and annotator responses to digital annotation questions. Furthermore, in one or more embodiments, the disclosed systems utilize the identified digital annotations and the annotation performance data to generate a final set of reliable digital annotations. Additionally, in one or more embodiments, the disclosed systems provide the final set of digital annotations for utilization in training a machine learning model to generate annotations for electronic documents.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: January 25, 2022
    Assignee: Adobe Inc.
    Inventors: Franck Dernoncourt, Walter Chang, Trung Bui, Sean Fitzgerald, Sasha Spala, Kishore Aradhya, Carl Dockhorn
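
A toy sketch of the reliability filtering suggested by patent 11232255 above: annotations produced too quickly are discarded, and a label is kept only when enough annotators agree. The field layout, thresholds, and aggregation rule are all assumptions.

```python
from collections import Counter

def reliable_annotations(annotations, min_seconds=5.0, min_agreement=2):
    """Keep labels that took a plausible amount of time and that enough annotators agree on."""
    by_doc = {}
    for doc_id, label, seconds in annotations:
        if seconds >= min_seconds:                 # drop suspiciously fast, low-effort answers
            by_doc.setdefault(doc_id, []).append(label)
    final = {}
    for doc_id, labels in by_doc.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= min_agreement:                 # require agreement before trusting the label
            final[doc_id] = label
    return final

raw = [("doc1", "sports", 12.0), ("doc1", "sports", 9.5), ("doc1", "finance", 1.2),
       ("doc2", "finance", 8.0)]
print(reliable_annotations(raw))   # {'doc1': 'sports'} -> used to train the annotation model
```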
  • Patent number: 11080295
    Abstract: Techniques for organizing knowledge about a dataset storing data from or about multiple sources may be provided. For example, the data can be accessed from the multiple sources and categorized based on the data type. For each data type, a triple extraction technique specific to that data type may be invoked. One set of techniques can allow the extraction of triples from the data based on natural language-based rules. Another set of techniques can allow a similar extraction based on logical or structural-based rules. A triple may store a relationship between elements of the data. The extracted triples can be stored with corresponding identifiers in a list. Further, dictionaries storing associations between elements of the data and the triples can be updated. The list and the dictionaries can be used to return triples in response to a query that specifies one or more elements.
    Type: Grant
    Filed: November 11, 2014
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Walter Chang, Nicholas Digiuseppe
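
A small sketch of the triple pipeline in patent 11080295 above: data is dispatched to a type-specific extractor, triples are stored with identifiers, and dictionaries map elements back to triple ids for querying. The two extraction rules shown are deliberately tiny stand-ins for the natural-language and structural rule sets.

```python
import re
from collections import defaultdict

def triples_from_text(text):
    """Natural-language rule: a very small subject-verb-object pattern (illustrative only)."""
    m = re.match(r"(\w+) (\w+) (\w+)", text)
    return [m.groups()] if m else []

def triples_from_record(record):
    """Structural rule: every key/value pair of a structured record becomes a triple."""
    return [(record["id"], key, value) for key, value in record.items() if key != "id"]

def build_store(items):
    """Dispatch on data type, store triples with ids, and index elements -> triple ids."""
    triples, index = [], defaultdict(set)
    for item in items:
        extracted = triples_from_text(item) if isinstance(item, str) else triples_from_record(item)
        for triple in extracted:
            tid = len(triples)
            triples.append(triple)
            for element in triple:
                index[element].add(tid)            # dictionary from element to triple ids
    return triples, index

triples, index = build_store(["Alice manages Bob", {"id": "doc7", "author": "Alice"}])
print([triples[t] for t in index["Alice"]])        # query: all triples mentioning "Alice"
```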