Patents by Inventor Handong Zhao

Handong Zhao has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250061609
    Abstract: One or more aspects of the method, apparatus, and non-transitory computer readable medium include obtaining image data and computing a prediction residue value for a pixel of the image data using a prediction function. An entropy value for the pixel can then be determined based on the prediction residue value using context modeling, and progressive compressed image data for the image data can be generated based on the entropy value. The compressed image data can be used to enable collaborative image editing and other image processing tasks.
    Type: Application
    Filed: August 17, 2023
    Publication date: February 20, 2025
    Inventors: Junda Wu, Haoliang Wang, Tong Yu, Stefano Petrangeli, Gang Wu, Viswanathan Swaminathan, Sungchul Kim, Handong Zhao
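
As a rough illustration of the general idea in publication 20250061609 (and not the claimed method), the sketch below computes a per-pixel prediction residue with a simple left-neighbor predictor and estimates bits per pixel from the empirical residue distribution. The predictor and the entropy estimate are stand-ins; the abstract describes context modeling, which is not reproduced here.

```python
import numpy as np

def prediction_residue(image: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbor and return the residue.

    A real codec would use a learned or context-adaptive predictor; the
    left-neighbor predictor here is only a stand-in for the sketch.
    """
    predicted = np.empty_like(image)
    predicted[:, 0] = image[:, 0]        # first column has no left neighbor
    predicted[:, 1:] = image[:, :-1]     # predict each pixel from its left neighbor
    return image.astype(np.int16) - predicted.astype(np.int16)

def residue_entropy_bits(residue: np.ndarray) -> float:
    """Estimate bits/pixel from the empirical residue distribution."""
    _, counts = np.unique(residue, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    res = prediction_residue(img)
    print(f"estimated entropy: {residue_entropy_bits(res):.2f} bits/pixel")
```
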
  • Publication number: 20250037006
    Abstract: In various examples, a ranking is generated for a set of computing instances based on predicted metrics associated with computing instances. For example, a prediction model estimates various system performance metrics based on information associated with a workload and configuration information associated with computing instances. The system performance metrics estimated by the prediction model are used to rank the set of computing instances.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 30, 2025
    Inventors: Kanak Mahadik, Sungchul Kim, Ryan Rossi, Handong Zhao, Shravika Mittal
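
A minimal sketch of the ranking idea in publication 20250037006: a regression model predicts a performance metric from workload and instance features, and candidates are ranked by the prediction. All feature columns, instance names, and numbers below are made up for illustration; the patented system's prediction model and metrics are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: workload/instance features -> observed runtime (s).
# Columns: [workload_size_gb, vcpus, memory_gb, instance_family_id]
X_train = np.array([
    [10, 4, 16, 0], [10, 8, 32, 0], [50, 4, 16, 1],
    [50, 16, 64, 1], [100, 8, 32, 2], [100, 32, 128, 2],
], dtype=float)
y_train = np.array([420.0, 250.0, 1900.0, 700.0, 2600.0, 950.0])

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Candidate instances for a new 80 GB workload (names are placeholders).
candidates = {
    "instance-a": [80, 8, 32, 0],
    "instance-b": [80, 16, 32, 1],
    "instance-c": [80, 32, 256, 2],
}
predicted = {name: model.predict([feats])[0] for name, feats in candidates.items()}
ranking = sorted(predicted, key=predicted.get)   # lowest predicted runtime first
print(ranking)
```
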
  • Publication number: 20250028751
    Abstract: Dialogue skeleton assisted prompt transfer for dialogue summarization techniques are described that support training of a language model to perform dialogue summarization in a few-shot scenario. A processing device, for instance, receives a training dataset that includes training dialogues. The processing device then generates dialogue skeletons based on the training dialogues using one or more perturbation-based probes. The processing device trains a language model using prompt transfer between a source task, e.g., dialogue state tracking, and a target task, e.g., dialogue summarization, using the dialogue skeletons as supervision. The processing device then receives an input dialogue and uses the trained language model to generate a summary of the input dialogue.
    Type: Application
    Filed: July 20, 2023
    Publication date: January 23, 2025
    Applicant: Adobe Inc.
    Inventors: Tong Yu, Kaige Xie, Haoliang Wang, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Vivek Mahadik, Ani Nenkova
  • Publication number: 20250013866
    Abstract: Systems and methods for reducing inference time of vision-language models, as well as for multimodal search, are described herein. Embodiments are configured to obtain an embedding neural network. The embedding neural network is pretrained to embed inputs from a plurality of modalities into a multimodal embedding space. Embodiments are further configured to perform a first progressive pruning stage, where the first progressive pruning stage includes a first pruning of the embedding neural network and a first fine-tuning of the embedding neural network. Embodiments then perform a second progressive pruning stage based on an output of the first progressive pruning stage, where the second progressive pruning stage includes a second pruning of the embedding neural network and a second fine-tuning of the embedding neural network.
    Type: Application
    Filed: July 6, 2023
    Publication date: January 9, 2025
    Inventors: Handong Zhao, Yue Bai, Zhe Lin, Ajinkya Gorakhnath Kale, Jiuxiang Gu, Tong Yu, Sungchul Kim
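
The abstract of publication 20250013866 describes two progressive pruning stages, each a prune followed by a fine-tune. The PyTorch sketch below shows that two-stage shape on a toy stand-in network using magnitude pruning; the real embedding network, its multimodal training objective, and the pruning criteria are not reproduced, and the dummy regression loss is only a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a pretrained embedding network (the real one embeds images/text).
net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
x = torch.randn(512, 128)
target = torch.randn(512, 64)          # dummy supervision for the sketch

def fine_tune(model, steps=50):
    """Brief fine-tuning pass; a real system would use the multimodal objective."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward()
        opt.step()

# Two progressive stages: prune a fraction of the smallest-magnitude weights,
# fine-tune, then prune the already-pruned network further and fine-tune again.
for amount in (0.3, 0.3):
    for module in net:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    fine_tune(net)

sparsity = sum((m.weight == 0).float().mean().item()
               for m in net if isinstance(m, nn.Linear)) / 2
print(f"approximate weight sparsity: {sparsity:.0%}")
```
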
  • Publication number: 20250005289
    Abstract: Dialogue state aware dialogue summarization techniques are described that enable generation of dialogue summaries from target domains with limited training data. A content processing system, for instance, generates one or more clusters based on training dialogues from one or more source domains. The clusters represent domain-specific features of the training dialogues and are further based on dialogue states of the training dialogues. The content processing system trains a machine learning model to generate summaries of dialogues by using the one or more clusters as prefixes in a prefix-tuning approach. The content processing system receives an input that includes a dialogue from a target domain. The content processing system generates an input prompt based on the dialogue and the one or more clusters, and the model generates a summary of the dialogue based on the input prompt.
    Type: Application
    Filed: June 28, 2023
    Publication date: January 2, 2025
    Applicant: Adobe Inc.
    Inventors: Haoliang Wang, Kaige Xie, Tong Yu, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Vivek Mahadik, Ani Nenkova
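
A loose sketch of one ingredient in publication 20250005289: clustering source-domain dialogues and deriving a cluster-based prefix for a new dialogue. The real technique tunes continuous prefixes for a language model and conditions on dialogue states; here, TF-IDF plus KMeans and a textual prefix built from cluster terms stand in, and the example dialogues are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical source-domain training dialogues.
training_dialogues = [
    "agent: which city are you flying to? user: boston on friday",
    "agent: how many nights will you stay? user: two nights, one guest",
    "agent: what cuisine would you like? user: italian, near downtown",
    "agent: your cab is booked for 5pm. user: thanks, make it 6pm instead",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(training_dialogues)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def cluster_prefix(dialogue: str, top_k: int = 5) -> str:
    """Return a textual prefix built from the nearest cluster's top terms."""
    cluster = kmeans.predict(vectorizer.transform([dialogue]))[0]
    centroid = kmeans.cluster_centers_[cluster]
    terms = np.array(vectorizer.get_feature_names_out())
    top_terms = terms[np.argsort(centroid)[::-1][:top_k]]
    return "domain cues: " + ", ".join(top_terms)

target_dialogue = "agent: when should the table be reserved? user: 7pm for four people"
prompt = f"{cluster_prefix(target_dialogue)}\nsummarize: {target_dialogue}"
print(prompt)
```
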
  • Patent number: 12182713
    Abstract: Systems and techniques for multi-task equidistant embedding are described that process categorical feature data to explore feature interactions. A digital analytics system enforces an equidistant relationship among features within a category while extracting high-order feature interactions by punishing both positive correlations and negative correlations among low-dimensional representations of different features. By enforcing an equidistant embedding, information is retained and accuracy is increased while higher order feature interactions are determined. Further, the digital analytics system shares knowledge among different tasks by connecting a shared network representation common to multiple tasks with exclusive network representations specific to particular tasks.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: December 31, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zheng Wen, Sungchul Kim, Sheng Li, Branislav Kveton
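
To make the equidistance idea in patent 12182713 concrete, the sketch below penalizes deviation of pairwise cosine similarities among feature embeddings from a shared constant, which discourages both positive and negative correlations. This is an illustrative regularizer only, not the patented objective or the multi-task sharing architecture.

```python
import torch

def equidistant_penalty(embeddings: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of pairwise cosine similarities from their mean.

    `embeddings` is (num_features, dim): one low-dimensional vector per
    categorical feature. Pushing all off-diagonal similarities toward a shared
    constant punishes positive and negative correlations alike, nudging the
    vectors toward a mutually equidistant arrangement.
    """
    z = torch.nn.functional.normalize(embeddings, dim=1)
    sim = z @ z.t()                                   # pairwise cosine similarity
    off_diag = sim[~torch.eye(len(z), dtype=torch.bool)]
    return ((off_diag - off_diag.mean()) ** 2).mean()

emb = torch.randn(8, 16, requires_grad=True)
opt = torch.optim.SGD([emb], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss = equidistant_penalty(emb)   # in practice added to the task loss
    loss.backward()
    opt.step()
print(f"residual penalty: {equidistant_penalty(emb).item():.4f}")
```
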
  • Patent number: 12182086
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating automatic suggestions to effectively modify the organization of an ingested data collection without destruction of the underlying raw data. In particular, in one or more embodiments, the disclosed systems utilize multiple machine learning models in sequence to determine likelihoods that the organizational structure of an ingested data collection should be modified in various ways. In response to generating these likelihoods, the disclosed systems generate corresponding automatic suggestions to modify the organization of the ingested data collection. In response to a detected selection of one or more of the automatic suggestions, the disclosed systems read data out of the ingested data collection in accordance with the selected automatic suggestions to effectively modify the organization of the ingested data collection.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: December 31, 2024
    Assignee: Adobe Inc.
    Inventors: Ritwik Sinha, Saayan Mitra, Handong Zhao, Somdeb Sarkhel, Trevor Paulsen, William Brandon George
  • Publication number: 20240427998
    Abstract: Contextual query generation techniques are described that enable generation of a contextual query for output to a question-answering (QA) model. A content processing system, for instance, configures a language model using in-context learning to generate queries based on semantic contexts of input documents, e.g., based on one or more linguistic cues from text of the input documents. The content processing system receives an input that includes a document having text and a reference query. The content processing system leverages the language model to generate a contextual query based on a semantic context of the text of the document and the reference query. The content processing system then outputs the contextual query and the document to a QA model. Using the QA model, the content processing system generates a response as an answer to the contextual query based on the contextual query and the document.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Applicant: Adobe Inc.
    Inventors: Haoliang Wang, Tong Yu, Sungchul Kim, Ruiyi Zhang, Paiheng Xu, Junda Wu, Handong Zhao, Ani Nenkova
  • Publication number: 20240404243
    Abstract: Systems and methods for multimodal machine learning are provided. According to one aspect, a method for multimodal machine learning includes obtaining a prompt; encoding the prompt using a multimodal encoder to obtain a prompt embedding, wherein the encoding comprises generating a plurality of multi-head attention (MHA) outputs corresponding to a plurality of different scales, respectively, and combining the plurality of MHA outputs using a multi-scale aggregator; and generating a response to the prompt based on the prompt embedding.
    Type: Application
    Filed: June 5, 2023
    Publication date: December 5, 2024
    Inventors: Handong Zhao, Yue Bai, Zhe Lin, Ajinkya Gorakhnath Kale, Jiuxiang Gu, Tong Yu, Sungchul Kim
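
Publication 20240404243 describes multi-head attention outputs at several scales combined by a multi-scale aggregator. The PyTorch sketch below is one plausible reading under invented dimensions: self-attention is run over the token sequence pooled at a few temporal scales, each output is pooled to a vector, and a learned softmax over scales fuses them. It is not the patented encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSelfAttention(nn.Module):
    """Self-attention applied at several pooled sequence scales, then fused."""

    def __init__(self, dim: int = 64, heads: int = 4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in scales
        )
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))  # aggregator weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) embedded prompt tokens
        pooled_outputs = []
        for attn, s in zip(self.attn, self.scales):
            xs = F.avg_pool1d(x.transpose(1, 2), kernel_size=s).transpose(1, 2)
            out, _ = attn(xs, xs, xs)              # one MHA output per scale
            pooled_outputs.append(out.mean(dim=1))
        stacked = torch.stack(pooled_outputs, dim=1)   # (batch, n_scales, dim)
        weights = self.scale_logits.softmax(dim=0)     # learned fusion weights
        return (weights[None, :, None] * stacked).sum(dim=1)

tokens = torch.randn(2, 16, 64)
embedding = MultiScaleSelfAttention()(tokens)
print(embedding.shape)    # torch.Size([2, 64])
```
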
  • Patent number: 12124439
    Abstract: Digital content search techniques are described that overcome the challenges found in conventional sequence-based techniques through use of a query-aware sequential search. In one example, a search query is received and sequence input data is obtained based on the search query. The sequence input data describes a sequence of digital content and respective search queries. Embedding data is generated based on the sequence input data using an embedding module of a machine-learning model. The embedding module includes a query-aware embedding layer that generates embeddings of the sequence of digital content and respective search queries. A search result is generated referencing at least one item of digital content by processing the embedding data using at least one layer of the machine-learning model.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: October 22, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhe Lin, Zhaowen Wang, Zhankui He, Ajinkya Gorakhnath Kale
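
A small sketch of the query-aware embedding idea in patent 12124439: each item in the interaction sequence is embedded jointly with the search query that accompanied it, so later layers of the sequence model see both signals. Dimensions and the projection are invented; this is not the patented embedding layer.

```python
import torch
import torch.nn as nn

class QueryAwareEmbedding(nn.Module):
    """Embed aligned (item, query) pairs so downstream layers see both signals."""

    def __init__(self, num_items: int, num_queries: int, dim: int = 32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.query_emb = nn.Embedding(num_queries, dim)
        self.project = nn.Linear(2 * dim, dim)

    def forward(self, item_ids: torch.Tensor, query_ids: torch.Tensor) -> torch.Tensor:
        # item_ids, query_ids: (batch, seq_len), aligned step by step
        joint = torch.cat([self.item_emb(item_ids), self.query_emb(query_ids)], dim=-1)
        return self.project(joint)    # (batch, seq_len, dim) for later model layers

layer = QueryAwareEmbedding(num_items=1000, num_queries=200)
items = torch.randint(0, 1000, (4, 10))
queries = torch.randint(0, 200, (4, 10))
print(layer(items, queries).shape)    # torch.Size([4, 10, 32])
```
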
  • Publication number: 20240311221
    Abstract: In implementations of systems for detection and interpretation of log anomalies, a computing device implements an anomaly system to receive input data describing a two-dimensional representation of log templates and timestamps. The anomaly system processes the input data using a machine learning model trained on training data to detect anomalies in two-dimensional representations of log templates and timestamps. A log anomaly is detected in the two-dimensional representation using the machine learning model based on processing the input data. The anomaly system generates an indication of an interpretation of the log anomaly for display in a user interface based on a log template included in the two-dimensional representation.
    Type: Application
    Filed: March 13, 2023
    Publication date: September 19, 2024
    Applicant: Adobe Inc.
    Inventors: Jaeho Bang, Sungchul Kim, Ryan A. Rossi, Tong Yu, Handong Zhao
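
To illustrate the two-dimensional representation in publication 20240311221, the sketch below bins synthetic logs into a time-bin by log-template count matrix and flags unusual bins, reporting the dominant template as a crude interpretation. IsolationForest is only a stand-in for the trained model described in the abstract, and the log data is fabricated.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical parsed logs: (template_id, timestamp_seconds), with an injected burst.
rng = np.random.default_rng(0)
logs = [(int(rng.integers(0, 5)), float(t)) for t in rng.uniform(0, 600, 2000)]
logs += [(4, float(t)) for t in rng.uniform(300, 330, 400)]

# Two-dimensional representation: rows = time bins, columns = log templates.
n_templates, bin_seconds, n_bins = 5, 30, 20
counts = np.zeros((n_bins, n_templates))
for template_id, ts in logs:
    counts[min(int(ts // bin_seconds), n_bins - 1), template_id] += 1

# Stand-in detector; the abstract describes a trained ML model instead.
flags = IsolationForest(contamination=0.1, random_state=0).fit_predict(counts)
for b in np.where(flags == -1)[0]:
    dominant = int(counts[b].argmax())
    print(f"time bin {b}: anomalous, dominated by template {dominant}")
```
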
  • Publication number: 20240273296
    Abstract: Embodiments of the technology described herein describe a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for a new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.
    Type: Application
    Filed: April 3, 2024
    Publication date: August 15, 2024
    Inventors: Sungchul Kim, Subrata Mitra, Ruiyi Zhang, Rui Wang, Handong Zhao, Tong Yu
  • Patent number: 12019671
    Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This is performable in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhankui He, Zhaowen Wang, Ajinkya Gorakhnath Kale, Zhe Lin
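
Patent 12019671 lists several ways to localize self-attention; the sketch below shows only the simplest masking-based variant, a fixed window, by building a banded boolean mask and passing it to PyTorch's multi-head attention. The window size and dimensions are invented, and the model-based and adaptive variants are not shown.

```python
import torch
import torch.nn as nn

def fixed_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where True blocks attention outside a local window."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() > window

seq_len, dim, window = 12, 64, 2
attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
x = torch.randn(1, seq_len, dim)                 # embedded interaction sequence

mask = fixed_window_mask(seq_len, window)        # (seq_len, seq_len)
local_out, weights = attn(x, x, x, attn_mask=mask)

# Each position attends only within +/- `window` steps of itself.
print(weights[0, 0].nonzero().squeeze())
```
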
  • Patent number: 11995403
    Abstract: Embodiments of the technology described herein describe a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for a new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: May 28, 2024
    Assignee: Adobe Inc.
    Inventors: Sungchul Kim, Subrata Mitra, Ruiyi Zhang, Rui Wang, Handong Zhao, Tong Yu
  • Patent number: 11995048
    Abstract: Systems and methods for lifelong schema matching are described. The systems and methods include receiving data comprising a plurality of information categories, classifying each information category according to a schema comprising a plurality of classes, wherein the classification is performed by a neural network classifier trained based on a lifelong learning technique using a plurality of exemplar training sets, wherein each of the exemplar training sets includes a plurality of examples corresponding to one of the classes, and wherein the examples are selected based on a metric indicating how well each of the examples represents the corresponding class, and adding the data to a database based on the classification, wherein the database is organized according to the schema.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: May 28, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Yikun Xian, Sungchul Kim, Tak Yeon Lee, Nikhil Belsare, Shashi Kant Rai, Vasanthi Holtcamp, Thomas Jacobs, Duy-Trung T Dinh, Caroline Jiwon Kim
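
Patent 11995048 selects exemplars by how well each example represents its class. As a simple stand-in for that metric, the sketch below keeps the examples closest to their class centroid in an embedding space, a herding-style heuristic; the embeddings are random placeholders and the lifelong-learning update itself is not shown.

```python
import numpy as np

def select_exemplars(embeddings: np.ndarray, per_class: int) -> np.ndarray:
    """Return indices of the examples closest to the class centroid.

    Distance to the class mean serves as a simple representativeness metric;
    the selected exemplars would be replayed when the classifier is later
    updated to recognize new schema classes.
    """
    centroid = embeddings.mean(axis=0)
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    return np.argsort(distances)[:per_class]

rng = np.random.default_rng(0)
class_embeddings = rng.normal(size=(100, 32))   # embeddings of one class's examples
exemplar_ids = select_exemplars(class_embeddings, per_class=10)
print(exemplar_ids)
```
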
  • Publication number: 20240169410
    Abstract: Techniques for predicting and recommending item bundles in a multi-round conversation to discover a target item bundle that would be accepted by a client. An example method includes receiving an input response in reply to a first item bundle that includes one or more items. A state model is updated to reflect the input response to the first item bundle. A machine-learning (ML) conversation module is applied to the state model to determine an action type as a follow-up to the input response to the first item bundle. Based on selection of a recommendation action as the action type, an ML bundling module is applied to the state model to generate a second item bundle different than the first item bundle. The second item bundle is then recommended.
    Type: Application
    Filed: November 4, 2022
    Publication date: May 23, 2024
    Inventors: Handong Zhao, Zhankui He, Tong Yu, Fan Du, Sungchul Kim
  • Publication number: 20240152771
    Abstract: Tabular data machine-learning model techniques and systems are described. In one example, common-sense knowledge is infused into training data through use of a knowledge graph to provide external knowledge to supplement a tabular data corpus. In another example, a dual-path architecture is employed to configure an adapter module. In an implementation, the adapter module is added as part of a pre-trained machine-learning model for general purpose tabular models. Specifically, dual-path adapters are trained using the knowledge graphs and semantically augmented trained data. A path-wise attention layer is applied to fuse a cross-modality representation of the two paths for a final result.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Can Qin, Sungchul Kim, Tong Yu, Ryan A. Rossi, Handong Zhao
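
A simplified reading of the dual-path adapter in publication 20240152771: two lightweight bottleneck paths, one over the tabular representation and one over knowledge-augmented features, fused by a path-wise attention layer. The dimensions, bottleneck size, and fusion layer below are invented, and the pre-trained tabular model and knowledge graph are omitted.

```python
import torch
import torch.nn as nn

class DualPathAdapter(nn.Module):
    """Two lightweight adapter paths fused by a path-wise attention layer."""

    def __init__(self, dim: int = 128, bottleneck: int = 32):
        super().__init__()
        def path():
            return nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                                 nn.Linear(bottleneck, dim))
        self.table_path = path()         # path over the tabular representation
        self.knowledge_path = path()     # path over knowledge-augmented features
        self.path_attention = nn.Linear(dim, 1)

    def forward(self, table_feats: torch.Tensor, knowledge_feats: torch.Tensor):
        paths = torch.stack([self.table_path(table_feats),
                             self.knowledge_path(knowledge_feats)], dim=1)
        weights = self.path_attention(paths).softmax(dim=1)   # (batch, 2, 1)
        return (weights * paths).sum(dim=1)                   # fused representation

adapter = DualPathAdapter()
fused = adapter(torch.randn(8, 128), torch.randn(8, 128))
print(fused.shape)    # torch.Size([8, 128])
```
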
  • Publication number: 20240152769
    Abstract: Systems and methods for automatic forecasting are described. Embodiments of the present disclosure receive a time-series dataset; compute a time-series meta-feature vector based on the time-series dataset; generate a performance score for a forecasting model using a meta-learner machine learning model that takes the time-series meta-feature vector as input; select the forecasting model from a plurality of forecasting models based on the performance score; and generate predicted time-series data based on the time-series dataset using the selected forecasting model.
    Type: Application
    Filed: October 28, 2022
    Publication date: May 9, 2024
    Inventors: Ryan A. Rossi, Kanak Mahadik, Mustafa Abdallah ElHosiny Abdallah, Sungchul Kim, Handong Zhao
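
To illustrate the pipeline in publication 20240152769, the sketch below computes a small time-series meta-feature vector and uses a regressor as the meta-learner to score a candidate forecasting model; in practice this is repeated for each candidate and the best-scoring model is selected. The meta-features, the meta-training scores, and the regressor choice are all placeholders, not the patented method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def meta_features(series: np.ndarray) -> np.ndarray:
    """Tiny meta-feature vector: length, variability, mean step, lag-1 autocorrelation."""
    diffs = np.diff(series)
    lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    return np.array([len(series), series.std(), diffs.mean(), lag1])

# Hypothetical meta-training set: meta-features of past datasets -> observed
# performance of one candidate forecaster (one meta-learner per model in practice).
rng = np.random.default_rng(0)
past_series = [rng.normal(size=int(n)).cumsum() for n in rng.integers(50, 200, 40)]
X_meta = np.stack([meta_features(s) for s in past_series])
y_score = rng.uniform(0.5, 0.95, size=len(past_series))   # placeholder scores

meta_learner = RandomForestRegressor(random_state=0).fit(X_meta, y_score)

# Score the candidate model for a new series; repeat per model, pick the best.
new_series = rng.normal(size=120).cumsum()
predicted_score = meta_learner.predict(meta_features(new_series)[None, :])[0]
print(f"predicted performance score: {predicted_score:.3f}")
```
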
  • Patent number: 11978272
    Abstract: Adapting a machine learning model to process data that differs from training data used to configure the model for a specified objective is described. A domain adaptation system trains the model to process new domain data that differs from a training data domain by using the model to generate a feature representation for the new domain data, which describes different content types included in the new domain data. The domain adaptation system then generates a probability distribution for each discrete region of the new domain data, which describes a likelihood of the region including different content described by the feature representation. The probability distribution is compared to ground truth information for the new domain data to determine a loss function, which is used to refine model parameters. After determining that model outputs achieve a threshold similarity to the ground truth information, the model is output as a domain-agnostic model.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Kai Li, Christopher Alan Tensmeyer, Curtis Michael Wigington, Handong Zhao, Nikolaos Barmpalios, Tong Sun, Varun Manjunatha, Vlad Ion Morariu
  • Patent number: 11886815
    Abstract: One example method involves operations for a processing device that include receiving, by a machine learning model trained to generate a search result, a search query for a text input. The machine learning model is trained by receiving pre-training data that includes multiple documents. Pre-training the machine learning model includes generating, using an encoder, feature embeddings for each of the documents included in the pre-training data. The feature embeddings are generated by applying a masking function to visual and textual features in the documents. Training the machine learning model also includes generating, using the feature embeddings, output features for the documents by concatenating the feature embeddings and applying a non-linear mapping to the feature embeddings. Training the machine learning model further includes applying a linear classifier to the output features. Additionally, operations include generating, for display, a search result using the machine learning model based on the input.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Jiuxiang Gu, Vlad Morariu, Varun Manjunatha, Tong Sun, Rajiv Jain, Peizhao Li, Jason Kuen, Handong Zhao
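
A toy PyTorch sketch of the pre-training step as summarized in patent 11886815: mask a random subset of visual and textual feature vectors, concatenate the (pooled) modalities, apply a non-linear mapping, and attach a linear classifier. All dimensions, the masking ratio, and the pooling are invented; the actual encoder and training objective are not reproduced.

```python
import torch
import torch.nn as nn

def mask_features(feats: torch.Tensor, ratio: float = 0.15) -> torch.Tensor:
    """Zero out a random subset of feature vectors (a simple masking function)."""
    keep = (torch.rand(feats.shape[:2]) > ratio).float().unsqueeze(-1)
    return feats * keep

batch, n_regions, n_tokens, dim, n_classes = 4, 8, 32, 256, 10
visual = torch.randn(batch, n_regions, dim)    # visual features per document region
textual = torch.randn(batch, n_tokens, dim)    # textual features per token

# Mask both modalities, pool, concatenate, then non-linear mapping + linear head.
pooled = torch.cat([mask_features(visual).mean(dim=1),
                    mask_features(textual).mean(dim=1)], dim=-1)
non_linear = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU())
classifier = nn.Linear(dim, n_classes)

logits = classifier(non_linear(pooled))
print(logits.shape)   # torch.Size([4, 10])
```
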