Patents by Inventor Balaji Krishnamurthy

Balaji Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210406935
    Abstract: Methods and systems are provided for generating and providing insights associated with a journey. In embodiments described herein, journey data associated with a journey is obtained. A journey can include journey paths indicating workflows that audience members can traverse. The journey data can include audience member attributes (e.g., demographics) and labels indicating journey paths traversed by audience members. A set of audience segments is determined that describes a set of audience members traversing a particular journey path. The set of audience segments can be determined using the journey data to train a segmentation model and, thereafter, analyzing the segmentation model to identify patterns that indicate audience segments associated with the particular journey path. An indication of the set of audience segments that describe the set of audience members traversing the particular journey path can be provided for display.
    Type: Application
    Filed: June 24, 2020
    Publication date: December 30, 2021
    Inventors: Pankhri SINGHAI, Piyush GUPTA, Balaji KRISHNAMURTHY, Jayakumar SUBRAMANIAN, Nikaash PURI
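A minimal sketch of one plausible reading of 20210406935, assuming the segmentation model is an interpretable decision tree whose learned rules act as candidate audience segments; the attribute names and data below are illustrative, not from the filing:

```python
# Hypothetical sketch: derive audience segments for a journey path by training
# an interpretable segmentation model and reading off its decision rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
# Illustrative audience-member attributes (demographics) -- not from the filing.
age = rng.integers(18, 70, n)
visits = rng.integers(1, 30, n)
is_mobile = rng.integers(0, 2, n)
X = np.column_stack([age, visits, is_mobile])
# Label: 1 if the member traversed the journey path of interest.
y = ((age < 35) & (is_mobile == 1)).astype(int)

model = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
# Patterns in the trained model act as candidate audience segments.
print(export_text(model, feature_names=["age", "visits", "is_mobile"]))
```

Each rule path leading to a leaf that predicts traversal can be read as a candidate segment description (here, roughly "age < 35 and on mobile").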
  • Publication number: 20210397876
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for one-shot and few-shot image segmentation on classes of objects that were not represented during training. In some embodiments, a dual prediction scheme may be applied in which query and support masks are jointly predicted using a shared decoder, which aids in similarity propagation between the query and support features. Additionally or alternatively, foreground and background attentive fusion may be applied to utilize cues from foreground and background feature similarities between the query and support images. Finally, to prevent overfitting on class-conditional similarities across training classes, input channel averaging may be applied for the query image during training. Accordingly, the techniques described herein may be used to achieve state-of-the-art performance for both one-shot and few-shot segmentation tasks.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Inventors: Mayur Hemani, Siddhartha Gairola, Ayush Chopra, Balaji Krishnamurthy, Jonas Dahl
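A minimal sketch of the input channel averaging step from 20210397876, assuming it amounts to replacing the query image's color channels with their mean during a fraction of training iterations (PyTorch; the probability and shapes are illustrative):

```python
# Hypothetical sketch of input channel averaging: during training, the query
# image's RGB channels are replaced by their mean (a grayscale-like tensor),
# removing colour cues that could encourage class-conditional overfitting.
import torch

def channel_average(query: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """query: (B, 3, H, W). With probability p, average the channels."""
    if torch.rand(()) < p:
        mean = query.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        query = mean.expand_as(query)            # replicate to 3 channels
    return query

batch = torch.rand(4, 3, 224, 224)
print(channel_average(batch).shape)  # torch.Size([4, 3, 224, 224])
```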
  • Publication number: 20210397986
    Abstract: Techniques described herein extract form structures from a static form to facilitate making that static form reflowable. A method described herein includes accessing low-level form elements extracted from a static form. The method includes determining, using a first set of prediction models, second-level form elements based on the low-level form elements. Each second-level form element includes a respective one or more low-level form elements. The method further includes determining, using a second set of prediction models, high-level form elements based on the second-level form elements and the low-level form elements. Each high-level form element includes a respective one or more second-level form elements or low-level form elements. The method further includes generating a reflowable form based on the static form by, for each high-level form element, linking together the respective one or more second-level form elements or low-level form elements.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Inventors: Milan Aggarwal, Mausoom Sarkar, Balaji Krishnamurthy
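A minimal sketch of the hierarchical grouping described in 20210397986, with a simple vertical-gap heuristic standing in for the two sets of prediction models; element names and thresholds are illustrative:

```python
# Hypothetical sketch of the two-stage grouping: low-level form elements
# (text runs, widgets) are grouped into second-level elements, which are in
# turn grouped into high-level elements. Simple geometry stands in for the
# prediction models described in the abstract.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    text: str
    y: float                      # vertical position on the page
    children: List["Element"] = field(default_factory=list)

def group_by_gap(elements: List[Element], max_gap: float) -> List[Element]:
    """Group vertically adjacent elements; each group becomes a parent element."""
    groups: List[Element] = []
    for el in sorted(elements, key=lambda e: e.y):
        if groups and el.y - groups[-1].children[-1].y <= max_gap:
            groups[-1].children.append(el)
        else:
            groups.append(Element(text="", y=el.y, children=[el]))
    return groups

low_level = [Element("Name", 10), Element("____", 12), Element("Address", 40), Element("____", 42)]
second_level = group_by_gap(low_level, max_gap=5)    # e.g. field label + widget
high_level = group_by_gap(second_level, max_gap=35)  # e.g. a form section
print(len(second_level), len(high_level))            # 2 1
```

Linking each high-level element back to its children, as in the final sentence of the abstract, is what allows the form to reflow.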
  • Patent number: 11188579
    Abstract: Systems and methods are described for serving personalized content using content tagging and transfer learning. The method may include identifying content elements in an experience pool, where each of the content elements is associated with one or more attribute tags, identifying a user profile comprising characteristics of a user, generating a set of user-tag affinity vectors based on the user profile and the corresponding attribute tags using a content personalization engine, generating a user-content affinity score based on the set of user-tag affinity vectors, selecting a content element from the content elements based on the corresponding user-content affinity score, and delivering the selected content element to the user.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: November 30, 2021
    Assignee: ADOBE INC.
    Inventors: Dheeraj Bansal, Sukriti Verma, Pratiksha Agarwal, Piyush Gupta, Nikaash Puri, Vishal Wani, Balaji Krishnamurthy
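A minimal sketch of the affinity scoring in patent 11188579, assuming per-tag affinities are aggregated into a user-content score and the highest-scoring element is delivered; the tags, values, and pool are illustrative:

```python
# Hypothetical sketch of tag-based personalization: build a user-tag affinity
# vector, score each tagged content element against it, and deliver the
# highest-scoring element.
import numpy as np

tags = ["sports", "travel", "finance", "music"]
# One affinity value per tag for this user (e.g. derived from the profile).
user_tag_affinity = np.array([0.9, 0.2, 0.05, 0.6])

# Each content element in the experience pool carries one or more attribute tags.
experience_pool = {
    "banner_a": ["sports", "music"],
    "banner_b": ["finance"],
    "banner_c": ["travel", "music"],
}

def user_content_score(element_tags):
    idx = [tags.index(t) for t in element_tags]
    return user_tag_affinity[idx].mean()   # aggregate tag affinities

scores = {name: user_content_score(ts) for name, ts in experience_pool.items()}
best = max(scores, key=scores.get)
print(scores, "-> deliver:", best)         # banner_a scores highest here
```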
  • Publication number: 20210349915
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and render a varied-scale-topological construct for a multidimensional dataset to visually represent portions of the multidimensional dataset at different topological scales. In certain implementations, for example, the disclosed systems generate and combine (i) an initial topological construct for a multidimensional dataset at one scale and (ii) a local topological construct for a subset of the multidimensional dataset at another scale to form a varied-scale-topological construct. To identify a region from an initial topological construct to vary in scale, the disclosed systems can determine the relative densities of subsets of multidimensional data corresponding to regions of the initial topological construct and select one or more such regions to change in scale.
    Type: Application
    Filed: July 22, 2021
    Publication date: November 11, 2021
    Inventors: Akash Rupela, Piyush Gupta, Nupur Kumari, Bishal Deb, Balaji Krishnamurthy, Ankita Sarkar
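A minimal sketch of the scale-selection idea in 20210349915, assuming relative density decides which region of a coarse construct is re-examined at a finer scale; KMeans stands in for the topological construction and all parameters are illustrative:

```python
# Hypothetical sketch: regions whose underlying data subset is relatively dense
# are re-clustered with a finer scale parameter.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.2, (300, 2)),    # a dense blob
                  rng.normal(5, 1.5, (300, 2))])   # a sparse blob

coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

def relative_density(points):
    # crude density estimate: points per unit of spread around the centroid
    spread = np.linalg.norm(points - points.mean(axis=0), axis=1).mean()
    return len(points) / spread

densities = [relative_density(data[coarse.labels_ == k]) for k in range(2)]
target = int(np.argmax(densities))                 # densest region varies in scale
fine = KMeans(n_clusters=4, n_init=10, random_state=0).fit(data[coarse.labels_ == target])
print("refined region:", target, "fine clusters:", np.bincount(fine.labels_))
```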
  • Publication number: 20210342701
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Application
    Filed: May 4, 2020
    Publication date: November 4, 2021
    Inventors: Kumar AYUSH, Ayush CHOPRA, Patel Utkarsh GOVIND, Balaji KRISHNAMURTHY, Anirudh SINGHAL
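A minimal sketch of the score combination in 20210342701, assuming the type/context-conditioned score and the style-conditioned score are blended with a fixed weight; all numbers are illustrative:

```python
# Hypothetical sketch of the unified score: two compatibility scores per
# candidate are combined, and the candidate with the highest unified score
# fills in the blank in the partial outfit.
candidates = {
    "loafers":  {"type_context": 0.82, "style": 0.60},
    "sneakers": {"type_context": 0.75, "style": 0.90},
    "sandals":  {"type_context": 0.40, "style": 0.55},
}

def unified_score(s, alpha=0.5):
    return alpha * s["type_context"] + (1 - alpha) * s["style"]

best = max(candidates, key=lambda name: unified_score(candidates[name]))
print({n: round(unified_score(s), 3) for n, s in candidates.items()}, "->", best)
```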
  • Publication number: 20210327108
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate interactive visual shape representations of digital datasets. For example, the disclosed systems can generate an augmented nearest neighbor network graph from a sampled subset of digital data points using a nearest neighbor model and witness complex model. The disclosed systems can further generate a landmark network graph based on the augmented nearest neighbor network graph utilizing a plurality of random walks. The disclosed systems can also generate a loop-augmented spanning network graph based on a partition of the landmark network graph by adding community edges between communities of landmark groups based on modularity and to complete community loops. Based on the loop-augmented spanning network graph, the disclosed systems can generate an interactive visual shape representation for display on a client device.
    Type: Application
    Filed: April 16, 2020
    Publication date: October 21, 2021
    Inventors: Nupur Kumari, Piyush Gupta, Akash Rupela, Siddarth R, Balaji Krishnamurthy
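A minimal sketch of two of the steps suggested by 20210327108: a nearest neighbor graph over the sampled points, and random walks that promote frequently visited points to landmarks. The witness-complex augmentation and community-loop steps are omitted, and all parameters are illustrative:

```python
# Hypothetical sketch: build a nearest neighbor structure, then use random
# walks to pick frequently visited points as landmarks.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
points = rng.normal(size=(200, 5))

nn = NearestNeighbors(n_neighbors=6).fit(points)
_, neigh = nn.kneighbors(points)          # neigh[i] = i plus its 5 neighbours

visits = np.zeros(len(points), dtype=int)
for start in range(len(points)):
    node = start
    for _ in range(20):                   # a short random walk from each point
        node = rng.choice(neigh[node][1:])  # step to a random neighbour
        visits[node] += 1

landmarks = np.argsort(visits)[-10:]      # most-visited points become landmarks
print("landmark indices:", landmarks)
```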
  • Publication number: 20210319473
    Abstract: Machine-learning based multi-step engagement strategy modification is described. Rather than rely heavily on human involvement to manage content delivery over the course of a campaign, the described learning-based engagement system modifies a multi-step engagement strategy, originally created by an engagement-system user, by leveraging machine-learning models. In particular, these leveraged machine-learning models are trained using data describing user interactions with delivered content as those interactions occur over the course of the campaign. Initially, the learning-based engagement system obtains a multi-step engagement strategy created by an engagement-system user. As the multi-step engagement strategy is deployed, the learning-based engagement system randomly adjusts aspects of the sequence of deliveries for some users.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 14, 2021
    Applicant: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nupur Kumari, Nikaash Puri, Mayank Singh, Eshita Shah, Balaji Krishnamurthy, Akash Rupela
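A minimal sketch of the exploration behaviour described in 20210319473, assuming a small fraction of users receive a sequence with one randomly adjusted delivery; the channel names and exploration rate are illustrative:

```python
# Hypothetical sketch of the exploration step: for a small fraction of users,
# one step of the user-defined delivery sequence is randomly adjusted, and the
# resulting interactions can later be used to retrain the models.
import random

base_strategy = ["welcome_email", "push_notification", "discount_email", "reminder_sms"]

def maybe_adjust(strategy, alternatives, explore_rate=0.1, rng=random):
    """Return the strategy to serve to one user, occasionally perturbed."""
    if rng.random() >= explore_rate:
        return list(strategy)                       # most users: original sequence
    adjusted = list(strategy)
    step = rng.randrange(len(adjusted))
    adjusted[step] = rng.choice(alternatives)       # randomly adjust one delivery
    return adjusted

random.seed(7)
for user in range(5):
    print(user, maybe_adjust(base_strategy, ["video_ad", "survey_email"]))
```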
  • Patent number: 11109084
    Abstract: Machine-learning based multi-step engagement strategy generation and visualization is described. Rather than rely heavily on human involvement to create delivery strategies, the described learning-based engagement system generates multi-step engagement strategies by leveraging machine-learning models trained using data describing historical user interactions with content delivered in connection with historical campaigns. Initially, the learning-based engagement system obtains data describing an entry condition and an exit condition for a campaign. Based on the entry and exit conditions, the learning-based engagement system utilizes the machine-learning models to generate a multi-step engagement strategy, which describes a sequence of content deliveries that are to be served to a particular client device user (or segment of client device users).
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: August 31, 2021
    Assignee: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nikaash Puri, Eshita Shah, Balaji Krishnamurthy, Nupur Kumari, Mayank Singh, Akash Rupela
  • Patent number: 11107115
    Abstract: Machine-learning based multi-step engagement strategy modification is described. Rather than rely heavily on human involvement to manage content delivery over the course of a campaign, the described learning-based engagement system modifies a multi-step engagement strategy, originally created by an engagement-system user, by leveraging machine-learning models. In particular, these leveraged machine-learning models are trained using data describing user interactions with delivered content as those interactions occur over the course of the campaign. Initially, the learning-based engagement system obtains a multi-step engagement strategy created by an engagement-system user. As the multi-step engagement strategy is deployed, the learning-based engagement system randomly adjusts aspects of the sequence of deliveries for some users.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: August 31, 2021
    Assignee: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nupur Kumari, Nikaash Puri, Mayank Singh, Eshita Shah, Balaji Krishnamurthy, Akash Rupela
  • Patent number: 11100127
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and render a varied-scale-topological construct for a multidimensional dataset to visually represent portions of the multidimensional dataset at different topological scales. In certain implementations, for example, the disclosed systems generate and combine (i) an initial topological construct for a multidimensional dataset at one scale and (ii) a local topological construct for a subset of the multidimensional dataset at another scale to form a varied-scale-topological construct. To identify a region from an initial topological construct to vary in scale, the disclosed systems can determine the relative densities of subsets of multidimensional data corresponding to regions of the initial topological construct and select one or more such regions to change in scale.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: August 24, 2021
    Assignee: Adobe Inc.
    Inventors: Akash Rupela, Piyush Gupta, Nupur Kumari, Bishal Deb, Balaji Krishnamurthy, Ankita Sarkar
  • Publication number: 20210256387
    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
    Type: Application
    Filed: February 18, 2020
    Publication date: August 19, 2021
    Applicant: Adobe Inc.
    Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
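A minimal sketch of the retrospective constraint in 20210256387, assuming a formulation that rewards the current prediction for being closer to the ground truth than to an earlier snapshot of the model's own prediction; the exact loss and scaling in the filing may differ:

```python
# Hypothetical sketch of a retrospective-style constraint added after warm-up.
import torch
import torch.nn.functional as F

def retrospective_loss(pred, past_pred, target, scale=2.0):
    pull = F.l1_loss(pred, target)                    # stay close to ground truth
    push = F.l1_loss(pred, past_pred.detach())        # move away from the old prediction
    return torch.clamp(scale * pull - push, min=0.0)

# Toy usage: warm-up with task loss only, then add the retrospective term.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x, y = torch.randn(64, 4), torch.randn(64, 1)
past_pred = model(x).detach()                         # snapshot of earlier predictions

for step in range(100):
    pred = model(x)
    loss = F.mse_loss(pred, y)                        # task-specific loss
    if step >= 20:                                    # after warm-up iterations
        loss = loss + retrospective_loss(pred, past_pred, y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 25 == 0:
        past_pred = model(x).detach()                 # refresh the retrospective snapshot
print("final task loss:", F.mse_loss(model(x), y).item())
```

The retrospective term only activates after the warm-up iterations, mirroring the two-phase training the abstract describes.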
  • Patent number: 11080817
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
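A minimal sketch of the patch sampling behind the multi-scale patch adversarial loss in patent 11080817: patches of several sizes are taken from corresponding locations of the warped and non-warped clothing images. The discriminator that scores the patch pairs is omitted, and sizes are illustrative:

```python
# Hypothetical sketch of multi-scale patch sampling for the adversarial loss.
import torch

def corresponding_patches(warped, real, sizes=(16, 32, 64)):
    """warped, real: (C, H, W) tensors of the same shape."""
    _, h, w = warped.shape
    pairs = []
    for s in sizes:
        top = torch.randint(0, h - s + 1, (1,)).item()
        left = torch.randint(0, w - s + 1, (1,)).item()
        pairs.append((warped[:, top:top + s, left:left + s],
                      real[:, top:top + s, left:left + s]))
    return pairs

warped_img, real_img = torch.rand(3, 128, 128), torch.rand(3, 128, 128)
for fake_patch, real_patch in corresponding_patches(warped_img, real_img):
    print(fake_patch.shape, real_patch.shape)
```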
  • Patent number: 11073965
    Abstract: In some embodiments, a configuration management application accesses configuration data for a multi-target website. The configuration management application provides the user interface including a timeline area and a page display area. The timeline area is configured to display timeline entries corresponding to configurations of the multi-target website. Based on a selection of a timeline entry, the page display area is configured to display a webpage configuration corresponding to the selected timeline entry. In addition, the page display area is configured to display graphical annotations indicating interaction metrics for the configured page regions. In some cases, the timeline entries, configurations, and interaction metrics are determined based on a selection of a target segment for the multi-target website.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: July 27, 2021
    Assignee: ADOBE INC.
    Inventors: Harpreet Singh, Balaji Krishnamurthy, Akash Rupela
  • Patent number: 11030782
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Grant
    Filed: November 9, 2019
    Date of Patent: June 8, 2021
    Assignee: ADOBE INC.
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
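A minimal sketch of the final compositing stage suggested by patent 11030782, assuming the corrected segmentation mask selects which pixels of the model image are replaced by the warped product image; the coarse-to-fine warping and texture-transfer networks themselves are omitted:

```python
# Hypothetical sketch of mask-based compositing for the virtual try-on output.
import torch

def compose_try_on(model_img, warped_product, mask):
    """model_img, warped_product: (3, H, W); mask: (1, H, W) in [0, 1]."""
    return mask * warped_product + (1.0 - mask) * model_img

model_img = torch.rand(3, 256, 192)
warped_product = torch.rand(3, 256, 192)
mask = (torch.rand(1, 256, 192) > 0.5).float()   # stand-in for the predicted mask
print(compose_try_on(model_img, warped_product, mask).shape)
```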
  • Patent number: 11017016
    Abstract: A method for clustering product media files is provided. The method includes dividing each media file corresponding to one or more products into a plurality of tiles. Each media file includes either an image or a video. Feature vectors are computed for each tile of each media file. One or more patch clusters are generated using the plurality of tiles. Each patch cluster includes tiles having feature vectors similar to each other. The feature vectors of each media file are compared with feature vectors of each patch cluster. Based on the comparison, product groups are then generated. All media files with similar comparison outputs are grouped into one product group. Each product group includes one or more media files for one product. Apparatus for substantially performing the method as described herein is also provided.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: May 25, 2021
    Assignee: ADOBE INC.
    Inventors: Vikas Yadav, Balaji Krishnamurthy, Mausoom Sarkar, Rajiv Mangla, Gitesh Malik
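A minimal sketch of the tiling-and-grouping idea in patent 11017016, with mean-color tile features standing in for learned feature vectors; image sizes, shades, and cluster counts are illustrative:

```python
# Hypothetical sketch: cut each media file into tiles, cluster per-tile
# features into "patch clusters", and group media files whose patch-cluster
# signatures are similar.
import numpy as np
from sklearn.cluster import KMeans

def tiles(img, size=16):
    h, w, _ = img.shape
    return [img[r:r+size, c:c+size].reshape(-1, 3).mean(axis=0)   # per-tile feature
            for r in range(0, h, size) for c in range(0, w, size)]

rng = np.random.default_rng(3)
# Two dark product shots of the same item plus one bright shot of another.
images = [rng.random((64, 64, 3)) * shade for shade in (0.2, 0.2, 0.9)]
all_tiles = np.vstack([tiles(im) for im in images])

patch_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(all_tiles)

def signature(img):
    # histogram of patch-cluster assignments for this media file
    return np.bincount(patch_clusters.predict(np.array(tiles(img))), minlength=3)

sigs = [signature(im) for im in images]
print("first two files grouped together:",
      np.linalg.norm(sigs[0] - sigs[1]) < np.linalg.norm(sigs[0] - sigs[2]))
```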
  • Publication number: 20210142539
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Application
    Filed: November 9, 2019
    Publication date: May 13, 2021
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
  • Patent number: 11003862
    Abstract: Classifying structural features of a digital document by feature type using machine learning is leveraged in a digital medium environment. A document analysis system is leveraged to extract structural features from digital documents, and to classify the structural features by respective feature types. To do this, the document analysis system employs a character analysis model and a classification model. The character analysis model takes text content from a digital document and generates text vectors that represent the text content. A vector sequence is generated based on the text vectors and position information for structural features of the digital document, and the classification model processes the vector sequence to classify the structural features into different feature types. The document analysis system can generate a modifiable version of the digital document that enables its structural features to be modified based on their respective feature types.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 11, 2021
    Assignee: Adobe Inc.
    Inventors: Milan Aggarwal, Balaji Krishnamurthy
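A minimal sketch of the pipeline shape in patent 11003862: a stand-in character-level text vector is concatenated with position information to form a vector sequence, which a sequence model classifies per element. The models, feature dimensions, and feature types here are illustrative, not the filing's:

```python
# Hypothetical sketch of text-vector + position classification of form structure.
import torch
import torch.nn as nn

FEATURE_TYPES = ["heading", "paragraph", "list_item"]   # illustrative types

def text_vector(text, dim=8):
    v = torch.zeros(dim)
    for i, ch in enumerate(text.lower()):
        v[i % dim] += ord(ch) / 1000.0                   # crude character features
    return v / max(len(text), 1)

elements = [("Annual Report", (0.1, 0.05, 0.8, 0.04)),   # (text, normalized box)
            ("This year revenue grew steadily.", (0.1, 0.12, 0.8, 0.20)),
            ("1. Overview", (0.15, 0.35, 0.5, 0.03))]

# Vector sequence: text vector concatenated with position for each element.
seq = torch.stack([torch.cat([text_vector(t), torch.tensor(box)]) for t, box in elements])

classifier = nn.LSTM(input_size=12, hidden_size=16, batch_first=True)
head = nn.Linear(16, len(FEATURE_TYPES))
hidden, _ = classifier(seq.unsqueeze(0))                 # (1, seq_len, 16)
logits = head(hidden)                                    # untrained, shapes only
print(logits.shape)                                      # torch.Size([1, 3, 3])
```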
  • Publication number: 20210133919
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 6, 2021
    Applicant: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
  • Publication number: 20210124993
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a classification neural network to classify digital images in few-shot tasks based on self-supervision and manifold mixup. For example, the disclosed systems can train a feature extractor as part of a base neural network utilizing self-supervision and manifold mixup. Indeed, the disclosed systems can apply manifold mixup regularization over a feature manifold learned via self-supervised training such as rotation training or exemplar training. Based on training the feature extractor, the disclosed systems can also train a classifier to classify digital images into novel classes not present within the base classes used to train the feature extractor.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 29, 2021
    Inventors: Mayank Singh, Puneet Mangla, Nupur Kumari, Balaji Krishnamurthy, Abhishek Sinha
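A minimal sketch of manifold mixup as referenced in 20210124993, assuming hidden features of paired examples are mixed with a Beta-distributed coefficient and the loss mixes the corresponding labels with the same coefficient; the self-supervised rotation/exemplar objectives and the few-shot classifier stage are omitted:

```python
# Hypothetical sketch of manifold mixup applied at a hidden layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # toy feature extractor
head = nn.Linear(64, 5)                                 # classifier over base classes

x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))
lam = torch.distributions.Beta(2.0, 2.0).sample().item()
perm = torch.randperm(x.size(0))

h = encoder(x)                                          # hidden "manifold" features
h_mix = lam * h + (1 - lam) * h[perm]                   # mix in feature space
logits = head(h_mix)
loss = lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
loss.backward()
print(round(loss.item(), 4))
```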