Patents by Inventor Balaji Krishnamurthy

Balaji Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200159371
    Abstract: In some embodiments, a configuration management application accesses configuration data for a multi-target website. The configuration management application provides the user interface including a timeline area and a page display area. The timeline area is configured to display timeline entries corresponding to configurations of the multi-target website. Based on a selection of a timeline entry, the page display area is configured to display a webpage configuration corresponding to the selected timeline entry. In addition, the page display area is configured to display graphical annotations indicating interaction metrics for the configured page regions. In some cases, the timeline entries, configurations, and interaction metrics are determined based on a selection of a target segment for the multi-target website.
    Type: Application
    Filed: November 16, 2018
    Publication date: May 21, 2020
    Inventors: Harpreet Singh, Balaji Krishnamurthy, Akash Rupela
  • Patent number: 10645467
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to navigation of a digital video. In one embodiment, a method may begin by partitioning a digital video into a number of sub-stories based at least in part on transition points identified within the digital video. The plurality of sub-stories can then be grouped into video segments based on the content of each sub-story. These video segments can then be packaged into a navigation panel in accordance with a selected template that defines a layout for the navigation panel. Such a navigation panel can present the video segments to a viewer in an interactive graphical manner that enables the viewer to navigate the one or more video segments. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: November 5, 2015
    Date of Patent: May 5, 2020
    Assignee: Adobe Inc.
    Inventors: Balaji Krishnamurthy, Sunandini Basu, Nutan Sawant
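The partition-then-group flow in this abstract can be sketched as follows. The transition timestamps, duration, and content labels are illustrative stand-ins; the patent derives transition points and content groupings from video analysis.

```python
# Split a video timeline at transition points into sub-stories, then group
# sub-stories into segments by a content label. Timestamps are in seconds.

def partition(duration, transitions):
    """Return (start, end) spans between consecutive transition points."""
    bounds = [0.0] + sorted(transitions) + [duration]
    return list(zip(bounds, bounds[1:]))

def group_by_content(substories, labels):
    """Group sub-story spans that share the same content label."""
    segments = {}
    for span, label in zip(substories, labels):
        segments.setdefault(label, []).append(span)
    return segments

subs = partition(60.0, [12.0, 30.0, 47.0])               # four sub-stories
segments = group_by_content(subs, ["intro", "demo", "demo", "outro"])
```

A navigation panel would then lay out one entry per key of `segments` according to the selected template.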
  • Publication number: 20200126100
    Abstract: Techniques are described for machine learning-based generation of target segments in a digital medium environment. A segment targeting system generates training data to train a machine learning model to predict strength of correlation between a set of users and a defined demographic. Further, a machine learning model is trained with visit statistics for the users to predict the likelihood that the users will visit a particular digital content platform. Those users with the highest predicted correlation with the defined demographic and the highest likelihood to visit the digital content platform can be selected and placed within a target segment, and digital content targeted to the defined demographic can be delivered to users in the target segment.
    Type: Application
    Filed: October 23, 2018
    Publication date: April 23, 2020
    Applicant: Adobe Inc.
    Inventors: Praveen Kumar Goyal, Piyush Gupta, Nikaash Puri, Balaji Krishnamurthy, Arun Kumar, Atul Kumar Shrivastava
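The final selection step of this abstract can be sketched as below. The scores would come from the two trained models described above; the values, the product-based ranking, and the segment size here are assumptions for illustration.

```python
# Given per-user predicted scores from two models (correlation with the target
# demographic, and likelihood to visit the platform), keep the users that rank
# highest on both by sorting on the product of the two scores.

def build_target_segment(users, corr_scores, visit_scores, top_k=2):
    """Rank users by combined score and keep the top k for the segment."""
    ranked = sorted(users, key=lambda u: corr_scores[u] * visit_scores[u],
                    reverse=True)
    return ranked[:top_k]

users = ["u1", "u2", "u3", "u4"]
corr = {"u1": 0.9, "u2": 0.2, "u3": 0.8, "u4": 0.5}     # demographic correlation
visit = {"u1": 0.7, "u2": 0.9, "u3": 0.6, "u4": 0.4}    # visit likelihood
segment = build_target_segment(users, corr, visit)
```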
  • Patent number: 10609434
    Abstract: Machine-learning based multi-step engagement strategy generation and visualization is described. Rather than rely heavily on human involvement to create delivery strategies, the described learning-based engagement system generates multi-step engagement strategies by leveraging machine-learning models trained using data describing historical user interactions with content delivered in connection with historical campaigns. Initially, the learning-based engagement system obtains data describing an entry condition and an exit condition for a campaign. Based on the entry and exit condition, the learning-based engagement system utilizes the machine-learning models to generate a multi-step engagement strategy, which describes a sequence of content deliveries that are to be served to a particular client device user (or segment of client device users).
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nikaash Puri, Eshita Shah, Balaji Krishnamurthy, Nupur Kumari, Mayank Singh, Akash Rupela
  • Publication number: 20200092593
    Abstract: Machine-learning based multi-step engagement strategy generation and visualization is described. Rather than rely heavily on human involvement to create delivery strategies, the described learning-based engagement system generates multi-step engagement strategies by leveraging machine-learning models trained using data describing historical user interactions with content delivered in connection with historical campaigns. Initially, the learning-based engagement system obtains data describing an entry condition and an exit condition for a campaign. Based on the entry and exit condition, the learning-based engagement system utilizes the machine-learning models to generate a multi-step engagement strategy, which describes a sequence of content deliveries that are to be served to a particular client device user (or segment of client device users).
    Type: Application
    Filed: November 25, 2019
    Publication date: March 19, 2020
    Applicant: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nikaash Puri, Eshita Shah, Balaji Krishnamurthy, Nupur Kumari, Mayank Singh, Akash Rupela
  • Publication number: 20200051118
    Abstract: Machine-learning based multi-step engagement strategy modification is described. Rather than rely heavily on human involvement to manage content delivery over the course of a campaign, the described learning-based engagement system modifies a multi-step engagement strategy, originally created by an engagement-system user, by leveraging machine-learning models. In particular, these leveraged machine-learning models are trained using data describing user interactions with delivered content as those interactions occur over the course of the campaign. Initially, the learning-based engagement system obtains a multi-step engagement strategy created by an engagement-system user. As the multi-step engagement strategy is deployed, the learning-based engagement system randomly adjusts aspects of the sequence of deliveries for some users.
    Type: Application
    Filed: August 7, 2018
    Publication date: February 13, 2020
    Applicant: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nupur Kumari, Nikaash Puri, Mayank Singh, Eshita Shah, Balaji Krishnamurthy, Akash Rupela
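The "randomly adjusts aspects of the sequence of deliveries for some users" step resembles an exploration policy, sketched below. The channel names and the 10% exploration rate are assumptions, not values from the patent.

```python
import random

# For most users the planned delivery sequence is served unchanged; for a small
# random fraction, one step is swapped for an alternative so the learning
# models observe outcomes of varied sequences.

def maybe_adjust(sequence, alternatives, explore_rate=0.1, rng=None):
    rng = rng or random.Random()
    if rng.random() >= explore_rate:
        return list(sequence)              # serve the original strategy
    adjusted = list(sequence)
    i = rng.randrange(len(adjusted))       # pick one delivery step to vary
    adjusted[i] = rng.choice(alternatives)
    return adjusted

plan = ["email", "push", "email"]
served = maybe_adjust(plan, ["sms", "in_app"])
```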
  • Publication number: 20200053403
    Abstract: Machine-learning based multi-step engagement strategy generation and visualization is described. Rather than rely heavily on human involvement to create delivery strategies, the described learning-based engagement system generates multi-step engagement strategies by leveraging machine-learning models trained using data describing historical user interactions with content delivered in connection with historical campaigns. Initially, the learning-based engagement system obtains data describing an entry condition and an exit condition for a campaign. Based on the entry and exit condition, the learning-based engagement system utilizes the machine-learning models to generate a multi-step engagement strategy, which describes a sequence of content deliveries that are to be served to a particular client device user (or segment of client device users).
    Type: Application
    Filed: August 7, 2018
    Publication date: February 13, 2020
    Applicant: Adobe Inc.
    Inventors: Pankhri Singhai, Sundeep Parsa, Piyush Gupta, Nikaash Puri, Eshita Shah, Balaji Krishnamurthy, Nupur Kumari, Mayank Singh, Akash Rupela
  • Patent number: 10515400
    Abstract: Learning vector-space representations of items for recommendations using word embedding models is described. In one or more embodiments, a word embedding model is used to produce item vector representations of items based on considering items interacted with as words and items interacted with during sessions as sentences. The item vectors are used to produce item recommendations similar to currently or recently viewed items.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Balaji Krishnamurthy, Raghavender Goel, Nikaash Puri
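A minimal stand-in for the "items as words, sessions as sentences" idea: instead of a trained word-embedding model, the item vectors below are simple session co-occurrence counts, and recommendations are the items most similar by cosine distance. A real system would train an actual word-embedding model (e.g. skip-gram) on the session "sentences".

```python
import math
from collections import defaultdict

def item_vectors(sessions, items):
    """Each item's vector counts which other items share a session with it."""
    vecs = {i: defaultdict(float) for i in items}
    for session in sessions:                 # each session acts as a "sentence"
        for a in session:
            for b in session:
                if a != b:
                    vecs[a][b] += 1.0
    return vecs

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(item, vecs, n=1):
    """Items whose vectors are closest to the given item's vector."""
    scored = [(cosine(vecs[item], vecs[o]), o) for o in vecs if o != item]
    return [o for _, o in sorted(scored, reverse=True)[:n]]

sessions = [["shoes", "socks"], ["shoes", "socks", "hat"], ["hat", "scarf"]]
vecs = item_vectors(sessions, {"shoes", "socks", "hat", "scarf"})
```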
  • Patent number: 10423828
    Abstract: Techniques for determining reading order in a document. A current labeled text run (R1), RIGHT text run (R2) and DOWN text run (R3) are generated. The R1 labeled text run is processed by a first LSTM, the R2 labeled text run is processed by a second LSTM, and the R3 labeled text run is processed by a third LSTM, wherein each of the LSTMs generates a respective internal representation (R1′, R2′ and R3′). Deep learning tools other than LSTMs can be used, as will be appreciated. The respective internal representations R1′, R2′ and R3′ are concatenated or otherwise combined into a vector or tensor representation and provided to a classifier network that generates a predicted label for a next text run as RIGHT, DOWN or EOS in the reading order of the document.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: September 24, 2019
    Assignee: Adobe Inc.
    Inventors: Shagun Sodhani, Kartikay Garg, Balaji Krishnamurthy
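The classification step at the end of this abstract can be sketched as follows. The three LSTM encoders are replaced by example vectors, and the weight matrix is illustrative rather than trained; only the concatenate-then-classify structure reflects the abstract.

```python
import numpy as np

# The three internal representations (R1', R2', R3') are concatenated into one
# vector and passed through a classifier head that scores RIGHT, DOWN, and EOS.

LABELS = ["RIGHT", "DOWN", "EOS"]

def predict_next(r1, r2, r3, weights, bias):
    x = np.concatenate([r1, r2, r3])          # combined representation
    logits = weights @ x + bias               # linear classifier head
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over the three labels
    return LABELS[int(np.argmax(probs))]

rng = np.random.default_rng(0)
r1, r2, r3 = (rng.standard_normal(4) for _ in range(3))
W = rng.standard_normal((3, 12))              # untrained, for shape only
label = predict_next(r1, r2, r3, W, np.zeros(3))
```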
  • Publication number: 20190286691
    Abstract: Caption association techniques as part of digital content creation by a computing device are described. The computing device is configured to extract text features and bounding boxes from an input document. These text features and bounding boxes are processed to reduce a number of possible search spaces. The processing may involve generating and utilizing a language model that captures the semantic meaning of the text features to identify and filter static text, and may involve identifying and filtering inline captions. A number of bounding boxes are identified for a potential caption. The potential caption and corresponding identified bounding boxes are concatenated into a vector. The concatenated vector is used to identify relationships among the bounding boxes to determine a single bounding box associated with the caption. The determined association is utilized to generate an output digital document that includes a structured association between the caption and a data entry field.
    Type: Application
    Filed: March 19, 2018
    Publication date: September 19, 2019
    Applicant: Adobe Inc.
    Inventors: Shagun Sodhani, Kartikay Garg, Balaji Krishnamurthy
  • Publication number: 20190286978
    Abstract: Systems and techniques map an input field from a data schema to a hierarchical standard data model (XDM). The XDM includes a tree of single XDM fields, and each of the single XDM fields includes a composition of single level XDM fields. An input field from a data schema is processed by an unsupervised learning algorithm to obtain a sequence of vectors representing the input field and a sequence of vectors representing the single level XDM fields. These vectors are processed by a neural network to obtain a similarity score between the input field and each of the single level XDM fields. A probability of a match between the input field and each of the single level XDM fields is determined using the similarity score. The input field is mapped to the XDM field whose match probability is highest.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 19, 2019
    Inventors: Milan Aggarwal, Balaji Krishnamurthy, Shagun Sodhani
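The matching step can be sketched as below: score the input field against each single level XDM field, convert scores to match probabilities, and map to the highest. The field names and vectors are illustrative, and plain cosine similarity with a softmax stands in for the patent's neural scoring network.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def map_field(input_vec, xdm_fields):
    """Pick the XDM field with the highest match probability for the input."""
    scores = {name: cosine(input_vec, vec) for name, vec in xdm_fields.items()}
    total = sum(math.exp(s) for s in scores.values())
    probs = {name: math.exp(s) / total for name, s in scores.items()}  # softmax
    return max(probs, key=probs.get)

xdm = {"person.firstName": [1.0, 0.1], "person.lastName": [0.1, 1.0]}
best = map_field([0.9, 0.2], xdm)
```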
  • Publication number: 20190287139
    Abstract: Embodiments of the present invention provide systems and methods for automatically generating a shoppable video. A video is parsed into one or more scenes. Products and their corresponding product information are automatically associated with the one or more scenes. The shoppable video is then generated using the associated products and corresponding product information such that the products are visible in the shoppable video based on a scene in which the products are found.
    Type: Application
    Filed: June 3, 2019
    Publication date: September 19, 2019
    Inventors: Vikas Yadav, Balaji Krishnamurthy, Mausoom Sarkar, Rajiv Mangla, Gitesh Malik
  • Patent number: 10354290
    Abstract: Embodiments of the present invention provide systems and methods for automatically generating a shoppable video. A video is parsed into one or more scenes. Products and their corresponding product information are automatically associated with the one or more scenes. The shoppable video is then generated using the associated products and corresponding product information such that the products are visible in the shoppable video based on a scene in which the products are found.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: July 16, 2019
    Assignee: Adobe Inc.
    Inventors: Vikas Yadav, Balaji Krishnamurthy, Mausoom Sarkar, Rajiv Mangla, Gitesh Malik
  • Publication number: 20190188463
    Abstract: Techniques for determining reading order in a document. A current labeled text run (R1), RIGHT text run (R2) and DOWN text run (R3) are generated. The R1 labeled text run is processed by a first LSTM, the R2 labeled text run is processed by a second LSTM, and the R3 labeled text run is processed by a third LSTM, wherein each of the LSTMs generates a respective internal representation (R1′, R2′ and R3′). Deep learning tools other than LSTMs can be used, as will be appreciated. The respective internal representations R1′, R2′ and R3′ are concatenated or otherwise combined into a vector or tensor representation and provided to a classifier network that generates a predicted label for a next text run as RIGHT, DOWN or EOS in the reading order of the document.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 20, 2019
    Applicant: Adobe Inc.
    Inventors: Shagun Sodhani, Kartikay Garg, Balaji Krishnamurthy
  • Publication number: 20190156216
    Abstract: A technique is disclosed for generating class level rules that globally explain the behavior of a machine learning model, such as a model that has been used to solve a classification problem. Each class level rule represents a logical conditional statement that, when the statement holds true for one or more instances of a particular class, predicts that the respective instances are members of the particular class. Collectively, these rules represent the pattern followed by the machine learning model. The techniques are model agnostic, and explain model behavior in a relatively easy to understand manner by outputting a set of logical rules that can be readily parsed. Although the techniques can be applied to any number of applications, in some embodiments, the techniques are suitable for interpreting models that perform the task of classification. Other machine learning model applications can equally benefit.
    Type: Application
    Filed: November 17, 2017
    Publication date: May 23, 2019
    Applicant: Adobe Inc.
    Inventors: Piyush Gupta, Nikaash Puri, Balaji Krishnamurthy
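What a "class level rule" means here can be made concrete with a small sketch: a rule is a conjunction of feature conditions that, when it holds, predicts a class, and its quality can be checked against the model's own predictions. The toy data, features, and rule below are illustrative.

```python
# A rule is a list of (feature, op, value) conditions; the rule covers an
# instance only when every condition holds.

def rule_holds(instance, rule):
    ops = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}
    return all(ops[op](instance[f], v) for f, op, v in rule)

def rule_precision(rule, predicted_class, instances, model_predictions):
    """Of the instances the rule covers, the fraction the model put in the class."""
    covered = [i for i, x in enumerate(instances) if rule_holds(x, rule)]
    if not covered:
        return 0.0
    hits = sum(1 for i in covered if model_predictions[i] == predicted_class)
    return hits / len(covered)

data = [{"age": 70, "income": 20}, {"age": 30, "income": 80},
        {"age": 65, "income": 40}]
preds = ["senior", "adult", "senior"]          # stand-in for model output
rule = [("age", ">=", 60)]                     # "if age >= 60 then senior"
precision = rule_precision(rule, "senior", data, preds)
```

A set of high-precision rules like this, read together, is the "pattern followed by the machine learning model" that the abstract describes.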
  • Publication number: 20190147369
    Abstract: Rule determination for black-box machine-learning models (BBMLMs) is described. These rules are determined by an interpretation system to describe operation of a BBMLM to associate inputs to the BBMLM with observed outputs of the BBMLM and without knowledge of the logic used in operation by the BBMLM to make these associations. To determine these rules, the interpretation system initially generates a proxy black-box model to imitate the behavior of the BBMLM based solely on data indicative of the inputs and observed outputs—since the logic actually used is not available to the system. The interpretation system generates rules describing the operation of the BBMLM by combining conditions—identified based on output of the proxy black-box model—using a genetic algorithm. These rules are output as if-then statements configured with an if-portion formed as a list of the conditions and a then-portion having an indication of the associated observed output.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Applicant: Adobe Inc.
    Inventors: Piyush Gupta, Sukriti Verma, Pratiksha Agarwal, Nikaash Puri, Balaji Krishnamurthy
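A highly simplified sketch of the rule search: candidate rules are subsets of atomic conditions, combined and refined by a genetic-style loop whose fitness is agreement with the proxy model's observed outputs. The conditions, data, and GA settings are all illustrative, and real implementations would also use crossover and richer fitness terms such as coverage.

```python
import random

def fitness(rule, instances, outputs, target):
    """Among instances the rule covers, the fraction with the target output."""
    covered = [o for x, o in zip(instances, outputs) if all(c(x) for c in rule)]
    return sum(1 for o in covered if o == target) / len(covered) if covered else 0.0

def evolve(conditions, instances, outputs, target, generations=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(conditions, rng.randint(1, len(conditions)))
           for _ in range(8)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, instances, outputs, target),
                 reverse=True)
        survivors = pop[:4]                      # selection
        children = []
        for r in survivors:                      # mutation: toggle one condition
            child = list(r)
            c = rng.choice(conditions)
            if c in child:
                child.remove(c)
            else:
                child.append(c)
            children.append(child or [rng.choice(conditions)])
        pop = survivors + children
    return max(pop, key=lambda r: fitness(r, instances, outputs, target))

is_senior = lambda x: x["age"] >= 60
low_income = lambda x: x["income"] < 50
data = [{"age": 70, "income": 20}, {"age": 30, "income": 80},
        {"age": 65, "income": 40}]
observed = ["senior", "adult", "senior"]         # proxy model's outputs
best = evolve([is_senior, low_income], data, observed, "senior")
```

The conditions in `best` form the if-portion of an if-then rule whose then-portion is the target output, matching the rule format the abstract describes.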
  • Publication number: 20190138917
    Abstract: Behavioral prediction for targeted end users is described. In one or more example embodiments, a computer-readable storage medium has multiple instructions that cause one or more processors to perform multiple operations. Targeted selectstream data is obtained from one or more indications of data object requests corresponding to a targeted end user. A targeted directed graph is constructed based on the targeted selectstream data. A targeted graph feature vector is computed based on one or more invariant features associated with the targeted directed graph. A behavioral prediction is produced for the targeted end user by applying a prediction model to the targeted graph feature vector. In one or more example embodiments, the prediction model is generated based on multiple graph feature vectors respectively corresponding to multiple end users. In one or more example embodiments, a tailored opportunity is determined responsive to the behavioral prediction and issued to the targeted end user.
    Type: Application
    Filed: January 7, 2019
    Publication date: May 9, 2019
    Applicant: Adobe Inc.
    Inventors: Balaji Krishnamurthy, Tushar Singla
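The "directed graph to feature vector" step can be sketched as below: the sequence of data object requests forms the edges of a directed graph, and simple graph invariants become the feature vector handed to the prediction model. The particular invariants chosen here are assumptions; the abstract does not enumerate a specific feature set.

```python
from collections import defaultdict

def graph_features(requests):
    """Build a directed graph from consecutive requests and compute invariants."""
    edges = set(zip(requests, requests[1:]))     # distinct transitions
    nodes = set(requests)
    out_deg = defaultdict(int)
    for a, _ in edges:
        out_deg[a] += 1
    return [
        len(nodes),                              # distinct objects requested
        len(edges),                              # distinct transitions
        sum(1 for a, b in edges if a == b),      # self-loops (repeat requests)
        max(out_deg.values(), default=0),        # highest out-degree
    ]

features = graph_features(["home", "search", "item", "search", "item", "cart"])
```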
  • Patent number: 10268883
    Abstract: A method and system for detecting and extracting accurate and precise structure in documents. A high-resolution image of a document is segmented into a set of tiles. Each tile is processed by a convolutional network and subsequently by a set of recurrent networks for each row and column. A global-lookup process is disclosed that allows "future" information required for accurate assessment by the recurrent neural networks to be considered. Utilization of the high-resolution image allows for precise and accurate feature extraction, while segmentation into tiles facilitates tractable processing of the high-resolution image within reasonable computational resource bounds.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: April 23, 2019
    Assignee: Adobe Inc.
    Inventors: Mausoom Sarkar, Balaji Krishnamurthy
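The tiling step can be sketched as follows: the high-resolution page image is split into a grid of fixed-size tiles so each tile can be processed independently by the convolutional network. The tile size and the zero-padding policy are assumptions for illustration.

```python
import numpy as np

def tile_image(image, tile=4):
    """Pad a 2-D image to a multiple of the tile size, then cut it into tiles."""
    h, w = image.shape
    ph, pw = (-h) % tile, (-w) % tile            # padding needed on each axis
    padded = np.pad(image, ((0, ph), (0, pw)))   # zero-pad bottom and right
    rows, cols = padded.shape[0] // tile, padded.shape[1] // tile
    return [padded[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            for r in range(rows) for c in range(cols)]

page = np.arange(70).reshape(7, 10)              # stand-in for a page scan
tiles = tile_image(page, tile=4)                 # 2 x 3 grid of 4x4 tiles
```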
  • Publication number: 20190114687
    Abstract: A digital medium environment is described to facilitate recommendations based on vectors generated using feature word embeddings. A recommendation system receives data that describes at least one attribute for a user profile, at least one item, and an interaction between the user profile and the at least one item. The recommendation system associates each user profile attribute, each item, and each interaction between a user profile and an item as a word, using natural language processing, and combines the words into sentences. The sentences are input to a word embedding model to determine feature vector representations describing relationships between the user profile attributes, items, and explicit and implicit interactions. From the feature vector representations, the recommendation system ascertains a similarity between different features.
    Type: Application
    Filed: October 17, 2017
    Publication date: April 18, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Balaji Krishnamurthy, Nikaash Puri
  • Publication number: 20190114673
    Abstract: Digital experience targeting techniques are disclosed which serve digital experiences that have a high probability of conversion with regard to a given user visit profile. In some examples, a method may include predicting a probability of each digital experience in a campaign being served based on a user visit profile and an indication that a user exhibiting the user visit profile is going to convert, predicting a probability of each digital experience in the campaign being served based on the user visit profile and an indication that the user exhibiting the user visit profile is not going to convert, and deriving, for the user visit profile, a probability of conversion for each digital experience in the campaign. The probability of conversion for each digital experience in the campaign for the user visit profile may be derived using a Bayesian framework.
    Type: Application
    Filed: October 18, 2017
    Publication date: April 18, 2019
    Applicant: Adobe Inc.
    Inventors: Piyush Gupta, Nikaash Puri, Balaji Krishnamurthy
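The Bayesian derivation described above can be written out directly: from the probability of an experience being served given a converting visit, the probability given a non-converting visit, and a base conversion rate, Bayes' rule gives the probability of conversion for that experience. All numbers below are illustrative.

```python
def conversion_probability(p_exp_given_convert, p_exp_given_no_convert, p_convert):
    """Bayes' rule: P(convert | experience served) for one digital experience."""
    joint_convert = p_exp_given_convert * p_convert
    joint_no_convert = p_exp_given_no_convert * (1.0 - p_convert)
    return joint_convert / (joint_convert + joint_no_convert)

# Example: experience A is served on 60% of converting visits but only 20% of
# non-converting visits, with a 10% base conversion rate for this profile.
p = conversion_probability(0.6, 0.2, 0.1)   # 0.06 / (0.06 + 0.18) = 0.25
```

The experience with the highest derived probability would be the one served for that visit profile.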