Patents by Inventor Abhilasha Sancheti

Abhilasha Sancheti has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948095
    Abstract: A method for recommending digital content includes: determining user preferences and a time horizon of a given user; determining a group for the given user based on the determined user preferences; determining a number of users of the determined group and a similarity of the users; applying information including the number of users, the similarity, and the time horizon to a model selection classifier to select one of a personalized model of the user and a group model of the determined group; and running the selected model to determine digital content to recommend.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: April 2, 2024
    Assignee: Adobe Inc.
    Inventors: Abhilasha Sancheti, Zheng Wen, Iftikhar Ahamath Burhanuddin
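    Illustrative sketch: a minimal, hypothetical Python example of the model-selection step this abstract describes, where a classifier uses group size, group similarity, and the user's time horizon to choose between a personalized model and a group model. The feature set, classifier choice, and names are assumptions, not Adobe's implementation.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    from sklearn.linear_model import LogisticRegression


    @dataclass
    class SelectionFeatures:
        group_size: int          # number of users in the determined group
        group_similarity: float  # similarity of the group's users
        time_horizon: int        # e.g., days of history available for the user


    def train_selector(examples: List[SelectionFeatures], labels: List[int]) -> LogisticRegression:
        """labels[i] = 1 when the personalized model outperformed the group model."""
        X = [[e.group_size, e.group_similarity, e.time_horizon] for e in examples]
        clf = LogisticRegression()
        clf.fit(X, labels)
        return clf


    def recommend(features: SelectionFeatures,
                  selector: LogisticRegression,
                  personalized_model: Callable[[], List[str]],
                  group_model: Callable[[], List[str]]) -> List[str]:
        x = [[features.group_size, features.group_similarity, features.time_horizon]]
        use_personalized = selector.predict(x)[0] == 1
        # Run whichever model the classifier selected to get the content to recommend.
        return personalized_model() if use_personalized else group_model()
    ```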
  • Publication number: 20230419666
    Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Applicant: Adobe Inc.
    Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
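    Illustrative sketch: a hedged Python outline of the pipeline this abstract describes, where per-video entity and action descriptions are composed into a single document under user-specified constraints. The helper names and the trivial captioner and constraint handling are placeholders, not the machine-learning models the patent covers.

    ```python
    from typing import List


    def describe_segment(segment_id: str, detections: List[str]) -> str:
        """Hypothetical stand-in for a learned entity/action captioner."""
        return f"In {segment_id}, " + ", ".join(detections) + "."


    def compose_document(descriptions: List[str], constraints: List[str]) -> str:
        """Order the per-video descriptions and apply user constraints (here, keyword exclusion)."""
        kept = [d for d in descriptions
                if not any(c.lower() in d.lower() for c in constraints)]
        return "\n".join(f"{i + 1}. {d}" for i, d in enumerate(kept))


    # A single document produced from several videos' detections.
    detections_per_video = [["a presenter opens the app", "the presenter selects a template"],
                            ["a user drags an image onto the canvas"]]
    descriptions = [describe_segment(f"video {i + 1}", d)
                    for i, d in enumerate(detections_per_video)]
    print(compose_document(descriptions, constraints=["template"]))
    ```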
  • Patent number: 11783584
    Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
  • Publication number: 20230290146
    Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Applicant: Adobe Inc.
    Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
  • Patent number: 11741190
    Abstract: In some embodiments, a style transfer computing system receives, from a computing device, an input text and a request to transfer the input text to a target style combination including a set of target styles. The system applies a style transfer language model associated with the target style combination to the input text to generate a transferred text in the target style combination. The style transfer language model comprises a cascaded language model configured to generate the transferred text. The cascaded language model is trained using a set of discriminator models corresponding to the set of target styles. The system provides, to the computing device, the transferred text.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: August 29, 2023
    Assignee: Adobe Inc.
    Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
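    Illustrative sketch: a toy Python view of the cascaded inference path this abstract describes, where the input text is rewritten by one model per target style in sequence; the discriminators mentioned in the abstract are used during training and are not shown here. The rewriter interface is an assumption.

    ```python
    from typing import Callable, Dict, List

    StyleRewriter = Callable[[str], str]  # stand-in for a learned per-style language model


    def transfer(text: str, target_styles: List[str], rewriters: Dict[str, StyleRewriter]) -> str:
        """Apply the rewriter for each requested style in sequence (a cascade)."""
        for style in target_styles:
            text = rewriters[style](text)
        return text


    # Toy rewriters standing in for trained models.
    rewriters = {
        "formal": lambda t: t.replace("can't", "cannot"),
        "positive": lambda t: t.replace("problem", "opportunity"),
    }
    print(transfer("We can't ignore this problem.", ["formal", "positive"], rewriters))
    ```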
  • Patent number: 11714972
    Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: August 1, 2023
    Assignee: Adobe Inc.
    Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
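    Illustrative sketch: one way the per-time-step objective named in this abstract (cross entropy plus content preservation plus style transfer) could be combined, written in PyTorch. The weights and the style and content scorers are assumptions, not the patent's formulation.

    ```python
    import torch
    import torch.nn.functional as F


    def step_loss(logits, target_ids, decoder_state, content_ref, style_prob,
                  w_ce=1.0, w_content=0.5, w_style=0.5):
        """logits: (batch, vocab); target_ids: (batch,);
        decoder_state / content_ref: (batch, hidden);
        style_prob: (batch,) probability of the target style from a style classifier."""
        ce = F.cross_entropy(logits, target_ids)
        # Content preservation: keep the decoder state close to an encoding of the source content.
        content = 1.0 - F.cosine_similarity(decoder_state, content_ref, dim=-1).mean()
        # Style transfer: push the classifier's probability of the target style toward 1.
        style = -torch.log(style_prob + 1e-8).mean()
        return w_ce * ce + w_content * content + w_style * style
    ```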
  • Publication number: 20230230358
    Abstract: Systems and methods for machine learning are described. The systems and methods include receiving target training data including a training image and ground truth label data for the training image, generating source network features for the training image using a source network trained on source training data, generating target network features for the training image using a target network, generating at least one attention map for training the target network based on the source network features and the target network features using a guided attention transfer network, and updating parameters of the target network based on the attention map and the ground truth label data.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 20, 2023
    Inventors: Divya Kothandaraman, Sumit Shekhar, Abhilasha Sancheti, Manoj Ghuhan Arivazhagan, Tripti Shukla
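    Illustrative sketch: a short PyTorch take on the guided-attention idea in this abstract, building spatial attention maps from the source-network and target-network features and penalizing their mismatch alongside the ground-truth supervision. Map construction and weighting are assumptions.

    ```python
    import torch
    import torch.nn.functional as F


    def attention_map(features: torch.Tensor) -> torch.Tensor:
        """Collapse (batch, channels, H, W) features into a normalized (batch, H*W) attention map."""
        energy = features.pow(2).mean(dim=1)           # channel-wise energy per spatial location
        return F.normalize(energy.flatten(1), dim=1)


    def training_loss(target_logits, labels, source_feats, target_feats, w_attn=0.1):
        task = F.cross_entropy(target_logits, labels)  # supervision from the ground truth label data
        attn = F.mse_loss(attention_map(target_feats), attention_map(source_feats))
        return task + w_attn * attn                    # used to update the target network's parameters
    ```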
  • Publication number: 20220414400
    Abstract: In some embodiments, a style transfer computing system receives, from a computing device, an input text and a request to transfer the input text to a target style combination including a set of target styles. The system applies a style transfer language model associated with the target style combination to the input text to generate a transferred text in the target style combination. The style transfer language model comprises a cascaded language model configured to generate the transferred text. The cascaded language model is trained using a set of discriminator models corresponding to the set of target styles. The system provides, to the computing device, the transferred text.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
  • Patent number: 11487971
    Abstract: In some embodiments, a style transfer computing system generates a set of discriminator models corresponding to a set of styles based on a set of training datasets labeled for respective styles. The style transfer computing system further generates a style transfer language model for a target style combination that includes multiple target styles from the set of styles. The style transfer language model includes a cascaded language model and multiple discriminator models selected from the set of discriminator models. The style transfer computing system trains the style transfer language model to minimize a loss function containing a loss term for the cascaded language model and multiple loss terms for the multiple discriminator models. For a source sentence and a given target style combination, the style transfer computing system applies the style transfer language model on the source sentence to generate a target sentence in the given target style combination.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: November 1, 2022
    Assignee: Adobe Inc.
    Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
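    Illustrative sketch: how the combined objective in this abstract, one loss term for the cascaded language model plus one per style discriminator, could be assembled in PyTorch. The weighting and the discriminator interface are assumptions.

    ```python
    from typing import Dict

    import torch


    def style_transfer_loss(lm_loss: torch.Tensor,
                            disc_probs: Dict[str, torch.Tensor],
                            weights: Dict[str, float]) -> torch.Tensor:
        """disc_probs[style]: (batch,) probability that the generated text carries that style."""
        loss = lm_loss
        for style, prob in disc_probs.items():
            loss = loss + weights.get(style, 1.0) * (-torch.log(prob + 1e-8).mean())
        return loss
    ```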
  • Patent number: 11322133
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate expressive audio for input texts based on a word-level analysis of the input text. For example, the disclosed systems can utilize a multi-channel neural network to generate a character-level feature vector and a word-level feature vector based on a plurality of characters of an input text and a plurality of words of the input text, respectively. In some embodiments, the disclosed systems utilize the neural network to generate the word-level feature vector based on contextual word-level style tokens that correspond to style features associated with the input text. Based on the character-level and word-level feature vectors, the disclosed systems can generate a context-based speech map. The disclosed systems can utilize the context-based speech map to generate expressive audio for the input text.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: May 3, 2022
    Assignee: Adobe Inc.
    Inventors: Sumit Shekhar, Gautam Choudhary, Abhilasha Sancheti, Shubhanshu Agarwal, E Santhosh Kumar, Rahul Saxena
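    Illustrative sketch: a compact PyTorch module in the spirit of the multi-channel encoder this abstract describes, with one channel over characters and one over words that attends to learned style tokens; the speech-map decoder that would consume these features is omitted. Module sizes and layer choices are assumptions.

    ```python
    import torch
    import torch.nn as nn


    class TwoChannelTextEncoder(nn.Module):
        def __init__(self, char_vocab: int, word_vocab: int, n_style_tokens: int = 8, dim: int = 128):
            super().__init__()
            self.char_emb = nn.Embedding(char_vocab, dim)
            self.word_emb = nn.Embedding(word_vocab, dim)
            self.style_tokens = nn.Parameter(torch.randn(n_style_tokens, dim))
            self.char_rnn = nn.GRU(dim, dim, batch_first=True)
            self.word_rnn = nn.GRU(dim, dim, batch_first=True)

        def forward(self, char_ids: torch.Tensor, word_ids: torch.Tensor):
            char_feats, _ = self.char_rnn(self.char_emb(char_ids))      # character-level feature vectors
            word_hidden, _ = self.word_rnn(self.word_emb(word_ids))
            # Attend over learned style tokens to get contextual word-level style features.
            attn = torch.softmax(word_hidden @ self.style_tokens.T, dim=-1)
            word_feats = word_hidden + attn @ self.style_tokens          # word-level feature vectors
            return char_feats, word_feats
    ```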
  • Publication number: 20220121879
    Abstract: In some embodiments, a style transfer computing system generates a set of discriminator models corresponding to a set of styles based on a set of training datasets labeled for respective styles. The style transfer computing system further generates a style transfer language model for a target style combination that includes multiple target styles from the set of styles. The style transfer language model includes a cascaded language model and multiple discriminator models selected from the set of discriminator models. The style transfer computing system trains the style transfer language model to minimize a loss function containing a loss term for the cascaded language model and multiple loss terms for the multiple discriminator models. For a source sentence and a given target style combination, the style transfer computing system applies the style transfer language model on the source sentence to generate a target sentence in the given target style combination.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
  • Publication number: 20220075965
    Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 10, 2022
    Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
  • Publication number: 20220028367
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate expressive audio for input texts based on a word-level analysis of the input text. For example, the disclosed systems can utilize a multi-channel neural network to generate a character-level feature vector and a word-level feature vector based on a plurality of characters of an input text and a plurality of words of the input text, respectively. In some embodiments, the disclosed systems utilize the neural network to generate the word-level feature vector based on contextual word-level style tokens that correspond to style features associated with the input text. Based on the character-level and word-level feature vectors, the disclosed systems can generate a context-based speech map. The disclosed systems can utilize the context-based speech map to generate expressive audio for the input text.
    Type: Application
    Filed: July 21, 2020
    Publication date: January 27, 2022
    Inventors: Sumit Shekhar, Gautam Choudhary, Abhilasha Sancheti, Shubhanshu Agarwal, E Santhosh Kumar, Rahul Saxena
  • Publication number: 20220019909
    Abstract: Methods, systems, and computer storage media for providing command recommendations for an analysis-goal, using analytics system operations in an analytics system. In operation, an analytics client is configured to provide an analytics interface for receiving a selection of analysis-goal information that corresponds to an analysis-goal model. A goal engine selects an analysis-goal based on the analysis-goal information. A command engine is configured to use the analysis-goal and goal-driven models to predict probable commands for the analysis-goal. The command engine also selects a next command recommendation from the probable commands. The command engine generates additional command recommendation data based on a loss function fine tuner. The additional command recommendation data can include a goal orientation score that quantifies a degree to which a command aligns with the analysis-goal.
    Type: Application
    Filed: July 14, 2020
    Publication date: January 20, 2022
    Inventors: Samarth Aggarwal, Rohin Garg, Bhanu Prakash Reddy Guda, Abhilasha Sancheti, Iftikhar Ahamath Burhanuddin
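    Illustrative sketch: a toy Python version of the recommendation step this abstract describes, scoring candidate commands against an analysis-goal and blending in a goal orientation score. The scoring model and the orientation measure are deliberately simple placeholders.

    ```python
    from typing import Dict, List


    def goal_orientation(command: str, goal_keywords: List[str]) -> float:
        """Toy stand-in: fraction of goal keywords that appear in the command's name."""
        hits = sum(1 for k in goal_keywords if k in command)
        return hits / max(len(goal_keywords), 1)


    def next_command(candidates: Dict[str, float], goal_keywords: List[str], w_goal: float = 0.3) -> str:
        """candidates: command -> probability from a goal-driven model."""
        scored = {c: p + w_goal * goal_orientation(c, goal_keywords)
                  for c, p in candidates.items()}
        return max(scored, key=scored.get)


    print(next_command({"segment_by_region": 0.40, "plot_conversion_funnel": 0.35},
                       goal_keywords=["conversion", "funnel"]))
    ```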
  • Patent number: 11210477
    Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: December 28, 2021
    Assignee: Adobe Inc.
    Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
  • Publication number: 20210158177
    Abstract: A method for recommending digital content includes: determining user preferences and a time horizon of a given user; determining a group for the given user based on the determined user preferences; determining a number of users of the determined group and a similarity of the users; applying information including the number of users, the similarity, and the time horizon to a model selection classifier to select one of a personalized model of the user and a group model of the determined group; and running the selected model to determine digital content to recommend.
    Type: Application
    Filed: November 21, 2019
    Publication date: May 27, 2021
    Inventors: Abhilasha Sancheti, Zheng Wen, Iftikhar Ahamath Burhanuddin
  • Patent number: 10915577
    Abstract: A framework is provided for constructing enterprise-specific knowledge bases from enterprise-specific data that includes structured and unstructured data. Relationships between entities that match known relationships are identified for each of a plurality of tuples included in the structured data. Where possible, relationships between entities that match known relationships also are identified for tuples included in the unstructured data. If matching relationships between entities cannot be identified for tuples in the unstructured data, the extracted relationships are sequentially clustered with similar relationships and a relationship is assigned to the clustered tuples.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: February 9, 2021
    Assignee: Adobe Inc.
    Inventors: Balaji Vasan Srinivasan, Rajat Chaturvedi, Tanya Goyal, Paridhi Maheshwari, Anish Valliyath Monsy, Abhilasha Sancheti
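    Illustrative sketch: a simple Python rendering of the fallback this abstract describes, mapping an extracted relation phrase to a known relationship when one matches and otherwise clustering it with similar unmatched phrases. The string-similarity measure is a placeholder, not the matching the patent actually uses.

    ```python
    from difflib import SequenceMatcher
    from typing import Dict, List


    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a, b).ratio()


    def assign_relation(phrase: str, known: List[str], clusters: Dict[str, List[str]],
                        threshold: float = 0.8) -> str:
        best = max(known, key=lambda k: similarity(phrase, k))
        if similarity(phrase, best) >= threshold:
            return best                                # matches a known relationship
        # No match: attach to the most similar existing cluster, or start a new one.
        for label, members in clusters.items():
            if any(similarity(phrase, m) >= threshold for m in members):
                members.append(phrase)
                return label
        clusters[phrase] = [phrase]
        return phrase


    clusters: Dict[str, List[str]] = {}
    print(assign_relation("is employed by", known=["works for", "located in"], clusters=clusters))
    ```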
  • Patent number: 10846617
    Abstract: Methods and systems are provided for providing recommendations from a recommendation system for an analytics system. A recommendation system can be trained using user intent and context. Such user intent can be determined using a user history of interaction with an analytics system. The user history can either be that of the user accessing the recommendation system or an exemplary user history to broaden the recommendations made by the recommendation system. Such context can be determined using context features within the analytics system. The trained recommendation system generated using user intent and context can provide analytics recommendations based on a current context of a user that predict the intent of the user.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: November 24, 2020
    Assignee: Adobe Inc.
    Inventors: Iftikhar Ahamath Burhanuddin, Shriram Venkatesh Shet Revankar, Kushal Satya, Biswarup Bhattacharya, Abhilasha Sancheti
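    Illustrative sketch: one plausible way to assemble training examples from a user's interaction history (intent) and context features and fit a next-action model, as this abstract outlines. The feature names, vectorizer, and classifier are assumptions for illustration only.

    ```python
    from typing import Dict, List, Tuple

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.naive_bayes import MultinomialNB


    def build_examples(history: List[str], context: Dict[str, str]) -> List[Tuple[Dict[str, str], str]]:
        """Each example pairs (previous action + current context) with the next action taken."""
        return [({"prev_action": history[i], **context}, history[i + 1])
                for i in range(len(history) - 1)]


    def train(examples: List[Tuple[Dict[str, str], str]]):
        vec = DictVectorizer()
        X = vec.fit_transform([feats for feats, _ in examples])
        y = [label for _, label in examples]
        model = MultinomialNB()
        model.fit(X, y)
        return vec, model


    history = ["open_dashboard", "filter_by_country", "plot_trend", "filter_by_country", "export_report"]
    vec, model = train(build_examples(history, context={"report_type": "weekly"}))
    print(model.predict(vec.transform([{"prev_action": "filter_by_country", "report_type": "weekly"}])))
    ```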
  • Publication number: 20200356634
    Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
  • Publication number: 20190294732
    Abstract: A framework is provided for constructing enterprise-specific knowledge bases from enterprise-specific data that includes structured and unstructured data. Relationships between entities that match known relationships are identified for each of a plurality of tuples included in the structured data. Where possible, relationships between entities that match known relationships also are identified for tuples included in the unstructured data. If matching relationships between entities cannot be identified for tuples in the unstructured data, the extracted relationships are sequentially clustered with similar relationships and a relationship is assigned to the clustered tuples.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Balaji Vasan Srinivasan, Rajat Chaturvedi, Tanya Goyal, Paridhi Maheshwari, Anish Valliyath Monsy, Abhilasha Sancheti