Patents by Inventor Abhilasha Sancheti
Abhilasha Sancheti has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11948095
Abstract: A method for recommending digital content includes: determining user preferences and a time horizon of a given user; determining a group for the given user based on the determined user preferences; determining a number of users of the determined group and a similarity of the users; applying information including the number of users, the similarity, and the time horizon to a model selection classifier to select one of a personalized model of the user and a group model of the determined group; and running the selected model to determine digital content to recommend.
Type: Grant
Filed: November 21, 2019
Date of Patent: April 2, 2024
Assignee: Adobe Inc.
Inventors: Abhilasha Sancheti, Zheng Wen, Iftikhar Ahamath Burhanuddin
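The model-selection step this abstract describes can be sketched as follows. Everything here (feature names, thresholds, weights) is an illustrative assumption, not code from the patent:

```python
# Toy stand-in for the model-selection classifier: features about the user's
# matched group (size, similarity) and the time horizon decide whether to run
# the group model or the personalized model.
from dataclasses import dataclass

@dataclass
class SelectionFeatures:
    group_size: int        # number of users in the matched group
    similarity: float      # mean pairwise similarity within the group, 0..1
    time_horizon: int      # how many future interactions we optimise for

def select_model(f: SelectionFeatures) -> str:
    """Pick 'group' or 'personalized'.

    Intuition: a large, highly similar group over a long horizon favours the
    group model (more data, stable preferences); otherwise fall back to the
    per-user personalized model.
    """
    score = (0.4 * min(f.group_size / 50.0, 1.0)
             + 0.4 * f.similarity
             + 0.2 * min(f.time_horizon / 30.0, 1.0))
    return "group" if score >= 0.5 else "personalized"

def recommend(f: SelectionFeatures, personalized_items, group_items):
    """Run whichever model the classifier selected."""
    return group_items if select_model(f) == "group" else personalized_items
```

A real system would replace the hand-set weights with a trained classifier; the point is only the routing decision between the two models.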
-
Publication number: 20230419666
Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
Type: Application
Filed: September 11, 2023
Publication date: December 28, 2023
Applicant: Adobe Inc.
Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
-
Patent number: 11783584
Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
Type: Grant
Filed: March 10, 2022
Date of Patent: October 10, 2023
Assignee: Adobe Inc.
Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
-
Publication number: 20230290146
Abstract: Techniques are described that support automated generation of a digital document from digital videos using machine learning. The digital document includes textual components that describe a sequence of entity and action descriptions from the digital video. These techniques are usable to generate a single digital document based on a plurality of digital videos as well as incorporate user-specified constraints in the generation of the digital document.
Type: Application
Filed: March 10, 2022
Publication date: September 14, 2023
Applicant: Adobe Inc.
Inventors: Niyati Himanshu Chhaya, Tripti Shukla, Jeevana Kruthi Karnuthala, Bhanu Prakash Reddy Guda, Ayudh Saxena, Abhinav Bohra, Abhilasha Sancheti, Aanisha Bhattacharyya
-
Patent number: 11741190
Abstract: In some embodiments, a style transfer computing system receives, from a computing device, an input text and a request to transfer the input text to a target style combination including a set of target styles. The system applies a style transfer language model associated with the target style combination to the input text to generate a transferred text in the target style combination. The style transfer language model comprises a cascaded language model configured to generate the transferred text. The cascaded language model is trained using a set of discriminator models corresponding to the set of target styles. The system provides, to the computing device, the transferred text.
Type: Grant
Filed: September 2, 2022
Date of Patent: August 29, 2023
Assignee: Adobe Inc.
Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandha velu Natarajan, Abhilasha Sancheti
-
Patent number: 11714972
Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
Type: Grant
Filed: November 18, 2021
Date of Patent: August 1, 2023
Assignee: Adobe Inc.
Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
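The integrated per-time-step objective this abstract describes (cross entropy plus style-transfer and content-preservation terms) can be illustrated with a minimal sketch. The cosine-based content term, the style classifier probability, and the weights are assumptions made for this example, not the patented implementation:

```python
# Combine cross entropy, a content-preservation term, and a style term at
# every decoded time step, then sum over the sequence.
import numpy as np

def cross_entropy(probs: np.ndarray, target: int) -> float:
    # negative log-likelihood of the reference token
    return float(-np.log(probs[target] + 1e-12))

def content_loss(src_state: np.ndarray, tgt_state: np.ndarray) -> float:
    # 1 - cosine similarity between source and target hidden states
    num = float(src_state @ tgt_state)
    den = float(np.linalg.norm(src_state) * np.linalg.norm(tgt_state) + 1e-12)
    return 1.0 - num / den

def style_loss(style_prob_target: float) -> float:
    # negative log-likelihood of the target style under a style classifier
    return float(-np.log(style_prob_target + 1e-12))

def integrated_loss(steps, w_ce=1.0, w_content=0.5, w_style=0.5) -> float:
    """Sum the three weighted loss terms over every decoded time step.

    Each step is (token_probs, target_index, src_hidden, tgt_hidden, style_p).
    """
    total = 0.0
    for probs, target, src_h, tgt_h, style_p in steps:
        total += (w_ce * cross_entropy(probs, target)
                  + w_content * content_loss(src_h, tgt_h)
                  + w_style * style_loss(style_p))
    return total
```

An autoencoder trained against `integrated_loss` would be pushed, at each step, toward the reference token, toward hidden states that preserve the source content, and toward text the style classifier accepts.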
-
Publication number: 20230230358
Abstract: Systems and methods for machine learning are described. The systems and methods include receiving target training data including a training image and ground truth label data for the training image, generating source network features for the training image using a source network trained on source training data, generating target network features for the training image using a target network, generating at least one attention map for training the target network based on the source network features and the target network features using a guided attention transfer network, and updating parameters of the target network based on the attention map and the ground truth label data.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Inventors: Divya Kothandaraman, Sumit Shekhar, Abhilasha Sancheti, Manoj Ghuhan Arivazhagan, Tripti Shukla
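A minimal sketch of the guided-attention idea, under stated assumptions: spatial attention maps are derived from source- and target-network feature tensors, and an attention-alignment penalty is added to the ground-truth (task) loss when updating the target network. The channel-energy definition of attention and the weighting are illustrative, not taken from the filing:

```python
# Derive spatial attention maps from (C, H, W) feature tensors and penalise
# disagreement between the source and target networks' maps.
import numpy as np

def attention_map(features: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) feature tensor to a normalised (H, W) map."""
    energy = np.sum(features ** 2, axis=0)      # channel-wise energy
    return energy / (energy.sum() + 1e-12)

def attention_transfer_loss(src_feats: np.ndarray, tgt_feats: np.ndarray) -> float:
    """Squared L2 distance between source and target attention maps."""
    return float(np.sum((attention_map(src_feats) - attention_map(tgt_feats)) ** 2))

def total_loss(task_loss: float, src_feats, tgt_feats, beta: float = 0.1) -> float:
    # ground-truth task loss plus the guided-attention alignment term
    return task_loss + beta * attention_transfer_loss(src_feats, tgt_feats)
```

When the target network attends to the same regions as the source network, the alignment term vanishes and only the ground-truth loss drives the update.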
-
Publication number: 20220414400
Abstract: In some embodiments, a style transfer computing system receives, from a computing device, an input text and a request to transfer the input text to a target style combination including a set of target styles. The system applies a style transfer language model associated with the target style combination to the input text to generate a transferred text in the target style combination. The style transfer language model comprises a cascaded language model configured to generate the transferred text. The cascaded language model is trained using a set of discriminator models corresponding to the set of target styles. The system provides, to the computing device, the transferred text.
Type: Application
Filed: September 2, 2022
Publication date: December 29, 2022
Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandha velu Natarajan, Abhilasha Sancheti
-
Patent number: 11487971
Abstract: In some embodiments, a style transfer computing system generates a set of discriminator models corresponding to a set of styles based on a set of training datasets labeled for respective styles. The style transfer computing system further generates a style transfer language model for a target style combination that includes multiple target styles from the set of styles. The style transfer language model includes a cascaded language model and multiple discriminator models selected from the set of discriminator models. The style transfer computing system trains the style transfer language model to minimize a loss function containing a loss term for the cascaded language model and multiple loss terms for the multiple discriminator models. For a source sentence and a given target style combination, the style transfer computing system applies the style transfer language model on the source sentence to generate a target sentence in the given target style combination.
Type: Grant
Filed: October 16, 2020
Date of Patent: November 1, 2022
Assignee: Adobe Inc.
Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandha velu Natarajan, Abhilasha Sancheti
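The loss function this abstract outlines (one term for the cascaded language model plus one term per style discriminator) can be sketched as follows. The interfaces and weights below are assumptions for illustration only:

```python
# Training objective: language-model NLL plus a weighted NLL term from each
# style discriminator judging whether the output carries its target style.
import numpy as np

def lm_loss(token_probs) -> float:
    """Negative log-likelihood of the generated tokens under the cascaded LM."""
    return float(-np.sum(np.log(np.asarray(token_probs) + 1e-12)))

def discriminator_loss(p_target_style: float) -> float:
    """NLL of the target style label under one discriminator."""
    return float(-np.log(p_target_style + 1e-12))

def style_transfer_loss(token_probs, disc_probs, lambdas) -> float:
    """Loss = LM term + one weighted discriminator term per target style."""
    loss = lm_loss(token_probs)
    for lam, p in zip(lambdas, disc_probs):
        loss += lam * discriminator_loss(p)
    return loss
```

Minimising this jointly pushes the model toward fluent text (LM term) that each discriminator classifies as its target style, which is how a single objective can enforce a *combination* of styles.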
-
Patent number: 11322133
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate expressive audio for input texts based on a word-level analysis of the input text. For example, the disclosed systems can utilize a multi-channel neural network to generate a character-level feature vector and a word-level feature vector based on a plurality of characters of an input text and a plurality of words of the input text, respectively. In some embodiments, the disclosed systems utilize the neural network to generate the word-level feature vector based on contextual word-level style tokens that correspond to style features associated with the input text. Based on the character-level and word-level feature vectors, the disclosed systems can generate a context-based speech map. The disclosed systems can utilize the context-based speech map to generate expressive audio for the input text.
Type: Grant
Filed: July 21, 2020
Date of Patent: May 3, 2022
Assignee: Adobe Inc.
Inventors: Sumit Shekhar, Gautam Choudhary, Abhilasha Sancheti, Shubhanshu Agarwal, E Santhosh Kumar, Rahul Saxena
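The two-channel structure described here can be illustrated schematically: one channel embeds characters, the other embeds words conditioned on style tokens, and the two feature vectors are fused into a "context-based speech map". The hash-seeded embeddings, dimensions, and fusion step are all assumptions invented for this sketch, not the patented network:

```python
# Two-channel sketch: character-level and word-level (style-conditioned)
# feature vectors are computed separately, then fused.
import numpy as np

DIM = 8

def embed(token: str, channel: int) -> np.ndarray:
    # Deterministic toy embedding: seed a generator from the token's bytes.
    seed = sum(ord(c) for c in token) * 7919 + channel
    return np.random.default_rng(seed).standard_normal(DIM)

def char_level_vector(text: str) -> np.ndarray:
    return np.mean([embed(c, 0) for c in text], axis=0)

def word_level_vector(text: str, style_tokens) -> np.ndarray:
    words = np.mean([embed(w, 1) for w in text.split()], axis=0)
    style = np.mean([embed(s, 2) for s in style_tokens], axis=0)
    return words + 0.5 * style          # condition word channel on style tokens

def speech_map(text: str, style_tokens) -> np.ndarray:
    # Fuse both channels; a downstream TTS decoder would consume this map.
    return np.concatenate([char_level_vector(text),
                           word_level_vector(text, style_tokens)])
```

In the actual system the channels are learned networks and the map drives audio synthesis; the sketch only shows how character-, word-, and style-level signals can meet in one representation.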
-
Publication number: 20220121879
Abstract: In some embodiments, a style transfer computing system generates a set of discriminator models corresponding to a set of styles based on a set of training datasets labeled for respective styles. The style transfer computing system further generates a style transfer language model for a target style combination that includes multiple target styles from the set of styles. The style transfer language model includes a cascaded language model and multiple discriminator models selected from the set of discriminator models. The style transfer computing system trains the style transfer language model to minimize a loss function containing a loss term for the cascaded language model and multiple loss terms for the multiple discriminator models. For a source sentence and a given target style combination, the style transfer computing system applies the style transfer language model on the source sentence to generate a target sentence in the given target style combination.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Navita Goyal, Balaji Vasan Srinivasan, Anandha velu Natarajan, Abhilasha Sancheti
-
Publication number: 20220075965
Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
Type: Application
Filed: November 18, 2021
Publication date: March 10, 2022
Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
-
Publication number: 20220028367
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate expressive audio for input texts based on a word-level analysis of the input text. For example, the disclosed systems can utilize a multi-channel neural network to generate a character-level feature vector and a word-level feature vector based on a plurality of characters of an input text and a plurality of words of the input text, respectively. In some embodiments, the disclosed systems utilize the neural network to generate the word-level feature vector based on contextual word-level style tokens that correspond to style features associated with the input text. Based on the character-level and word-level feature vectors, the disclosed systems can generate a context-based speech map. The disclosed systems can utilize the context-based speech map to generate expressive audio for the input text.
Type: Application
Filed: July 21, 2020
Publication date: January 27, 2022
Inventors: Sumit Shekhar, Gautam Choudhary, Abhilasha Sancheti, Shubhanshu Agarwal, E Santhosh Kumar, Rahul Saxena
-
Publication number: 20220019909
Abstract: Methods, systems, and computer storage media for providing command recommendations for an analysis-goal, using analytics system operations in an analytics system. In operation, an analytics client is configured to provide an analytics interface for receiving a selection of analysis-goal information that corresponds to an analysis-goal model. A goal engine selects an analysis-goal based on the analysis-goal information. A command engine is configured to use the analysis-goal and goal-driven models to predict probable commands for the analysis-goal. The command engine also selects a next command recommendation from the probable commands. The command engine generates additional command recommendation data based on a loss function fine tuner. The additional command recommendation data can include a goal orientation score that quantifies a degree to which a command aligns with the analysis-goal.
Type: Application
Filed: July 14, 2020
Publication date: January 20, 2022
Inventors: Samarth Aggarwal, Rohin Garg, Bhanu Prakash Reddy Guda, Abhilasha Sancheti, Iftikhar Ahamath Burhanuddin
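The recommendation flow in this abstract (goal-driven model predicts probable commands, the top command becomes the next recommendation, and a goal orientation score grades alignment) can be sketched with toy data. The goal names, command tables, and the probability-as-score shortcut are invented for illustration:

```python
# Hypothetical goal-driven command tables: P(command | analysis-goal).
GOAL_COMMANDS = {
    "trend-analysis": {"plot_timeseries": 0.6, "aggregate_daily": 0.3, "export": 0.1},
    "segmentation":   {"cluster_users": 0.7, "plot_timeseries": 0.1, "export": 0.2},
}

def probable_commands(goal: str) -> dict:
    """Predict probable commands for the analysis-goal."""
    return GOAL_COMMANDS[goal]

def next_command(goal: str) -> str:
    """Select the next command recommendation: the most probable command."""
    probs = probable_commands(goal)
    return max(probs, key=probs.get)

def goal_orientation_score(goal: str, command: str) -> float:
    """Degree to which a command aligns with the analysis-goal (0..1 here)."""
    return probable_commands(goal).get(command, 0.0)
```

In the described system these probabilities come from trained goal-driven models refined by a loss-function fine tuner; the sketch only shows the predict / select / score pipeline.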
-
Patent number: 11210477
Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
Type: Grant
Filed: May 9, 2019
Date of Patent: December 28, 2021
Assignee: Adobe Inc.
Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
-
Publication number: 20210158177
Abstract: A method for recommending digital content includes: determining user preferences and a time horizon of a given user; determining a group for the given user based on the determined user preferences; determining a number of users of the determined group and a similarity of the users; applying information including the number of users, the similarity, and the time horizon to a model selection classifier to select one of a personalized model of the user and a group model of the determined group; and running the selected model to determine digital content to recommend.
Type: Application
Filed: November 21, 2019
Publication date: May 27, 2021
Inventors: Abhilasha Sancheti, Zheng Wen, Iftikhar Ahamath Burhanuddin
-
Patent number: 10915577
Abstract: A framework is provided for constructing enterprise-specific knowledge bases from enterprise-specific data that includes structured and unstructured data. Relationships between entities that match known relationships are identified for each of a plurality of tuples included in the structured data. Where possible, relationships between entities that match known relationships also are identified for tuples included in the unstructured data. If matching relationships between entities cannot be identified for tuples in the unstructured data, extracted relationships are sequentially clustered with similar relationships and a relationship is assigned to the clustered tuples.
Type: Grant
Filed: March 22, 2018
Date of Patent: February 9, 2021
Assignee: Adobe Inc.
Inventors: Balaji Vasan Srinivasan, Rajat Chaturvedi, Tanya Goyal, Paridhi Maheshwari, Anish Valliyath Monsy, Abhilasha Sancheti
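The fallback step this abstract describes, sketched under assumptions: relation phrases that exactly match a known relationship keep that label, while the rest are clustered with the most similar known relation by token overlap and inherit its label. The relation set, similarity measure, and threshold are invented for illustration:

```python
# Assign a relationship label to an extracted relation phrase: exact match
# against known relations first, then cluster-by-similarity as a fallback.
KNOWN_RELATIONS = {"acquired", "founded", "headquartered in"}

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two phrases."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def assign_relation(phrase: str, threshold: float = 0.3) -> str:
    if phrase in KNOWN_RELATIONS:            # exact match to a known relation
        return phrase
    best = max(KNOWN_RELATIONS, key=lambda r: token_overlap(phrase, r))
    if token_overlap(phrase, best) >= threshold:
        return best                           # cluster with the closest relation
    return "unknown"                          # no sufficiently similar cluster
```

The framework would run this over tuples extracted from unstructured text, so that novel phrasings ("was headquartered in") still land in an existing relationship cluster.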
-
Patent number: 10846617
Abstract: Methods and systems are provided for providing recommendations from a recommendation system for an analytics system. A recommendation system can be trained using user intent and context. Such user intent can be determined using a user history of interaction with an analytics system. The user history can either be that of the user accessing the recommendation system or an exemplary user history to broaden the recommendations made by the recommendation system. Such context can be determined using context features within the analytics system. The trained recommendation system generated using user intent and context can provide analytics recommendations based on a current context of a user that predict the intent of the user.
Type: Grant
Filed: May 12, 2017
Date of Patent: November 24, 2020
Assignee: Adobe Inc.
Inventors: Iftikhar Ahamath Burhanuddin, Shriram Venkatesh Shet Revankar, Kushal Satya, Biswarup Bhattacharya, Abhilasha Sancheti
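A minimal sketch of combining intent (from interaction history) with context, under assumptions: intent is summarised as action frequencies, context contributes an additive boost per action, and the recommendation is the highest-scoring action. The scoring scheme is invented for this example:

```python
# Combine history-derived intent with current-context boosts to rank actions.
from collections import Counter

def intent_from_history(history) -> Counter:
    """Summarise user intent as how often each action was taken."""
    return Counter(history)

def recommend_action(history, context_boost) -> str:
    """Score = historical frequency + context feature boost; pick the best."""
    intent = intent_from_history(history)
    actions = set(intent) | set(context_boost)
    return max(actions, key=lambda a: intent.get(a, 0) + context_boost.get(a, 0.0))
```

With an empty or exemplary history, the context term dominates, which mirrors the abstract's point that an exemplary user history can broaden recommendations beyond the user's own past actions.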
-
Publication number: 20200356634
Abstract: Embodiments of the present disclosure are directed to a system, methods, and computer-readable media for facilitating stylistic expression transfers in machine translation of source sequence data. Using integrated loss functions for style transfer along with content preservation and/or cross entropy, source sequence data is processed by an autoencoder trained to reduce loss values across the loss functions at each time step encoded for the source sequence data. The target sequence data generated by the autoencoder therefore exhibits reduced loss values for the integrated loss functions at each time step, thereby improving content preservation and providing for stylistic expression transfer.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti
-
Publication number: 20190294732
Abstract: A framework is provided for constructing enterprise-specific knowledge bases from enterprise-specific data that includes structured and unstructured data. Relationships between entities that match known relationships are identified for each of a plurality of tuples included in the structured data. Where possible, relationships between entities that match known relationships also are identified for tuples included in the unstructured data. If matching relationships between entities cannot be identified for tuples in the unstructured data, extracted relationships are sequentially clustered with similar relationships and a relationship is assigned to the clustered tuples.
Type: Application
Filed: March 22, 2018
Publication date: September 26, 2019
Inventors: Balaji Vasan Srinivasan, Rajat Chaturvedi, Tanya Goyal, Paridhi Maheshwari, Anish Valliyath Monsy, Abhilasha Sancheti