Patents by Inventor Amit Srivastava
Amit Srivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12242491
Abstract: A system and method for retrieving assets from a personalized asset library includes receiving a search query for searching for assets in one or more asset libraries, the one or more asset libraries including a personalized asset library; encoding the search query into embedding representations via a trained query representation machine-learning (ML) model; comparing, via a matching unit, the query embedding representations to a plurality of asset representations, each of the plurality of asset representations being a representation of one of the plurality of candidate assets; identifying, based on the comparison, at least one of the plurality of candidate assets as a search result for the search query; and providing the identified candidate assets for display as the search result. The plurality of asset representations for the one or more assets in the personalized content library are generated automatically without human labeling.
Type: Grant
Filed: April 8, 2022
Date of Patent: March 4, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ji Li, Dachuan Zhang, Amit Srivastava, Adit Krishnan
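The flow the abstract describes is a standard embedding-similarity search: encode the query, compare it against precomputed asset embeddings, and return the closest candidates. The following minimal sketch illustrates that pattern only; the function names, vector sizes, and random embeddings are assumptions standing in for the trained query-representation model and asset library, not the patented implementation.

```python
# Minimal sketch of embedding-based asset retrieval; all names and shapes are
# illustrative stand-ins for the trained models described in the abstract.
import numpy as np

def cosine_scores(query_vec: np.ndarray, asset_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of asset vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    a = asset_vecs / np.linalg.norm(asset_vecs, axis=1, keepdims=True)
    return a @ q

def search_assets(query_vec: np.ndarray, asset_vecs: np.ndarray,
                  asset_ids: list[str], top_k: int = 5) -> list[tuple[str, float]]:
    """Rank candidate assets by similarity to the encoded query."""
    scores = cosine_scores(query_vec, asset_vecs)
    best = np.argsort(scores)[::-1][:top_k]
    return [(asset_ids[i], float(scores[i])) for i in best]

# Random vectors stand in for outputs of the query and asset encoders.
rng = np.random.default_rng(0)
query = rng.normal(size=128)
library = rng.normal(size=(1000, 128))
ids = [f"asset-{i}" for i in range(1000)]
print(search_assets(query, library, ids, top_k=3))
```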
-
Patent number: 12236205
Abstract: A data processing system for generating training data for a multilingual NLP model implements obtaining a corpus including first and second content items. The first content items are English-language textual content, and the second content items are translations of the first content items in one or more non-English target languages. The system further implements selecting a first content item from the first content items, generating a plurality of candidate labels for the first content item by analyzing the first content item with a plurality of first English-language NLP models, selecting a first label from the plurality of candidate labels, generating first training data by associating the first label with the first content item, generating second training data by associating the first label with a second content item of the second content items, and training a pretrained multilingual NLP model with the first training data and the second training data.
Type: Grant
Filed: December 22, 2020
Date of Patent: February 25, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ji Li, Amit Srivastava
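The core idea is cross-lingual label projection: a label selected for an English item is reused for that item's translations so both become training pairs. Below is a hedged sketch of that data-building step; the label-selection heuristic and model calls are placeholders, not the system's actual components.

```python
# Illustrative sketch of cross-lingual label projection: a label chosen for an
# English content item is attached to its translations as well. The selection
# heuristic below is a stand-in for the English-language NLP models.

def select_label(candidate_labels: list[str]) -> str:
    """Placeholder for choosing one label from several model-generated candidates."""
    return max(candidate_labels, key=len)  # stand-in heuristic only

def build_training_pairs(english_text: str,
                         translations: dict[str, str],
                         candidate_labels: list[str]) -> list[dict]:
    label = select_label(candidate_labels)
    pairs = [{"text": english_text, "lang": "en", "label": label}]
    for lang, text in translations.items():
        pairs.append({"text": text, "lang": lang, "label": label})
    return pairs

pairs = build_training_pairs(
    "The quarterly report is attached.",
    {"es": "El informe trimestral está adjunto.",
     "de": "Der Quartalsbericht ist beigefügt."},
    candidate_labels=["business", "report"],
)
print(pairs)  # same label paired with the English text and each translation
```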
-
Publication number: 20250053797
Abstract: An apparatus to facilitate compute optimization is disclosed. The apparatus includes at least one processor to perform operations to implement a neural network and compute logic to accelerate neural network computations.
Type: Application
Filed: August 22, 2024
Publication date: February 13, 2025
Applicant: Intel Corporation
Inventors: Amit Bleiweiss, Abhishek Venkatesh, Gokce Keskin, John Gierach, Oguz Elibol, Tomer Bar-On, Huma Abidi, Devan Burke, Jaikrishnan Menon, Eriko Nurvitadhi, Pruthvi Gowda Thorehosur Appajigowda, Travis T. Schluessler, Dhawal Srivastava, Nishant Patel, Anil Thomas
-
Patent number: 12206647
Abstract: Disclosed are various examples for securing enterprise resources using a virtual private network. At least one computing device can authenticate a client device for a virtual private network (VPN) connection based on a first device identifier received from the client device and a second device identifier received from a remote management service. The at least one computing device can determine that a network event associated with the client device has been observed and execute a machine learning routine to identify a pattern of access for the client device. A network access anomaly is determined in response to a network interaction of the client device deviating from the pattern of access for the client device. A remedial action is performed based on an anomaly type associated with the network access anomaly.
Type: Grant
Filed: May 25, 2022
Date of Patent: January 21, 2025
Assignee: Omnissa, LLC
Inventors: Arjun Kochhar, Suman Aluvala, Amit Yadav, Aman Srivastava
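To make the anomaly step concrete, here is a toy deviation check: a new network interaction is compared against a learned per-device baseline and flagged if it falls outside the expected range. The feature (bytes transferred) and the z-score threshold are illustrative assumptions, not the patented machine-learning routine.

```python
# Hedged sketch of flagging a network interaction that deviates from a
# device's learned pattern of access; values and threshold are illustrative.
import statistics

def is_anomalous(history_bytes: list[float], new_bytes: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transfer that deviates strongly from the device's typical sizes."""
    mean = statistics.fmean(history_bytes)
    stdev = statistics.pstdev(history_bytes) or 1.0
    return abs(new_bytes - mean) / stdev > z_threshold

history = [120e3, 95e3, 110e3, 130e3, 105e3]  # typical transfer sizes for a device
print(is_anomalous(history, 118e3))  # False: within the learned pattern
print(is_anomalous(history, 5e6))    # True: would trigger a remedial action
```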
-
Patent number: 12159621
Abstract: A computing apparatus comprises one or more computer readable storage media, one or more processors operatively coupled with the one or more computer readable storage media, and program instructions stored on the one or more computer readable storage media. The program instructions, when executed by the one or more processors, direct the computing apparatus to at least generate an audio recording of speech, extract features from the audio recording indicative of vocal patterns in the speech, determine a register classification of the speech based at least on the features, and display an indication of the register classification in a user interface.
Type: Grant
Filed: June 7, 2022
Date of Patent: December 3, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Huakai Liao, Ana Parra, Gaurav Vinayak Tendolkar, Amit Srivastava, Siliang Kang
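The pipeline is: extract vocal features, classify a register, surface the result in the UI. The sketch below shows only that shape; the feature names and the rule-based classifier are invented stand-ins for the trained classification model.

```python
# Toy sketch of register classification over extracted vocal features; the
# feature names and thresholds are assumptions, not the trained model.
def classify_register(features: dict[str, float]) -> str:
    pitch_var = features.get("pitch_variance", 0.0)
    rate = features.get("speaking_rate_wpm", 0.0)
    if pitch_var > 0.5 and rate < 140:
        return "presentation"
    if rate > 170:
        return "conversational"
    return "neutral"

# Features would come from audio analysis; here they are hand-made examples.
print(classify_register({"pitch_variance": 0.7, "speaking_rate_wpm": 120}))
```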
-
Publication number: 20240378393
Abstract: An example embodiment may involve obtaining a chat dialog including a first question and a first action; obtaining, from a natural language model, an utterance based on the first question and an input parameter associated with the utterance; obtaining, from the natural language model, program code for performing the first action on a computing system, wherein the program code is based on a textual description of the first action and specification of a variable defined by the computing system in which to store the input parameter; and generating a virtual agent to perform the first action on the computing system using the program code.
Type: Application
Filed: May 10, 2023
Publication date: November 14, 2024
Inventors: Amine El Hattami, Christopher Pal, Amit Srivastava
-
Patent number: 12124812
Abstract: A data processing system implements obtaining first textual content in a first language from a first client device; determining that the first language is supported by a first machine learning model; obtaining a guard list of prohibited terms associated with the first language; determining, based on the guard list, that the textual content does not include one or more prohibited terms; providing the first textual content as an input to the first machine learning model responsive to the textual content not including the one or more prohibited terms; analyzing the first textual content with the first machine learning model to obtain a first content recommendation; obtaining a first content recommendation policy that identifies content associated with the first language that may not be provided as a content recommendation; determining that the first content recommendation is not prohibited; and providing the first content recommendation to the first client device.
Type: Grant
Filed: October 26, 2021
Date of Patent: October 22, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ji Li, Amit Srivastava, Xingxing Zhang, Furu Wei
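The abstract describes two gates around the model: a guard-list check on the input before inference, and a recommendation-policy check on the output before it is returned. A minimal sketch of that two-stage filter follows; the tokenization, the stubbed model call, and the block lists are illustrative assumptions.

```python
# Sketch of guard-list filtering before inference and policy filtering after
# it, as described in the abstract. The model call is a placeholder.
def contains_prohibited(text: str, blocklist: set[str]) -> bool:
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & blocklist)

def recommend(text: str, guard_list: set[str],
              policy_blocklist: set[str]) -> str | None:
    if contains_prohibited(text, guard_list):
        return None                                   # never pass prohibited input to the model
    recommendation = f"Suggested rewrite of: {text}"  # stand-in for the ML model output
    if contains_prohibited(recommendation, policy_blocklist):
        return None                                   # policy check on the recommendation
    return recommendation

print(recommend("Quarterly sales grew 12%",
                guard_list={"prohibitedterm"},
                policy_blocklist={"prohibitedterm"}))
```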
-
Publication number: 20240320451
Abstract: Intelligent content is automatically generated using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: June 6, 2024
Publication date: September 26, 2024
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
-
Patent number: 12047704
Abstract: The present disclosure relates to application of artificial intelligence (AI) processing that adapts one or more video feeds relative to presentation content. Trained AI processing automatically generates a combined representation comprising one or more video feeds and presentation content. An exemplary combined representation is the result of contextual analysis by one or more trained AI models that are adapted to consider how to adapt presentation of a video feed relative to displayable presentation content (or vice versa). A combined representation of one or more video feeds and presentation content is automatically generated (and subsequently rendered) based on a result of contextual evaluation of data associated with a video feed and data attributes of presentation content. A combined representation may comprise a modification of the one or more video feeds, objects of presentation content, or a combination thereof.
Type: Grant
Filed: August 26, 2021
Date of Patent: July 23, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Fatima Zohra Daha, Robert Fernand Gordan, Amit Srivastava, Joshua Alexander Doctors
-
Patent number: 12045279
Abstract: A system and method for retrieving one or more visual assets includes receiving a search query for the one or more visual assets, the search query including textual data, encoding the textual data into one or more text embedding representations via a trained text representation machine-learning (ML) model, transmitting the one or more text embedding representations to a matching and selection unit, providing visual embedding representations of one or more visual assets to the matching and selection unit, comparing, by the matching and selection unit, the one or more text embedding representations to the visual embedding representations to identify one or more visual asset search results, and providing the one or more visual asset search results for display.
Type: Grant
Filed: November 30, 2021
Date of Patent: July 23, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ji Li, Adit Krishnan, Amit Srivastava, Han Hu, Qi Dai, Yixuan Wei, Yue Cao
-
Patent number: 12032922
Abstract: Intelligent content is automatically generated using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Grant
Filed: May 12, 2021
Date of Patent: July 9, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
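This patent (and the related publication 20240320451 above) describes a document-to-presentation pipeline: parse the document, generate candidate scripts, select one, then synthesize audio. The end-to-end sketch below shows only that sequencing; every function body is a placeholder assumption standing in for the text analysis, language generation, and text-to-speech models.

```python
# Illustrative pipeline sketch; all steps are stubs for the models the
# abstract describes, not the actual system components.
def parse_document(document: str) -> list[str]:
    """Stand-in text analysis step: split the document into model inputs."""
    return [p.strip() for p in document.split("\n\n") if p.strip()]

def generate_candidate_scripts(inputs: list[str]) -> list[str]:
    """Stand-in natural language generation step producing candidate scripts."""
    return [f"Today we will cover: {text}" for text in inputs]

def select_script(candidates: list[str]) -> str:
    """Stand-in selection step; a real system might rank by a learned score."""
    return candidates[0]

def synthesize_audio(script: str) -> bytes:
    """Stand-in text-to-speech step returning audio bytes."""
    return script.encode("utf-8")

document = "Q3 revenue grew 12%.\n\nChurn fell to 3%."
script = select_script(generate_candidate_scripts(parse_document(document)))
audio = synthesize_audio(script)
print(script, len(audio))
```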
-
Patent number: 12026948
Abstract: Techniques performed by a data processing system include establishing an online presentation session for conducting an online presentation, receiving first media streams comprising presentation content from the first computing device, receiving second media streams from the second computing devices of a subset of the plurality of participants, the second media streams including audio content, video content, or both of the subset of the plurality of participants, analyzing the first media streams using first machine learning models to generate feedback results, analyzing the set of second media streams to identify first reactions by the participants to obtain reaction information, automatically analyzing the feedback results and the reactions to identify discrepancies between the feedback results and the reactions, and automatically updating one or more parameters of the machine learning models based on the discrepancies to improve the suggestions for improving the online presentation.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
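The key step is comparing model-generated feedback against observed audience reactions and treating mismatches as training signal. A rough sketch of that discrepancy check follows; the segment labels and the notion of a downstream "parameter update" are illustrative assumptions.

```python
# Rough sketch of identifying discrepancies between model feedback and
# observed audience reactions; labels are hand-made examples.
def find_discrepancies(model_feedback: dict[int, str],
                       audience_reactions: dict[int, str]) -> list[int]:
    """Return segment indices where the model and the audience disagree."""
    return [seg for seg, predicted in model_feedback.items()
            if audience_reactions.get(seg) and audience_reactions[seg] != predicted]

feedback = {0: "engaging", 1: "too_fast", 2: "engaging"}
reactions = {0: "engaging", 1: "engaging", 2: "confused"}
print(find_discrepancies(feedback, reactions))  # segments that would drive a model update
```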
-
Patent number: 12020701
Abstract: Methods, systems, and computer programs are presented for detecting a mission change in a conversation. A user utterance from a user device is received. The user utterance is part of a conversation with an intelligent assistant. The conversation includes preceding user utterances in pursuit of a first mission. It is determined that the user utterance indicates a mission change from the first mission to a second mission based on an application of a machine-learned model to the user utterance and the preceding user utterances. The machine-learned model has been trained repeatedly with past utterances of other users over a time period, and the determination is based on a certainty of the indication satisfying a certainty threshold. Responsive to determining that the user utterance indicates the mission change from the first mission to the second mission, a reply to the user utterance is generated to further the second mission rather than the first mission.
Type: Grant
Filed: September 30, 2021
Date of Patent: June 25, 2024
Assignee: EBAY INC.
Inventors: Stefan Schoenmackers, Amit Srivastava, Lawrence William Colagiovanni, Sanjika Hewavitharana, Ajinkya Gorakhnath Kale, Vinh Khuc
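The decision rule is simple to state: score the new utterance against the conversation history and declare a mission change only when the model's certainty clears a threshold. The sketch below captures that thresholding; the scoring function is a stub and the threshold value is an assumption.

```python
# Minimal sketch of a certainty-thresholded mission-change check; the model
# call is a placeholder for the machine-learned model in the abstract.
def mission_change_probability(utterance: str, history: list[str]) -> float:
    """Stub for the model scoring whether the utterance starts a new mission."""
    return 0.92 if "instead" in utterance.lower() else 0.10

def detect_mission_change(utterance: str, history: list[str],
                          certainty_threshold: float = 0.8) -> bool:
    return mission_change_probability(utterance, history) >= certainty_threshold

history = ["show me running shoes", "size 10 please"]
print(detect_mission_change("actually, find me a tennis racket instead", history))  # True
print(detect_mission_change("do you have them in blue?", history))                  # False
```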
-
Patent number: 12020683
Abstract: A real-time name mispronunciation detection feature can enable a user to receive instant feedback anytime they have mispronounced another person's name in an online meeting. The feature can receive audio input of a speaker and obtain a transcript of the audio input; identify a name from text of the transcript based on names of meeting participants; and extract a portion of the audio input corresponding to the name identified from the text of the transcript. The feature can obtain a reference pronunciation for the name using a user identifier associated with the name; and can obtain a pronunciation score for the name based on a comparison between the reference pronunciation for the name and the portion of the audio input corresponding to the name. The feature can then determine whether the pronunciation score is below a threshold; and in response, notify the speaker of a pronunciation error.
Type: Grant
Filed: October 28, 2021
Date of Patent: June 25, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tapan Bohra, Akshay Mallipeddi, Amit Srivastava, Ana Karen Parra
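The final step is a score-against-threshold decision: compare the spoken name to a reference pronunciation, and notify the speaker only when the score falls below the threshold. A hedged sketch of that step follows; the scoring function is a stub and the threshold is illustrative.

```python
# Sketch of the scoring-and-notification step; the acoustic comparison is a
# placeholder and the threshold value is an assumption.
def pronunciation_score(reference_pronunciation: str, spoken_segment: bytes) -> float:
    """Stub for comparing the reference pronunciation to the spoken audio segment."""
    return 0.42  # a real system would return an acoustic similarity score

def check_name_pronunciation(name: str, reference_pronunciation: str,
                             spoken_segment: bytes, threshold: float = 0.6) -> str | None:
    score = pronunciation_score(reference_pronunciation, spoken_segment)
    if score < threshold:
        return f"Possible mispronunciation of '{name}' (score {score:.2f})"
    return None

print(check_name_pronunciation("Siobhan", "shiv-AWN", b"...audio bytes..."))
```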
-
Patent number: 12001514
Abstract: The present disclosure relates to processing operations that execute image classification training for domain-specific traffic, where training operations are entirely compliant with data privacy regulations and policies. Image classification model training, as described herein, is configured to classify meaningful image categories in domain-specific scenarios where unknown data traffic and strict data compliance requirements result in privacy-limited image data sets. Iterative image classification training satisfies data compliance requirements through a combination of online image classification training and offline image classification training. This results in tuned image recognition classifiers that have improved accuracy and efficiency over general image recognition classifiers when working with domain-specific data traffic. One or more image recognition classifiers are independently trained and tuned to detect an image class for image classification.
Type: Grant
Filed: October 18, 2022
Date of Patent: June 4, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Youjun Liu, Amit Srivastava
-
Publication number: 20240169151
Abstract: Understanding emojis in the context of online experiences is described. In at least some embodiments, text input is received and a vector representation of the text input is computed. Based on the vector representation, one or more emojis that correspond to the vector representation of the text input are ascertained and a response is formulated that includes at least one of the one or more emojis. In other embodiments, input from a client machine is received. The input includes at least one emoji. A computed vector representation of the emoji is used to look for vector representations of words or phrases that are close to the computed vector representation of the emoji. At least one of the words or phrases is selected and at least one task is performed using the selected word(s) or phrase(s).
Type: Application
Filed: January 31, 2024
Publication date: May 23, 2024
Applicant: eBay Inc.
Inventors: Dishan Gupta, Ajinkya Gorakhnath Kale, Stefan Boyd Schoenmackers, Amit Srivastava
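The first embodiment amounts to a nearest-neighbor lookup: embed the text, then pick the emoji whose vector is closest. The toy sketch below shows that lookup; the three-dimensional hand-made vectors stand in for outputs of a trained embedding model.

```python
# Toy sketch of mapping a text vector to its nearest emoji vector; the
# embeddings are tiny hand-made stand-ins, not model outputs.
import numpy as np

emoji_vectors = {
    "😀": np.array([0.9, 0.1, 0.0]),  # happy
    "😢": np.array([0.0, 0.1, 0.9]),  # sad
    "🎉": np.array([0.8, 0.6, 0.0]),  # celebration
}

def nearest_emoji(text_vector: np.ndarray) -> str:
    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(emoji_vectors, key=lambda e: cosine(text_vector, emoji_vectors[e]))

# A vector a text encoder might produce for "we won the championship!"
print(nearest_emoji(np.array([0.85, 0.55, 0.05])))  # 🎉
```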
-
Publication number: 20240095490
Abstract: Aspect pre-selection techniques using machine learning are described. In one example, an artificial assistant system is configured to implement a chat bot. A user then engages in a first natural-language conversation. As part of this first natural-language conversation, a communication is generated by the chat bot to prompt the user to specify an aspect of a category that is a subject of the first natural-language conversation, and user data is received in response. Data that describes this first natural-language conversation is used to train a model using machine learning. Data is then received by the chat bot as part of a second natural-language conversation. This data, from the second natural-language conversation, is processed using the model as part of machine learning to generate a second search query that includes the aspect of the category automatically and without user intervention.
Type: Application
Filed: November 29, 2023
Publication date: March 21, 2024
Applicant: eBay Inc.
Inventors: Farah Abdallah, Robert Enyedi, Amit Srivastava, Elaine Lee, Braddock Craig Gaskill, Tomer Lancewicki, Xinyu Zhang, Jayanth Vasudevan, Dominique Jean Bouchon
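In essence, an aspect the user specified in an earlier conversation for a category is remembered and attached to a later query in that category without prompting again. The simplified sketch below illustrates that flow; the dictionary "model" and the query format are assumptions standing in for the machine-learned component.

```python
# Simplified sketch of aspect pre-selection: remember an aspect from a first
# conversation and attach it to a later query automatically. The dictionary
# below is a stand-in for the trained model.
learned_aspects: dict[str, dict[str, str]] = {}

def record_aspect(user_id: str, category: str, aspect: str, value: str) -> None:
    """Training stand-in: remember the aspect the user specified for a category."""
    learned_aspects.setdefault(user_id, {})[category] = f"{aspect}:{value}"

def build_query(user_id: str, category: str, query_text: str) -> str:
    """Inference stand-in: pre-select the learned aspect for the new query."""
    aspect = learned_aspects.get(user_id, {}).get(category)
    return f"{query_text} [{aspect}]" if aspect else query_text

record_aspect("u1", "shoes", "size", "10")                 # from the first conversation
print(build_query("u1", "shoes", "trail running shoes"))   # aspect added automatically
```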
-
Patent number: 11928428
Abstract: Understanding emojis in the context of online experiences is described. In at least some embodiments, text input is received and a vector representation of the text input is computed. Based on the vector representation, one or more emojis that correspond to the vector representation of the text input are ascertained and a response is formulated that includes at least one of the one or more emojis. In other embodiments, input from a client machine is received. The input includes at least one emoji. A computed vector representation of the emoji is used to look for vector representations of words or phrases that are close to the computed vector representation of the emoji. At least one of the words or phrases is selected and at least one task is performed using the selected word(s) or phrase(s).
Type: Grant
Filed: March 13, 2023
Date of Patent: March 12, 2024
Assignee: eBay Inc.
Inventors: Dishan Gupta, Ajinkya Gorakhnath Kale, Stefan Boyd Schoenmackers, Amit Srivastava
-
Patent number: 11909922
Abstract: The present disclosure relates to processing operations that automatically analyze acoustic signals from attendees of a live presentation and automatically trigger corresponding reaction indications based on the results of that analysis. Exemplary reaction indications provide feedback for live presentations that can be presented in real time (or near real time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand, such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Grant
Filed: January 18, 2023
Date of Patent: February 20, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
-
Publication number: 20240050572
Abstract: The present disclosure provides polysorbate 20 compositions with particular fatty acid ester concentrations. In some embodiments, they may be used in pharmaceutical formulations, for example, to improve stability.
Type: Application
Filed: October 26, 2023
Publication date: February 15, 2024
Applicant: Genentech, Inc.
Inventors: Sandeep Yadav, Nidhi Doshi, Tomanna Shobha, Anthony Tomlinson, Amit Srivastava