Patents by Inventor Amit Srivastava

Amit Srivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11455466
    Abstract: A method and system for providing an application-specific embedding for an entire text-to-content suggestions service are disclosed. The method includes accessing a dataset containing unlabeled training data collected from an application, the unlabeled training data being collected under user privacy constraints, applying an unsupervised ML model to the dataset to generate a pretrained embedding, and utilizing the pretrained embedding to train the text-to-content suggestion ML model utilized by the application.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: September 27, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xingxing Zhang, Ji Li, Furu Wei, Ming Zhou, Amit Srivastava
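    A minimal sketch in Python of the two-stage idea in the entry above (patent 11455466): fit an unsupervised model on unlabeled application text to produce a pretrained embedding that the text-to-content suggestion model can later reuse. The co-occurrence-plus-SVD embedding and helper names such as `build_vocabulary` are illustrative assumptions, not the patented implementation.

    ```python
    # Illustrative only: a tiny unsupervised "pretrained embedding" built from
    # unlabeled application text via word co-occurrence counts and a truncated SVD.
    # The patent does not specify this particular technique.
    from itertools import combinations

    import numpy as np


    def build_vocabulary(corpus: list[str]) -> dict[str, int]:
        """Map each token observed in the unlabeled corpus to an index."""
        tokens = {tok for doc in corpus for tok in doc.lower().split()}
        return {tok: i for i, tok in enumerate(sorted(tokens))}


    def pretrain_embedding(corpus: list[str], dim: int = 8) -> tuple[dict[str, int], np.ndarray]:
        """Fit an unsupervised embedding on unlabeled, privacy-constrained text."""
        vocab = build_vocabulary(corpus)
        cooc = np.zeros((len(vocab), len(vocab)))
        for doc in corpus:
            ids = [vocab[t] for t in doc.lower().split()]
            for i, j in combinations(ids, 2):  # count within-document co-occurrence
                cooc[i, j] += 1.0
                cooc[j, i] += 1.0
        # Truncated SVD keeps the top `dim` directions as the embedding matrix.
        u, s, _ = np.linalg.svd(cooc, full_matrices=False)
        k = min(dim, len(s))
        return vocab, u[:, :k] * s[:k]


    if __name__ == "__main__":
        unlabeled = ["quarterly sales grew in every region",
                     "sales team presented quarterly results",
                     "design a slide about team growth"]
        vocab, embedding = pretrain_embedding(unlabeled)
        print(embedding.shape)  # (vocabulary size, embedding dimension)
    ```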
  • Patent number: 11429787
    Abstract: A method and system for training a text-to-content suggestion ML model include accessing a dataset containing unlabeled training data collected from an application, the unlabeled training data being collected under user privacy constraints, applying an unsupervised ML model to the dataset to generate a pretrained embedding, and applying a supervised ML model to a labeled dataset to train the text-to-content suggestion ML model utilized by the application by utilizing the pretrained embedding generated by the unsupervised ML model.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: August 30, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Xingxing Zhang, Furu Wei, Ming Zhou, Amit Srivastava
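    A hedged sketch of the supervised step described in the entry above (patent 11429787): a text-to-content suggester trained on labeled data while reusing a pretrained embedding such as the one sketched for patent 11455466. The nearest-centroid classifier over averaged token vectors is an assumption chosen for brevity, not the patent's model.

    ```python
    # Illustrative only: a supervised text-to-content suggestion step that reuses a
    # pretrained embedding. The nearest-centroid classifier is an assumption.
    import numpy as np


    def embed_text(text: str, vocab: dict[str, int], embedding: np.ndarray) -> np.ndarray:
        """Average the pretrained token vectors for the tokens we recognize."""
        ids = [vocab[t] for t in text.lower().split() if t in vocab]
        if not ids:
            return np.zeros(embedding.shape[1])
        return embedding[ids].mean(axis=0)


    def train_suggester(labeled: list[tuple[str, str]], vocab, embedding):
        """Build one centroid per content label from the labeled dataset."""
        grouped: dict[str, list[np.ndarray]] = {}
        for text, label in labeled:
            grouped.setdefault(label, []).append(embed_text(text, vocab, embedding))
        return {label: np.mean(vecs, axis=0) for label, vecs in grouped.items()}


    def suggest_content(text: str, centroids, vocab, embedding) -> str:
        """Return the content label whose centroid is closest (cosine) to the text."""
        v = embed_text(text, vocab, embedding)

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b) or 1.0
            return float(a @ b) / denom

        return max(centroids, key=lambda label: cosine(v, centroids[label]))


    if __name__ == "__main__":
        vocab = {"sales": 0, "chart": 1, "team": 2, "photo": 3}
        embedding = np.eye(4)  # stand-in for a pretrained embedding
        labeled = [("quarterly sales chart", "chart"), ("team photo", "image")]
        centroids = train_suggester(labeled, vocab, embedding)
        print(suggest_content("sales for the quarter", centroids, vocab, embedding))
    ```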
  • Patent number: 11414396
    Abstract: Disclosed herein is a process of making a compound of formula I. The compound of formula I is an inhibitor of MEK and thus can be used to treat cancer.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: August 16, 2022
    Assignees: Exelixis, Inc., Genentech, Inc.
    Inventors: Sriram Naganathan, Nathan Guz, Matthew Pfeiffer, C. Gregory Sowell, Tracy Bostick, Jason Yang, Amit Srivastava, Neel Kumar Anand
  • Publication number: 20220229832
    Abstract: Intelligent content is generated automatically using a system of computers including a user device and a cloud-based component that processes user information. The system performs a process that includes receiving a user query for creating content in a content generation application and determining an action from an intent of the user query. A prompt is generated based on the action and provided to a natural language generation model. In response to the prompt, output is received from the natural language generation model. Response content is generated based on the output in a format compatible with the content generation application. At least some of the response content is displayed to the user. The user can choose to keep, edit, or discard the response content. The user can iterate with additional queries until the content document reflects the user's desired content.
    Type: Application
    Filed: January 19, 2021
    Publication date: July 21, 2022
    Inventors: Ji Li, Amit Srivastava, Muin Barkatali Momin, Muqi Li, Emily Lauren Tohir, SivaPriya Kalyanaraman, Derek Martin Johnson
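    A minimal sketch of the pipeline described in the entry above (publication 20220229832): determine an action from the intent of a user query, build a prompt for a natural language generation model, and wrap the model output in response content a content application can display. The intent rules, `fake_nlg_model`, and the `ResponseContent` shape are placeholder assumptions.

    ```python
    # Illustrative pipeline only: query -> intent/action -> prompt -> NLG output ->
    # application-compatible response content. The intent rules and the stubbed
    # NLG model below are assumptions, not the patented components.
    from dataclasses import dataclass


    @dataclass
    class ResponseContent:
        """Response content in a (hypothetical) format a content app could consume."""
        action: str
        blocks: list[str]


    def determine_action(query: str) -> str:
        """Very small stand-in for intent detection on the user query."""
        q = query.lower()
        if "outline" in q:
            return "create_outline"
        if "summarize" in q or "summary" in q:
            return "summarize"
        return "draft_text"


    def build_prompt(action: str, query: str) -> str:
        """Generate a prompt for the natural language generation model."""
        return f"Task: {action}\nUser request: {query}\nRespond with one idea per line."


    def fake_nlg_model(prompt: str) -> str:
        """Placeholder for the real natural language generation model."""
        return "Introduction\nKey points\nConclusion"


    def generate_content(query: str) -> ResponseContent:
        action = determine_action(query)
        output = fake_nlg_model(build_prompt(action, query))
        # Convert raw model output into blocks the content application can display.
        return ResponseContent(action=action,
                               blocks=[line for line in output.splitlines() if line])


    if __name__ == "__main__":
        print(generate_content("Create an outline for a talk about solar energy"))
    ```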
  • Publication number: 20220198157
    Abstract: A data processing system for generating training data for a multilingual NLP model implements obtaining a corpus including first and second content items, where the first content items are English-language textual content, and the second content items are translations of the first content items in one or more non-English target languages; selecting a first content item from the plurality of first content items; generating a plurality of candidate labels for the first content item by analyzing the first content item with a plurality of first English-language NLP models; selecting a first label from the plurality of candidate labels; generating first training data by associating the first label with the first content item; generating second training data by associating the first label with a second content item of the second content items; and training a pretrained multilingual NLP model with the first training data and the second training data.
    Type: Application
    Filed: December 22, 2020
    Publication date: June 23, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava
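    A hedged sketch of the label-transfer idea in the entry above (publication 20220198157): English-language NLP models propose candidate labels for an English content item, one label is selected, and that label is attached both to the English item and to its aligned translations to form multilingual training data. The stubbed models and the majority-vote selection are assumptions.

    ```python
    # Illustrative only: transferring a label chosen by English-language NLP models
    # onto aligned non-English translations to build multilingual training data.
    # The stubbed "models" and the majority-vote label selection are assumptions.
    from collections import Counter
    from typing import Callable

    TrainingExample = tuple[str, str, str]  # (text, language, label)


    def generate_training_data(
        english_item: str,
        translations: dict[str, str],                # language code -> translated text
        english_models: list[Callable[[str], str]],  # each proposes a candidate label
    ) -> list[TrainingExample]:
        # Generate candidate labels with the English-language NLP models.
        candidates = [model(english_item) for model in english_models]
        # Select a single label (here: simple majority vote, an assumption).
        label, _ = Counter(candidates).most_common(1)[0]
        data = [(english_item, "en", label)]
        # Associate the same label with each aligned translation.
        data += [(text, lang, label) for lang, text in translations.items()]
        return data


    if __name__ == "__main__":
        stub_models = [lambda t: "finance", lambda t: "finance", lambda t: "sports"]
        examples = generate_training_data(
            "Quarterly revenue exceeded expectations.",
            {"fr": "Le chiffre d'affaires trimestriel a dépassé les attentes.",
             "de": "Der Quartalsumsatz übertraf die Erwartungen."},
            stub_models,
        )
        for ex in examples:
            print(ex)
    ```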
  • Patent number: 11341331
    Abstract: An intelligent speech assistant receives information collected while a user is speaking. The information can comprise speech data, vision data, or both, where the speech data is from the user speaking and the vision data is of the user while speaking. The assistant evaluates the speech data against a script which can contain information that the user should speak, information that the user should not speak, or both. The assistant collects instances where the user utters phrases that match the script or instances where the user utters phrases that do not match the script, depending on whether the phrases should or should not be spoken. The assistant evaluates vision data to identify gestures, facial expressions, and/or emotions of the user. Instances where the gestures, facial expressions, and/or emotions are not appropriate to the context are flagged. Real-time prompts and/or a summary are presented to the user as feedback.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Huakai Liao, Priyanka Vikram Sinha, Kevin Dara Khieu, Derek Martin Johnson, Siliang Kang, Huey-Ru Tsai, Amit Srivastava
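    A minimal sketch of the script-checking portion of the entry above (patent 11341331): compare transcribed speech against phrases the presenter should say and phrases they should avoid, and collect the flagged instances. The substring matching is an assumption, and the vision-based gesture/emotion analysis is omitted.

    ```python
    # Illustrative only: checking transcribed speech against a script that lists
    # phrases the presenter should say and phrases they should avoid. The simple
    # substring matching below is an assumption; it ignores the vision analysis.
    from dataclasses import dataclass, field


    @dataclass
    class ScriptReport:
        covered: list[str] = field(default_factory=list)  # required phrases spoken
        missing: list[str] = field(default_factory=list)  # required phrases not spoken
        flagged: list[str] = field(default_factory=list)  # forbidden phrases spoken


    def evaluate_against_script(transcript: str, must_say: list[str],
                                must_not_say: list[str]) -> ScriptReport:
        spoken = transcript.lower()
        report = ScriptReport()
        for phrase in must_say:
            (report.covered if phrase.lower() in spoken else report.missing).append(phrase)
        for phrase in must_not_say:
            if phrase.lower() in spoken:
                report.flagged.append(phrase)
        return report


    if __name__ == "__main__":
        print(evaluate_against_script(
            "welcome everyone, our results this quarter were basically fine",
            must_say=["welcome everyone", "record revenue"],
            must_not_say=["basically"],
        ))
    ```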
  • Publication number: 20220147702
    Abstract: The present disclosure applies trained artificial intelligence (AI) processing adapted to automatically generate transformations of formatted templates. Pre-existing formatted templates (e.g., slide-based presentation templates) are leveraged by the trained AI processing to automatically generate a plurality of high-quality template transformations. In transforming a formatted template, the trained AI processing not only generates feature transformations of its objects but may also provide style transformations, where attributes associated with a presentation theme may be modified for a formatted template or set of formatted templates. The trained AI processing is novel in that it is tailored for analysis of feature data of a specific type of formatted template.
    Type: Application
    Filed: November 11, 2020
    Publication date: May 12, 2022
    Inventors: Ji Li, Amit Srivastava, Mingxi Cheng
  • Publication number: 20220141532
    Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing the session for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, sending a set of second media streams to the plurality of second computing devices, receiving a set of third media streams from the computing devices of a first subset of the plurality of participants including video content of the first subset of participants captured by their respective computing devices, analyzing the set of third media streams to identify a set of first reactions by the first subset of participants to obtain first reaction information, determining first graphical representation information representing the first reaction information, and sending a fourth media stream to cause the first computing device to display the first graphical representation information.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Robert Fernand Gordan, Nicolas Higuera, Amit Srivastava
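    A hedged sketch of the aggregation step in the entry above (publication 20220141532): reactions identified from participants' media streams are summarized into simple graphical representation information for the presenter's device. Treating the stream analysis as a list of reaction labels is an assumption.

    ```python
    # Illustrative only: aggregating reactions detected in participants' media
    # streams into simple "graphical representation information" for the presenter.
    # Treating each analyzed stream as a reaction label is an assumption.
    from collections import Counter


    def summarize_reactions(detected_reactions: list[str]) -> dict[str, float]:
        """Turn per-participant reaction labels into fractions for display."""
        counts = Counter(detected_reactions)
        total = sum(counts.values()) or 1
        return {reaction: count / total for reaction, count in counts.most_common()}


    if __name__ == "__main__":
        # Stand-in for the output of analyzing the participants' video streams.
        reactions = ["smiling", "smiling", "nodding", "confused", "smiling"]
        print(summarize_reactions(reactions))
        # e.g. {'smiling': 0.6, 'nodding': 0.2, 'confused': 0.2}
    ```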
  • Publication number: 20220138470
    Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing an online presentation session for conducting an online presentation for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, receiving a set of second media streams from the second computing devices of a first subset of the plurality of participants, the set of second media streams including audio content, video content, or both, of the first subset of the plurality of participants, analyzing the set of first media streams using one or more first machine learning models to generate a set of first feedback results, analyzing the set of second media streams using one or more second machine learning models to identify a set of first reactions by the participants to obtain first reaction information, automatically analyzing the set of
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
  • Publication number: 20220111054
    Abstract: The present disclosure provides polysorbate 20 compositions with particular fatty acid ester concentrations. In some embodiments, they may be used in pharmaceutical formulations, for example, to improve stability.
    Type: Application
    Filed: September 23, 2021
    Publication date: April 14, 2022
    Applicant: Genentech, Inc.
    Inventors: Sandeep Yadav, Nidhi Doshi, Tamanna Shobha, Anthony Tomlinson, Amit Srivastava
  • Patent number: 11289091
    Abstract: Examples are disclosed that relate to methods and computing devices for providing voice-based assistance during a presentation. In one example, a method comprises receiving content of a slide deck, processing the content of the slide deck, and populating a contextual knowledge graph based on the content of the slide deck. A voice input is received from a presenter. Using the knowledge graph, the voice input is analyzed to determine an action to be performed by a presentation program during the presentation. The action is translated into one or more commands executable by the presentation program to perform the action, and the one or more commands are sent to a client device executing the presentation program.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Amit Srivastava, Dachuan Zhang
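    A minimal sketch of the flow in the entry above (patent 11289091): populate a small contextual graph from slide deck content, resolve a presenter's voice input against it, and translate the resulting action into a command the presentation program can execute. The graph shape and command names are placeholder assumptions.

    ```python
    # Illustrative only: a toy contextual "knowledge graph" keyed on slide topics,
    # used to turn a presenter's voice input into a command for the presentation
    # program. The graph shape and command names are placeholder assumptions.


    def build_knowledge_graph(slide_deck: list[dict]) -> dict[str, int]:
        """Map each title/keyword found in the deck to its slide number."""
        graph: dict[str, int] = {}
        for slide in slide_deck:
            for term in [slide["title"], *slide.get("keywords", [])]:
                graph[term.lower()] = slide["number"]
        return graph


    def voice_input_to_command(voice_input: str, graph: dict[str, int]) -> dict:
        """Resolve the voice input against the graph and emit an executable command."""
        text = voice_input.lower()
        for term, slide_number in graph.items():
            if term in text:
                return {"command": "GOTO_SLIDE", "slide": slide_number}
        return {"command": "NO_OP"}


    if __name__ == "__main__":
        deck = [{"number": 1, "title": "Agenda"},
                {"number": 4, "title": "Revenue", "keywords": ["sales", "growth"]}]
        graph = build_knowledge_graph(deck)
        print(voice_input_to_command("let's jump to the sales numbers", graph))
        # -> {'command': 'GOTO_SLIDE', 'slide': 4}
    ```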
  • Patent number: 11270059
    Abstract: A textual user input is received and a plurality of different text-to-content models are run on the textual user input. A selection system attempts to identify a suggested content item based upon the outputs of the text-to-content models. The selection system first attempts to generate a completed suggestion based on outputs from a single text-to-content model. If that is not possible, it then attempts to mix the outputs of the text-to-content models to obtain a completed content suggestion.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: March 8, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Xiaozhi Yu, Gregory Alexander DePaul, Youjun Liu, Amit Srivastava
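    A hedged sketch of the selection order described in the entry above (patent 11270059): first try to assemble a complete suggestion from a single model's output, then fall back to mixing outputs from several models. The suggestion "slots" and the completeness test are assumptions for illustration.

    ```python
    # Illustrative only: a selection system that first tries to build a complete
    # content suggestion from a single text-to-content model's output and, failing
    # that, mixes outputs from several models. The completeness test is an assumption.
    from typing import Callable, Optional

    # Each model maps the textual user input to a partial suggestion:
    # a dict of slot name -> proposed content (missing slots are simply absent).
    TextToContentModel = Callable[[str], dict[str, str]]

    REQUIRED_SLOTS = ("image", "layout", "caption")  # hypothetical suggestion slots


    def select_suggestion(text: str,
                          models: list[TextToContentModel]) -> Optional[dict[str, str]]:
        outputs = [model(text) for model in models]
        # 1) Prefer a completed suggestion coming from a single model.
        for output in outputs:
            if all(slot in output for slot in REQUIRED_SLOTS):
                return output
        # 2) Otherwise, mix the models' outputs, earlier models taking priority.
        mixed: dict[str, str] = {}
        for output in outputs:
            for slot, value in output.items():
                mixed.setdefault(slot, value)
        return mixed if all(slot in mixed for slot in REQUIRED_SLOTS) else None


    if __name__ == "__main__":
        models = [lambda t: {"image": "chart.png"},
                  lambda t: {"layout": "two-column", "caption": "Quarterly results"}]
        print(select_suggestion("show quarterly results", models))
    ```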
  • Publication number: 20220038580
    Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Application
    Filed: August 3, 2020
    Publication date: February 3, 2022
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
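    A minimal sketch of the mapping described in the entry above (publication 20220038580): coarse acoustic features extracted from attendee audio trigger reaction indications such as emoji in near real time. The feature names, detectors, and thresholds are placeholders, not the patent's analysis.

    ```python
    # Illustrative only: mapping coarse acoustic features from attendees' audio to
    # reaction indications (emoji-style labels) in near real time. The features and
    # thresholds are placeholders, not the patent's analysis.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class AcousticFrame:
        loudness: float        # 0..1, normalized energy of the frame
        laughter_score: float  # 0..1, output of a (hypothetical) laughter detector
        applause_score: float  # 0..1, output of a (hypothetical) applause detector


    def reaction_indication(frame: AcousticFrame) -> Optional[str]:
        """Return a reaction indication to display, or None if nothing triggers."""
        if frame.applause_score > 0.7:
            return "👏"
        if frame.laughter_score > 0.7:
            return "😄"
        if frame.loudness > 0.9:
            return "❗"  # e.g. a GUI notification that attention may be needed
        return None


    if __name__ == "__main__":
        for frame in [AcousticFrame(0.4, 0.1, 0.9), AcousticFrame(0.3, 0.8, 0.1)]:
            print(reaction_indication(frame))
    ```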
  • Publication number: 20220020375
    Abstract: Methods, systems, and computer programs are presented for detecting mission changes in a conversation. A user utterance from a user device is received. The user utterance is part of a conversation with an intelligent assistant. The conversation includes preceding user utterances in pursuit of a first mission. It is determined that the user utterance indicates a mission change from the first mission to a second mission based on an application of a machine-learned model to the user utterance and the preceding user utterances. The machine-learned model has been trained repeatedly with past utterances of other users over a time period, the determining based on a certainty of the indication satisfying a certainty threshold. Responsive to the determining that the user utterance indicates the mission change from the first mission to a second mission, a reply to the user utterance is generated to further the second mission rather than the first mission.
    Type: Application
    Filed: September 30, 2021
    Publication date: January 20, 2022
    Inventors: Stefan Schoenmackers, Amit Srivastava, Lawrence William Colagiovanni, Sanjika Hewavitharana, Ajinkya Gorakhnath Kale, Vinh Khuc
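    A hedged sketch of the decision in the entry above (publication 20220020375): score whether a new utterance departs from the preceding utterances' mission and switch only when the certainty clears a threshold. The keyword-overlap scorer and the threshold value stand in for the machine-learned model; they are assumptions.

    ```python
    # Illustrative only: switching an assistant's "mission" when a new utterance
    # diverges from the preceding ones with enough certainty. The keyword-overlap
    # scorer below is a stand-in for the machine-learned model in the abstract.
    CERTAINTY_THRESHOLD = 0.7  # hypothetical value


    def mission_change_certainty(utterance: str, preceding: list[str]) -> float:
        """Rough certainty (0..1) that the utterance starts a new mission."""
        current = set(" ".join(preceding).lower().split())
        new = set(utterance.lower().split())
        if not new:
            return 0.0
        overlap = len(new & current) / len(new)
        return 1.0 - overlap  # low lexical overlap -> likely a new mission


    def reply_for(utterance: str, preceding: list[str]) -> str:
        if mission_change_certainty(utterance, preceding) >= CERTAINTY_THRESHOLD:
            return f"Switching topics - let's work on: {utterance}"
        return f"Continuing with your current request, noting: {utterance}"


    if __name__ == "__main__":
        history = ["I want to buy running shoes", "size ten please"]
        print(reply_for("do you also sell winter jackets", history))
    ```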
  • Publication number: 20220007175
    Abstract: A Wi-Fi access point device (APD) includes a controller, a radio, and a memory. The memory contains instructions for establishing a programmed secure Wi-Fi onboarding SSID with the client device with connection to the external network. The controller is configured to instruct the radio to broadcast the open Wi-Fi onboarding SSID for a predetermined period of time. The controller is also configured to: instruct the radio to broadcast an established programmed secure Wi-Fi onboarding SSID; onboard the Wi-Fi APD to the external network, based on information communicated between the Wi-Fi client device and the Wi-Fi APD over the established programmed secure Wi-Fi onboarding SSID; and instruct the radio to stop the broadcast of the open Wi-Fi onboarding SSID at the earlier of a termination of the predetermined time period and the onboarding of the Wi-Fi APD to the external network.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 6, 2022
    Inventors: Sathish Arumugam Chandrasekaran, Muralidharan Narayanan, Jalagandeswari Ganapathy, Amit Srivastava
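    A minimal sketch of the stop condition in the entry above (publication 20220007175): the open onboarding SSID broadcast ends at the earlier of the predetermined period expiring and onboarding completing over the secure SSID. The controller class, SSID names, and stubbed radio calls are assumptions for illustration only.

    ```python
    # Illustrative only: the stop condition for the open onboarding SSID broadcast -
    # whichever comes first, expiry of the predetermined period or completion of
    # onboarding over the secure SSID. Radio/controller calls are stubbed out.
    import time


    class OnboardingController:
        def __init__(self, open_ssid_period_s: float):
            self.open_ssid_period_s = open_ssid_period_s
            self.onboarded = False

        def broadcast(self, ssid: str) -> None:
            print(f"radio: broadcasting {ssid}")          # stub for the radio

        def stop_broadcast(self, ssid: str) -> None:
            print(f"radio: stopped broadcasting {ssid}")  # stub for the radio

        def run_onboarding(self) -> None:
            self.broadcast("open-onboarding-ssid")
            self.broadcast("secure-onboarding-ssid")
            deadline = time.monotonic() + self.open_ssid_period_s
            # Stop the open SSID at the earlier of timeout and successful onboarding.
            while time.monotonic() < deadline and not self.onboarded:
                time.sleep(0.1)  # a real device would service client traffic here
            self.stop_broadcast("open-onboarding-ssid")


    if __name__ == "__main__":
        controller = OnboardingController(open_ssid_period_s=0.3)
        controller.run_onboarding()
    ```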
  • Publication number: 20220006883
    Abstract: In one embodiment, an apparatus includes a unified adapter layer and a first bus controller. The unified adapter layer is to receive a first host data packet packetized in accordance with a host protocol and directed to a first device and decode the first host data packet to generate first and second data elements based on the first host data packet, the first device associated with a first device protocol. The first bus controller is coupled to the unified adapter layer and is to be coupled to the first device via a first bus. The first bus controller is to packetize the first data element in accordance with the first device protocol to generate a first device data packet for transmission to the first device in accordance with the first device protocol via the first bus and adjust a bus controller parameter based in part on the second data element. Other embodiments are described and claimed.
    Type: Application
    Filed: September 16, 2021
    Publication date: January 6, 2022
    Inventors: Amit Srivastava, Matthew A. Schnoor, Rajesh Bhaskar, Aruni P. Nelson, Enrico David Carrieri, Devon Worrell
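    A hedged sketch of the split described in the entry above (publication 20220006883): a unified adapter layer decodes one host packet into two data elements; a bus controller packetizes the first element per the device protocol and uses the second to adjust a controller parameter. The packet format and frame layout are invented for illustration.

    ```python
    # Illustrative only: a unified adapter layer that decodes one host packet into
    # two data elements, plus a bus controller that packetizes the first element in
    # the device protocol and tunes itself with the second. Formats are invented.
    from dataclasses import dataclass


    @dataclass
    class HostPacket:
        payload: bytes        # data destined for the device
        bus_speed_hint: int   # second data element (e.g. a requested bus clock, kHz)


    class UnifiedAdapterLayer:
        def decode(self, packet: HostPacket) -> tuple[bytes, int]:
            """Split the host packet into the device payload and a controller hint."""
            return packet.payload, packet.bus_speed_hint


    class BusController:
        def __init__(self):
            self.bus_speed_khz = 100  # default controller parameter

        def packetize(self, payload: bytes) -> bytes:
            """Wrap the payload in a made-up device frame: length + data + checksum."""
            return bytes([len(payload)]) + payload + bytes([sum(payload) & 0xFF])

        def adjust(self, bus_speed_hint: int) -> None:
            self.bus_speed_khz = bus_speed_hint


    if __name__ == "__main__":
        adapter, controller = UnifiedAdapterLayer(), BusController()
        payload, hint = adapter.decode(HostPacket(payload=b"\x01\x02\x03",
                                                  bus_speed_hint=400))
        frame = controller.packetize(payload)
        controller.adjust(hint)
        print(frame.hex(), controller.bus_speed_khz)
    ```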
  • Patent number: 11205418
    Abstract: Examples of the present disclosure describe systems and methods for detecting monotone speech. In aspects, audio data provided by a user may be received by a device. Pitch values may be calculated and/or extracted from the audio data. The non-zero pitch values may be divided into clusters. For each cluster, a Pitch Variation Quotient (PVQ) value may be calculated. The weighted average of PVQ values across the clusters may be calculated and compared to a threshold for determining monotone speech. Based on the comparison, the audio data may be classified as monotone or non-monotone and an indication of the classification may be provided to the user in real-time via a user interface. Upon the completion of the audio session in which the audio data is received, feedback for the audio data may be provided to the user via the user interface.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: December 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John Christian Leone, Amit Srivastava
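    A minimal sketch of the pitch-variation idea in the entry above (patent 11205418): cluster the non-zero pitch values, compute a per-cluster Pitch Variation Quotient, weight-average the quotients, and compare against a threshold. The quantile-style clustering, the PVQ taken as standard deviation over mean, and the threshold value are assumptions.

    ```python
    # Illustrative only: the pitch-variation idea from the abstract - cluster the
    # non-zero pitch values, compute a per-cluster Pitch Variation Quotient (taken
    # here as std/mean, an assumption), weight-average, and compare to a threshold.
    import statistics

    MONOTONE_THRESHOLD = 0.02  # hypothetical value


    def split_into_clusters(pitches: list[float], n_clusters: int = 3) -> list[list[float]]:
        """Crude stand-in for clustering: sort and cut into equal-sized groups."""
        ordered = sorted(pitches)
        size = max(1, len(ordered) // n_clusters)
        return [ordered[i:i + size] for i in range(0, len(ordered), size)]


    def weighted_pvq(pitches: list[float]) -> float:
        nonzero = [p for p in pitches if p > 0]
        clusters = [c for c in split_into_clusters(nonzero) if len(c) > 1]
        total = sum(len(c) for c in clusters)
        # Per-cluster PVQ, weighted by how many pitch values the cluster holds.
        return sum(len(c) / total * (statistics.pstdev(c) / statistics.mean(c))
                   for c in clusters)


    def is_monotone(pitches: list[float]) -> bool:
        return weighted_pvq(pitches) < MONOTONE_THRESHOLD


    if __name__ == "__main__":
        flat = [120, 121, 119, 120, 0, 122, 120, 121]    # little pitch variation
        lively = [110, 180, 95, 150, 0, 210, 130, 170]   # wide pitch variation
        print(is_monotone(flat), is_monotone(lively))    # expect: True False
    ```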
  • Publication number: 20210390365
    Abstract: Aspect pre-selection techniques using machine learning are described. In one example, an artificial assistant system is configured to implement a chat bot. A user then engages in a first natural-language conversation. As part of this first natural-language conversation, a communication is generated by the chat bot to prompt the user to specify an aspect of a category that is a subject of the first natural-language conversation, and user data is received in response. Data that describes this first natural-language conversation is used to train a model using machine learning. Data is then received by the chat bot as part of a second natural-language conversation. This data, from the second natural-language conversation, is processed using the model as part of machine learning to generate a second search query that includes the aspect of the category automatically and without user intervention.
    Type: Application
    Filed: August 31, 2021
    Publication date: December 16, 2021
    Applicant: eBay Inc.
    Inventors: Farah Abdallah, Robert Enyedi, Amit Srivastava, Elaine Lee, Braddock Craig Gaskill, Tomer Lancewicki, Xinyu Zhang, Jayanth Vasudevan, Dominique Jean Bouchon
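    A hedged sketch of the entry above (publication 20210390365): the aspect value a user supplies when prompted in a first conversation is recorded as training data, and in a later conversation the predicted aspect is filled into the search query without re-prompting. The frequency-count "model" and the query dictionary are stand-ins for the machine-learned model described in the abstract.

    ```python
    # Illustrative only: learning from a first conversation which aspect value a
    # user tends to give for a category, then pre-filling that aspect into the
    # search query for later conversations. The counting "model" is a stand-in
    # for the machine-learned model in the abstract.
    from collections import Counter, defaultdict


    class AspectPreselector:
        def __init__(self):
            # (category, aspect) -> Counter of values the user has supplied before
            self.history: dict[tuple[str, str], Counter] = defaultdict(Counter)

        def record(self, category: str, aspect: str, value: str) -> None:
            """Training signal from the first conversation's prompt/answer exchange."""
            self.history[(category, aspect)][value.lower()] += 1

        def build_query(self, category: str, aspect: str, text: str) -> dict:
            """Second conversation: include the predicted aspect without re-prompting."""
            counts = self.history.get((category, aspect))
            predicted = counts.most_common(1)[0][0] if counts else None
            query = {"category": category, "text": text}
            if predicted:
                query[aspect] = predicted
            return query


    if __name__ == "__main__":
        bot = AspectPreselector()
        bot.record("shoes", "size", "10")  # user answered the chat bot's prompt
        print(bot.build_query("shoes", "size", "trail running shoes"))
        # -> {'category': 'shoes', 'text': 'trail running shoes', 'size': '10'}
    ```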
  • Publication number: 20210358476
    Abstract: Examples of the present disclosure describe systems and methods for detecting monotone speech. In aspects, audio data provided by a user may be received by a device. Pitch values may be calculated and/or extracted from the audio data. The non-zero pitch values may be divided into clusters. For each cluster, a Pitch Variation Quotient (PVQ) value may be calculated. The weighted average of PVQ values across the clusters may be calculated and compared to a threshold for determining monotone speech. Based on the comparison, the audio data may be classified as monotone or non-monotone and an indication of the classification may be provided to the user in real-time via a user interface. Upon the completion of the audio session in which the audio data is received, feedback for the audio data may be provided to the user via the user interface.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 18, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: John Christian Leone, Amit Srivastava
  • Patent number: 11170769
    Abstract: Methods, systems, and computer programs are presented for detecting mission changes in a conversation. A user utterance from a user device is received. The user utterance is part of a conversation with an intelligent assistant. The conversation includes preceding user utterances in pursuit of a first mission. It is determined that the user utterance indicates a mission change from the first mission to a second mission based on an application of a machine-learned model to the user utterance and the preceding user utterances. The machine-learned model has been trained repeatedly with past utterances of other users over a time period, the determining based on a certainty of the indication satisfying a certainty threshold. Responsive to the determining that the user utterance indicates the mission change from the first mission to a second mission, a reply to the user utterance is generated to further the second mission rather than the first mission.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: November 9, 2021
    Assignee: eBay Inc.
    Inventors: Stefan Schoenmackers, Amit Srivastava, Lawrence William Colagiovanni, Sanjika Hewavitharana, Ajinkya Gorakhnath Kale, Vinh Khuc