Patents by Inventor Derek Martin Johnson

Derek Martin Johnson has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches for several of the listed filings appear after the listing.

  • Patent number: 11909922
    Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Grant
    Filed: January 18, 2023
    Date of Patent: February 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Publication number: 20230156124
    Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Application
    Filed: January 18, 2023
    Publication date: May 18, 2023
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Patent number: 11570307
    Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 31, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Publication number: 20220366153
    Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 17, 2022
    Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
  • Patent number: 11494396
    Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving a user query for creating content in a content generation application and determining an action from an intent of the user query. A prompt is generated based on the action and provided to a natural language generation model. In response to the prompt, output is received from the natural language generation model. Response content is generated based on the output in a format compatible with the content generation application. At least some of the response content is displayed to the user. The user can choose to keep, edit, or discard the response content. The user can iterate with additional queries until the content document reflects the user's desired content.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: November 8, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava, Muin Barkatali Momin, Muqi Li, Emily Lauren Tohir, SivaPriya Kalyanaraman, Derek Martin Johnson
  • Publication number: 20220229832
    Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving a user query for creating content in a content generation application and determining an action from an intent of the user query. A prompt is generated based on the action and provided to a natural language generation model. In response to the prompt, output is received from the natural language generation model. Response content is generated based on the output in a format compatible with the content generation application. At least some of the response content is displayed to the user. The user can choose to keep, edit, or discard the response content. The user can iterate with additional queries until the content document reflects the user's desired content.
    Type: Application
    Filed: January 19, 2021
    Publication date: July 21, 2022
    Inventors: Ji Li, Amit Srivastava, Muin Barkatali Momin, Muqi Li, Emily Lauren Tohir, SivaPriya Kalyanaraman, Derek Martin Johnson
  • Patent number: 11341331
    Abstract: An intelligent speech assistant receives information collected while a user is speaking. The information can comprise speech data, vision data, or both, where the speech data is from the user speaking and the vision data is of the user while speaking. The assistant evaluates the speech data against a script which can contain information that the user should speak, information that the user should not speak, or both. The assistant collects instances where the user utters phrases that match the script or instances where the user utters phrases that do not match the script, depending on whether phrases should or should not be spoken. The assistant evaluates vision data to identify gestures, facial expressions, and/or emotions of the user. Instances where the gestures, facial expressions, and/or emotions are not appropriate to the context are flagged. Real-time prompts and/or a summary is presented to the user as feedback.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Huakai Liao, Priyanka Vikram Sinha, Kevin Dara Khieu, Derek Martin Johnson, Siliang Kang, Huey-Ru Tsai, Amit Srivastava
  • Publication number: 20220138470
    Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing an online presentation session for conducting an online presentation for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, receiving a set of second media streams from the second computing devices of a first subset of the plurality of participants, the set of second media streams including audio content, video content, or both, of the first subset of the plurality of participants, analyzing the set of first media streams using one or more first machine learning models to generate a set of first feedback results, analyzing the set of second media streams using one or more second machine learning models to identify a set of first reactions by the participants to obtain first reaction information, automatically analyzing the set of …
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
  • Publication number: 20220038580
    Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Application
    Filed: August 3, 2020
    Publication date: February 3, 2022
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Publication number: 20210103635
    Abstract: An intelligent speech assistant receives information collected while a user is speaking. The information can comprise speech data, vision data, or both, where the speech data is from the user speaking and the vision data is of the user while speaking. The assistant evaluates the speech data against a script which can contain information that the user should speak, information that the user should not speak, or both. The assistant collects instances where the user utters phrases that match the script or instances where the user utters phrases that do not match the script, depending on whether phrases should or should not be spoken. The assistant evaluates vision data to identify gestures, facial expressions, and/or emotions of the user. Instances where the gestures, facial expressions, and/or emotions are not appropriate to the context are flagged. Real-time prompts and/or a summary is presented to the user as feedback.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Huakai Liao, Priyanka Vikram Sinha, Kevin Dara Khieu, Derek Martin Johnson, Siliang Kang, Huey-Ru Tsai, Amit Srivastava
  • Publication number: 20210097133
    Abstract: A system and method for personalizing a display of a recommendation in a user interface element of an application is described. The system accesses application activities of a user of the application. A user preference is formed based on the application activities. The system identifies a context of a current activity of the application and generates a content recommendation in the application based on the context of the current activity of the application and the user preference.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Huakai Liao, Debapriya Pal, Sun Mao, Erik Thomas Oveson, Huitian Jiao, Daniel M Cheung, Derek Martin Johnson, Bogdan Popp
  • Patent number: 10754508
    Abstract: In a non-limiting example of the present disclosure, an exemplary table of contents slide may be displayed for a slide deck of a presentation program. The table of contents slide may comprise one or more sections of grouped slides for the slide deck. A selection of a section link may be received. The section link links the table of contents slide to a section of grouped slides. An exemplary presentation program may navigate the slide deck to a first slide of the section based on the received selection. When the navigation of the section is completed, the presentation program returns the slide deck to one of: the table of contents slide and the first slide of the section. Other examples described relate to creation and rendering of an exemplary table of contents slide and/or section links within an exemplary table of contents slide of a presentation program.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: August 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Jay Gilmore, Kerry Young, Lei Shi, Charles Cummins, Lauren Michelle Janas, Derek Martin Johnson, Paul Scuderi
  • Publication number: 20170220217
    Abstract: In a non-limiting example of the present disclosure, an exemplary table of contents slide may be displayed for a slide deck of a presentation program. The table of contents slide may comprise one or more sections of grouped slides for the slide deck. A selection of a section link may be received. The section link links the table of contents slide to a section of grouped slides. An exemplary presentation program may navigate the slide deck to a first slide of the section based on the received selection. When the navigation of the section is completed, the presentation program returns the slide deck to one of: the table of contents slide and the first slide of the section. Other examples described relate to creation and rendering of an exemplary table of contents slide and/or section links within an exemplary table of contents slide of a presentation program.
    Type: Application
    Filed: October 24, 2016
    Publication date: August 3, 2017
    Inventors: Michael Jay Gilmore, Kerry Young, Lei Shi, Charles Cummins, Lauren Michelle Janas, Derek Martin Johnson, Paul Scuderi
  • Publication number: 20170220232
    Abstract: Technology is disclosed herein that enhances the user experience with presentation programs and the operational aspects of such programs. In an implementation, a presentation program includes a hierarchy of parent slides and child slides in a collection of slides. Navigating from a parent slide to a child slide triggers a contextual zoom-in transition into the child slide. Navigating back to the parent slide from the child slide triggers a contextual zoom-out transition to the parent slide. Other non-limiting examples describe smart slide functionality of an exemplary presentation program. A smart slide is a slide of a slide deck that comprises one or more slide links, which provide an active link to another slide of the slide deck.
    Type: Application
    Filed: October 24, 2016
    Publication date: August 3, 2017
    Inventors: Michael Jay Gilmore, Kerry Young, Lei Shi, Alexandre Gueniot, Derek Martin Johnson, Jing Zhao, Charles Cummins, Aviral Ajit, Paul Scuderi
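
The minimal sketches below illustrate, in plain Python, the kind of processing several of the filings above describe. All function names, thresholds, and data structures are hypothetical stand-ins chosen for illustration; none are taken from the patents' claims or from any actual implementation.

For patent 11909922 and its related filings (11570307, 20230156124, 20220038580): a sketch of mapping acoustic-analysis scores for attendee audio to reaction indications such as emoji, surfaced without the attendee taking any manual action.

```python
# Hypothetical sketch: turn per-window acoustic scores into reaction indications.
from dataclasses import dataclass

@dataclass
class AcousticAnalysis:
    """Scores an (assumed) audio classifier emits for one short window of attendee audio."""
    applause: float
    laughter: float
    chatter: float

# Illustrative mapping from detected acoustic events to easy-to-visualize indications.
REACTION_EMOJI = {"applause": "👏", "laughter": "😂", "chatter": "💬"}

def reaction_indications(window: AcousticAnalysis, threshold: float = 0.6) -> list[str]:
    """Return reaction indications to surface in (near) real time."""
    scores = {"applause": window.applause, "laughter": window.laughter, "chatter": window.chatter}
    return [REACTION_EMOJI[event] for event, score in scores.items() if score >= threshold]

if __name__ == "__main__":
    # A window with strong applause and some laughter yields two indications.
    print(reaction_indications(AcousticAnalysis(applause=0.9, laughter=0.7, chatter=0.1)))
```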
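
For publication 20220138470: a sketch of the two-stage analysis of an online presentation session, assuming one set of models scores the presenter's content streams and a second set detects participant reactions. The abstract above is truncated, so this covers only the portion shown.

```python
# Hypothetical sketch: run one model set over presenter content and another over
# participant media, returning feedback results and reaction information.
from typing import Callable, Iterable

FeedbackModel = Callable[[bytes], str]   # presenter-content model -> feedback result
ReactionModel = Callable[[bytes], str]   # participant-stream model -> detected reaction

def analyze_session(
    presenter_streams: Iterable[bytes],
    participant_streams: Iterable[bytes],
    feedback_models: list[FeedbackModel],
    reaction_models: list[ReactionModel],
) -> tuple[list[str], list[str]]:
    feedback = [model(chunk) for chunk in presenter_streams for model in feedback_models]
    reactions = [model(chunk) for chunk in participant_streams for model in reaction_models]
    return feedback, reactions

if __name__ == "__main__":
    feedback, reactions = analyze_session(
        presenter_streams=[b"slide-1-audio"],
        participant_streams=[b"participant-42-video"],
        feedback_models=[lambda chunk: f"pacing ok ({len(chunk)} bytes analyzed)"],
        reaction_models=[lambda chunk: "thumbs-up detected"],
    )
    print(feedback, reactions)
```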
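
For publication 20220366153: a sketch of the document-to-narration pipeline, which parses an input document, asks a natural language generation model for candidate presentation scripts, selects one, and synthesizes audio with a text-to-speech model. Every function here is a stub standing in for a model or service.

```python
# Hypothetical sketch: document -> candidate scripts -> selected script -> audio.

def parse_document(document: str) -> list[str]:
    """Text-analysis stand-in: one NLG input per non-empty paragraph."""
    return [block.strip() for block in document.split("\n\n") if block.strip()]

def generate_candidate_scripts(inputs: list[str], n: int = 3) -> list[str]:
    """Stand-in for the natural language generation model."""
    return [f"(candidate {i + 1}) " + " ".join(inputs) for i in range(n)]

def select_script(candidates: list[str]) -> str:
    """Stand-in for selection; a real system might rank candidates or let the user choose."""
    return candidates[0]

def synthesize_audio(script: str) -> bytes:
    """Stand-in for the text-to-speech model."""
    return script.encode("utf-8")

if __name__ == "__main__":
    doc = "Quarterly results.\n\nRevenue grew 12% year over year."
    script = select_script(generate_candidate_scripts(parse_document(doc)))
    audio = synthesize_audio(script)
    print(script, "|", len(audio), "bytes of stand-in audio")
```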
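
For patent 11494396 and publication 20220229832: a sketch of the query-to-content flow, which determines an action from the intent of a user query, builds a prompt for a natural language generation model, and formats the model's output for the content generation application, where the user can keep, edit, or discard it.

```python
# Hypothetical sketch: user query -> action -> prompt -> NLG output -> response content.

def determine_action(query: str) -> str:
    """Toy intent/action resolution; a real system would use an intent classifier."""
    return "summarize" if "summary" in query.lower() else "draft"

def build_prompt(action: str, query: str) -> str:
    return f"Action: {action}\nRequest: {query}\nRespond with slide-ready text."

def call_nlg_model(prompt: str) -> str:
    """Stand-in for the natural language generation model."""
    return "Generated content for: " + prompt.splitlines()[1]

def format_for_application(output: str) -> dict:
    """Package output in a format the content generation application can render."""
    return {"type": "text_block", "body": output}

if __name__ == "__main__":
    query = "Give me a summary slide of our Q3 launch"
    response = format_for_application(call_nlg_model(build_prompt(determine_action(query), query)))
    print(response)  # the user can then keep, edit, or discard this content
```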
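
For patent 11341331 and publication 20210103635: a sketch of the script-checking portion of the speech assistant, collecting phrases the user should have said but missed and phrases the user should not have said but did. The vision-based gesture and emotion analysis described in the abstract is omitted here.

```python
# Hypothetical sketch: compare a transcript against should-say / should-not-say phrases.
from dataclasses import dataclass, field

@dataclass
class Script:
    must_say: set[str] = field(default_factory=set)      # phrases the user should speak
    must_not_say: set[str] = field(default_factory=set)  # phrases the user should avoid

def review_transcript(transcript: list[str], script: Script) -> dict:
    """Return missed required phrases and flagged forbidden phrases."""
    spoken = {phrase.lower() for phrase in transcript}
    return {
        "missed": sorted(p for p in script.must_say if p.lower() not in spoken),
        "flagged": sorted(p for p in script.must_not_say if p.lower() in spoken),
    }

if __name__ == "__main__":
    script = Script(must_say={"welcome everyone"}, must_not_say={"um"})
    print(review_transcript(["um", "here are the numbers"], script))
    # -> {'missed': ['welcome everyone'], 'flagged': ['um']}
```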
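
For publication 20210097133: a sketch of personalized recommendation, which builds a user preference from past application activities and combines it with the context of the current activity to rank candidate content. The scoring rule is invented for illustration.

```python
# Hypothetical sketch: activity history + current context -> ranked content recommendation.
from collections import Counter

def build_preference(activities: list[str]) -> Counter:
    """User preference as simple frequency counts over past activity tags."""
    return Counter(activities)

def recommend(preference: Counter, current_context: str, candidates: dict[str, set[str]]) -> str:
    """Score each candidate by preference weight plus a bonus for matching the current context."""
    def score(name: str) -> float:
        tags = candidates[name]
        return sum(preference[t] for t in tags) + (2.0 if current_context in tags else 0.0)
    return max(candidates, key=score)

if __name__ == "__main__":
    pref = build_preference(["charts", "charts", "tables"])
    picks = {"bar chart": {"charts"}, "pivot table": {"tables"}, "timeline": {"history"}}
    print(recommend(pref, current_context="charts", candidates=picks))  # -> 'bar chart'
```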
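
For patent 10754508 and publications 20170220217 and 20170220232: a sketch of a table of contents slide whose section links navigate to the first slide of a section and then return either to the table of contents or to that first slide. The zoom-in/zoom-out transitions of the parent/child slide hierarchy are not modeled.

```python
# Hypothetical sketch: a slide deck with a table of contents slide and section links.
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    slides: list[str]           # slide identifiers belonging to this section

@dataclass
class Deck:
    toc_slide: str
    sections: list[Section]
    return_to_toc: bool = True  # otherwise return to the section's first slide

    def follow_section_link(self, title: str) -> str:
        """Navigate to the first slide of the linked section."""
        section = next(s for s in self.sections if s.title == title)
        return section.slides[0]

    def end_of_section(self, title: str) -> str:
        """Where the presentation returns once the section has been shown."""
        return self.toc_slide if self.return_to_toc else self.follow_section_link(title)

if __name__ == "__main__":
    deck = Deck("toc", [Section("Results", ["r1", "r2"]), Section("Plans", ["p1"])])
    print(deck.follow_section_link("Results"))  # -> 'r1'
    print(deck.end_of_section("Results"))       # -> 'toc'
```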