Patents by Inventor Gencheng Wu

Gencheng Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240320451
    Abstract: Intelligent content is generated automatically using a system of computers that includes a user device and a cloud-based component that processes user information. The system performs a process that includes receiving an input document and parsing it with a text analysis model to generate inputs for a natural language generation model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document with the corresponding audio presentation played in sync with the visual display.
    Type: Application
    Filed: June 6, 2024
    Publication date: September 26, 2024
    Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
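The pipeline described in the abstract above (text analysis, candidate script generation, selection, and speech synthesis) can be outlined as a simple data flow. This is an illustrative sketch only, not the patented implementation; every function name below is hypothetical, and the toy model stand-ins exist purely to show how data moves between the stages.

```python
# Illustrative sketch of the described pipeline. All names are hypothetical;
# the real system uses trained text-analysis, NLG, and TTS models.

def generate_presentation(document_text, analyze, generate_scripts,
                          select_script, synthesize_speech):
    """Parse a document, generate candidate scripts, pick one, voice it."""
    inputs = analyze(document_text)            # 1. text analysis -> model inputs
    candidates = generate_scripts(inputs)      # 2. NLG -> candidate scripts
    script = select_script(candidates)         # 3. selection (e.g., a ranker)
    audio = synthesize_speech(script)          # 4. text-to-speech synthesis
    return script, audio

# Toy stand-ins for the models, to show the data flow end to end.
script, audio = generate_presentation(
    "Q3 results. Revenue grew 12%.",
    analyze=lambda text: text.split(". "),
    generate_scripts=lambda inputs: [f"Slide: {s}" for s in inputs if s],
    select_script=lambda cands: max(cands, key=len),
    synthesize_speech=lambda s: f"<audio for: {s}>",
)
print(script)
print(audio)
```

The final step described in the abstract, syncing the audio with a visual display of the document, would sit downstream of this sketch.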
  • Patent number: 12032922
    Abstract: Intelligent content is generated automatically using a system of computers that includes a user device and a cloud-based component that processes user information. The system performs a process that includes receiving an input document and parsing it with a text analysis model to generate inputs for a natural language generation model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document with the corresponding audio presentation played in sync with the visual display.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: July 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
  • Patent number: 12026948
    Abstract: Techniques performed by a data processing system include establishing an online presentation session for conducting an online presentation, receiving first media streams comprising presentation content from a presenter's first computing device, receiving second media streams from the second computing devices of a subset of the participants, the second media streams including audio content, video content, or both, from that subset, analyzing the first media streams using first machine learning models to generate feedback results, analyzing the second media streams to identify reactions by the participants and obtain reaction information, automatically analyzing the feedback results and the reactions to identify discrepancies between them, and automatically updating one or more parameters of the machine learning models based on the discrepancies to improve suggestions for improving the online presentation.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: July 2, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
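The core loop in the abstract above, comparing model-generated feedback against observed participant reactions and updating model parameters based on the discrepancy, can be sketched in miniature. The names, scores, and the scalar update rule below are illustrative assumptions, not the patented method, which operates on media streams with trained models.

```python
# Hedged sketch: nudge a scalar model weight toward agreement between
# predicted feedback and reaction-derived observations. Purely illustrative.

def update_on_discrepancy(predicted_scores, observed_scores, weight, lr=0.1):
    """Shrink a model weight in proportion to the mean prediction error."""
    errors = [p - o for p, o in zip(predicted_scores, observed_scores)]
    mean_error = sum(errors) / len(errors)
    # Discrepancy-driven update: move the weight against the mean error.
    return weight - lr * mean_error, mean_error

weight, err = update_on_discrepancy(
    predicted_scores=[0.9, 0.8, 0.7],  # model's engagement predictions
    observed_scores=[0.6, 0.5, 0.7],   # engagement inferred from reactions
    weight=1.0,
)
print(round(weight, 3), round(err, 3))
```

A real system would update many parameters via gradient-based training rather than a single scalar, but the comparison-then-update shape is the same.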
  • Patent number: 11909922
    Abstract: The present disclosure relates to processing operations that automatically analyze acoustic signals from attendees of a live presentation and automatically trigger corresponding reaction indications based on the results of that analysis. Exemplary reaction indications provide feedback for live presentations in real time (or near real time) without requiring a user to take any manual action to provide feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand, such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Grant
    Filed: January 18, 2023
    Date of Patent: February 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
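The acoustic-to-reaction mapping described above can be sketched as a small classifier over an audio window: sustained loud signal versus brief spike versus silence, each mapped to a reaction indication. The thresholds, labels, and amplitude heuristic below are invented for illustration; the patent does not specify this particular classifier.

```python
# Illustrative sketch of mapping an acoustic window to a reaction indication.
# Thresholds and categories are hypothetical, not the patented analysis.

def classify_reaction(samples, loud_threshold=0.6):
    """Map a window of audio amplitudes (0..1) to a reaction indication."""
    if not samples:
        return None
    peak = max(abs(s) for s in samples)
    mean = sum(abs(s) for s in samples) / len(samples)
    if peak > loud_threshold and mean > loud_threshold / 2:
        return "applause"   # sustained loud signal -> e.g., clapping emoji
    if peak > loud_threshold:
        return "laughter"   # brief loud spike -> e.g., laughing emoji
    return None             # quiet audience: no indication triggered

print(classify_reaction([0.7, 0.8, 0.75, 0.7]))  # sustained loud window
print(classify_reaction([0.1, 0.9, 0.1, 0.05]))  # brief spike
```

In the described system, the indication returned here would drive the emoji, icon, or GUI notification shown to the presenter in real time.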
  • Publication number: 20230156124
    Abstract: The present disclosure relates to processing operations that automatically analyze acoustic signals from attendees of a live presentation and automatically trigger corresponding reaction indications based on the results of that analysis. Exemplary reaction indications provide feedback for live presentations in real time (or near real time) without requiring a user to take any manual action to provide feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand, such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Application
    Filed: January 18, 2023
    Publication date: May 18, 2023
    Inventors: Ji LI, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Konstantin SELESKEROV, Gencheng WU
  • Patent number: 11570307
    Abstract: The present disclosure relates to processing operations that automatically analyze acoustic signals from attendees of a live presentation and automatically trigger corresponding reaction indications based on the results of that analysis. Exemplary reaction indications provide feedback for live presentations in real time (or near real time) without requiring a user to take any manual action to provide feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand, such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 31, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Publication number: 20220366153
    Abstract: Intelligent content is generated automatically using a system of computers that includes a user device and a cloud-based component that processes user information. The system performs a process that includes receiving an input document and parsing it with a text analysis model to generate inputs for a natural language generation model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document with the corresponding audio presentation played in sync with the visual display.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 17, 2022
    Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
  • Publication number: 20220138470
    Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing an online presentation session for conducting an online presentation for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, receiving a set of second media streams from the second computing devices of a first subset of the plurality of participants, the set of second media streams including audio content, video content, or both, of the first subset of the plurality of participants, analyzing the set of first media streams using one or more first machine learning models to generate a set of first feedback results, analyzing the set of second media streams using one or more second machine learning models to identify a set of first reactions by the participants to obtain first reaction information, automatically analyzing the set of
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Konstantin SELESKEROV, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Gencheng WU, Brittany Elizabeth MEDEROS
  • Publication number: 20220038580
    Abstract: The present disclosure relates to processing operations that automatically analyze acoustic signals from attendees of a live presentation and automatically trigger corresponding reaction indications based on the results of that analysis. Exemplary reaction indications provide feedback for live presentations in real time (or near real time) without requiring a user to take any manual action to provide feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand, such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
    Type: Application
    Filed: August 3, 2020
    Publication date: February 3, 2022
    Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
  • Publication number: 20190339820
    Abstract: Systems, methods, and software are disclosed herein to predict and display menu items based on a prediction of the next user actions. In an implementation, a user interface of an application is displayed, comprising menu items arranged in sub-menus of a menu. In response to an occurrence of a user action associated with a given item of a given sub-menu, a set of user actions likely to occur next is identified based on the identity of that user action. A subset of the menu items corresponding to the likely next user actions is then identified and displayed in the user interface.
    Type: Application
    Filed: May 2, 2018
    Publication date: November 7, 2019
    Inventors: Gencheng Wu, Lishan Yu
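The next-action prediction described in the abstract above can be sketched as a transition table: count which actions historically follow the current one, and surface the menu items for the most frequent successors. The class, action names, and frequency heuristic below are hypothetical illustrations, not the patented mechanism.

```python
# Sketch of action-to-next-action menu prediction. Names and the
# frequency-count heuristic are illustrative assumptions.
from collections import Counter

class NextActionPredictor:
    """Learn action -> next-action frequencies and suggest menu items."""

    def __init__(self):
        self.transitions = {}  # action -> Counter of following actions

    def observe(self, action, next_action):
        """Record that next_action followed action once."""
        self.transitions.setdefault(action, Counter())[next_action] += 1

    def suggest(self, action, k=2):
        """Return up to k menu items for the most likely next actions."""
        counts = self.transitions.get(action, Counter())
        return [a for a, _ in counts.most_common(k)]

predictor = NextActionPredictor()
for nxt in ["bold", "bold", "italic"]:
    predictor.observe("select_text", nxt)
print(predictor.suggest("select_text"))  # most frequent next actions first
```

A production system might condition on longer action histories or per-user models, but the lookup-then-display shape matches the process the abstract describes.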