Patents by Inventor Gencheng Wu
Gencheng Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240320451
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: June 6, 2024
Publication date: September 26, 2024
Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
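The pipeline this abstract describes (parse the document, generate candidate scripts, select one, synthesize audio) can be sketched as below. Every function is a simplified stand-in for the models named in the abstract (text analysis, natural language generation, text-to-speech); none of it is the patented implementation.

```python
# Minimal sketch of the abstract's pipeline. The "models" here are stubs:
# the real system would call a text analysis model, an NLG model, and a
# TTS model, which are not specified in this listing.

def parse_document(document: str) -> list[str]:
    """Stand-in for the text analysis model: extract inputs from the document."""
    return [line.strip() for line in document.splitlines() if line.strip()]

def generate_candidate_scripts(inputs: list[str], n: int = 3) -> list[str]:
    """Stand-in for the NLG model: produce n candidate presentation scripts."""
    base = " ".join(inputs)
    return [f"Candidate {i + 1}: {base}" for i in range(n)]

def select_script(candidates: list[str]) -> str:
    """Selection step: here, trivially pick the first candidate."""
    return candidates[0]

def build_presentation(document: str) -> dict:
    """Combine the visual document with the selected script and its audio."""
    inputs = parse_document(document)
    script = select_script(generate_candidate_scripts(inputs))
    # A TTS model would synthesize audio for `script`; represented as metadata.
    return {"visual": document, "script": script, "audio": f"tts({script})"}

presentation = build_presentation("Slide 1: Revenue grew.\nSlide 2: Costs fell.")
print(presentation["script"])
```

The final step, keeping the synthesized audio in sync with the visual display, would sit on top of this structure and is omitted here.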
-
Patent number: 12032922
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Grant
Filed: May 12, 2021
Date of Patent: July 9, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
-
Patent number: 12026948
Abstract: Techniques performed by a data processing system include establishing an online presentation session for conducting an online presentation, receiving first media streams comprising presentation content from the first computing device, receiving second media streams from the second computing devices of a subset of the plurality of participants, the second media streams including audio content, video content, or both of the subset of the plurality of participants, analyzing the first media streams using first machine learning models to generate feedback results, analyzing the set of second media streams to identify first reactions by the participants to obtain reaction information, automatically analyzing the feedback results and the reactions to identify discrepancies between the feedback results and the reactions, and automatically updating one or more parameters of the machine learning models based on the discrepancies to improve the suggestions for improving the online presentation.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
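The feedback loop in this abstract — score the presenter's streams with one model, observe participant reactions, flag disagreements, and adjust model parameters — can be illustrated with a toy version. The scores, threshold, and update rule below are invented for illustration; the patent does not specify them.

```python
# Illustrative sketch of the abstract's feedback loop. All numbers, the
# discrepancy threshold, and the parameter-update rule are stand-ins.

def analyze_presentation(stream_scores, weight):
    """Stand-in first model: weighted feedback score per presentation segment."""
    return [s * weight for s in stream_scores]

def identify_discrepancies(feedback, reactions, threshold=0.2):
    """Indices of segments where model feedback and audience reaction disagree."""
    return [i for i, (f, r) in enumerate(zip(feedback, reactions))
            if abs(f - r) > threshold]

def update_weight(weight, feedback, reactions, lr=0.1):
    """Toy parameter update: nudge the weight toward the observed reactions."""
    errs = [r - f for f, r in zip(feedback, reactions)]
    return weight + lr * sum(errs) / len(errs)

weight = 1.0
feedback = analyze_presentation([0.5, 0.9, 0.3], weight)
reactions = [0.6, 0.4, 0.35]  # observed audience engagement per segment
flagged = identify_discrepancies(feedback, reactions)
weight = update_weight(weight, feedback, reactions)
print(flagged, round(weight, 3))
```

The point of the structure is that the discrepancy signal, not manual labeling, drives the parameter update, matching the "automatically updating one or more parameters" step of the abstract.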
-
Patent number: 11909922
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Grant
Filed: January 18, 2023
Date of Patent: February 20, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
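The core idea — classify attendee audio and automatically emit a matching reaction indication (emoji/icon) with no manual input — can be sketched as below. The energy-and-peak classifier is purely illustrative; the patent's actual acoustic analysis is not described in this listing.

```python
# Hedged sketch: map an attendee audio buffer to a reaction indication.
# The classifier is a toy heuristic standing in for real acoustic analysis.

REACTION_ICONS = {"applause": "clap", "laughter": "laugh", "silence": None}

def classify_acoustic_signal(samples: list[float]) -> str:
    """Toy classifier on mean energy and peak amplitude (illustrative only)."""
    energy = sum(abs(s) for s in samples) / len(samples)
    if energy < 0.1:
        return "silence"
    peak = max(abs(s) for s in samples)
    return "applause" if peak > 0.8 else "laughter"

def reaction_indication(samples: list[float]):
    """Automatically trigger the reaction indication, or None for no reaction."""
    return REACTION_ICONS[classify_acoustic_signal(samples)]

print(reaction_indication([0.9, 0.7, 0.85, 0.95]))  # loud, peaky audio
print(reaction_indication([0.02, 0.01, 0.03, 0.0]))  # near-silence
```

In the described system this decision would run in real time (or near real time) over a rolling audio window, so attendees never have to click a reaction themselves.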
-
Publication number: 20230156124
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Inventors: Ji LI, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Konstantin SELESKEROV, Gencheng WU
-
Patent number: 11570307
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Grant
Filed: August 3, 2020
Date of Patent: January 31, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
-
Publication number: 20220366153
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: May 12, 2021
Publication date: November 17, 2022
Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
-
Publication number: 20220138470
Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing an online presentation session for conducting an online presentation for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, receiving a set of second media streams from the second computing devices of a first subset of the plurality of participants, the set of second media streams including audio content, video content, or both of the first subset of the plurality of participants, analyzing the set of first media streams using one or more first machine learning models to generate a set of first feedback results, analyzing the set of second media streams using one or more second machine learning models to identify a set of first reactions by the participants to obtain first reaction information, automatically analyzing the set of …
Type: Application
Filed: October 30, 2020
Publication date: May 5, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Konstantin SELESKEROV, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Gencheng WU, Brittany Elizabeth MEDEROS
-
Publication number: 20220038580
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Application
Filed: August 3, 2020
Publication date: February 3, 2022
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
-
Publication number: 20190339820
Abstract: Systems, methods, and software are disclosed herein to predict and display menu items based on a prediction of the next user-actions. In an implementation, a user interface is displayed to the application. The user interface comprises menu items displayed in sub-menus of a menu. In response to an occurrence of a user-action associated with a given item of a given sub-menu of the sub-menus, a set of user-actions likely to occur next is identified based on an identity of the user-action. A subset of the menu items is then identified corresponding to the set of the user-actions likely to occur next. The subset of the menu items is then displayed in the user interface.
Type: Application
Filed: May 2, 2018
Publication date: November 7, 2019
Inventors: Gencheng Wu, Lishan Yu
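The mechanism this abstract describes — from the identity of the user-action just performed, identify the set of actions likely to occur next and surface their menu items — can be sketched with a simple lookup. The transition table and menu names below are hand-written stand-ins for whatever prediction model and application menus the claimed system would actually use.

```python
# Sketch of next-action menu prediction. NEXT_ACTIONS stands in for a
# learned model of which user-actions tend to follow which; the menu
# labels are invented for illustration.

NEXT_ACTIONS = {
    "insert_table": ["format_table", "insert_row", "merge_cells"],
    "paste_text":   ["change_font", "clear_formatting"],
}

MENU_ITEMS = {
    "format_table":     "Table Design > Styles",
    "insert_row":       "Layout > Insert Row",
    "merge_cells":      "Layout > Merge Cells",
    "change_font":      "Home > Font",
    "clear_formatting": "Home > Clear Formatting",
}

def predicted_menu_items(last_action: str) -> list[str]:
    """Menu items for the user-actions likely to follow `last_action`."""
    return [MENU_ITEMS[a] for a in NEXT_ACTIONS.get(last_action, [])]

print(predicted_menu_items("insert_table"))
```

The UI would then display this predicted subset of menu items, saving the user a search through the sub-menus.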