Patents by Inventor Konstantin Seleskerov
Konstantin Seleskerov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240320451
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: June 6, 2024
Publication date: September 26, 2024
Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
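A minimal Python sketch of the pipeline this abstract describes: parse an input document, generate candidate presentation scripts, select one, synthesize narration, and pair it with the visuals. The class and function names are hypothetical, and the text analysis, natural language generation, and text-to-speech models are replaced with placeholder stubs; this illustrates the flow under those assumptions, not the patented implementation.

```python
# Illustrative sketch only; the text analysis, NLG, and TTS models are stubs.
from dataclasses import dataclass
from typing import List


@dataclass
class ScriptCandidate:
    text: str
    score: float


def parse_document(document: str) -> List[str]:
    """Stand-in for the text analysis model: split the document into sections
    that become inputs to the natural language generation model."""
    return [block.strip() for block in document.split("\n\n") if block.strip()]


def generate_candidates(sections: List[str]) -> List[ScriptCandidate]:
    """Stand-in for the NLG model: produce candidate presentation scripts."""
    candidates = []
    for style in ("concise", "detailed"):
        text = " ".join(f"[{style}] Let's talk about: {s}" for s in sections)
        # Placeholder scoring; a real model would rank candidates on quality.
        candidates.append(ScriptCandidate(text=text, score=float(len(text))))
    return candidates


def synthesize_audio(script: str) -> bytes:
    """Stand-in for the text-to-speech model."""
    return script.encode("utf-8")  # placeholder "audio" payload


def build_presentation(document: str) -> dict:
    sections = parse_document(document)
    candidates = generate_candidates(sections)
    selected = max(candidates, key=lambda c: c.score)  # select one script
    audio = synthesize_audio(selected.text)
    # Pair each document section with the narration so a player could keep the
    # visual display and the audio presentation in sync.
    return {"slides": sections, "script": selected.text, "audio": audio}


if __name__ == "__main__":
    result = build_presentation("Intro to the product.\n\nKey results this quarter.")
    print(result["script"])
```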
-
Patent number: 12032922
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Grant
Filed: May 12, 2021
Date of Patent: July 9, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
-
Patent number: 12026948
Abstract: Techniques performed by a data processing system include establishing an online presentation session for conducting an online presentation, receiving first media streams comprising presentation content from the first computing device, receiving second media streams from the second computing devices of a subset of the plurality of participants, the second media streams including audio content, video content, or both of the subset of the plurality of participants, analyzing the first media streams using first machine learning models to generate feedback results, analyzing the set of second media streams to identify first reactions by the participants to obtain reaction information, automatically analyzing the feedback results and the reactions to identify discrepancies between the feedback results and the reactions, and automatically updating one or more parameters of the machine learning models based on the discrepancies to improve the suggestions for improving the online presentation.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Konstantin Seleskerov, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Gencheng Wu, Brittany Elizabeth Mederos
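A rough sketch of the feedback loop this abstract outlines: feedback computed from the presenter's streams is compared with reactions observed from participant streams, and model parameters are nudged where the two disagree. The function names, scoring, and update rule are assumptions made for illustration; the real machine learning models are stubbed out.

```python
# Hedged sketch; scores and the update rule are illustrative assumptions only.
from statistics import mean
from typing import Dict, List


def feedback_from_presenter_streams(transcript: List[str]) -> Dict[str, float]:
    """Stand-in for the first machine learning models: score the presentation."""
    avg_len = mean(len(utterance.split()) for utterance in transcript)
    return {"pacing": min(avg_len / 20.0, 1.0), "clarity": 0.8}


def reactions_from_participant_streams(reactions: List[str]) -> Dict[str, float]:
    """Stand-in for reaction analysis over participant audio/video streams."""
    positive = sum(r in ("nod", "smile", "laugh") for r in reactions)
    engagement = positive / len(reactions) if reactions else 0.0
    return {"pacing": engagement, "clarity": engagement}


def update_model_weights(weights: Dict[str, float],
                         feedback: Dict[str, float],
                         reactions: Dict[str, float],
                         lr: float = 0.1) -> Dict[str, float]:
    """Shift each weight toward agreement where feedback and reactions diverge."""
    updated = dict(weights)
    for key in feedback:
        discrepancy = reactions[key] - feedback[key]
        updated[key] = weights[key] + lr * discrepancy
    return updated


weights = {"pacing": 0.5, "clarity": 0.5}
feedback = feedback_from_presenter_streams(["Welcome everyone", "Today we cover three topics"])
reactions = reactions_from_participant_streams(["nod", "neutral", "smile"])
print(update_model_weights(weights, feedback, reactions))
```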
-
Patent number: 11909922
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Grant
Filed: January 18, 2023
Date of Patent: February 20, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
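A small sketch of the reaction-indication idea described above, assuming a keyword-based stub in place of a real acoustic classifier. The labels, emoji choices, and mapping are illustrative assumptions, not the disclosed processing operations.

```python
# Sketch only: a stub classifier maps described acoustic events to reaction
# indications (emoji) that a UI could surface in near real time.
from typing import Optional

REACTION_EMOJI = {"applause": "👏", "laughter": "😂", "cheer": "🎉"}


def classify_acoustic_signal(signal_description: str) -> Optional[str]:
    """Stand-in for an audio classifier over an attendee's acoustic signal."""
    for label in REACTION_EMOJI:
        if label in signal_description.lower():
            return label
    return None


def reaction_indication(signal_description: str) -> Optional[str]:
    """Map a detected acoustic event to a reaction indication for display."""
    label = classify_acoustic_signal(signal_description)
    return REACTION_EMOJI.get(label) if label else None


for event in ("sustained applause", "quiet typing", "laughter in the back row"):
    print(event, "->", reaction_indication(event))
```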
-
Publication number: 20230156124
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Inventors: Ji LI, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Konstantin SELESKEROV, Gencheng WU
-
Patent number: 11570307
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Grant
Filed: August 3, 2020
Date of Patent: January 31, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
-
Publication number: 20220366153
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: May 12, 2021
Publication date: November 17, 2022
Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
-
Publication number: 20220138470
Abstract: Techniques performed by a data processing system for facilitating an online presentation session include establishing an online presentation session for conducting an online presentation for a first computing device of a presenter and a plurality of second computing devices of a plurality of participants, receiving a set of first media streams comprising presentation content from the first computing device, receiving a set of second media streams from the second computing devices of a first subset of the plurality of participants, the set of second media streams including audio content, video content, or both of the first subset of the plurality of participants, analyzing the set of first media streams using one or more first machine learning models to generate a set of first feedback results, analyzing the set of second media streams using one or more second machine learning models to identify a set of first reactions by the participants to obtain first reaction information, automatically analyzing the set of…
Type: Application
Filed: October 30, 2020
Publication date: May 5, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Konstantin SELESKEROV, Amit SRIVASTAVA, Derek Martin JOHNSON, Priyanka Vikram SINHA, Gencheng WU, Brittany Elizabeth MEDEROS
-
Publication number: 20220038580
Abstract: The present disclosure relates to processing operations configured to provide processing that automatically analyzes acoustic signals from attendees of a live presentation and automatically triggers corresponding reaction indications from results of analysis thereof. Exemplary reaction indications provide feedback for live presentations that can be presented in real-time (or near real-time) without requiring a user to manually take action to provide any feedback. As a non-limiting example, reaction indications may be presented in a form that is easy to visualize and understand such as emojis or icons. Another example of a reaction indication is a graphical user interface (GUI) notification that provides a predictive indication of user intent derived from analysis of acoustic signals.
Type: Application
Filed: August 3, 2020
Publication date: February 3, 2022
Inventors: Ji Li, Amit Srivastava, Derek Martin Johnson, Priyanka Vikram Sinha, Konstantin Seleskerov, Gencheng Wu
-
Patent number: 10394916
Abstract: Technologies are described to provide a personalized search environment to users without requiring enterprise environment access. Upon access of a personal service account such as one in a productivity service, a user's personal environment may be created by an aggregation service using graph based data infrastructure. Sources of information may include personal email accounts, calendars, social/professional networks, task list applications, online data storage services, health applications, gaming applications, and communication applications associated with the user. A personalized search application may then use the data from the aggregation service and/or (if available) user's enterprise account information to perform personalized searches with relevant results for the user.
Type: Grant
Filed: September 13, 2016
Date of Patent: August 27, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alan Tus, Konstantin Seleskerov, Panos Sakkos, Rovin Bhandari
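A simplified sketch of the aggregation idea in this abstract: items from several personal sources are merged behind one service that a personalized search can query. The in-memory structure and source names are assumptions for illustration; the graph-based data infrastructure and enterprise-account integration are not modeled.

```python
# Minimal sketch, assuming a plain in-memory index in place of the
# graph-based data infrastructure described in the abstract.
from typing import Dict, List


class AggregationService:
    def __init__(self) -> None:
        # source name -> list of text items associated with the user
        self.sources: Dict[str, List[str]] = {}

    def register_source(self, name: str, items: List[str]) -> None:
        self.sources[name] = items

    def search(self, query: str) -> List[str]:
        """Return items from every registered personal source that match the query."""
        query = query.lower()
        hits = []
        for source, items in self.sources.items():
            hits.extend(f"{source}: {item}" for item in items if query in item.lower())
        return hits


service = AggregationService()
service.register_source("personal email", ["Flight confirmation for Friday"])
service.register_source("calendar", ["Friday: dentist appointment"])
service.register_source("task list", ["Renew passport"])
print(service.search("friday"))
```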
-
Publication number: 20180075148
Abstract: Technologies are described to provide a personalized search environment to users without requiring enterprise environment access. Upon access of a personal service account such as one in a productivity service, a user's personal environment may be created by an aggregation service using graph based data infrastructure. Sources of information may include personal email accounts, calendars, social/professional networks, task list applications, online data storage services, health applications, gaming applications, and communication applications associated with the user. A personalized search application may then use the data from the aggregation service and/or (if available) user's enterprise account information to perform personalized searches with relevant results for the user.
Type: Application
Filed: September 13, 2016
Publication date: March 15, 2018
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alan Tus, Konstantin Seleskerov, Panos Sakkos, Rovin Bhandari
-
Publication number: 20170068693
Abstract: Exposing content to individuals in an enterprise is provided. An external content sharing system includes a graph server comprising an API engine, a graph index, and an activity processing and analytics engine. When a user selects to share external content via a user agent, the API engine receives an API call including a URL of the external content from the user agent. The API engine accesses the content, extracts metadata, and stores the metadata as a node in the graph index, where connections are made between the node and individuals who are socially close to the user. The API engine receives a query request for content associated with an individual socially close to the user, queries the graph index, and provides a result including the metadata of the external content and the URL for generating and exposing a visual information element representative of the external content to the individual.
Type: Application
Filed: September 4, 2015
Publication date: March 9, 2017
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Azmil Macksood, Aleksei Nikolaevich Triastcyn, Konstantin Seleskerov, Vidar Tveoy Knudsen, Panos Sakkos
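A simplified sketch of the share-and-query flow described above: metadata for a shared URL is stored as a node in a graph index, connected to people socially close to the sharer, and later returned by a query for one of those people. The metadata extraction and the notion of "socially close" contacts are stubbed assumptions, not the actual graph server, API engine, or analytics engine.

```python
# Illustrative sketch; metadata extraction and social-closeness are stubs.
from typing import Dict, List


class GraphIndex:
    def __init__(self) -> None:
        self.nodes: Dict[str, Dict[str, str]] = {}   # url -> metadata node
        self.edges: Dict[str, List[str]] = {}        # person -> shared urls

    def share_external_content(self, sharer: str, url: str,
                               close_contacts: List[str]) -> None:
        """Store extracted metadata as a node and connect it to the sharer's contacts."""
        metadata = {"url": url, "title": url.rsplit("/", 1)[-1], "shared_by": sharer}
        self.nodes[url] = metadata
        for person in close_contacts:
            self.edges.setdefault(person, []).append(url)

    def query_for(self, person: str) -> List[Dict[str, str]]:
        """Return metadata for content shared by people socially close to this person."""
        return [self.nodes[url] for url in self.edges.get(person, [])]


index = GraphIndex()
index.share_external_content("alice", "https://example.com/roadmap", ["bob", "carol"])
print(index.query_for("bob"))
```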