Patents by Inventor Dongwook Yoon
Dongwook Yoon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11785194
Abstract: The disclosed techniques provide a computing device that displays video content within a comment section of a user interface. When users invoke a video display by selecting a link within a user interface comment section, a system can control a navigational position of a user interface to concurrently display the video and selected comments within the comment section. In one illustrative example, a system can display a user interface having a video display area and a comment section. The user interface may be positioned to show the comment section within a viewing area of a display device, and such a position may place the video display area outside of the viewing area. In such a scenario, when a system receives a user input indicating a selection of a comment displayed within the comment section, the system can generate a rendering of the video content for display within the comment section.
Type: Grant
Filed: April 19, 2019
Date of Patent: October 10, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dongwook Yoon, Sol Sidney Fels, Matin Yarmand
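The core behavior this patent describes — playing the video inline next to a selected comment when the main player has scrolled off screen — comes down to a visibility check on the video display area. The sketch below is illustrative only; the class names and return values are assumptions, not from the patent:

```python
# Hypothetical sketch: if the main video display area has scrolled out of
# the viewport when the user selects a comment, render an inline copy of
# the video within the comment section instead.

from dataclasses import dataclass

@dataclass
class Viewport:
    top: int       # document y-coordinate of the visible area's top edge
    height: int

@dataclass
class Region:
    top: int       # document y-coordinate of the element's top edge
    height: int

def is_visible(region: Region, viewport: Viewport) -> bool:
    """True if any part of the region overlaps the viewport."""
    return (region.top < viewport.top + viewport.height
            and region.top + region.height > viewport.top)

def on_comment_selected(video_area: Region, viewport: Viewport) -> str:
    """Decide where the video should play when a comment is selected."""
    if is_visible(video_area, viewport):
        return "play-in-main-area"      # video area already on screen
    return "render-inline-in-comments"  # spawn a player next to the comment
```

A real implementation would read these rectangles from the browser's layout engine; the decision logic is the part the abstract describes.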
-
Patent number: 11678031
Abstract: A user interface (UI) includes a video display area for presenting video content, a text entry field, and a comment section. The UI also provides UI controls for enabling a user to select a portion of the video content and for generating a typed hyperlink in the text entry field that references the selected portion of the video content. A UI control for creating a new comment in the comment section of the UI that includes the typed hyperlink from the text entry field is also provided. A user can select a typed link in a comment and, in response thereto, the content type for the referenced portion of the video content can be determined based on data in the selected link. A preview of the portion of the video content can then be presented based upon the determined content type.
Type: Grant
Filed: April 19, 2019
Date of Patent: June 13, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dongwook Yoon, Sol Sidney Fels, Matin Yarmand
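The typed-hyperlink mechanism — a link carrying both a position in the video and a declared content type, so a matching preview can be rendered without opening the video — might look like the following sketch. The fragment format (`t=…&type=…`) and the renderer names are assumptions for illustration, not the patent's actual encoding:

```python
from urllib.parse import urlparse, parse_qs

def parse_typed_link(href: str) -> dict:
    """Extract timestamp and declared content type from the link's fragment."""
    frag = urlparse(href).fragment          # e.g. "t=90&type=transcript"
    params = {k: v[0] for k, v in parse_qs(frag).items()}
    return {"time": float(params.get("t", 0)),
            "type": params.get("type", "frame")}

# One preview renderer per content type; strings stand in for real widgets.
PREVIEW_RENDERERS = {
    "frame":      lambda ref: f"<img still at {ref['time']}s>",
    "clip":       lambda ref: f"<muted looping clip at {ref['time']}s>",
    "transcript": lambda ref: f"<caption text near {ref['time']}s>",
}

def preview_for(href: str) -> str:
    """Dispatch to the renderer matching the link's declared content type."""
    ref = parse_typed_link(href)
    renderer = PREVIEW_RENDERERS.get(ref["type"], PREVIEW_RENDERERS["frame"])
    return renderer(ref)
```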
-
Patent number: 11026000
Abstract: A user interface (UI) includes a video display area for presenting video content, a text entry field, and a comment section. The UI also provides UI controls for enabling a user to select a portion of the video content and for generating a typed hyperlink in the text entry field that references the selected portion of the video content. A UI control for creating a new comment in the comment section of the UI that includes the typed hyperlink from the text entry field is also provided. A user can select a typed link in a comment and, in response thereto, the content type for the referenced portion of the video content can be determined based on data in the selected link. A preview of the portion of the video content can then be presented based upon the determined content type.
Type: Grant
Filed: April 19, 2019
Date of Patent: June 1, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sol Sidney Fels, Dongwook Yoon, Matin Yarmand
-
Patent number: 10904631
Abstract: The present disclosure provides a computing device that performs an auto-completion process that generates and inserts text of spoken content of a video into a text entry field. By providing quoted content in a text input field, a system can mitigate the need for users to perform the tedious process of listening to spoken content of a video and manually entering the spoken content into a computing device. In some configurations, a system can receive one or more keywords from a user input and identify spoken content containing the keywords. The system can provide text of the spoken content based on a level of relevancy and populate one or more input fields with the text of the spoken content. The generation of auto-completion text from spoken content of a video can enhance user interaction with the computer and maximize productivity and engagement with a video-based system.
Type: Grant
Filed: April 19, 2019
Date of Patent: January 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sol Sidney Fels, Dongwook Yoon, Matin Yarmand
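The keyword-driven auto-completion over spoken content could work roughly like this sketch: score each timed transcript segment by how many query keywords it contains, then surface the most relevant segment's text to populate the input field. The data shapes and function name are hypothetical:

```python
def autocomplete_from_transcript(keywords, segments):
    """Rank transcript segments by how many query keywords they contain;
    return the best match's text, or None if nothing matches.

    `segments` is a list of dicts like {"start": 30.0, "text": "..."}
    (an assumed shape, standing in for real speech-to-text output).
    """
    kws = [k.lower() for k in keywords]

    def relevance(seg):
        words = seg["text"].lower()
        return sum(words.count(k) for k in kws)

    best = max(segments, key=relevance)
    return best["text"] if relevance(best) > 0 else None
```

A production system would use a proper relevance model (stemming, phrase matching, recency of playback position); keyword counting is the minimal stand-in.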
-
Publication number: 20200336718
Abstract: The disclosed techniques provide a computing device that displays video content within a comment section of a user interface. When users invoke a video display by selecting a link within a user interface comment section, a system can control a navigational position of a user interface to concurrently display the video and selected comments within the comment section. In one illustrative example, a system can display a user interface having a video display area and a comment section. The user interface may be positioned to show the comment section within a viewing area of a display device, and such a position may place the video display area outside of the viewing area. In such a scenario, when a system receives a user input indicating a selection of a comment displayed within the comment section, the system can generate a rendering of the video content for display within the comment section.
Type: Application
Filed: April 19, 2019
Publication date: October 22, 2020
Inventors: Dongwook Yoon, Sol Sidney Fels, Matin Yarmand
-
Publication number: 20200336805
Abstract: A user interface (UI) includes a video display area for presenting video content, a text entry field, and a comment section. The UI also provides UI controls for enabling a user to select a portion of the video content and for generating a typed hyperlink in the text entry field that references the selected portion of the video content. A UI control for creating a new comment in the comment section of the UI that includes the typed hyperlink from the text entry field is also provided. A user can select a typed link in a comment and, in response thereto, the content type for the referenced portion of the video content can be determined based on data in the selected link. A preview of the portion of the video content can then be presented based upon the determined content type.
Type: Application
Filed: April 19, 2019
Publication date: October 22, 2020
Inventors: Dongwook Yoon, Sol Sidney Fels, Matin Yarmand
-
Publication number: 20200336806
Abstract: A user interface (UI) includes a video display area for presenting video content, a text entry field, and a comment section. The UI also provides UI controls for enabling a user to select a portion of the video content and for generating a typed hyperlink in the text entry field that references the selected portion of the video content. A UI control for creating a new comment in the comment section of the UI that includes the typed hyperlink from the text entry field is also provided. A user can select a typed link in a comment and, in response thereto, the content type for the referenced portion of the video content can be determined based on data in the selected link. A preview of the portion of the video content can then be presented based upon the determined content type.
Type: Application
Filed: April 19, 2019
Publication date: October 22, 2020
Inventors: Sol Sidney Fels, Dongwook Yoon, Matin Yarmand
-
Publication number: 20200336794
Abstract: The present disclosure provides a computing device that performs an auto-completion process that generates and inserts text of spoken content of a video into a text entry field. By providing quoted content in a text input field, a system can mitigate the need for users to perform the tedious process of listening to spoken content of a video and manually entering the spoken content into a computing device. In some configurations, a system can receive one or more keywords from a user input and identify spoken content containing the keywords. The system can provide text of the spoken content based on a level of relevancy and populate one or more input fields with the text of the spoken content. The generation of auto-completion text from spoken content of a video can enhance user interaction with the computer and maximize productivity and engagement with a video-based system.
Type: Application
Filed: April 19, 2019
Publication date: October 22, 2020
Inventors: Sol Sidney Fels, Dongwook Yoon, Matin Yarmand
-
Publication number: 20170115782
Abstract: By correlating user grip information with micro-mobility events, electronic devices can provide support for a broad range of interactions and contextually-dependent techniques. Such correlation allows electronic devices to better identify device usage contexts, and in turn provide a more responsive and helpful user experience, especially in the context of reading and task performance. To allow for accurate and efficient device usage context identification, a model may be used to make device usage context determinations based on the correlated gesture and micro-mobility data. Once a context, device usage context, or gesture is identified, an action can be taken on one or more electronic devices.
Type: Application
Filed: October 23, 2015
Publication date: April 27, 2017
Inventors: Kenneth P. Hinckley, Hrvoje Benko, Michel Pahud, Dongwook Yoon
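As a toy illustration of correlating grip with micro-mobility, a device usage context could be looked up from the pair of signals. A real system would use a trained model over continuous sensor data rather than this hypothetical decision table, and all labels below are invented for illustration:

```python
def infer_usage_context(grip: str, motion: str) -> str:
    """Map a (grip, micro-mobility) signal pair to a usage context.
    Stand-in for the model the abstract describes; labels are invented."""
    table = {
        ("two-handed", "still"): "immersive-reading",
        ("one-handed", "tilt-toward-other"): "sharing-screen",
        ("thumb-edge", "walking"): "glancing",
    }
    return table.get((grip, motion), "unknown")
```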
-
Patent number: 9552345
Abstract: Gestural annotation is described, for example where sensors such as touch screens and/or cameras monitor document annotation events made by a user of a document reading and/or writing application. In various examples the document annotation events comprise gestures recognized from the sensor data by a gesture recognition component. For example, the gestures may be in-air gestures or touch screen gestures. In examples, a compressed record of the sensor data is computed using at least the recognized gestures, document state and timestamps. In some examples the compressed record of the sensor data is used to facilitate consumption of the annotation events in relation to the document by a second user. In some examples the sensor data comprises touch sensor data representing electronic ink; and in some examples the sensor data comprises audio data capturing speech of a user.
Type: Grant
Filed: February 28, 2014
Date of Patent: January 24, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nicholas Yen-Cherng Chen, Abigail Jane Sellen, Dongwook Yoon
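The "compressed record" idea — keeping recognized gestures with their timestamps and document state while discarding the raw sensor frames that dominate the log's size — can be sketched as a filter over an event stream. The field names are illustrative assumptions:

```python
def compress_annotation_log(events):
    """Keep only recognized gesture events (timestamp + document state);
    drop raw sensor frames, which are the bulk of the stream.

    Each event is assumed to look like
    {"t": 0.2, "gesture": "circle" or None, "page": 3}.
    """
    return [
        {"t": e["t"], "gesture": e["gesture"], "page": e["page"]}
        for e in events
        if e.get("gesture") is not None  # None means an unrecognized raw frame
    ]
```

The compressed record is what a second reader would replay to consume the first user's annotations in context.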
-
Publication number: 20150248388
Abstract: Gestural annotation is described, for example where sensors such as touch screens and/or cameras monitor document annotation events made by a user of a document reading and/or writing application. In various examples the document annotation events comprise gestures recognized from the sensor data by a gesture recognition component. For example, the gestures may be in-air gestures or touch screen gestures. In examples, a compressed record of the sensor data is computed using at least the recognized gestures, document state and timestamps. In some examples the compressed record of the sensor data is used to facilitate consumption of the annotation events in relation to the document by a second user. In some examples the sensor data comprises touch sensor data representing electronic ink; and in some examples the sensor data comprises audio data capturing speech of a user.
Type: Application
Filed: February 28, 2014
Publication date: September 3, 2015
Applicant: Microsoft Corporation
Inventors: Nicholas Yen-Cherng Chen, Abigail Jane Sellen, Dongwook Yoon
-
Patent number: 8406497
Abstract: A new method for the identification of body landmarks from three-dimensional (3D) human body scans without human intervention is provided. The method is based on a population in whom landmarks were identified and from whom 3D geometries were obtained. An unmarked body (subject) is landmarked if there is a landmarked body in the population whose geometry is similar to that of the subject. The similarity between the surface geometry of the subject and that of each individual in the population can be determined. A search is performed using the mesh registration technique to find a part-mesh with the least registration error; the landmarks of the best-matched result are then used for the subject.
Type: Grant
Filed: October 7, 2009
Date of Patent: March 26, 2013
Inventors: Dongwook Yoon, Hyeong-Seok Ko
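The landmark-transfer step — find the population scan whose geometry registers best against the subject, then reuse its landmark labels — can be sketched as below. A crude sum of squared nearest-point distances stands in for real mesh registration, and the point-cloud data shapes are assumptions:

```python
def transfer_landmarks(subject_pts, population):
    """Pick the population body whose geometry best matches the subject
    (least registration error) and reuse its landmarks for the subject.

    `subject_pts` is a list of (x, y, z) tuples; each population entry is
    assumed to look like {"points": [...], "landmarks": {...}}.
    """
    def registration_error(pts):
        # Toy error: for each subject point, squared distance to the
        # nearest point of the candidate body (real mesh registration
        # would align the meshes first).
        return sum(
            min((sx - px) ** 2 + (sy - py) ** 2 + (sz - pz) ** 2
                for (px, py, pz) in pts)
            for (sx, sy, sz) in subject_pts
        )

    best = min(population, key=lambda body: registration_error(body["points"]))
    return best["landmarks"]
```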
-
Publication number: 20100215235
Abstract: A new method for the identification of body landmarks from three-dimensional (3D) human body scans without human intervention is provided. The method is based on a population in whom landmarks were identified and from whom 3D geometries were obtained. An unmarked body (subject) is landmarked if there is a landmarked body in the population whose geometry is similar to that of the subject. The similarity between the surface geometry of the subject and that of each individual in the population can be determined. A search is performed using the mesh registration technique to find a part-mesh with the least registration error; the landmarks of the best-matched result are then used for the subject.
Type: Application
Filed: October 7, 2009
Publication date: August 26, 2010
Inventors: Dongwook Yoon, Hyeong-Seok Ko