Patents by Inventor Derya Ozkan
Derya Ozkan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11189288
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Grant
Filed: January 15, 2020
Date of Patent: November 30, 2021
Assignee: Nuance Communications, Inc.
Inventors: Michael Johnston, Derya Ozkan
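The fusion step described in the abstract above can be sketched in a few lines: detect when a speech event occurred, select gesture samples inside a temporal window around that time, and average their coordinates to resolve the gesture target. This is an illustrative sketch only; the names (`GestureSample`, `average_gesture_in_window`, `multimodal_command`) and the 0.5-second window are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class GestureSample:
    t: float  # timestamp in seconds
    x: float  # gesture x coordinate on the display plane
    y: float  # gesture y coordinate on the display plane

def average_gesture_in_window(samples, event_time, window=0.5):
    """Average gesture coordinates within +/- `window` seconds of the speech event."""
    in_window = [s for s in samples if abs(s.t - event_time) <= window]
    if not in_window:
        return None  # no gesture overlapped the speech event
    n = len(in_window)
    return (sum(s.x for s in in_window) / n,
            sum(s.y for s in in_window) / n)

def multimodal_command(speech_text, samples, event_time):
    """Fuse a recognized speech event with the gesture it temporally overlaps."""
    target = average_gesture_in_window(samples, event_time)
    return {"speech": speech_text, "target": target}
```

For example, if a user says "put that there" at t = 1.1 s while pointing, the samples near t = 1.1 s are averaged into a single target coordinate and paired with the recognized utterance.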
-
Publication number: 20200150921
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Application
Filed: January 15, 2020
Publication date: May 14, 2020
Inventors: Michael Johnston, Derya Ozkan
-
Patent number: 10540140
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Grant
Filed: July 17, 2017
Date of Patent: January 21, 2020
Assignee: Nuance Communications, Inc.
Inventors: Michael Johnston, Derya Ozkan
-
Patent number: 9996109
Abstract: In one example, a method includes determining, by a processor (104) of a wearable computing device (102) and based on motion data generated by a motion sensor (106) of the wearable computing device, one or more strokes. In this example, the method also includes generating, by the processor and based on the motion data, a respective attribute vector for each respective stroke from the one or more strokes and classifying, by the processor and based on the respective attribute vector, each respective stroke from the one or more strokes into at least one category. In this example, the method also includes determining, by the processor and based on a gesture library and the at least one category for each stroke from the one or more strokes, a gesture. In this example, the method also includes performing, by the wearable device and based on the gesture, an action.
Type: Grant
Filed: August 14, 2015
Date of Patent: June 12, 2018
Assignee: Google LLC
Inventors: Rodrigo Carceroni, Derya Ozkan, Suril Shah, Pannag Raghunath Sanketi
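The pipeline in the abstract above — strokes → attribute vectors → per-stroke categories → gesture-library lookup — can be sketched as follows. Everything here is illustrative: the attribute vector contents, the toy up/down categorization, and the library entries are assumptions, not the patent's actual features or classifier.

```python
def stroke_attributes(accel):
    """Compute a simple attribute vector (mean, signed peak, sample count)
    from one stroke's accelerometer readings."""
    return (sum(accel) / len(accel), max(accel, key=abs), len(accel))

def classify_stroke(attrs):
    """Toy categorization: label the stroke by the sign of its peak acceleration."""
    _mean, peak, _count = attrs
    return "up" if peak > 0 else "down"

# Hypothetical gesture library mapping category sequences to gestures.
GESTURE_LIBRARY = {
    ("up", "down"): "shake",
    ("down", "up"): "flip",
}

def recognize(strokes):
    """Classify each stroke, then look the category sequence up in the library."""
    categories = tuple(classify_stroke(stroke_attributes(s)) for s in strokes)
    return GESTURE_LIBRARY.get(categories)  # None if no gesture matches
```

A real implementation would use a richer attribute vector (duration, energy, axis ratios) and a trained classifier rather than a sign test, but the data flow matches the claim structure.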
-
Publication number: 20180101240
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Application
Filed: October 13, 2017
Publication date: April 12, 2018
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
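The navigation rule in the abstract above compares the acceleration of the pronation (rotating the wrist back) against that of the preceding supination: a gentler return advances to the next card, a sharper return goes back. A minimal sketch, assuming scalar peak-acceleration values per phase (the class and function names are illustrative, not from the publication):

```python
def interpret_wrist_flick(supination_accel, pronation_accel):
    """Map a supination-then-pronation movement to a navigation action,
    per the claim: pronation slower than supination -> next card,
    pronation faster than supination -> previous card."""
    if pronation_accel < supination_accel:
        return "next"
    if pronation_accel > supination_accel:
        return "previous"
    return None  # equal accelerations: no action

class CardDeck:
    """Hypothetical content-card stack driven by wrist flicks."""
    def __init__(self, cards):
        self.cards = cards
        self.i = 0  # index of the currently displayed card

    def on_flick(self, supination_accel, pronation_accel):
        action = interpret_wrist_flick(supination_accel, pronation_accel)
        if action == "next":
            self.i = min(self.i + 1, len(self.cards) - 1)
        elif action == "previous":
            self.i = max(self.i - 1, 0)
        return self.cards[self.i]
```

For example, a flick with supination acceleration 5.0 and pronation acceleration 2.0 advances the deck, while the reverse ratio steps back.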
-
Publication number: 20180004482
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Application
Filed: July 17, 2017
Publication date: January 4, 2018
Inventors: Michael Johnston, Derya Ozkan
-
Patent number: 9804679
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Grant
Filed: July 3, 2015
Date of Patent: October 31, 2017
Assignee: Google Inc.
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
-
Patent number: 9710223
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Grant
Filed: October 5, 2015
Date of Patent: July 18, 2017
Assignee: Nuance Communications, Inc.
Inventors: Michael Johnston, Derya Ozkan
-
Publication number: 20170003747
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Application
Filed: July 3, 2015
Publication date: January 5, 2017
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
-
Publication number: 20160048161
Abstract: In one example, a method includes determining, by a processor (104) of a wearable computing device (102) and based on motion data generated by a motion sensor (106) of the wearable computing device, one or more strokes. In this example, the method also includes generating, by the processor and based on the motion data, a respective attribute vector for each respective stroke from the one or more strokes and classifying, by the processor and based on the respective attribute vector, each respective stroke from the one or more strokes into at least one category. In this example, the method also includes determining, by the processor and based on a gesture library and the at least one category for each stroke from the one or more strokes, a gesture. In this example, the method also includes performing, by the wearable device and based on the gesture, an action.
Type: Application
Filed: August 14, 2015
Publication date: February 18, 2016
Inventors: Rodrigo Carceroni, Derya Ozkan, Suril Shah, Pannag Raghunath Sanketi
-
Publication number: 20160026434
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Application
Filed: October 5, 2015
Publication date: January 28, 2016
Inventors: Michael Johnston, Derya Ozkan
-
Patent number: 9152376
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Grant
Filed: December 1, 2011
Date of Patent: October 6, 2015
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Michael Johnston, Derya Ozkan
-
Publication number: 20130144629
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
Type: Application
Filed: December 1, 2011
Publication date: June 6, 2013
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Michael Johnston, Derya Ozkan
-
Publication number: 20030162232
Abstract: The invention relates to a component comprising a plurality of fiber elements and sample molecules of a selected sample molecule species or of selected sample molecule species groups, said sample molecules being immobilized on said fiber elements, whereby a specific sample molecule species or sample molecule species group is assigned to each fiber element. The invention is characterized in that the sample molecules are immobilized on outer surfaces of the fiber elements, and in that a supporting element fixes the fiber elements in an interspaced manner and in a radial direction with regard to the fiber elements, or the fiber elements are bundled together with linear contact.
Type: Application
Filed: March 3, 2003
Publication date: August 28, 2003
Inventor: Derya Ozkan