Patents by Inventor Derya Ozkan

Derya Ozkan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11189288
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: November 30, 2021
    Assignee: Nuance Communications, Inc.
    Inventors: Michael Johnston, Derya Ozkan
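The abstract above describes fusing a speech event with gesture data by averaging gesture coordinates inside a temporal window around the speech event. The following is a minimal sketch of that idea only, not the patented implementation; the function name, dictionary keys, and window width are illustrative assumptions.

```python
def fuse_multimodal(speech_events, gesture_samples, window=0.5):
    """For each detected speech event, average the gesture coordinates that
    fall within +/- `window` seconds of the event time, producing a
    combined multimodal command (spoken text plus a pointed-at location)."""
    commands = []
    for ev in speech_events:
        t = ev["time"]
        # Keep only gesture samples inside the temporal window.
        pts = [(g["x"], g["y"]) for g in gesture_samples
               if abs(g["time"] - t) <= window]
        if pts:
            avg_x = sum(x for x, _ in pts) / len(pts)
            avg_y = sum(y for _, y in pts) / len(pts)
            commands.append({"speech": ev["text"], "target": (avg_x, avg_y)})
    return commands
```

Averaging smooths jitter in a remote (off-display) pointing gesture, so a command like "put that there" resolves to one stable coordinate rather than a noisy trace.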
  • Publication number: 20200150921
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Application
    Filed: January 15, 2020
    Publication date: May 14, 2020
    Inventors: Michael Johnston, Derya Ozkan
  • Patent number: 10540140
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: January 21, 2020
    Assignee: Nuance Communications, Inc.
    Inventors: Michael Johnston, Derya Ozkan
  • Patent number: 9996109
    Abstract: In one example, a method includes determining, by a processor (104) of a wearable computing device (102) and based on motion data generated by a motion sensor (106) of the wearable computing device, one or more strokes. In this example, the method also includes generating, by the processor and based on the motion data, a respective attribute vector for each respective stroke from the one or more strokes and classifying, by the processor and based on the respective attribute vector, each respective stroke from the one or more strokes into at least one category. In this example, the method also includes determining, by the processor and based on a gesture library and the at least one category for each stroke from the one or more strokes, a gesture. In this example, the method also includes performing, by the wearable device and based on the gesture, an action.
    Type: Grant
    Filed: August 14, 2015
    Date of Patent: June 12, 2018
    Assignee: Google LLC
    Inventors: Rodrigo Carceroni, Derya Ozkan, Suril Shah, Pannag Raghunath Sanketi
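The pipeline in this abstract is: motion data → strokes → per-stroke attribute vectors → per-stroke categories → gesture lookup in a library. The sketch below illustrates that flow under assumed, simplified attributes (duration, dominant axis, direction sign); the real attribute vectors, categories, and library contents are not specified here.

```python
def stroke_attributes(samples):
    """Compute an attribute vector for one stroke.
    `samples` is a list of (t, ax, ay, az) accelerometer readings."""
    duration = samples[-1][0] - samples[0][0]
    # Dominant axis: the one with the largest total absolute motion.
    sums = [sum(abs(s[i]) for s in samples) for i in (1, 2, 3)]
    axis = sums.index(max(sums))
    mean = sum(s[axis + 1] for s in samples) / len(samples)
    return (duration, axis, 1 if mean >= 0 else -1)

def classify(attr):
    """Map an attribute vector to a coarse stroke category."""
    _, axis, sign = attr
    names = {(0, 1): "right", (0, -1): "left", (1, 1): "up", (1, -1): "down"}
    return names.get((axis, sign), "other")

# Hypothetical gesture library: sequences of stroke categories -> gesture.
GESTURE_LIBRARY = {("left", "right"): "shake", ("up",): "raise"}

def recognize(strokes):
    """Classify each stroke, then look the category sequence up in the library."""
    cats = tuple(classify(stroke_attributes(s)) for s in strokes)
    return GESTURE_LIBRARY.get(cats)
```

Once a gesture is recognized, the wearable would dispatch the action bound to it (e.g. waking the display); that binding is device-specific and omitted here.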
  • Publication number: 20180101240
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Application
    Filed: October 13, 2017
    Publication date: April 12, 2018
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
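The navigation rule in this abstract hinges on comparing the acceleration of the pronation (the return twist) against that of the preceding supination: a gentler return advances to the next card, a sharper return goes back. A minimal sketch of that decision, with assumed function and argument names:

```python
def interpret_wrist_flick(supination_accel, pronation_accel, cards, current):
    """Return the index of the content card to display after a
    supination-then-pronation wrist movement, clamped to the card list."""
    if pronation_accel < supination_accel:
        # Gentle return twist: advance to the next card.
        return min(current + 1, len(cards) - 1)
    elif pronation_accel > supination_accel:
        # Sharp return twist: go back to the previous card.
        return max(current - 1, 0)
    return current  # Equal accelerations: ambiguous, stay put.
```

In practice the two acceleration values would come from segmenting gyroscope/accelerometer data around the forearm roll; that segmentation is the harder part and is not shown.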
  • Publication number: 20180004482
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 4, 2018
    Inventors: Michael Johnston, Derya Ozkan
  • Patent number: 9804679
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Grant
    Filed: July 3, 2015
    Date of Patent: October 31, 2017
    Assignee: Google Inc.
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
  • Patent number: 9710223
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: July 18, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Michael Johnston, Derya Ozkan
  • Publication number: 20170003747
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Application
    Filed: July 3, 2015
    Publication date: January 5, 2017
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
  • Publication number: 20160048161
    Abstract: In one example, a method includes determining, by a processor (104) of a wearable computing device (102) and based on motion data generated by a motion sensor (106) of the wearable computing device, one or more strokes. In this example, the method also includes generating, by the processor and based on the motion data, a respective attribute vector for each respective stroke from the one or more strokes and classifying, by the processor and based on the respective attribute vector, each respective stroke from the one or more strokes into at least one category. In this example, the method also includes determining, by the processor and based on a gesture library and the at least one category for each stroke from the one or more strokes, a gesture. In this example, the method also includes performing, by the wearable device and based on the gesture, an action.
    Type: Application
    Filed: August 14, 2015
    Publication date: February 18, 2016
    Inventors: Rodrigo Carceroni, Derya Ozkan, Suril Shah, Pannag Raghunath Sanketi
  • Publication number: 20160026434
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Application
    Filed: October 5, 2015
    Publication date: January 28, 2016
    Inventors: Michael Johnston, Derya Ozkan
  • Patent number: 9152376
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: October 6, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Michael Johnston, Derya Ozkan
  • Publication number: 20130144629
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Michael Johnston, Derya Ozkan
  • Publication number: 20030162232
    Abstract: The invention relates to a component comprising a plurality of fiber elements and sample molecules of a selected sample molecule species or of selected sample molecule species groups, said sample molecules being immobilized on said fiber elements, whereby a specific sample molecule species or sample molecule species group is assigned to each fiber element. The invention is characterized in that the sample molecules are immobilized on outer surfaces of the fiber elements, and in that a supporting element fixes the fiber elements in an interspaced manner and in a radial direction with regard to the fiber elements or the fiber elements are bundled together with linear contact.
    Type: Application
    Filed: March 3, 2003
    Publication date: August 28, 2003
    Inventor: Derya Ozkan