Patents by Inventor Fethiye Asli Celikyilmaz

Fethiye Asli Celikyilmaz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334313
    Abstract: Systems and methods are disclosed for inquiry-based deep learning. In one implementation, a first content segment is selected from a body of content. The first content segment includes a first content element. The first content segment is compared to a second content segment to identify a content element present in the first content segment that is not present in the second content segment. Based on an identification of the content element present in the first content segment that is not present in the second content segment, the content element is stored in a session memory. A first question is generated based on the first content segment. The session memory is processed to compute an answer to the first question. An action is initiated based on the answer. Using deep learning, content segments can be encoded into memory. Incremental questioning can serve to focus various deep learning operations on certain content segments.
    Type: Application
    Filed: June 21, 2023
    Publication date: October 19, 2023
    Inventors: Fethiye Asli CELIKYILMAZ, Li Deng, Lihong Li, Chong Wang
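
The workflow in the abstract above (comparing segments, storing the novel element in a session memory, generating a question, and answering it from memory) can be pictured with a small, purely illustrative sketch. The `SessionMemory` class, the word-level comparison, and the example segments below are hypothetical stand-ins, not the patented deep-learning method.

```python
# Toy, non-neural stand-in for the inquiry-based workflow sketched in the entry
# above. All names and the word-overlap heuristic are hypothetical.

class SessionMemory:
    """Stores content elements discovered while comparing content segments."""

    def __init__(self):
        self.elements = []

    def store(self, element):
        self.elements.append(element)

    def answer(self, question):
        # Toy lookup: return stored elements the question does not already
        # mention, i.e. the new information the question is asking about.
        asked = set(question.lower().split())
        return [e for e in self.elements if e not in asked]


def novel_elements(first_segment, second_segment):
    """Elements present in the first segment but absent from the second."""
    return sorted(set(first_segment.lower().split()) - set(second_segment.lower().split()))


first = "the probe entered orbit around jupiter in july"
second = "the probe was launched five years earlier"

memory = SessionMemory()
for element in novel_elements(first, second):
    memory.store(element)

question = "where did the probe go in july"   # question generated from the first segment
print(memory.answer(question))                # -> ['around', 'entered', 'jupiter', 'orbit']
```
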
  • Patent number: 11715000
    Abstract: Systems and methods are disclosed for inquiry-based deep learning. In one implementation, a first content segment is selected from a body of content. The first content segment includes a first content element. The first content segment is compared to a second content segment to identify a content element present in the first content segment that is not present in the second content segment. Based on an identification of the content element present in the first content segment that is not present in the second content segment, the content element is stored in a session memory. A first question is generated based on the first content segment. The session memory is processed to compute an answer to the first question. An action is initiated based on the answer. Using deep learning, content segments can be encoded into memory. Incremental questioning can serve to focus various deep learning operations on certain content segments.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: August 1, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Fethiye Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang
  • Patent number: 10901500
    Abstract: Improving accuracy in understanding and/or resolving references to visual elements in a visual context associated with a computerized conversational system is described. Techniques described herein leverage gaze input with gestures and/or speech input to improve spoken language understanding in computerized conversational systems. Leveraging gaze input and speech input improves spoken language understanding in conversational systems by improving the accuracy with which the system can resolve references (that is, interpret a user's intent) with respect to visual elements in a visual context. In at least one example, the techniques herein describe tracking gaze to generate gaze input, recognizing speech input, and extracting gaze features and lexical features from the user input. Based at least in part on the gaze features and lexical features, user utterances directed to visual elements in a visual context can be resolved.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: January 26, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anna Prokofieva, Fethiye Asli Celikyilmaz, Dilek Z. Hakkani-Tur, Larry Heck, Malcolm Slaney
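
The gaze-plus-speech idea in the entry above can be pictured with a minimal sketch that combines a toy gaze feature (fixation duration on each on-screen element) with a toy lexical feature (word overlap with the utterance). The element names, feature weights, and scoring rule are assumptions for illustration only, not the patented technique.

```python
# Illustrative sketch only: scoring on-screen elements by fusing hypothetical
# gaze evidence (fixation time) with hypothetical lexical evidence (word overlap).

elements = {
    "play_button":   {"label": "play movie", "fixation_ms": 420},
    "pause_button":  {"label": "pause movie", "fixation_ms": 80},
    "volume_slider": {"label": "volume", "fixation_ms": 150},
}

utterance = "click on that one to play it"

def lexical_overlap(label, utterance):
    return len(set(label.split()) & set(utterance.split()))

def score(element, utterance, w_gaze=0.002, w_lex=1.0):
    # Weighted combination of gaze and lexical evidence (weights are made up).
    return w_gaze * element["fixation_ms"] + w_lex * lexical_overlap(element["label"], utterance)

referred = max(elements, key=lambda name: score(elements[name], utterance))
print(referred)  # -> play_button: long fixation plus the shared word "play"
```
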
  • Publication number: 20190391640
    Abstract: Improving accuracy in understanding and/or resolving references to visual elements in a visual context associated with a computerized conversational system is described. Techniques described herein leverage gaze input with gestures and/or speech input to improve spoken language understanding in computerized conversational systems. Leveraging gaze input and speech input improves spoken language understanding in conversational systems by improving the accuracy with which the system can resolve references (that is, interpret a user's intent) with respect to visual elements in a visual context. In at least one example, the techniques herein describe tracking gaze to generate gaze input, recognizing speech input, and extracting gaze features and lexical features from the user input. Based at least in part on the gaze features and lexical features, user utterances directed to visual elements in a visual context can be resolved.
    Type: Application
    Filed: April 30, 2019
    Publication date: December 26, 2019
    Inventors: Anna Prokofieva, Fethiye Asli Celikyilmaz, Dilek Z. Hakkani-Tur, Larry Heck, Malcolm Slaney
  • Publication number: 20190287012
    Abstract: An encoder-decoder neural network for sequence-to-sequence mapping tasks, such as, e.g., abstractive summarization, may employ multiple communicating encoder agents to encode multiple respective input sequences that collectively constitute the overall input. The outputs of the encoder agents may be fed into the decoder, which may use an associated attention mechanism to select which encoder agent to pay attention to at each decoding time step. Additional features and embodiments are disclosed.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 19, 2019
    Inventors: Fethiye Asli Celikyilmaz, Xiaodong He
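
A rough sketch of the multi-agent encoding idea above: each encoder agent produces a vector for its own input sequence, and at every decoding step the decoder computes attention weights over the agents to decide which one to read from. The random vectors, dimensions, and dot-product attention below are hypothetical stand-ins for the learned components.

```python
# Illustrative sketch only: a decoder attending over the outputs of several
# "encoder agents", one per input sequence, at each decoding time step.
import numpy as np

rng = np.random.default_rng(0)
num_agents, hidden = 3, 8

# Each agent encodes its own portion of the input into a hidden vector (random here).
agent_encodings = rng.normal(size=(num_agents, hidden))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

decoder_state = rng.normal(size=hidden)
for step in range(4):
    # Attention over agents: which encoder should this decoding step read from?
    scores = agent_encodings @ decoder_state
    weights = softmax(scores)
    context = weights @ agent_encodings                # weighted mix of agent outputs
    decoder_state = np.tanh(decoder_state + context)   # toy state update
    print(f"step {step}: attention over agents = {np.round(weights, 2)}")
```
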
  • Patent number: 10317992
    Abstract: Improving accuracy in understanding and/or resolving references to visual elements in a visual context associated with a computerized conversational system is described. Techniques described herein leverage gaze input with gestures and/or speech input to improve spoken language understanding in computerized conversational systems. Leveraging gaze input and speech input improves spoken language understanding in conversational systems by improving the accuracy with which the system can resolve references (that is, interpret a user's intent) with respect to visual elements in a visual context. In at least one example, the techniques herein describe tracking gaze to generate gaze input, recognizing speech input, and extracting gaze features and lexical features from the user input. Based at least in part on the gaze features and lexical features, user utterances directed to visual elements in a visual context can be resolved.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: June 11, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anna Prokofieva, Fethiye Asli Celikyilmaz, Dilek Z. Hakkani-Tur, Larry Heck, Malcolm Slaney
  • Publication number: 20190005385
    Abstract: Systems and methods are disclosed for inquiry-based deep learning. In one implementation, a first content segment is selected from a body of content. The first content segment includes a first content element. The first content segment is compared to a second content segment to identify a content element present in the first content segment that is not present in the second content segment. Based on an identification of the content element present in the first content segment that is not present in the second content segment, the content element is stored in a session memory. A first question is generated based on the first content segment. The session memory is processed to compute an answer to the first question. An action is initiated based on the answer. Using deep learning, content segments can be encoded into memory. Incremental questioning can serve to focus various deep learning operations on certain content segments.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Fethiye Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang
  • Patent number: 9916301
    Abstract: Click logs are automatically mined to assist in discovering candidate variations for named entities. The named entities may be obtained from one or more sources and include an initial list of named entities. A search may be performed within one or more search engines to determine common phrases that are used to identify the named entity in addition to the named entity initially included in the named entity list. Click logs associated with results of past searches are automatically mined to discover what phrases determined from the searches are candidate variations for the named entity. The candidate variations are scored to assist in determining the variations to include within an understanding model. The variations may also be used when delivering responses and displaying output in the spoken language understanding (SLU) system. For example, instead of using the listed named entity, a popular and/or shortened name may be used by the system.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: March 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dustin Hillard, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tur, Rukmini Iyer, Gokhan Tur
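
The mining step described above can be illustrated with a toy click log: queries that click through to the same results as the canonical entity name become candidate variations, scored here simply by click count. The log rows and scoring are hypothetical, not the patented method.

```python
# Illustrative sketch only: candidate entity variations mined from co-clicks.
from collections import Counter

# (query, clicked_url) pairs, as might be mined from a search engine click log.
click_log = [
    ("the big bang theory", "wiki/The_Big_Bang_Theory"),
    ("big bang theory", "wiki/The_Big_Bang_Theory"),
    ("bbt show", "wiki/The_Big_Bang_Theory"),
    ("big bang theory", "wiki/The_Big_Bang_Theory"),
    ("higgs boson", "wiki/Higgs_boson"),
]

named_entity = "the big bang theory"

# URLs users reach when searching for the canonical entity name.
entity_urls = {url for q, url in click_log if q == named_entity}

# Other queries that reach the same URLs are candidate variations.
candidates = Counter(q for q, url in click_log
                     if url in entity_urls and q != named_entity)

for variation, clicks in candidates.most_common():
    print(f"{variation!r}: score {clicks}")
```
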
  • Patent number: 9886958
    Abstract: A universal model-based approach for item disambiguation and selection is provided. An utterance may be received by a computing device in response to a list of items for selection. In aspects, the list of items may be displayed on a display screen. The universal disambiguation model may then be applied to the utterance. The universal disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of items and to identify an item from the list corresponding to the utterance, based on identified language- and/or domain-independent referential features. The computing device may then perform an action, which may include selecting the identified item associated with the utterance.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Dilek Hakkani-Tur, Ruhi Sarikaya
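
A minimal sketch of the language- and domain-independent referential cues mentioned above: ordinal and positional words resolve an utterance against a displayed list without looking at the item text or domain. The cue table and rules are hypothetical, not the universal model itself.

```python
# Illustrative sketch only: resolving an utterance against a displayed list using
# language/domain-independent referential cues rather than the item text.

ORDINALS = {"first": 0, "second": 1, "third": 2, "fourth": 3, "last": -1}

def resolve(utterance, items):
    """Return (is_directed_at_list, selected_item)."""
    words = utterance.lower().split()
    for word, index in ORDINALS.items():
        if word in words:
            return True, items[index]
    return False, None            # utterance not directed at the list

items = ["Flight to Seattle 8:00", "Flight to Seattle 11:30", "Flight to Seattle 18:45"]
print(resolve("book the second one", items))      # (True, 'Flight to Seattle 11:30')
print(resolve("what's the weather like", items))  # (False, None)
```
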
  • Patent number: 9875237
    Abstract: An understanding model is trained to account for human perception of the relative importance of different tagged items (e.g. slot/intent/domain). Instead of treating each tagged item as equally important, human perception is used to adjust the training of the understanding model by associating a perceived weight with each of the different predicted items. The relative perceptual importance of the different items may be modeled using different methods (e.g. as a simple weight vector, a model trained using features (lexical, knowledge, slot type, ...), and the like). The perceptual weight vector and/or model are incorporated into the understanding model training process, where items that are perceptually more important are weighted more heavily as compared to the items that are determined by human perception to be less important.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: January 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Anoop Deoras, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi
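
The perceptual weighting above can be illustrated with a toy weighted loss: per-item errors are multiplied by a perceived-importance weight, so mistakes on items users care about count for more during training. The slot names, weights, and error values are hypothetical.

```python
# Illustrative sketch only: a perceptual weight vector applied to per-slot errors.

perceptual_weight = {"movie_title": 1.0, "release_date": 0.4, "genre": 0.2}

def weighted_loss(per_slot_error, weights):
    return sum(weights[slot] * err for slot, err in per_slot_error.items())

# Two models with the same unweighted error but different error profiles.
model_a = {"movie_title": 0.1, "release_date": 0.3, "genre": 0.3}
model_b = {"movie_title": 0.3, "release_date": 0.1, "genre": 0.3}

print(weighted_loss(model_a, perceptual_weight))  # 0.28: better on the important slot
print(weighted_loss(model_b, perceptual_weight))  # 0.40: worse where it matters
```
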
  • Patent number: 9870356
    Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: January 16, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
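
A small sketch of the two paths described above: intents are assigned deterministically when a query matches a known intent label, and otherwise inferred from selection (click) log overlap with labeled queries. The labels, log rows, and voting rule are hypothetical illustrations.

```python
# Illustrative sketch only: deterministic intent assignment plus click-log inference.
from collections import Counter

known_intents = {"find showtimes": "get_showtimes", "buy tickets": "buy_tickets"}

# (query, clicked_url) pairs from a selection/click log.
click_log = [
    ("find showtimes", "fandango.com/showtimes"),
    ("movies playing near me", "fandango.com/showtimes"),
    ("buy tickets", "fandango.com/checkout"),
]

def intent_of(query):
    if query in known_intents:                       # deterministic assignment
        return known_intents[query]
    clicked = {url for q, url in click_log if q == query}
    votes = Counter(known_intents[q] for q, url in click_log
                    if url in clicked and q in known_intents)
    if not votes:
        return "unknown_intent"
    return votes.most_common(1)[0][0]                # inferred from shared clicks

print(intent_of("find showtimes"))          # get_showtimes (known label)
print(intent_of("movies playing near me"))  # get_showtimes (inferred via shared clicks)
```
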
  • Publication number: 20170169829
    Abstract: A universal model-based approach for item disambiguation and selection is provided. An utterance may be received by a computing device in response to a list of items for selection. In aspects, the list of items may be displayed on a display screen. The universal disambiguation model may then be applied to the utterance. The universal disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of items and identify an item from the list corresponding to the utterance, based on identified language and/or domain independent referential features. The computing device may then perform an action which may include selecting the identified item associated with utterance.
    Type: Application
    Filed: December 11, 2015
    Publication date: June 15, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Dilek Hakkani-Tur, Ruhi Sarikaya
  • Publication number: 20170116182
    Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
    Type: Application
    Filed: December 20, 2016
    Publication date: April 27, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
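
The NL-versus-keyword distinction above can be pictured with a hand-written stand-in classifier based on surface cues (question-word opener, function-word density, length). The features and threshold are hypothetical; the disclosure itself trains such a classifier from query click log data.

```python
# Illustrative sketch only: a rule-based stand-in for an NL-vs-keyword classifier.

FUNCTION_WORDS = {"the", "a", "an", "is", "are", "to", "of", "in", "me", "my"}
WH_WORDS = {"what", "who", "when", "where", "why", "how", "which"}

def looks_like_natural_language(query):
    words = query.lower().split()
    score = 0
    score += 2 if words and words[0] in WH_WORDS else 0    # question-style opener
    score += sum(1 for w in words if w in FUNCTION_WORDS)   # function-word density
    score += 1 if len(words) >= 5 else 0                    # NL queries tend to be longer
    return score >= 3

queries = ["what is the weather in seattle",
           "seattle weather",
           "find me a good italian restaurant in the city"]
for q in queries:
    label = "NL" if looks_like_natural_language(q) else "KL"
    print(f"{label}: {q}")
```

Per the abstract, queries judged NL could then be kept as training data for a spoken language understanding model, while keyword-style queries are filtered out.
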
  • Patent number: 9558176
    Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: January 31, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
  • Patent number: 9412363
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance, based on the extracted referential features. The computing device may then perform an action, which includes selecting the identified item associated with the utterance.
    Type: Grant
    Filed: March 3, 2014
    Date of Patent: August 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
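
A minimal sketch of the referential-feature step described above: a couple of toy features (an ordinal cue and title overlap) are extracted from the utterance and combined to pick an on-screen item. The features, weights, and example list are hypothetical, not the patented model.

```python
# Illustrative sketch only: extracting toy referential features from an utterance
# and scoring on-screen items with made-up weights.

WEIGHTS = {"ordinal_match": 1.0, "title_overlap": 1.5}   # hypothetical weights

def referential_features(utterance, item, position, total):
    words = set(utterance.lower().replace(",", " ").split())
    return {
        "ordinal_match": float(("first" in words and position == 0) or
                               ("last" in words and position == total - 1)),
        "title_overlap": len(words & set(item.lower().split())) / max(len(item.split()), 1),
    }

def score(features):
    return sum(WEIGHTS[name] * value for name, value in features.items())

items = ["The Martian", "Gravity", "Interstellar"]
utterance = "play the last one, Interstellar"

best = max(range(len(items)),
           key=lambda i: score(referential_features(utterance, items[i], i, len(items))))
print(items[best])  # -> Interstellar
```
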
  • Publication number: 20160091967
    Abstract: Improving accuracy in understanding and/or resolving references to visual elements in a visual context associated with a computerized conversational system is described. Techniques described herein leverage gaze input with gestures and/or speech input to improve spoken language understanding in computerized conversational systems. Leveraging gaze input and speech input improves spoken language understanding in conversational systems by improving the accuracy with which the system can resolve references (that is, interpret a user's intent) with respect to visual elements in a visual context. In at least one example, the techniques herein describe tracking gaze to generate gaze input, recognizing speech input, and extracting gaze features and lexical features from the user input. Based at least in part on the gaze features and lexical features, user utterances directed to visual elements in a visual context can be resolved.
    Type: Application
    Filed: September 25, 2014
    Publication date: March 31, 2016
    Inventors: Anna Prokofieva, Fethiye Asli Celikyilmaz, Dilek Z. Hakkani-Tur, Larry Heck, Malcolm Slaney
  • Patent number: 9292492
    Abstract: A scalable statistical language understanding (SLU) system uses a fixed number of understanding models that scale across domains and intents (i.e. single vs. multiple intents per utterance). For each domain added to the SLU system, the fixed number of existing models is updated to reflect the newly added domain. Information that is already included in the existing models and the corresponding training data may be re-used. The fixed models may include a domain detector model, an intent action detector model, an intent object detector model and a slot/entity tagging model. A domain detector identifies the different domains present within an utterance. All or a portion of the detected domains are used to determine associated intent actions. For each determined intent action, one or more intent objects are identified. Slot/entity tagging is performed using the determined domains, intent actions, and intent objects.
    Type: Grant
    Filed: February 4, 2013
    Date of Patent: March 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Anoop Deoras, Fethiye Asli Celikyilmaz, Ravikiran Janardhana, Daniel Boies
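
The fixed cascade described above (domain detection, then intent actions, then intent objects, then slot/entity tagging) can be sketched as a pipeline of tiny rule-based stand-ins; adding a domain means updating these shared components rather than adding new per-domain models. The rules and labels are hypothetical.

```python
# Illustrative sketch only: the fixed detector cascade, with trivial keyword rules
# standing in for the statistical models.

def detect_domains(utterance):
    return [d for d, kw in {"movies": "movie", "weather": "weather"}.items()
            if kw in utterance]

def detect_intent_actions(utterance, domains):
    # Toy action detector; a real system would condition on the detected domains.
    return ["find"] if "find" in utterance or "show" in utterance else ["get"]

def detect_intent_objects(utterance, domains, actions):
    return {"movies": ["movie"], "weather": ["forecast"]}.get(domains[0], []) if domains else []

def tag_slots(utterance, domains, actions, objects):
    # Toy slot tagger: mark any capitalised token as a named slot value.
    return {w: "entity" for w in utterance.split() if w[:1].isupper()}

utterance = "show me a Batman movie"
domains = detect_domains(utterance)
actions = detect_intent_actions(utterance, domains)
objects = detect_intent_objects(utterance, domains, actions)
slots = tag_slots(utterance, domains, actions, objects)
print(domains, actions, objects, slots)  # ['movies'] ['find'] ['movie'] {'Batman': 'entity'}
```
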
  • Publication number: 20150248886
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance, based on the extracted referential features. The computing device may then perform an action, which includes selecting the identified item associated with the utterance.
    Type: Application
    Filed: March 3, 2014
    Publication date: September 3, 2015
    Applicant: Microsoft Corporation
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
  • Publication number: 20150227845
    Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
    Type: Application
    Filed: February 13, 2014
    Publication date: August 13, 2015
    Applicant: Microsoft Corporation
    Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
  • Patent number: 9098494
    Abstract: Processes capable of accepting linguistic input in one or more languages are generated by re-using existing linguistic components associated with a different anchor language, together with machine translation components that translate between the anchor language and the one or more languages. Linguistic input is directed to machine translation components that translate such input from its language into the anchor language. Those existing linguistic components are then utilized to initiate responsive processing and generate output. Optionally, the output is directed through the machine translation components. A language identifier can initially receive linguistic input and identify the language within which such linguistic input is provided to select an appropriate machine translation component.
    Type: Grant
    Filed: May 10, 2012
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Daniel Boies, Fethiye Asli Celikyilmaz, Anoop K. Deoras, Dustin Rigg Hillard, Dilek Z. Hakkani-Tur, Gokhan Tur, Fileno A. Alleva
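
The anchor-language routing described above can be sketched with stubbed components: identify the input language, translate into the anchor language, run the existing anchor-language understanding component, then translate the response back. The phrase tables and language check below stand in for real machine translation and language identification.

```python
# Illustrative sketch only: tiny phrase tables and a naive language check stand in
# for real MT and language-identification components.

TO_ANCHOR = {"es": {"¿qué tiempo hace?": "what is the weather?"}}
FROM_ANCHOR = {"es": {"it is sunny": "hace sol"}}

def identify_language(text):
    return "es" if "¿" in text or "ñ" in text else "en"

def understand_in_anchor_language(text):
    # Existing English-only component, re-used for every input language.
    return "it is sunny" if "weather" in text else "sorry, I did not understand"

def respond(text):
    lang = identify_language(text)
    anchor_text = TO_ANCHOR.get(lang, {}).get(text, text)   # translate into anchor language
    answer = understand_in_anchor_language(anchor_text)
    return FROM_ANCHOR.get(lang, {}).get(answer, answer)    # translate the output back

print(respond("¿qué tiempo hace?"))     # -> "hace sol"
print(respond("what is the weather?"))  # -> "it is sunny"
```
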