Patents by Inventor Tanya M. Miller

Tanya M. Miller has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170270927
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Application
    Filed: June 2, 2017
    Publication date: September 21, 2017
    Inventors: Fred A. Brown, Tanya M. Miller
  • Patent number: 9672822
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: June 6, 2017
    Assignee: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller
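The two entries above (publication 20170270927 and patent 9672822) cover the same family: identify the portion of a content item relevant to the user's input and display it in a format adapted to the requesting device. Below is a minimal sketch of that idea; the paragraph-scoring heuristic, the device profile, and all names are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: pick the paragraph of a document most relevant to a
# user query, then trim it to a target device's display budget. All names and
# heuristics here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    name: str
    max_chars: int          # rough display budget for the device

def relevant_portion(document: str, query: str) -> str:
    """Pick the paragraph sharing the most words with the query."""
    query_terms = set(query.lower().split())
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    return max(paragraphs, key=lambda p: len(query_terms & set(p.lower().split())))

def adapt_for_device(text: str, device: DeviceProfile) -> str:
    """Trim the selected portion to fit the device's display characteristic."""
    if len(text) <= device.max_chars:
        return text
    return text[: device.max_chars - 1] + "…"

if __name__ == "__main__":
    doc = ("Intro about billing.\n\n"
           "Your refund is issued within 5 business days.\n\n"
           "Contact info.")
    watch = DeviceProfile("smartwatch", max_chars=40)
    print(adapt_for_device(relevant_portion(doc, "when do I get my refund"), watch))
```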
  • Publication number: 20170132220
    Abstract: User intent may be derived from a previous communication. For example, a text string for user input may be obtained. The text string may include a pronoun. Information from a communication received prior to receipt of the user input may be derived. The information may identify an individual. User intent may be derived from the text string and the information. This may include determining that the pronoun refers to the individual.
    Type: Application
    Filed: January 23, 2017
    Publication date: May 11, 2017
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
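Publication 20170132220 describes deriving intent when the current input contains a pronoun by pulling the referenced individual from an earlier communication. A minimal, assumed sketch of that resolution step follows; the pronoun list and data shapes are invented for illustration, not taken from the filing.

```python
# Illustrative sketch: resolve a pronoun in the current input to an individual
# identified in a prior communication, then work with the resolved text.
PRONOUNS = {"him", "her", "them", "he", "she", "they"}

def resolve_pronoun(user_input: str, prior_individual: str) -> str:
    """Replace a pronoun in the input with the individual named earlier."""
    words = [prior_individual if w.lower() in PRONOUNS else w
             for w in user_input.split()]
    return " ".join(words)

if __name__ == "__main__":
    prior = "John Smith"   # individual identified in an earlier communication
    print(resolve_pronoun("schedule a call with him tomorrow", prior))
    # -> "schedule a call with John Smith tomorrow"
```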
  • Patent number: 9589579
    Abstract: Various embodiments provide a tool, referred to herein as "Active Lab," that can be used to develop, debug, and maintain knowledge bases. These knowledge bases (KBs) can then engage various applications, technology, and communications protocols for the purpose of task automation, real-time alerting, system integration, knowledge acquisition, and various forms of peer influence. In at least some embodiments, a KB is used as a virtual assistant that any real person can interact with using their own natural language. The KB can then respond and react however the user wants: answering questions, activating applications, or responding to actions on a web page.
    Type: Grant
    Filed: June 11, 2014
    Date of Patent: March 7, 2017
    Assignee: Next IT Corporation
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller, Scott Buzan
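Patent 9589579 (with the related application 20140343924 near the end of this list) describes knowledge bases that answer questions, activate applications, or respond to actions on a page. A toy sketch of a pattern-to-answer/action knowledge base follows; the keyword-overlap matching rule and entry format are assumptions, not the Active Lab design.

```python
# Illustrative sketch: a tiny knowledge base mapping input keywords to either a
# textual answer or an action. Matching by keyword overlap is an assumption.
from typing import Callable, Optional

class KnowledgeBase:
    def __init__(self):
        self._entries: list[tuple[set, str, Optional[Callable[[], None]]]] = []

    def add(self, keywords: set, answer: str, action: Optional[Callable[[], None]] = None):
        self._entries.append((keywords, answer, action))

    def respond(self, utterance: str) -> str:
        terms = {w.strip("?.,!") for w in utterance.lower().split()}
        best = max(self._entries, key=lambda e: len(e[0] & terms), default=None)
        if best is None or not (best[0] & terms):
            return "Sorry, I don't know that yet."
        keywords, answer, action = best
        if action:
            action()          # e.g. activate an application
        return answer

if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.add({"hours", "open"}, "We are open 9am-5pm, Monday through Friday.")
    kb.add({"reset", "password"}, "Opening the password-reset page for you.",
           action=lambda: print("[launching password-reset page]"))
    print(kb.respond("what are your hours?"))
    print(kb.respond("please reset my password"))
```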
  • Patent number: 9563618
    Abstract: Virtual agents may be implemented on a wearable device. The wearable device may include an input device to receive input and a communication component to send the input to a computing device for processing and to receive a response for the input. The wearable device may also include an output device to output the response via the virtual agent as part of a conversation with a user.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: February 7, 2017
    Assignee: Next IT Corporation
    Inventors: Fred Brown, Tanya M. Miller, Mark Zartler
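Patent 9563618 (and application 20140343928 at the end of this list) places the virtual agent on a wearable that forwards input to a companion computing device and outputs the returned response. The sketch below mimics that split with plain objects; there is no real radio link, and every class name is an assumption.

```python
# Illustrative sketch: a wearable that only captures input and renders output,
# delegating processing to a companion device. All names are assumptions.
class CompanionDevice:
    """Stands in for the phone/server that actually processes the input."""
    def process(self, utterance: str) -> str:
        if "weather" in utterance.lower():
            return "It looks sunny this afternoon."
        return "I've noted that."

class Wearable:
    def __init__(self, companion: CompanionDevice):
        self.companion = companion

    def on_input(self, utterance: str) -> None:
        response = self.companion.process(utterance)   # sent over the (assumed) link
        self.output(response)

    def output(self, response: str) -> None:
        print(f"[watch display] {response}")

if __name__ == "__main__":
    Wearable(CompanionDevice()).on_input("what's the weather like?")
```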
  • Patent number: 9552350
    Abstract: Ambiguous input of a user received during an interactive session with a virtual agent may be processed. The virtual agent may be presented via a computing device to facilitate the interactive session with the user. The user may provide the ambiguous input, which is processed to determine a response to the input. The virtual agent may provide the response to the user. The virtual agent may also carry out a goal-based dialogue where a goal to be accomplished is identified. The virtual agent may prompt the user for information related to the goal.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: January 24, 2017
    Assignee: Next IT Corporation
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
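Patent 9552350 combines handling of ambiguous input with goal-based dialogue in which the agent prompts the user for whatever the identified goal still needs. A minimal slot-filling sketch follows; the "book_flight" goal, its slots, and the simulated replies are invented for illustration.

```python
# Illustrative sketch: goal-based dialogue as slot filling. The agent keeps
# prompting until every slot required by the goal has a value.
from typing import Optional

GOAL_SLOTS = {"book_flight": ["origin", "destination", "date"]}   # assumed goal definition

def next_prompt(goal: str, collected: dict) -> Optional[str]:
    """Return the next question to ask, or None when the goal is satisfied."""
    for slot in GOAL_SLOTS[goal]:
        if slot not in collected:
            return f"What is the {slot} for your flight?"
    return None

if __name__ == "__main__":
    collected = {"destination": "Seattle"}       # partially parsed from earlier, ambiguous input
    simulated_answers = iter(["Spokane", "next Friday"])
    while (prompt := next_prompt("book_flight", collected)) is not None:
        print("Agent:", prompt)
        slot = next(s for s in GOAL_SLOTS["book_flight"] if s not in collected)
        collected[slot] = next(simulated_answers)   # stand-in for the user's reply
    print("Booking flight with", collected)
```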
  • Publication number: 20160110071
    Abstract: A conversation user interface enables users to better understand their interactions with computing devices, particularly when speech input is involved. The conversation user interface conveys a visual representation of a conversation between the computing device, or virtual assistant thereon, and a user. The conversation user interface presents a series of dialog representations that show input from a user (verbal or otherwise) and responses from the device or virtual assistant. Associated with one or more of the dialog representations are one or more graphical elements to convey assumptions made to interpret the user input and derive an associated response. The conversation user interface enables the user to see the assumptions upon which the response was based, and to optionally change the assumption(s). Upon change of an assumption, the conversation GUI is refreshed to present a modified dialog representation of a new response derived from the altered set of assumptions.
    Type: Application
    Filed: December 28, 2015
    Publication date: April 21, 2016
    Inventors: Fred A. Brown, Eli D. Snavely, Tanya M. Miller, Charles C. Wooters, Bryan Michael Culley
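Publication 20160110071 (granted as patent 9223537, listed further below) describes a conversation interface that surfaces the assumptions behind each response and regenerates the response when the user edits an assumption. The sketch below models that refresh loop with a dictionary of assumptions; the city/date example is my own, not from the filing.

```python
# Illustrative sketch: a dialog turn records the assumptions used to derive its
# response and re-derives the response when the user changes one of them.
class DialogTurn:
    def __init__(self, user_input: str, assumptions: dict):
        self.user_input = user_input
        self.assumptions = assumptions
        self.response = self._derive_response()

    def _derive_response(self) -> str:
        return (f"Showing flights to {self.assumptions['city']} "
                f"on {self.assumptions['date']}.")

    def change_assumption(self, key: str, value: str) -> None:
        """User edits an assumption via the conversation GUI; refresh the response."""
        self.assumptions[key] = value
        self.response = self._derive_response()

if __name__ == "__main__":
    turn = DialogTurn("flights to springfield friday",
                      {"city": "Springfield, IL", "date": "Friday"})
    print(turn.response, "| assumptions:", turn.assumptions)
    turn.change_assumption("city", "Springfield, MO")   # user picks the other Springfield
    print(turn.response)
```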
  • Publication number: 20160063097
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Application
    Filed: August 27, 2014
    Publication date: March 3, 2016
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Megan Brown, Molly Q. Brown
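Publication 20160063097 (granted below as patent 9183285) groups data by similarity using agglomerative clustering and then uses the clusters as grouping suggestions. Below is a bare-bones single-linkage agglomerative pass over one-dimensional points, purely to illustrate the bottom-up merge loop; the distance threshold and data are made up.

```python
# Illustrative sketch: naive agglomerative (bottom-up) clustering of 1-D points.
# Repeatedly merge the two closest clusters until no pair is closer than a threshold.
def agglomerative(points: list, threshold: float) -> list:
    clusters = [[p] for p in points]                  # start with singleton clusters
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single-linkage distance between clusters i and j
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:                             # nothing close enough to merge
            break
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

if __name__ == "__main__":
    data = [1.0, 1.2, 1.1, 8.0, 8.3, 15.0]
    print(agglomerative(data, threshold=1.0))
    # -> [[1.0, 1.1, 1.2], [8.0, 8.3], [15.0]]
```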
  • Patent number: 9223537
    Abstract: A conversation user interface enables users to better understand their interactions with computing devices, particularly when speech input is involved. The conversation user interface conveys a visual representation of a conversation between the computing device, or virtual assistant thereon, and a user. The conversation user interface presents a series of dialog representations that show input from a user (verbal or otherwise) and responses from the device or virtual assistant. Associated with one or more of the dialog representations are one or more graphical elements to convey assumptions made to interpret the user input and derive an associated response. The conversation user interface enables the user to see the assumptions upon which the response was based, and to optionally change the assumption(s). Upon change of an assumption, the conversation GUI is refreshed to present a modified dialog representation of a new response derived from the altered set of assumptions.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: December 29, 2015
    Assignee: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Bryan Michael Culley, Eli D. Snavely
  • Patent number: 9183285
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Grant
    Filed: August 27, 2014
    Date of Patent: November 10, 2015
    Assignee: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Megan Brown, Molly Q. Brown
  • Publication number: 20150186155
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: June 2, 2014
    Publication date: July 2, 2015
    Applicant: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller, Megan Brown
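The four 2015 publications in this family (20150186155, 20150186154, 20150185996, and 20150186156) all describe a team of virtual assistants configured with different characteristics and selected according to context. A compact sketch of such routing follows; the "domain" field and the two example assistants are invented for illustration.

```python
# Illustrative sketch: a team of assistants with different characteristics;
# the team routes a request to the member whose domain matches the context.
from dataclasses import dataclass

@dataclass
class Assistant:
    name: str
    domain: str          # e.g. "travel", "billing"
    personality: str

    def handle(self, request: str) -> str:
        return f"[{self.name}/{self.personality}] handling: {request}"

class AssistantTeam:
    def __init__(self, members: list):
        self.members = members

    def route(self, request: str, context_domain: str) -> str:
        member = next((m for m in self.members if m.domain == context_domain),
                      self.members[0])               # fall back to a default member
        return member.handle(request)

if __name__ == "__main__":
    team = AssistantTeam([
        Assistant("Ava", "travel", "upbeat"),
        Assistant("Bo", "billing", "formal"),
    ])
    print(team.route("change my flight", context_domain="travel"))
    print(team.route("explain this charge", context_domain="billing"))
```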
  • Publication number: 20150186154
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: June 2, 2014
    Publication date: July 2, 2015
    Applicant: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20150185996
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: June 2, 2014
    Publication date: July 2, 2015
    Applicant: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20150186156
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: June 2, 2014
    Publication date: July 2, 2015
    Applicant: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20150121216
    Abstract: Techniques for mapping actions and objects to tasks may include identifying a task to be performed by a virtual assistant for an action and/or object. The task may be identified based on a task map of the virtual assistant. In some examples, the task may be identified based on contextual information of a user, such as a conversation history, content output history, user preferences, and so on. The techniques may also include customizing a task map for a particular context, such as a particular user, industry, platform, device type, and so on. The customization may include assigning an action, object, and/or variable value to a particular task.
    Type: Application
    Filed: October 31, 2013
    Publication date: April 30, 2015
    Applicant: Next IT Corporation
    Inventors: Fred A. Brown, Tanya M. Miller, Megan Brown, Verlie Thompson
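Publication 20150121216 maps actions and objects to tasks through a task map that can be customized for a particular context (user, industry, platform, device type, and so on). The sketch below layers per-context overrides on a base task map; the keys and task names are assumptions.

```python
# Illustrative sketch: map an (action, object) pair to a task, with per-context
# overrides layered on top of a base task map. Keys and task names are assumed.
BASE_TASK_MAP = {
    ("view", "bill"): "show_billing_summary",
    ("pay", "bill"): "start_payment_flow",
}

CONTEXT_OVERRIDES = {
    # An airline deployment might map the same action/object to its own task.
    "airline": {("view", "bill"): "show_trip_receipt"},
}

def resolve_task(action: str, obj: str, context: str = "") -> str:
    task_map = dict(BASE_TASK_MAP)
    task_map.update(CONTEXT_OVERRIDES.get(context, {}))
    return task_map.get((action, obj), "fallback_search")

if __name__ == "__main__":
    print(resolve_task("view", "bill"))                     # show_billing_summary
    print(resolve_task("view", "bill", context="airline"))  # show_trip_receipt
```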
  • Patent number: 8943094
    Abstract: Various embodiments are described for searching and retrieving documents based on a natural language input. A computer-implemented natural language processor electronically receives a natural language input phrase from an interface device. The natural language processor attributes a concept to the phrase, then searches a database for a set of documents to identify one or more documents associated with the attributed concept to be included in a response to the natural language input phrase. The natural language processor maintains the concepts during an interactive session and resolves ambiguous input patterns in the natural language input phrase. The natural language processor includes a processor, a memory and/or storage component, and an input/output device.
    Type: Grant
    Filed: September 22, 2009
    Date of Patent: January 27, 2015
    Assignee: Next IT Corporation
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
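Patent 8943094 attributes a concept to a natural language phrase, retrieves documents associated with that concept, and maintains concepts across an interactive session so later turns can reuse them. A toy sketch of that flow follows; the keyword-to-concept table and document index are invented.

```python
# Illustrative sketch: attribute a concept to a phrase by keyword lookup,
# retrieve documents indexed under that concept, and keep the session's active
# concepts so an ambiguous follow-up can fall back on them. Tables are assumed.
CONCEPT_KEYWORDS = {"refund": "billing", "invoice": "billing", "login": "account_access"}
DOCUMENT_INDEX = {
    "billing": ["refund_policy.html", "invoice_faq.html"],
    "account_access": ["reset_password.html"],
}

class NLSession:
    def __init__(self):
        self.active_concepts = []            # concepts maintained across the session

    def query(self, phrase: str) -> list:
        concepts = {CONCEPT_KEYWORDS[w] for w in phrase.lower().split()
                    if w in CONCEPT_KEYWORDS}
        if not concepts:                     # ambiguous turn: reuse the session's concepts
            concepts = set(self.active_concepts)
        self.active_concepts = list(concepts)
        return [doc for c in concepts for doc in DOCUMENT_INDEX.get(c, [])]

if __name__ == "__main__":
    session = NLSession()
    print(session.query("how do I get a refund"))   # billing documents
    print(session.query("and what about that"))     # reuses the billing concept
```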
  • Publication number: 20140365407
    Abstract: A virtual assistant may communicate with a user in a conversational manner based on context. For instance, a virtual assistant may be presented to a user to enable a conversation between the virtual assistant and the user. A response to user input that is received during the conversation may be determined based on contextual values related to the conversation or system that implements the virtual assistant.
    Type: Application
    Filed: August 25, 2014
    Publication date: December 11, 2014
    Inventors: Fred Brown, Tanya M. Miller, Mark Zartler, Scott Buzan
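Publication 20140365407 bases the assistant's response on contextual values tied to the conversation or to the implementing system. In the small sketch below, the same question yields different responses depending on such values; the specific context keys are assumptions.

```python
# Illustrative sketch: the same user input produces different responses
# depending on contextual values carried with the conversation.
def respond(user_input: str, context: dict) -> str:
    if "order" in user_input.lower():
        if context.get("logged_in"):
            return f"Your last order shipped on {context.get('last_ship_date', 'an unknown date')}."
        return "Please sign in so I can look up your order."
    return "How can I help?"

if __name__ == "__main__":
    print(respond("where is my order?", {"logged_in": False}))
    print(respond("where is my order?", {"logged_in": True, "last_ship_date": "May 3"}))
```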
  • Publication number: 20140365223
    Abstract: A virtual assistant may communicate with a user in a natural language that simulates a human. The virtual assistant may be associated with a human-configured knowledge base that simulates human responses. In some instances, a parent response may be provided by the virtual assistant and, thereafter, a child response that is associated with the parent response may be provided.
    Type: Application
    Filed: August 25, 2014
    Publication date: December 11, 2014
    Inventors: Fred Brown, Tanya M. Miller, Mark Zartler, Molly Q. Brown
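Publication 20140365223 pairs a parent response with an associated child response drawn from a human-configured knowledge base. The sketch below stores each entry as a parent answer plus an optional follow-up; the entry contents are invented.

```python
# Illustrative sketch: a human-configured knowledge base whose entries carry a
# parent response and an optional child (follow-up) response.
KB = {
    "return policy": {
        "parent": "You can return most items within 30 days.",
        "child": "Would you like me to start a return for you?",
    },
}

def respond(topic: str) -> list:
    entry = KB.get(topic)
    if entry is None:
        return ["I don't have an answer for that yet."]
    responses = [entry["parent"]]
    if entry.get("child"):
        responses.append(entry["child"])   # child response follows its parent response
    return responses

if __name__ == "__main__":
    for line in respond("return policy"):
        print("Assistant:", line)
```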
  • Publication number: 20140343924
    Abstract: Various embodiments provide a tool, referred to herein as "Active Lab," that can be used to develop, debug, and maintain knowledge bases. These knowledge bases (KBs) can then engage various applications, technology, and communications protocols for the purpose of task automation, real-time alerting, system integration, knowledge acquisition, and various forms of peer influence. In at least some embodiments, a KB is used as a virtual assistant that any real person can interact with using their own natural language. The KB can then respond and react however the user wants: answering questions, activating applications, or responding to actions on a web page.
    Type: Application
    Filed: June 11, 2014
    Publication date: November 20, 2014
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller, Scott Buzan
  • Publication number: 20140343928
    Abstract: Virtual agents may be implemented on a wearable device. The wearable device may include an input device to receive input and a communication component to send the input to a computing device for processing and to receive a response for the input. The wearable device may also include an output device to output the response via the virtual agent as part of a conversation with a user.
    Type: Application
    Filed: August 4, 2014
    Publication date: November 20, 2014
    Inventors: Fred Brown, Tanya M. Miller, Mark Zartler