Patents by Inventor Tanya M. Miller

Tanya M. Miller has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10599953
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Grant
    Filed: August 27, 2014
    Date of Patent: March 24, 2020
    Assignee: Verint Americas Inc.
    Inventors: Fred A Brown, Tanya M Miller, Charles C Wooters, Megan Brown, Molly Q Brown
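
The clustering technique summarized in patent 10599953 can be illustrated with a brief sketch. This is a minimal illustration under assumed details, not the patented implementation: it uses scikit-learn's AgglomerativeClustering, and the `user_examples` mapping that stands in for user-demonstrated grouping criteria is hypothetical.

```python
# Minimal sketch of agglomerative clustering with user-guided grouping.
# Assumes numpy and scikit-learn; names like `user_examples` are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy data with two loose groups.
data = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                 [5.0, 5.1], [5.2, 4.9], [4.9, 5.2]])

# Cluster hierarchically; the cut point (n_clusters) is a tunable suggestion.
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(data)

# The user demonstrates criteria by naming a group for a few items;
# the system extrapolates those names to every member of the same cluster.
user_examples = {0: "activity A", 3: "activity B"}  # index -> demonstrated group
cluster_to_group = {labels[i]: name for i, name in user_examples.items()}
groups = [cluster_to_group.get(label, "unsorted") for label in labels]
print(groups)
```
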
  • Publication number: 20200042335
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include contextual interface items that are based on contextual information. The contextual information may relate to a current or previous conversation between a user and a virtual assistant and/or may relate to other types of information, such as a location of a user, an orientation of a device, missing information, and so on. The conversation user interfaces may additionally, or alternatively, control an input mode based on contextual information, such as an inferred input mode of a user or a location of a user. Further, the conversation user interfaces may tag conversation items by saving the conversation items to a tray and/or associating the conversation items with indicators.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Inventors: Fred A. Brown, Tanya M. Miller, Richard Morris
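
A rough sketch of the conversation-item bookkeeping described in publication 20200042335: contextual information drives the interface items that are offered and the input mode, and items can be tagged by saving them to a tray. The class names, context keys, and suggestion rules below are assumptions for illustration, not the claimed design.

```python
# Illustrative sketch only; class and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ConversationItem:
    text: str
    speaker: str                                # "user" or "assistant"
    tags: set = field(default_factory=set)

@dataclass
class ConversationUI:
    items: list = field(default_factory=list)
    tray: list = field(default_factory=list)    # tagged/saved items
    context: dict = field(default_factory=dict)

    def add(self, item: ConversationItem):
        self.items.append(item)

    def suggest_interface_items(self):
        # Contextual interface items keyed off simple context signals.
        suggestions = []
        if self.context.get("location") == "airport":
            suggestions.append("Show my boarding pass")
        if self.context.get("missing_information"):
            suggestions.append(f"Provide {self.context['missing_information']}")
        return suggestions

    def input_mode(self):
        # Infer input mode from context (a noisy location favors typing).
        return "text" if self.context.get("noisy") else "speech"

    def tag(self, item: ConversationItem, tag: str):
        item.tags.add(tag)
        self.tray.append(item)                  # saving to the tray keeps it retrievable

ui = ConversationUI(context={"location": "airport", "noisy": True})
ui.add(ConversationItem("When does my flight board?", "user"))
print(ui.suggest_interface_items(), ui.input_mode())
```
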
  • Publication number: 20190362252
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include tasks to be completed that may have repetitious entry of the same or similar information. User preferences may be learned by the system and may be confirmed by the user prior to the learned preference being implemented. Learned preferences may be identified in near real-time on large collections of data for a large population of users. Further, the learned preferences may be based at least in part on previous conversations and actions between the system and the user as well as user-defined occurrence thresholds.
    Type: Application
    Filed: August 13, 2019
    Publication date: November 28, 2019
    Inventors: Tanya M. Miller, Ian Beaver
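
The threshold-based preference learning summarized in publication 20190362252 might look something like the sketch below. The `PreferenceLearner` class, its fields, and the default threshold are all hypothetical; the key idea kept from the abstract is that a preference is only suggested after repeated occurrences and only applied after the user confirms it.

```python
# Sketch of occurrence-threshold preference learning; names and the threshold are illustrative.
from collections import Counter, defaultdict

class PreferenceLearner:
    def __init__(self, occurrence_threshold: int = 3):
        self.threshold = occurrence_threshold
        self.counts = defaultdict(Counter)        # user -> Counter of (field, value)
        self.confirmed = defaultdict(dict)        # user -> {field: value}

    def observe(self, user: str, field_name: str, value: str):
        self.counts[user][(field_name, value)] += 1

    def pending_suggestions(self, user: str):
        # Offer a learned preference only after it recurs often enough,
        # and only if the user has not already confirmed one for that field.
        return [(f, v) for (f, v), n in self.counts[user].items()
                if n >= self.threshold and f not in self.confirmed[user]]

    def confirm(self, user: str, field_name: str, value: str):
        self.confirmed[user][field_name] = value  # user approval before the preference is used

learner = PreferenceLearner()
for _ in range(3):
    learner.observe("alice", "seat", "aisle")
print(learner.pending_suggestions("alice"))       # [("seat", "aisle")]
```
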
  • Publication number: 20190355362
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Application
    Filed: August 5, 2019
    Publication date: November 21, 2019
    Inventors: Fred A. Brown, Tanya M. Miller
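
The behavior summarized in publication 20190355362, identifying the portion of a content item relevant to user input and adapting it to a display characteristic, could be sketched as follows. Word-overlap scoring and line-width wrapping are simple stand-ins, not the claimed relevance or display-adaptation logic.

```python
# Sketch only: word overlap and text wrapping stand in for the claimed techniques.
import textwrap

def relevant_portion(content_sections: dict, user_input: str) -> str:
    query_words = set(user_input.lower().split())
    def overlap(text: str) -> int:
        return len(query_words & set(text.lower().split()))
    # Return the section sharing the most words with the user's input.
    return max(content_sections.values(), key=overlap)

def adapt_for_device(portion: str, display_width_chars: int) -> str:
    # A crude display characteristic: wrap text to the device's line width.
    return "\n".join(textwrap.wrap(portion, width=display_width_chars))

sections = {
    "baggage": "Each passenger may check two bags free of charge.",
    "boarding": "Boarding begins forty minutes before departure at the gate.",
}
portion = relevant_portion(sections, "when does boarding start?")
print(adapt_for_device(portion, display_width_chars=30))
```
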
  • Patent number: 10445115
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include contextual interface items that are based on contextual information. The contextual information may relate to a current or previous conversation between a user and a virtual assistant and/or may relate to other types of information, such as a location of a user, an orientation of a device, missing information, and so on. The conversation user interfaces may additionally, or alternatively, control an input mode based on contextual information, such as an inferred input mode of a user or a location of a user. Further, the conversation user interfaces may tag conversation items by saving the conversation items to a tray and/or associating the conversation items with indicators.
    Type: Grant
    Filed: April 18, 2013
    Date of Patent: October 15, 2019
    Assignee: Verint Americas Inc.
    Inventors: Fred A Brown, Tanya M Miller, Richard Morris
  • Patent number: 10438610
    Abstract: A virtual assistant may communicate with a user in a natural language that simulates a human. The virtual assistant may be associated with a human-configured knowledge base that simulates human responses. In some instances, a parent response may be provided by the virtual assistant and, thereafter, a child response that is associated with the parent response may be provided.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: October 8, 2019
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred Brown, Tanya M Miller, Mark Zartler, Molly Q Brown
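
The parent/child response structure summarized in patent 10438610 might be sketched as below. The `Response` class, the matching rule, and the example entries are hypothetical; the point illustrated is that a child response is tied to, and follows, its parent response.

```python
# Illustrative parent/child response structure; not the patented knowledge-base format.
from dataclasses import dataclass, field

@dataclass
class Response:
    text: str
    children: list = field(default_factory=list)   # follow-ups tied to the parent

knowledge_base = {
    "reset password": Response(
        "I can help you reset your password.",
        children=[Response("Did the reset email arrive within a few minutes?")],
    )
}

def respond(user_utterance: str):
    # Extremely simplified matching against human-configured entries.
    for trigger, parent in knowledge_base.items():
        if trigger in user_utterance.lower():
            yield parent.text                 # parent response first
            for child in parent.children:     # then associated child responses
                yield child.text

print(list(respond("How do I reset password?")))
```
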
  • Patent number: 10417567
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include tasks to be completed that may have repetitious entry of the same or similar information. User preferences may be learned by the system and may be confirmed by the user prior to the learned preference being implemented. Learned preferences may be identified in near real-time on large collections of data for a large population of users. Further, the learned preferences may be based at least in part on previous conversations and actions between the system and the user as well as user-defined occurrence thresholds.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: September 17, 2019
    Assignee: VERINT AMERICAS INC.
    Inventors: Tanya M. Miller, Ian Beaver
  • Patent number: 10379712
    Abstract: A conversation user interface enables users to better understand their interactions with computing devices, particularly when speech input is involved. The conversation user interface conveys a visual representation of a conversation between the computing device, or virtual assistant thereon, and a user. The conversation user interface presents a series of dialog representations that show input from a user (verbal or otherwise) and responses from the device or virtual assistant. Associated with one or more of the dialog representations are one or more graphical elements to convey assumptions made to interpret the user input and derive an associated response. The conversation user interface enables the user to see the assumptions upon which the response was based, and to optionally change the assumption(s). Upon change of an assumption, the conversation GUI is refreshed to present a modified dialog representation of a new response derived from the altered set of assumptions.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: August 13, 2019
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred A Brown, Eli D. Snavely, Tanya M Miller, Charles C Wooters, Bryan Michael Culley
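
The assumption-editing behavior summarized in patent 10379712 can be sketched as a dialog entry that records the assumptions used to derive its response and re-derives the response when an assumption changes. The `DialogRepresentation` class and the toy weather rule are illustrative only.

```python
# Sketch of assumption-backed dialog entries; the derivation rule is a toy stand-in.
from dataclasses import dataclass, field

def derive_response(user_input: str, assumptions: dict) -> str:
    # Toy derivation: the response depends on an assumed city.
    return f"Weather for {assumptions['city']}: sunny."

@dataclass
class DialogRepresentation:
    user_input: str
    assumptions: dict
    response: str = field(init=False)

    def __post_init__(self):
        self.response = derive_response(self.user_input, self.assumptions)

    def change_assumption(self, key: str, value: str):
        # Changing an assumption refreshes the response, as the conversation GUI would.
        self.assumptions[key] = value
        self.response = derive_response(self.user_input, self.assumptions)

entry = DialogRepresentation("what's the weather?", {"city": "Spokane"})
print(entry.response)
entry.change_assumption("city", "Seattle")
print(entry.response)
```
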
  • Patent number: 10373616
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: August 6, 2019
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20190139564
    Abstract: Various embodiments provide a tool, referred to herein as "Active Lab," that can be used to develop, debug, and maintain knowledge bases. These knowledge bases (KBs) can then engage various applications, technology, and communications protocols for the purpose of task automation, real time alerting, system integration, knowledge acquisition, and various forms of peer influence. In at least some embodiments, a KB is used as a virtual assistant that any real person can interact with using their own natural language. The KB can then respond and react however the user wants: answering questions, activating applications, or responding to actions on a web page.
    Type: Application
    Filed: January 7, 2019
    Publication date: May 9, 2019
    Inventors: Tanya M. Miller, Fred Brown, Mark Zartler, Molly Q. Brown
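
The knowledge-base behavior summarized in publication 20190139564 (answering questions or activating applications in response to natural language) might be sketched as follows. The pattern/response entries, the `handle` function, and the calendar URL are hypothetical; this is not the Active Lab tool itself.

```python
# Hypothetical knowledge-base entries: each pattern maps to an answer or an action.
import re
import webbrowser

KB = [
    (re.compile(r"\bhours\b", re.I),
     {"answer": "We are open 9am-5pm, Monday to Friday."}),
    (re.compile(r"\bopen (my )?calendar\b", re.I),
     {"action": lambda: webbrowser.open("https://calendar.example.com")}),
]

def handle(utterance: str) -> str:
    for pattern, unit in KB:
        if pattern.search(utterance):
            if "answer" in unit:
                return unit["answer"]          # answer a question
            unit["action"]()                   # or activate an application
            return "Done."
    return "I don't have an answer for that yet."

print(handle("What are your hours?"))
```
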
  • Publication number: 20190102064
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: October 1, 2018
    Publication date: April 4, 2019
    Inventors: Fred A. Brown, Tanya M. Miller
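
A possible sketch of the "team of virtual assistants" idea summarized in publication 20190102064: multiple assistants with different characteristics, with a router that picks the team member suited to the current context. The `VirtualAssistant` fields, the routing rule, and the example bots are all assumptions.

```python
# Sketch of a team of differently configured assistants; fields are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualAssistant:
    name: str
    domain: str            # functionality / base language model specialty
    personality: str
    training_level: int

    def handle(self, utterance: str) -> str:
        return f"[{self.name}] ({self.personality}) handling: {utterance}"

TEAM = [
    VirtualAssistant("Billing Bot", "billing", "formal", training_level=5),
    VirtualAssistant("Travel Bot", "travel", "friendly", training_level=3),
]

def route(utterance: str, context: dict) -> VirtualAssistant:
    # Pick the team member whose domain matches the conversation context;
    # a real system might also let assistants hand tasks to each other.
    for assistant in TEAM:
        if assistant.domain == context.get("topic"):
            return assistant
    return max(TEAM, key=lambda a: a.training_level)   # fall back to the most trained

print(route("I was double charged", {"topic": "billing"}).handle("I was double charged"))
```
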
  • Publication number: 20190057298
    Abstract: Techniques for mapping actions and objects to tasks may include identifying a task to be performed by a virtual assistant for an action and/or object. The task may be identified based on a task map of the virtual assistant. In some examples, the task may be identified based on contextual information of a user, such as a conversation history, content output history, user preferences, and so on. The techniques may also include customizing a task map for a particular context, such as a particular user, industry, platform, device type, and so on. The customization may include assigning an action, object, and/or variable value to a particular task.
    Type: Application
    Filed: August 20, 2018
    Publication date: February 21, 2019
    Inventors: Fred A. Brown, Tanya M. Miller, Megan Brown, Verlie Thompson
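
The action/object-to-task mapping summarized in publication 20190057298 can be illustrated with a small map plus a per-context override layer. The map keys, task names, and context keys below are invented for the example and do not reflect the claimed task map.

```python
# Sketch of an (action, object) -> task map with per-context customization; names assumed.
DEFAULT_TASK_MAP = {
    ("book", "flight"): "start_flight_booking",
    ("cancel", "order"): "start_order_cancellation",
}

# Customizations layered on top for a particular industry, platform, or user.
CONTEXT_OVERRIDES = {
    "airline_mobile_app": {("cancel", "order"): "open_refund_workflow"},
}

def resolve_task(action: str, obj: str, context_key: str | None = None,
                 conversation_history: list | None = None) -> str:
    # A context-specific assignment wins over the default map.
    if context_key and (action, obj) in CONTEXT_OVERRIDES.get(context_key, {}):
        return CONTEXT_OVERRIDES[context_key][(action, obj)]
    if (action, obj) in DEFAULT_TASK_MAP:
        return DEFAULT_TASK_MAP[(action, obj)]
    # Fall back to contextual information, e.g. the last task in the conversation.
    if conversation_history:
        return conversation_history[-1]
    return "clarify_request"

print(resolve_task("cancel", "order", context_key="airline_mobile_app"))
```
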
  • Patent number: 10176827
    Abstract: Various embodiments provide a tool, referred to herein as "Active Lab," that can be used to develop, debug, and maintain knowledge bases. These knowledge bases (KBs) can then engage various applications, technology, and communications protocols for the purpose of task automation, real time alerting, system integration, knowledge acquisition, and various forms of peer influence. In at least some embodiments, a KB is used as a virtual assistant that any real person can interact with using their own natural language. The KB can then respond and react however the user wants: answering questions, activating applications, or responding to actions on a web page.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: January 8, 2019
    Assignee: VERINT AMERICAS INC.
    Inventor: Tanya M. Miller
  • Patent number: 10109297
    Abstract: A virtual assistant may communicate with a user in a conversational manner based on context. For instance, a virtual assistant may be presented to a user to enable a conversation between the virtual assistant and the user. A response to user input that is received during the conversation may be determined based on contextual values related to the conversation or system that implements the virtual assistant.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: October 23, 2018
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred Brown, Tanya M Miller, Mark Zartler, Scott Buzan
  • Patent number: 10088972
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: October 2, 2018
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred A Brown, Tanya M Miller
  • Patent number: 10055681
    Abstract: Techniques for mapping actions and objects to tasks may include identifying a task to be performed by a virtual assistant for an action and/or object. The task may be identified based on a task map of the virtual assistant. In some examples, the task may be identified based on contextual information of a user, such as a conversation history, content output history, user preferences, and so on. The techniques may also include customizing a task map for a particular context, such as a particular user, industry, platform, device type, and so on. The customization may include assigning an action, object, and/or variable value to a particular task.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: August 21, 2018
    Assignee: VERINT AMERICAS INC.
    Inventors: Fred A Brown, Tanya M Miller, Megan Brown, Verlie Thompson
  • Publication number: 20180095565
    Abstract: Virtual assistants intelligently emulate a representative of a service provider by providing variable responses to user queries received via the virtual assistants. These variable responses may take the context of a user's query into account both when identifying an intent of a user's query and when identifying an appropriate response to the user's query.
    Type: Application
    Filed: November 30, 2017
    Publication date: April 5, 2018
    Inventors: Fred A. Brown, Tanya M. Miller, Mark Zartler
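
The variable-response behavior summarized in publication 20180095565 (context informing both intent identification and response selection) might be sketched as below. The intents, context keys, and canned replies are made up for illustration and are not the claimed system.

```python
# Sketch of context-aware variable responses; intents, contexts, and replies are invented.
def identify_intent(query: str, context: dict) -> str:
    q = query.lower()
    # Context (the page the user came from) helps disambiguate the intent.
    if "order" in q or context.get("last_page") == "order_status":
        return "order_status"
    if "hours" in q:
        return "store_hours"
    return "unknown"

RESPONSES = {
    # The same intent can yield different replies depending on context.
    ("order_status", "signed_in"): "Your order #1234 ships tomorrow.",
    ("order_status", "anonymous"): "Please sign in so I can look up your order.",
    ("store_hours", "any"): "We are open 9am-9pm every day.",
}

def respond(query: str, context: dict) -> str:
    intent = identify_intent(query, context)
    auth = "signed_in" if context.get("signed_in") else "anonymous"
    reply = RESPONSES.get((intent, auth)) or RESPONSES.get((intent, "any"))
    return reply or "Could you rephrase that?"

print(respond("where is my order?", {"signed_in": False, "last_page": "order_status"}))
```
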
  • Patent number: 9836177
    Abstract: Virtual assistants intelligently emulate a representative of a service provider by providing variable responses to user queries received via the virtual assistants. These variable responses may take the context of a user's query into account both when identifying an intent of a user's query and when identifying an appropriate response to the user's query.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: December 5, 2017
    Assignee: Next IT Innovation Labs, LLC
    Inventors: Fred A Brown, Tanya M Miller, Mark Zartler
  • Patent number: 9830044
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: November 28, 2017
    Assignee: Next IT Corporation
    Inventors: Fred A Brown, Tanya M Miller
  • Patent number: 9823811
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: November 21, 2017
    Assignee: Next IT Corporation
    Inventors: Fred A Brown, Tanya M Miller