Patents by Inventor Fred Brown

Fred Brown has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210019353
    Abstract: Various embodiments are described for searching and retrieving documents based on a natural language input. A computer-implemented natural language processor electronically receives a natural language input phrase from an interface device and attributes a concept to the phrase. The natural language processor searches a database for a set of documents to identify one or more documents associated with the attributed concept to be included in a response to the input phrase. It maintains the attributed concepts during an interactive session and resolves ambiguous input patterns in the input phrase. The natural language processor includes a processor, a memory and/or storage component, and an input/output device.
    Type: Application
    Filed: October 5, 2020
    Publication date: January 21, 2021
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
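The retrieval flow this abstract describes can be sketched in a few lines: attribute a concept to the input phrase, then select documents tagged with that concept. The concept lexicon, document records, and trigger terms below are invented for illustration, not taken from the patent.

```python
# Hypothetical concept lexicon: trigger terms mapped to concept labels.
CONCEPT_LEXICON = {
    "balance": "account_balance",
    "owe": "account_balance",
    "transfer": "funds_transfer",
}

# Hypothetical document store: each document is tagged with concepts.
DOCUMENTS = [
    {"title": "Checking your balance", "concepts": {"account_balance"}},
    {"title": "Wire transfer guide", "concepts": {"funds_transfer"}},
    {"title": "Branch locations", "concepts": {"locations"}},
]

def attribute_concept(phrase):
    """Map a phrase to a concept by scanning for known trigger terms."""
    lowered = phrase.lower()
    for trigger, concept in CONCEPT_LEXICON.items():
        if trigger in lowered:
            return concept
    return None

def search(phrase):
    """Return titles of documents associated with the attributed concept."""
    concept = attribute_concept(phrase)
    if concept is None:
        return []
    return [d["title"] for d in DOCUMENTS if concept in d["concepts"]]

print(search("How much do I owe on my account?"))  # ['Checking your balance']
```

A production system would replace the keyword lexicon with a trained language model, but the shape of the pipeline (phrase, concept, concept-filtered document set) is the same.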
  • Patent number: 10890192
    Abstract: A fan comprising: (a) an impeller; (b) a motor connected to the impeller so that during operation the motor drives the impeller to move a fluid; (c) an instrumentation assembly that includes one or more components for controlling one or more operations of the fan; and (d) an interface in communication with the instrumentation assembly; wherein the interface is configured so that one or more programs, one or more signals, or both can be input into the fan after a final assembly of the fan so that the fan can be configured to perform a desired function at the time of installation into a final product, reconfigured for a different purpose than originally programmed, or both.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: January 12, 2021
    Assignee: MDR ARCOM LLC
    Inventors: Fred A. Brown, Kenneth Hoffman
  • Publication number: 20200401955
    Abstract: Systems and methods are disclosed which allow for changes to be made to an existing order using an automated system. A user may interact with an automated agent using any communication modality that generally emulates an interaction with a human customer service representative. Should the interaction exceed a predefined level of complexity, or meet other criteria, the user may be routed to a human customer service representative.
    Type: Application
    Filed: July 2, 2020
    Publication date: December 24, 2020
    Inventors: Fred A. Brown, Linda Lou Beloberzycky
  • Patent number: 10809876
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: October 20, 2020
    Assignee: Verint Americas Inc.
    Inventors: Fred A. Brown, Tanya M. Miller
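One concrete piece of the team-of-assistants idea is routing a task to the team member configured with the matching functionality. The sketch below assumes a simple skill registry; the assistant names and skills are invented.

```python
# Hypothetical team: each virtual assistant is configured with different skills.
ASSISTANTS = [
    {"name": "TravelBot", "skills": {"book_flight", "book_hotel"}},
    {"name": "FinanceBot", "skills": {"check_balance", "pay_bill"}},
]

def route(task):
    """Pick the first team member whose configured skills cover the task."""
    for assistant in ASSISTANTS:
        if task in assistant["skills"]:
            return assistant["name"]
    return None  # no assistant qualifies; escalate or clarify

print(route("pay_bill"))  # FinanceBot
```

The patent also covers assistants handing tasks to each other; that would amount to `route` being callable from inside an assistant's own task handler.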
  • Patent number: 10795944
    Abstract: User intent may be derived from a previous communication. For example, a text string for user input may be obtained. The text string may include a pronoun. Information from a communication received prior to receipt of the user input may be derived. The information may identify an individual. User intent may be derived from the text string and the information. This may include determining that the pronoun refers to the individual.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: October 6, 2020
    Assignee: Verint Americas Inc.
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
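The core move in this abstract, resolving a pronoun against an individual mentioned in a prior communication, can be sketched as a scan of the conversation history. The message formats, names, and pronoun list below are hypothetical.

```python
PRONOUNS = {"he", "him", "she", "her", "they", "them"}

def last_mentioned_person(history, known_people):
    """Scan earlier messages, newest first, for a known person's name."""
    for message in reversed(history):
        for name in known_people:
            if name in message:
                return name
    return None

def resolve_intent(user_input, history, known_people):
    """Replace a pronoun in the input with the individual it likely refers to."""
    resolved = []
    for token in user_input.split():
        bare = token.strip(".,?!").lower()
        if bare in PRONOUNS:
            person = last_mentioned_person(history, known_people)
            resolved.append(person if person else token)
        else:
            resolved.append(token)
    return " ".join(resolved)

history = ["I spoke with Alice yesterday."]
print(resolve_intent("Call her back.", history, {"Alice", "Bob"}))
# Call Alice back.
```

Real coreference resolution weighs gender, recency, and syntax; this sketch only shows the "derive the referent from a prior communication" step the abstract claims.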
  • Publication number: 20200184276
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Megan Brown, Molly Q. Brown
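The agglomerative clustering these three related filings rely on is a bottom-up merge: start with singleton clusters and repeatedly join the closest pair until the remaining clusters are too dissimilar. A minimal single-linkage sketch on 1-D data (the data and threshold are invented; the patented system applies the same idea to richer records):

```python
def single_linkage_distance(a, b):
    """Distance between clusters = distance between their closest members."""
    return min(abs(x - y) for x in a for y in b)

def agglomerative_cluster(points, threshold):
    clusters = [[p] for p in points]  # start with singleton clusters
    while len(clusters) > 1:
        # find the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: single_linkage_distance(clusters[ij[0]], clusters[ij[1]]),
        )
        if single_linkage_distance(clusters[i], clusters[j]) > threshold:
            break  # remaining clusters are dissimilar; stop merging
        clusters[i].extend(clusters.pop(j))  # merge the closest pair
    return clusters

print(agglomerative_cluster([1.0, 1.2, 5.0, 5.1, 9.0], threshold=1.0))
# [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

The abstract's "user demonstrates criteria for grouping" step would correspond to learning the distance function or threshold from user-labeled examples rather than fixing them up front.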
  • Publication number: 20200184275
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Megan Brown, Molly Q. Brown
  • Publication number: 20200143481
    Abstract: Techniques and architectures for providing notifications regarding events, such as hurricanes, tornados, fires, floods, earthquakes, and so on, are discussed herein. For example, a user interface may be displayed with a map of a geographical area, an event visual representation representing an event, and an impact visual representation indicating an impact area where the event is estimated to impact. A request may be received to notify users associated with the impact area and customized notifications may be sent to users associated with the impact area. The customized notifications may be based on policy data for the users.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 7, 2020
    Inventors: Fred A. Brown, Jason D. Evans, Ann M. Foreyt, Andrew J. Mccall
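The notification flow in this abstract reduces to a geometric test: given an event with an estimated impact circle and users with known coordinates, select the users inside the circle and customize each notification from their policy data. All records below are invented for illustration.

```python
import math

def inside_impact_area(user_loc, center, radius_km):
    """Haversine great-circle distance test against the impact radius."""
    lat1, lon1 = map(math.radians, user_loc)
    lat2, lon2 = map(math.radians, center)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return distance_km <= radius_km

# Hypothetical user records with locations and policy data.
USERS = [
    {"name": "Ada", "loc": (29.76, -95.37), "policy": "flood"},
    {"name": "Ben", "loc": (47.61, -122.33), "policy": "earthquake"},
]

def notify(event_center, radius_km):
    """Build policy-customized notifications for users in the impact area."""
    return [
        f"{u['name']}: your {u['policy']} policy may be affected by a nearby event"
        for u in USERS
        if inside_impact_area(u["loc"], event_center, radius_km)
    ]

print(notify((29.8, -95.4), radius_km=50))
```

Real impact areas are polygons or forecast cones rather than circles, but the select-then-customize structure is the part the abstract claims.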
  • Patent number: 10599953
    Abstract: Data having some similarities and some dissimilarities may be clustered or grouped according to the similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
    Type: Grant
    Filed: August 27, 2014
    Date of Patent: March 24, 2020
    Assignee: Verint Americas Inc.
    Inventors: Fred A. Brown, Tanya M. Miller, Charles C. Wooters, Megan Brown, Molly Q. Brown
  • Publication number: 20200042335
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include contextual interface items that are based on contextual information. The contextual information may relate to a current or previous conversation between a user and a virtual assistant and/or may relate to other types of information, such as a location of a user, an orientation of a device, missing information, and so on. The conversation user interfaces may additionally, or alternatively, control an input mode based on contextual information, such as an inferred input mode of a user or a location of a user. Further, the conversation user interfaces may tag conversation items by saving the conversation items to a tray and/or associating the conversation items with indicators.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Inventors: Fred A. Brown, Tanya M. Miller, Richard Morris
  • Patent number: 10545648
    Abstract: This disclosure describes techniques and architectures for evaluating conversations. In some instances, conversations with users, virtual assistants, and others may be analyzed to identify potential risks within a language model that is employed by the virtual assistants and other entities. The potential risks may be evaluated by administrators, users, systems, and others to identify potential issues with the language model that need to be addressed. This may allow the language model to be improved and enhance user experience with the virtual assistants and others that employ the language model.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: January 28, 2020
    Assignee: Verint Americas Inc.
    Inventors: Ian Beaver, Fred Brown, Casey Gossard
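One simple way to surface the "potential risks within a language model" this abstract describes is to flag conversation turns where the model's match confidence was low, so a reviewer can inspect them. The log format, field names, and threshold below are all invented for illustration.

```python
# Hypothetical conversation log: each turn records the matched intent and
# the model's confidence in that match.
CONVERSATION_LOG = [
    {"user": "Where is my order?", "intent": "order_status", "confidence": 0.94},
    {"user": "My ordr nevr came!!", "intent": "greeting", "confidence": 0.31},
    {"user": "Cancel my account", "intent": "cancel_account", "confidence": 0.88},
]

def flag_risks(log, threshold=0.5):
    """Return turns whose low match confidence suggests a language-model gap."""
    return [turn for turn in log if turn["confidence"] < threshold]

for turn in flag_risks(CONVERSATION_LOG):
    print(f"review: {turn['user']!r} matched {turn['intent']} "
          f"at {turn['confidence']:.2f}")
```

The flagged turn above is exactly the kind the patent targets: a misspelled complaint mismatched to an unrelated intent, pointing at training data the model lacks.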
  • Publication number: 20190355362
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Application
    Filed: August 5, 2019
    Publication date: November 21, 2019
    Inventors: Fred A. Brown, Tanya M. Miller
  • Patent number: 10445115
    Abstract: Conversation user interfaces that are configured for virtual assistant interaction may include contextual interface items that are based on contextual information. The contextual information may relate to a current or previous conversation between a user and a virtual assistant and/or may relate to other types of information, such as a location of a user, an orientation of a device, missing information, and so on. The conversation user interfaces may additionally, or alternatively, control an input mode based on contextual information, such as an inferred input mode of a user or a location of a user. Further, the conversation user interfaces may tag conversation items by saving the conversation items to a tray and/or associating the conversation items with indicators.
    Type: Grant
    Filed: April 18, 2013
    Date of Patent: October 15, 2019
    Assignee: Verint Americas Inc.
    Inventors: Fred A. Brown, Tanya M. Miller, Richard Morris
  • Patent number: 10438610
    Abstract: A virtual assistant may communicate with a user in a natural language that simulates a human. The virtual assistant may be associated with a human-configured knowledge base that simulates human responses. In some instances, a parent response may be provided by the virtual assistant and, thereafter, a child response that is associated with the parent response may be provided.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: October 8, 2019
    Assignee: Verint Americas Inc.
    Inventors: Fred Brown, Tanya M. Miller, Mark Zartler, Molly Q. Brown
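The parent/child response structure this abstract describes can be modeled as a knowledge base that links each primary answer to a follow-up. The entries below are hypothetical.

```python
# Hypothetical human-configured knowledge base: each entry pairs a parent
# response with an associated child (follow-up) response.
KNOWLEDGE_BASE = {
    "reset password": {
        "parent": "You can reset your password from the account settings page.",
        "child": "Would you like me to send a reset link to your email?",
    },
}

def respond(query):
    """Return the parent response followed by its associated child response."""
    entry = KNOWLEDGE_BASE.get(query.lower())
    if entry is None:
        return ["I'm not sure about that. Could you rephrase?"]
    return [entry["parent"], entry["child"]]

for line in respond("Reset password"):
    print(line)
```

Pairing the follow-up with the answer, rather than generating it separately, is what lets a human author keep the exchange sounding natural, which is the simulation the abstract emphasizes.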
  • Patent number: 10379712
    Abstract: A conversation user interface enables users to better understand their interactions with computing devices, particularly when speech input is involved. The conversation user interface conveys a visual representation of a conversation between the computing device, or virtual assistant thereon, and a user. The conversation user interface presents a series of dialog representations that show input from a user (verbal or otherwise) and responses from the device or virtual assistant. Associated with one or more of the dialog representations are one or more graphical elements to convey assumptions made to interpret the user input and derive an associated response. The conversation user interface enables the user to see the assumptions upon which the response was based, and to optionally change the assumption(s). Upon change of an assumption, the conversation GUI is refreshed to present a modified dialog representation of a new response derived from the altered set of assumptions.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: August 13, 2019
    Assignee: Verint Americas Inc.
    Inventors: Fred A. Brown, Eli D. Snavely, Tanya M. Miller, Charles C. Wooters, Bryan Michael Culley
  • Patent number: 10373616
    Abstract: Techniques for interacting with a portion of a content item through a virtual assistant are described herein. The techniques may include identifying a portion of a content item that is relevant to user input and causing an action to be performed related to the portion of the content item. The action may include, for example, displaying the portion of the content item on a smart device in a displayable format that is adapted to a display characteristic of the smart device, performing a task for a user that satisfies the user input, and so on.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: August 6, 2019
    Assignee: Verint Americas Inc.
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20190138190
    Abstract: This disclosure describes techniques and architectures for evaluating conversations. In some instances, conversations with users, virtual assistants, and others may be analyzed to identify potential risks within a language model that is employed by the virtual assistants and other entities. The potential risks may be evaluated by administrators, users, systems, and others to identify potential issues with the language model that need to be addressed. This may allow the language model to be improved and enhance user experience with the virtual assistants and others that employ the language model.
    Type: Application
    Filed: January 8, 2019
    Publication date: May 9, 2019
    Applicant: Verint Americas Inc.
    Inventors: Ian Beaver, Fred Brown, Casey Gossard
  • Publication number: 20190139564
    Abstract: Various embodiments provide a tool, referred to herein as “Active Lab” that can be used to develop, debug, and maintain knowledge bases. These knowledge bases (KBs) can then engage various applications, technology, and communications protocols for the purpose of task automation, real time alerting, system integration, knowledge acquisition, and various forms of peer influence. In at least some embodiments, a KB is used as a virtual assistant that any real person can interact with using their own natural language. The KB can then respond and react however the user wants: answering questions, activating applications, or responding to actions on a web page.
    Type: Application
    Filed: January 7, 2019
    Publication date: May 9, 2019
    Inventors: Tanya M. Miller, Fred Brown, Mark Zartler, Molly Q. Brown
  • Publication number: 20190102064
    Abstract: Techniques and architectures for implementing a team of virtual assistants are described herein. The team may include multiple virtual assistants that are configured with different characteristics, such as different functionality, base language models, levels of training, visual appearances, personalities, and so on. The characteristics of the virtual assistants may be configured by trainers, end-users, and/or a virtual assistant service. The virtual assistants may be presented to end-users in conversation user interfaces to perform different tasks for the users in a conversational manner. The different virtual assistants may adapt to different contexts. The virtual assistants may additionally, or alternatively, interact with each other to carry out tasks for the users, which may be illustrated in conversation user interfaces.
    Type: Application
    Filed: October 1, 2018
    Publication date: April 4, 2019
    Inventors: Fred A. Brown, Tanya M. Miller
  • Publication number: 20190057298
    Abstract: Techniques for mapping actions and objects to tasks may include identifying a task to be performed by a virtual assistant for an action and/or object. The task may be identified based on a task map of the virtual assistant. In some examples, the task may be identified based on contextual information of a user, such as a conversation history, content output history, user preferences, and so on. The techniques may also include customizing a task map for a particular context, such as a particular user, industry, platform, device type, and so on. The customization may include assigning an action, object, and/or variable value to a particular task.
    Type: Application
    Filed: August 20, 2018
    Publication date: February 21, 2019
    Inventors: Fred A. Brown, Tanya M. Miller, Megan Brown, Verlie Thompson
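The task map in this last abstract is essentially a lookup from an action/object pair to a task, with per-context overrides for customization. The actions, objects, task names, and context below are invented for illustration.

```python
# Hypothetical default task map: (action, object) pairs mapped to tasks.
DEFAULT_TASK_MAP = {
    ("check", "balance"): "retrieve_account_balance",
    ("pay", "bill"): "schedule_payment",
}

# Customization: a particular context (user, industry, platform, device type)
# can override individual mappings.
CONTEXT_OVERRIDES = {
    "airline": {("check", "balance"): "retrieve_mileage_balance"},
}

def identify_task(action, obj, context=None):
    """Look up the task for an action/object pair, honoring context overrides."""
    override = CONTEXT_OVERRIDES.get(context, {})
    if (action, obj) in override:
        return override[(action, obj)]
    return DEFAULT_TASK_MAP.get((action, obj), "fallback_clarify")

print(identify_task("check", "balance"))             # retrieve_account_balance
print(identify_task("check", "balance", "airline"))  # retrieve_mileage_balance
```

The abstract's use of conversation history and user preferences would correspond to deriving `context` (and possibly variable values for the task) from session state rather than passing it explicitly.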