Patents by Inventor Cory Albright

Cory Albright has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11210830
    Abstract: A system and method automatically associates an image and passage from a text. In the system and method, a user can choose or supply an image and the system and/or method will choose a limited selection of relevant word passages for the image from a relatively large volume of potential passages. The system and method utilize a computer system wherein a concept generator and a passage generator process the content of the image so as to assign words describing the content, then weight those descriptive words (tags) and assign passages based on the tags and weighting. The passages can be filtered so as to remove inappropriate passages.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: December 28, 2021
    Assignee: LIFE COVENANT CHURCH, INC.
    Inventors: Robert L. Gruenewald, Terry D. Storch, Matthew Sanders, Cory Albright, Scott Bouma
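
The tag-weighting and passage-selection flow described in the abstract above can be sketched roughly as follows. All function names, the weighting scheme, and the scoring rule are illustrative assumptions for this listing, not the patented implementation:

```python
# Illustrative sketch: weight descriptive tags for an image, then rank
# candidate text passages by how strongly they match the weighted tags.

def weight_tags(tags):
    """Assign a weight to each tag, here simply by confidence rank."""
    n = len(tags)
    return {tag: (n - i) / n for i, tag in enumerate(tags)}

def select_passages(weighted_tags, passages, limit=3):
    """Score each candidate passage by the summed weights of the tags it
    mentions, then keep at most `limit` passages with a positive score."""
    def score(passage):
        text = passage.lower()
        return sum(w for tag, w in weighted_tags.items() if tag in text)
    ranked = sorted(passages, key=score, reverse=True)
    return [p for p in ranked[:limit] if score(p) > 0]

# Example: tags that a (hypothetical) concept generator produced for an image.
tags = weight_tags(["mountain", "sunrise", "river"])
passages = [
    "The sunrise broke over the mountain.",
    "A quiet evening in the city.",
    "The river ran down from the mountain.",
]
print(select_passages(tags, passages, limit=2))
```

A filtering step, as the abstract notes, could then drop inappropriate passages before display.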
  • Publication number: 20210081056
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
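
The pipeline in the abstract above (sensory input, semantic information, context-specific framework, current intent, current input state informed by behavioral models) can be sketched minimally as below. Every class, field, and rule here is an assumption for illustration, not SRI's implementation:

```python
# Minimal sketch of the described pipeline: multimodal input -> semantics ->
# context framework -> intent, plus an input state drawn from a behavioral
# model (here, a simple history of prior semantic interpretations).
from dataclasses import dataclass, field

@dataclass
class SensoryInput:
    speech_text: str   # e.g. output of a speech recognizer
    gesture: str       # e.g. output of a vision pipeline

@dataclass
class VirtualAssistant:
    history: list = field(default_factory=list)  # behavioral model: past semantics

    def semantic_info(self, inp: SensoryInput) -> dict:
        return {"words": inp.speech_text.lower().split(), "gesture": inp.gesture}

    def context_framework(self, semantics: dict) -> str:
        # Choose a domain-specific framework from the semantic content.
        return "banking" if "account" in semantics["words"] else "general"

    def current_intent(self, semantics: dict, framework: str) -> str:
        if framework == "banking" and "balance" in semantics["words"]:
            return "check_balance"
        return "unknown"

    def current_input_state(self, semantics: dict) -> str:
        # Combine current semantics with previously provided interpretations.
        state = "repeating" if semantics in self.history else "new"
        self.history.append(semantics)
        return state

assistant = VirtualAssistant()
sem = assistant.semantic_info(SensoryInput("What is my account balance", "point"))
fw = assistant.context_framework(sem)
print(assistant.current_intent(sem, fw), assistant.current_input_state(sem))
```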
  • Patent number: 10884503
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: January 5, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 10706873
    Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: July 7, 2020
    Assignee: SRI International
    Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
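
The idea in the abstract above, analyzing an audio input and emitting a speaker-state prediction, can be sketched as feature extraction followed by a classifier. The features, the threshold rule standing in for a trained model, and the state labels are all assumptions for illustration, not the patented method:

```python
# Rough sketch: per-utterance audio features -> speaker-state prediction.

def audio_features(samples):
    """Crude features over raw samples: mean energy and zero-crossing rate."""
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return energy, zcr

def predict_speaker_state(samples, energy_threshold=0.25):
    """Toy rule in place of a trained model: high energy -> 'stressed'."""
    energy, _zcr = audio_features(samples)
    return "stressed" if energy > energy_threshold else "calm"

quiet = [0.1, -0.1, 0.05, -0.05] * 10
loud = [0.9, -0.8, 0.95, -0.85] * 10
print(predict_speaker_state(quiet), predict_speaker_state(loud))
```

As the abstract notes, the set of predicted states could be selected and customized per application, e.g. different labels for a call-center tool than for a driving-safety monitor.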
  • Publication number: 20200111244
    Abstract: A system and method automatically associates an image and passage from a text. In the system and method, a user can choose or supply an image and the system and/or method will choose a limited selection of relevant word passages for the image from a relatively large volume of potential passages. The system and method utilize a computer system wherein a concept generator and a passage generator process the content of the image so as to assign words describing the content, then weight those descriptive words (tags) and assign passages based on the tags and weighting. The passages can be filtered so as to remove inappropriate passages.
    Type: Application
    Filed: August 28, 2019
    Publication date: April 9, 2020
    Inventors: Robert L. Gruenewald, Terry D. Storch, Matthew Sanders, Cory Albright, Scott Bouma
  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Publication number: 20170084295
    Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
    Type: Application
    Filed: June 10, 2016
    Publication date: March 23, 2017
    Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
  • Publication number: 20060106783
    Abstract: A system or method consistent with an embodiment of the present invention is useful in analyzing large volumes of different types of data, such as textual data, numeric data, categorical data, or sequential string data, for use in identifying relationships among the data types or different operations that have been performed on the data. A system or method consistent with the present invention determines and displays the relative content and context of related information and is operative to aid in identifying relationships among disparate data types. Various data types, such as numerical data, protein and DNA sequence data, categorical information, and textual information, such as annotations associated with the numerical data or research papers may be correlated for visual analysis. A variety of user-selectable views may be correlated for user interaction to identify relationships that exist among the different types of data or various operations performed on the data.
    Type: Application
    Filed: November 21, 2005
    Publication date: May 18, 2006
    Inventors: Jeffrey Saffer, Augustin Calapristi, Nancy Miller, Randell Scarberry, Sarah Thurston, Susan Havre, Scott Decker, Deborah Payne, Heidi Sofia, Gregory Thomas, Lisa Stillwell, Guang Chen, Vernon Crow, Cory Albright, Sean Zabriskie, Kevin Groch, Joel Malard, Lucille Nowell
  • Publication number: 20060093222
    Abstract: A system or method consistent with an embodiment of the present invention is useful in analyzing large volumes of different types of data, such as textual data, numeric data, categorical data, or sequential string data, for use in identifying relationships among the data types or different operations that have been performed on the data. A system or method consistent with the present invention determines and displays the relative content and context of related information and is operative to aid in identifying relationships among disparate data types. Various data types, such as numerical data, protein and DNA sequence data, categorical information, and textual information, such as annotations associated with the numerical data or research papers may be correlated for visual analysis. A variety of user-selectable views may be correlated for user interaction to identify relationships that exist among the different types of data or various operations performed on the data.
    Type: Application
    Filed: November 21, 2005
    Publication date: May 4, 2006
    Inventors: Jeffrey Saffer, Gregory Thomas, Lisa Stillwell, Guang Chen, Vernon Crow, Cory Albright, Sean Zabriskie, Kevin Groch, Joel Malard, Lucille Nowell, Augustin Calapristi, Nancy Miller, Randall Scarberry, Sara Thurston, Susan Havre, Scott Decker, Deborah Payne, Heidi Sofia
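
The correlation idea in the two abstracts above, relating numeric data, sequence strings, and textual annotations so that linked views can show related items, can be sketched as a join over a shared identifier. The data, field names, and join scheme below are illustrative assumptions, not the patented system:

```python
# Small sketch: records of different types keyed by a shared identifier,
# grouped so linked views could highlight related items across data types.

numeric = {"gene1": 4.2, "gene2": 0.7}
sequences = {"gene1": "ATGGCC", "gene3": "TTGACA"}
annotations = {"gene1": "upregulated under heat stress", "gene2": "housekeeping"}

def correlate(*datasets):
    """Group every record that shares an identifier across the datasets;
    missing entries become None so gaps stay visible."""
    keys = set().union(*datasets)
    return {k: [d.get(k) for d in datasets] for k in sorted(keys)}

linked = correlate(numeric, sequences, annotations)
for key, views in linked.items():
    print(key, views)
```

Each row of `linked` could back one user-selectable view, with selection in one view highlighting the same identifier in the others.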