Patents by Inventor Cory Albright
Cory Albright has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11210830
Abstract: A system and method automatically associate an image with a passage from a text. A user can choose or supply an image, and the system and/or method will choose a limited selection of relevant word passages for the image from a relatively large volume of potential passages. The system and method utilize a computer system wherein a concept generator and a passage generator process the content of the image so as to assign words describing the content, then weight the descriptive words (tags) and assign passages based on the tags and weighting. The passages can be filtered to remove inappropriate passages.
Type: Grant
Filed: August 28, 2019
Date of Patent: December 28, 2021
Assignee: LIFE COVENANT CHURCH, INC.
Inventors: Robert L. Gruenewald, Terry D. Storch, Matthew Sanders, Cory Albright, Scott Bouma
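The tag-and-weight idea described in this abstract can be illustrated with a minimal sketch: an image is represented by weighted descriptive tags, candidate passages are ranked by the total weight of the tags they contain, and passages containing blocked words are filtered out. All function names, tags, and passages below are hypothetical, not taken from the patent.

```python
def rank_passages(tag_weights, passages, blocked=()):
    """Return passages sorted by weighted tag overlap, best first.

    tag_weights: dict mapping descriptive tag -> weight
    passages:    list of candidate text passages
    blocked:     words whose presence filters a passage out entirely
    """
    def score(passage):
        words = set(passage.lower().split())
        if words & set(blocked):
            return -1.0  # filtered as inappropriate
        # Sum the weights of every tag that appears in the passage.
        return sum(w for tag, w in tag_weights.items() if tag in words)

    ranked = [(score(p), p) for p in passages]
    # Keep only passages with positive relevance, highest score first.
    return [p for s, p in sorted(ranked, key=lambda sp: -sp[0]) if s > 0]


# Hypothetical tags a concept generator might assign to a seaside photo.
tags = {"sunset": 0.9, "ocean": 0.6, "peace": 0.4}
candidates = [
    "the ocean at sunset brings peace",
    "a crowded city street at noon",
    "sunset over the water",
]
top = rank_passages(tags, candidates)
```

The second candidate matches no tags, so it is dropped; the other two are returned in order of total tag weight.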
-
Publication number: 20210081056
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Application
Filed: December 1, 2020
Publication date: March 18, 2021
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
-
Patent number: 10884503
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Grant
Filed: October 24, 2016
Date of Patent: January 5, 2021
Assignee: SRI International
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
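The pipeline this abstract describes — semantic information extracted from multimodal input, combined with a context-specific framework to determine a current intent — can be sketched roughly as follows. The banking framework, cue sets, and scoring rule here are invented stand-ins for illustration; the patent does not specify these details.

```python
def extract_semantics(inputs):
    """Flatten multimodal input (e.g. speech text plus gesture labels)
    into a single set of semantic cues."""
    cues = set()
    for modality, observation in inputs.items():
        cues.update(observation.lower().split())
    return cues


def infer_intent(cues, framework):
    """Score each intent in a context-specific framework by how many of
    its keywords appear among the semantic cues; return the best match."""
    best, best_score = None, 0
    for intent, keywords in framework.items():
        score = len(cues & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best


# Hypothetical context-specific framework for a banking context.
banking_framework = {
    "check_balance": {"balance", "much", "account"},
    "transfer": {"send", "transfer", "move"},
}

# Two types of sensory input, as the abstract requires.
cues = extract_semantics({"speech": "how much is in my account",
                          "gesture": "point_screen"})
intent = infer_intent(cues, banking_framework)
```

A behavioral model, in this sketch, would be a second scoring function over the same cues that tracks interpretations of earlier inputs to produce the "current input state."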
-
Patent number: 10706873
Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
Type: Grant
Filed: June 10, 2016
Date of Patent: July 7, 2020
Assignee: SRI International
Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
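As a minimal sketch of the idea in this abstract: derive features from an audio signal and map them to a speaker-state label. The feature here (per-frame energy) and the two-state threshold rule are simple stand-ins for the trained machine-learning models the patent refers to, which it does not detail.

```python
def frame_energies(samples, frame=4):
    """Mean squared amplitude per fixed-size frame of the signal."""
    return [
        sum(s * s for s in samples[i:i + frame]) / frame
        for i in range(0, len(samples) - frame + 1, frame)
    ]


def predict_state(samples, threshold=0.25):
    """Label an utterance 'aroused' if its average frame energy exceeds
    a threshold, else 'calm' -- a toy stand-in for a learned classifier."""
    energies = frame_energies(samples)
    mean_energy = sum(energies) / len(energies)
    return "aroused" if mean_energy > threshold else "calm"


# Two synthetic waveforms: low-amplitude vs. high-amplitude speech.
quiet = [0.1, -0.1, 0.05, -0.05] * 4
loud = [0.9, -0.8, 0.7, -0.9] * 4
```

The "selected and customized" predictions the abstract mentions would correspond to swapping in different label sets and thresholds (or models) per application.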
-
Publication number: 20200111244
Abstract: A system and method automatically associate an image with a passage from a text. A user can choose or supply an image, and the system and/or method will choose a limited selection of relevant word passages for the image from a relatively large volume of potential passages. The system and method utilize a computer system wherein a concept generator and a passage generator process the content of the image so as to assign words describing the content, then weight the descriptive words (tags) and assign passages based on the tags and weighting. The passages can be filtered to remove inappropriate passages.
Type: Application
Filed: August 28, 2019
Publication date: April 9, 2020
Inventors: Robert L. Gruenewald, Terry D. Storch, Matthew Sanders, Cory Albright, Scott Bouma
-
Publication number: 20170160813
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Application
Filed: October 24, 2016
Publication date: June 8, 2017
Applicant: SRI International
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
-
Publication number: 20170084295
Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
Type: Application
Filed: June 10, 2016
Publication date: March 23, 2017
Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
-
Publication number: 20060106783
Abstract: A system or method consistent with an embodiment of the present invention is useful in analyzing large volumes of different types of data, such as textual data, numeric data, categorical data, or sequential string data, for use in identifying relationships among the data types or different operations that have been performed on the data. A system or method consistent with the present invention determines and displays the relative content and context of related information and is operative to aid in identifying relationships among disparate data types. Various data types, such as numerical data, protein and DNA sequence data, categorical information, and textual information, such as annotations associated with the numerical data or research papers, may be correlated for visual analysis. A variety of user-selectable views may be correlated for user interaction to identify relationships that exist among the different types of data or various operations performed on the data.
Type: Application
Filed: November 21, 2005
Publication date: May 18, 2006
Inventors: Jeffrey Saffer, Augustin Calapristi, Nancy Miller, Randell Scarberry, Sarah Thurston, Susan Havre, Scott Decker, Deborah Payne, Heidi Sofia, Gregory Thomas, Lisa Stillwell, Guang Chen, Vernon Crow, Cory Albright, Sean Zabriskie, Kevin Groch, Joel Malard, Lucille Nowell
-
Publication number: 20060093222
Abstract: A system or method consistent with an embodiment of the present invention is useful in analyzing large volumes of different types of data, such as textual data, numeric data, categorical data, or sequential string data, for use in identifying relationships among the data types or different operations that have been performed on the data. A system or method consistent with the present invention determines and displays the relative content and context of related information and is operative to aid in identifying relationships among disparate data types. Various data types, such as numerical data, protein and DNA sequence data, categorical information, and textual information, such as annotations associated with the numerical data or research papers, may be correlated for visual analysis. A variety of user-selectable views may be correlated for user interaction to identify relationships that exist among the different types of data or various operations performed on the data.
Type: Application
Filed: November 21, 2005
Publication date: May 4, 2006
Inventors: Jeffrey Saffer, Gregory Thomas, Lisa Stillwell, Guang Chen, Vernon Crow, Cory Albright, Sean Zabriskie, Kevin Groch, Joel Malard, Lucille Nowell, Augustin Calapristi, Nancy Miller, Randall Scarberry, Sara Thurston, Susan Havre, Scott Decker, Deborah Payne, Heidi Sofia
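The correlated-views idea in these two abstracts can be sketched briefly: records of different types (numeric, categorical, textual) share an identifier, so a selection made in one view can be projected onto every other view to expose cross-type relationships. The record schema and example views below are hypothetical, not from the filings.

```python
# Three "views" over the same records, keyed by a shared identifier.
numeric_view = {"g1": 3.2, "g2": 0.4, "g3": 7.7}        # e.g. measured values
category_view = {"g1": "kinase", "g2": "kinase", "g3": "transport"}
text_view = {"g1": "annotated in paper A", "g3": "annotated in paper B"}


def linked_selection(selected_ids, *views):
    """Project one selection onto every view via the shared identifiers,
    returning the matching subset of each view."""
    return [{i: view[i] for i in selected_ids if i in view} for view in views]


# Select records in the categorical view, then see the same records
# highlighted in the numeric and textual views.
kinases = [i for i, cat in category_view.items() if cat == "kinase"]
num_sel, text_sel = linked_selection(kinases, numeric_view, text_view)
```

A record with no entry in some view (here, "g2" has no annotation) simply drops out of that view's projection rather than raising an error, which mirrors how sparse annotations behave in linked-view tools.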