Patents by Inventor Girish Acharya

Girish Acharya has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190108841
    Abstract: An electronic device for providing health information or assistance includes an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof. A communication module sends information related to the at least one user and his or her environment, including the sound, non-verbal, and verbal signals, to a remote device, which is configured to analyze a condition of the at least one user and communicate condition signals back to the electronic device. A processing module receives the condition signals and causes the electronic device to engage in either a passive monitoring mode or an active engagement and monitoring mode, the latter including, but not limited to, verbal communication with the at least one user. An output is configured to engage the at least one user in verbal communication.
    Type: Application
    Filed: June 3, 2017
    Publication date: April 11, 2019
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
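
The two-mode architecture in the entry above lends itself to a simple state machine. Below is a minimal Python sketch of the mode-switching logic in the processing module; the class and field names (AssistanceDevice, ConditionSignal, severity, needs_engagement) and the threshold rule are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    PASSIVE_MONITORING = auto()
    ACTIVE_ENGAGEMENT = auto()


@dataclass
class ConditionSignal:
    """Hypothetical condition signal from the remote analysis device."""
    severity: float          # 0.0 (normal) .. 1.0 (urgent); assumed scale
    needs_engagement: bool   # remote device explicitly requests interaction


class AssistanceDevice:
    """Toy model of the local device's processing module."""

    def __init__(self, engagement_threshold: float = 0.5):
        self.mode = Mode.PASSIVE_MONITORING
        self.engagement_threshold = engagement_threshold

    def on_condition_signal(self, signal: ConditionSignal) -> Mode:
        # Escalate to active engagement (e.g. verbal communication with
        # the user) when the remote analysis flags a condition;
        # otherwise keep passively monitoring.
        if signal.needs_engagement or signal.severity >= self.engagement_threshold:
            self.mode = Mode.ACTIVE_ENGAGEMENT
        else:
            self.mode = Mode.PASSIVE_MONITORING
        return self.mode


device = AssistanceDevice()
print(device.on_condition_signal(ConditionSignal(severity=0.8, needs_engagement=False)))
# -> Mode.ACTIVE_ENGAGEMENT
```
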
  • Patent number: 10163440
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: December 25, 2018
    Assignee: SRI International
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
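
The core idea of this patent (and of the related applications repeating the same abstract below) is that the language-understanding and task-reasoning modules stay generic, and domain knowledge arrives only through run-time specifications. A minimal Python sketch of that separation follows; the flight-booking domain, the regex-based intent patterns, and all names (GenericLanguageUnderstanding, FLIGHT_DOMAIN_SPEC, etc.) are hypothetical stand-ins, not the patent's actual models.

```python
import re

# Hypothetical "run-time specification": domain-specific intent patterns
# that configure an otherwise generic language-understanding module.
FLIGHT_DOMAIN_SPEC = {
    "book_flight": re.compile(r"\b(book|reserve)\b.*\bflight\b", re.I),
    "check_status": re.compile(r"\bstatus\b.*\bflight\b", re.I),
}


class GenericLanguageUnderstanding:
    """Generic module: contains no domain knowledge of its own."""

    def __init__(self, domain_spec: dict[str, re.Pattern]):
        self.domain_spec = domain_spec

    def determine_intent(self, utterance: str) -> str | None:
        for intent, pattern in self.domain_spec.items():
            if pattern.search(utterance):
                return intent
        return None


class GenericTaskReasoning:
    """Generic module: maps a determined intent to the next assistance step."""

    def __init__(self, task_plans: dict[str, list[str]]):
        self.task_plans = task_plans

    def next_step(self, intent: str) -> str:
        return self.task_plans.get(intent, ["Ask a clarifying question"])[0]


nlu = GenericLanguageUnderstanding(FLIGHT_DOMAIN_SPEC)
reasoner = GenericTaskReasoning({"book_flight": ["Ask for departure city"]})
intent = nlu.determine_intent("I'd like to book a flight to Boston")
print(intent, "->", reasoner.next_step(intent))  # book_flight -> Ask for departure city
```

The point of the design is that swapping FLIGHT_DOMAIN_SPEC and the task plans for another domain's specification retargets the assistant without changing either generic module.
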
  • Publication number: 20180314689
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Application
    Filed: June 21, 2018
    Publication date: November 1, 2018
    Applicant: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
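
The entry above describes a three-stage pipeline (speech recognition, machine translation, natural language processing) in which each stage reports a confidence value. Here is a hedged sketch of how those per-stage confidences might gate the final interpretation; the stub engines, the multiplicative combination, and the 0.5 threshold are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class ScoredResult:
    text: str
    confidence: float  # each stage reports its own confidence


def recognize_speech(audio: bytes) -> ScoredResult:
    """Stand-in ASR engine: audio in the first language -> text + confidence."""
    return ScoredResult(text="¿dónde está la estación?", confidence=0.91)


def translate(result: ScoredResult, src: str, dst: str) -> ScoredResult:
    """Stand-in MT engine: first-language text -> second-language text."""
    return ScoredResult(text="where is the station?", confidence=0.84)


def understand(result: ScoredResult) -> dict:
    """Stand-in NLU: second-language text -> machine-readable meaning."""
    return {"intent": "find_location", "entity": "station"}


def process_utterance(audio: bytes) -> dict | None:
    asr = recognize_speech(audio)
    mt = translate(asr, src="es", dst="en")
    # Combine per-stage confidences; reject low-confidence interpretations
    # rather than acting on a likely misrecognition or mistranslation.
    if asr.confidence * mt.confidence < 0.5:
        return None
    return understand(mt)


print(process_utterance(b"..."))  # {'intent': 'find_location', 'entity': 'station'}
```
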
  • Publication number: 20180139377
    Abstract: Device logic in a mobile device configures a processor to capture a series of images, such as a video, using a consumer-grade camera, and to analyze the images to determine the best-focused image, of the series of images, that captures a region of interest. The images may be of a textured surface, such as facial skin of a mobile device user. The processor sets a focal length of the camera to a fixed position for collecting the images. The processor may guide the user to position the mobile device for capturing the images, using audible cues. For each image, the processor crops the image to the region of interest, extracts luminance information, and determines one or more energy levels of the luminance via a Laplacian pyramid. The energy levels may be filtered, and then are compared to energy levels of the other images to determine the best-focused image.
    Type: Application
    Filed: January 19, 2016
    Publication date: May 17, 2018
    Inventors: David Chao Zhang, John Benjamin Southall, Michael Anthony Isnardi, Michael Raymond Piacentino, David Christopher Berends, Girish Acharya, Douglas A. Bercow, Aaron Spaulding, Sek Chai
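
A Laplacian pyramid separates an image into band-pass levels, and a well-focused crop concentrates more energy in those bands than a blurred one. A sketch of that focus measure follows, using OpenCV's pyrDown/pyrUp as stand-ins for the pyramid construction; the energy definition (mean squared band value), the ROI handling, and the omission of the patent's filtering of energy levels are simplifications, not the patented method.

```python
import cv2
import numpy as np


def laplacian_energy(gray: np.ndarray, levels: int = 3) -> float:
    """Sum of band-pass energies from a Laplacian pyramid of `gray`."""
    energy = 0.0
    current = gray.astype(np.float64)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        band = current - up                  # Laplacian band at this scale
        energy += float(np.mean(band ** 2))  # sharp detail -> high energy
        current = down
    return energy


def best_focused(frames: list, roi: tuple) -> int:
    """Index of the sharpest frame within the region of interest."""
    x, y, w, h = roi
    scores = []
    for frame in frames:
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        scores.append(laplacian_energy(gray))
    return int(np.argmax(scores))


# Synthetic check: a noisy (sharp) frame vs. a blurred copy of it.
sharp = cv2.cvtColor((np.random.rand(120, 160) * 255).astype(np.uint8),
                     cv2.COLOR_GRAY2BGR)
blurred = cv2.GaussianBlur(sharp, (9, 9), 3)
print(best_focused([blurred, sharp], roi=(20, 20, 80, 80)))  # -> 1
```
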
  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
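
A rough sketch of the two parallel determinations the abstract above names, current intent (from semantic information plus a context-specific framework) and current input state (from a behavioral model over previously provided inputs), follows. The banking framework, the keyword matching, and the repeated-utterance heuristic are all invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class SemanticInput:
    """Semantic information fused from two different input types."""
    speech_text: str  # e.g. from speech recognition
    gesture: str      # e.g. from a gesture classifier


@dataclass
class BehavioralModel:
    """Keeps interpretations of previously provided semantic information."""
    history: list = field(default_factory=list)

    def infer_input_state(self, sem: SemanticInput) -> str:
        # Toy heuristic: a repeated utterance suggests the earlier
        # response did not satisfy the user.
        state = "frustrated" if sem.speech_text in self.history else "neutral"
        self.history.append(sem.speech_text)
        return state


# Hypothetical context-specific framework: the intents meaningful in a
# banking domain, with simple keyword triggers.
BANKING_FRAMEWORK = {"check_balance": "balance", "transfer_funds": "transfer"}


def determine_intent(sem: SemanticInput, framework: dict) -> str:
    for intent, keyword in framework.items():
        if keyword in sem.speech_text.lower():
            return intent
    return "unknown"


model = BehavioralModel()
sem = SemanticInput(speech_text="What's my balance?", gesture="point_at_screen")
print(determine_intent(sem, BANKING_FRAMEWORK), model.infer_input_state(sem))
# -> check_balance neutral
```
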
  • Publication number: 20170116989
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
    Type: Application
    Filed: January 5, 2017
    Publication date: April 27, 2017
    Applicant: SRI International
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
  • Patent number: 9575964
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 21, 2017
    Assignee: SRI International
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
  • Publication number: 20160378861
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge.
    Type: Application
    Filed: October 8, 2015
    Publication date: December 29, 2016
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
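
One way to picture the correlation-and-filtering idea in the entry above: match the labels of detected scene features against a stored knowledge base, and let the user's interaction narrow which matches are surfaced. The sketch below makes heavy simplifying assumptions; KNOWLEDGE_BASE, detected_features, and the keyword filtering are placeholders for the platform's actual retrieval and correlation machinery.

```python
# Hypothetical knowledge base keyed by visual feature label.
KNOWLEDGE_BASE = {
    "radiator": "Part #R-114; bleed valve at upper left.",
    "thermostat": "Set-point range 5-30 °C; see manual p. 12.",
}


def detected_features(frame_id: int) -> list:
    """Stand-in for the vision pipeline's per-frame feature labels."""
    return ["radiator", "window", "thermostat"]


def correlate(frame_id: int, user_query: str) -> dict:
    # Use the user's interaction (here, just a text query) to filter which
    # scene features get annotated with stored knowledge as virtual elements.
    features = detected_features(frame_id)
    return {
        feature: KNOWLEDGE_BASE[feature]
        for feature in features
        if feature in KNOWLEDGE_BASE and feature in user_query.lower()
    }


print(correlate(0, "How do I bleed the radiator?"))
# -> {'radiator': 'Part #R-114; bleed valve at upper left.'}
```
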
  • Publication number: 20150302003
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
    Type: Application
    Filed: June 30, 2015
    Publication date: October 22, 2015
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
  • Patent number: 9082402
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: July 14, 2015
    Assignee: SRI International
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
  • Publication number: 20140310595
    Abstract: A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real world view of the object and display virtual elements on the external representation of the object and/or on the real world view of the object, in order to provide virtual personal assistance in a multi-step activity or another activity that involves observing or handling an object alongside a reference document.
    Type: Application
    Filed: June 24, 2014
    Publication date: October 16, 2014
    Inventors: Girish Acharya, Rakesh Kumar, Supun Samarasekera, Louise Yarnall, Michael John Wolverton, Zhiwei Zhu, Ryan Villamil, Vlad Branzoi, James F. Carpenter, Glenn A. Murray
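
Correlating an external representation (for example, a diagram in a reference document) with the real-world camera view is, at its simplest, a coordinate mapping problem. The sketch below uses a planar homography via OpenCV to carry an annotation from document coordinates into the camera frame; the corner correspondences are made-up values, and the patent itself does not specify this particular technique.

```python
import cv2
import numpy as np

# Corners of a reference diagram (the external representation) and where the
# same object's corners appear in the camera frame (the real world view);
# these coordinates are invented for illustration.
doc_corners = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
cam_corners = np.float32([[52, 80], [410, 95], [395, 330], [40, 310]])

# Homography that carries annotation coordinates from the document into the
# camera view, so a virtual element placed on the diagram can be overlaid on
# the corresponding real-world location (and vice versa via its inverse).
H = cv2.getPerspectiveTransform(doc_corners, cam_corners)

annotation_in_doc = np.float32([[[200, 150]]])  # center of the diagram
annotation_in_cam = cv2.perspectiveTransform(annotation_in_doc, H)
print(annotation_in_cam.round(1))
```
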
  • Publication number: 20140176603
    Abstract: A method and apparatus for training and guiding users. A scene understanding is generated from video and audio input of a scene in which a user performs a task. The scene understanding is correlated with a knowledge base to produce a task understanding, comprising one or more goals, of the user's current activity. Based on the task understanding and the user's current state, a next step is reasoned for advancing the user toward completing one of the goals, and the scene is overlaid with an augmented reality view comprising one or more visual and audio representations of that next step for the user.
    Type: Application
    Filed: December 20, 2012
    Publication date: June 26, 2014
    Applicant: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil
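
The abstract above describes a perceive-reason-overlay loop: understand the scene, correlate it with a knowledge base into a task understanding, then reason the next step to render as augmented reality. A toy Python sketch of that loop follows; the task plan, the understand_scene stub, and the step-completion bookkeeping are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TaskUnderstanding:
    goal: str
    completed_steps: list


# Hypothetical knowledge base: ordered steps for each supported task goal.
TASK_PLANS = {
    "replace_filter": ["power off unit", "open panel", "swap filter", "close panel"],
}


def understand_scene(video_frame, audio_clip) -> str:
    """Stand-in for the perception stage: names the action just observed."""
    return "open panel"


def reason_next_step(task: TaskUnderstanding) -> str | None:
    # Compare the plan against what the user has done so far; the first
    # remaining step is what the AR view should present next.
    plan = TASK_PLANS[task.goal]
    remaining = [step for step in plan if step not in task.completed_steps]
    return remaining[0] if remaining else None  # None: goal reached


task = TaskUnderstanding(goal="replace_filter", completed_steps=["power off unit"])
task.completed_steps.append(understand_scene(None, None))
print("overlay cue:", reason_next_step(task))  # overlay cue: swap filter
```
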