Patents by Inventor Girish Acharya

Girish Acharya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190108841
Abstract: An electronic device for providing health information or assistance includes an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof, a communication module configured to send information related to the at least one user and his/her environment to a remote device, including the sound signals, non-verbal signals, and verbal signals, the remote device being configured to analyze a condition of the at least one user and communicate condition signals to the electronic device, a processing module configured to receive the condition signals and to cause the electronic device to engage in a passive monitoring mode or an active engagement and monitoring mode, the active engagement and monitoring mode including, but not limited to, verbal communication with the at least one user, and an output configured to engage the at least one user in verbal communication.
Type: Application
Filed: June 3, 2017
Publication date: April 11, 2019
Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
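The passive/active switch described in this abstract can be pictured as a simple decision over the condition signals returned by the remote analysis. The condition labels and the `choose_mode` helper below are illustrative assumptions for the sketch; the patent only says that condition signals cause the device to enter one of the two modes.

```python
def choose_mode(condition_signal):
    """Map a condition signal from the remote analysis to a device mode.

    The specific condition labels are hypothetical stand-ins -- the patent
    does not enumerate them. "active" means the device engages the user in
    verbal communication; "passive" means it keeps monitoring silently.
    """
    engage_conditions = {"distress_detected", "abnormal_speech", "no_response"}
    if condition_signal in engage_conditions:
        return "active"   # engage the user verbally and keep monitoring
    return "passive"      # monitor without initiating conversation
```

In a fuller sketch the mode would also gate which outputs are enabled, but the two-way decision is the core of the passive-versus-active distinction the abstract describes.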
-
Patent number: 10163440
Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
Type: Grant
Filed: January 5, 2017
Date of Patent: December 25, 2018
Assignee: SRI International
Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
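The key idea in this abstract is separating a generic engine from domain knowledge supplied at run time. A minimal sketch of that separation follows; the keyword-based spec format and the `GenericLanguageUnderstanding` class are assumptions for illustration, since the patent does not define what the domain-specific models look like.

```python
class GenericLanguageUnderstanding:
    """Domain-agnostic intent matcher; all domain knowledge arrives as a spec."""

    def __init__(self, spec):
        # spec: {intent_name: [trigger keywords]} -- a simplified stand-in
        # for the patent's "models specific to a domain"
        self.spec = spec

    def intent(self, utterance):
        """Return the intent whose keywords best overlap the utterance, or None."""
        words = set(utterance.lower().split())
        scored = {name: len(words & set(keys)) for name, keys in self.spec.items()}
        best = max(scored, key=scored.get)
        return best if scored[best] > 0 else None
```

The point of the design is that the same executable serves any domain: loading a travel spec or a banking spec changes the behavior without changing the module itself.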
-
Publication number: 20180314689
Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
Type: Application
Filed: June 21, 2018
Publication date: November 1, 2018
Applicant: SRI International
Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
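The abstract describes a chain of engines, each emitting a confidence value. A minimal sketch of that pipeline follows; the `Hypothesis` type, the stub engines, and the rule of multiplying the two confidences are all assumptions made for the example (the patent does not specify how, or whether, the scores are combined).

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    confidence: float  # in [0, 1]

def pipeline(audio, asr, mt, threshold=0.5):
    """Chain ASR and MT, propagating per-stage confidence scores.

    `asr` and `mt` are stand-ins for real engines; each returns a Hypothesis.
    When the combined confidence falls below `threshold`, the device would
    ask the user to clarify instead of acting on a shaky translation.
    """
    heard = asr(audio)
    translated = mt(heard.text)
    overall = heard.confidence * translated.confidence
    if overall < threshold:
        return Hypothesis("<request clarification>", overall)
    return Hypothesis(translated.text, overall)
```

A natural-language-processing stage mapping the second language to a computer-based language would sit after this, consuming `translated.text` the same way.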
-
Publication number: 20180139377
Abstract: Device logic in a mobile device configures a processor to capture a series of images, such as a video, using a consumer-grade camera, and to analyze the images to determine the best-focused image, of the series of images, that captures a region of interest. The images may be of a textured surface, such as facial skin of a mobile device user. The processor sets a focal length of the camera to a fixed position for collecting the images. The processor may guide the user to position the mobile device for capturing the images, using audible cues. For each image, the processor crops the image to the region of interest, extracts luminance information, and determines one or more energy levels of the luminance via a Laplacian pyramid. The energy levels may be filtered, and then are compared to energy levels of the other images to determine the best-focused image.
Type: Application
Filed: January 19, 2016
Publication date: May 17, 2018
Inventors: David Chao Zhang, John Benjamin Southall, Michael Anthony Isnardi, Michael Raymond Piacentino, David Christopher Berends, Girish Acharya, Douglas A. Bercow, Aaron Spaulding, Sek Chai
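The core measurement here, Laplacian energy of the luminance as a focus score, is concrete enough to sketch. The code below is a simplified illustration, not the patented pipeline: it uses a 3x3 Laplacian filter and naive decimation in place of a proper low-pass-filtered pyramid, and omits the cropping, guidance, and filtering steps.

```python
import numpy as np

def laplacian_energy(img):
    """Mean squared response of a 3x3 Laplacian filter (a simple focus measure)."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(np.mean(lap ** 2))

def pyramid_energies(img, levels=3):
    """Laplacian energy at successive 2x downsamplings (a crude pyramid)."""
    energies = []
    for _ in range(levels):
        energies.append(laplacian_energy(img))
        img = img[::2, ::2]  # naive decimation; a real pyramid low-pass filters first
    return energies

def best_focused(luminance_images):
    """Index of the image with the highest finest-level Laplacian energy."""
    return max(range(len(luminance_images)),
               key=lambda i: pyramid_energies(luminance_images[i])[0])
```

The intuition is that defocus blur suppresses high spatial frequencies, so the sharpest capture of a textured region yields the largest Laplacian response.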
-
Publication number: 20170160813
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Application
Filed: October 24, 2016
Publication date: June 8, 2017
Applicant: SRI International
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
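One way to picture "semantic information plus a context-specific framework" is rule matching over cues extracted from several modalities. The cue names, the rule format, and the `fuse_intent` helper below are illustrative assumptions, not the patent's actual mechanism:

```python
def fuse_intent(semantics, context_rules):
    """Pick the first rule whose required cues are all present.

    semantics: cues extracted from different modalities, e.g.
        {"speech": "turn it up", "gesture": "point", "gaze": "tv"}.
    context_rules: an ordered list of (intent, required cue values),
        playing the role of a context-specific framework -- a living-room
        framework would carry different rules than an in-car one.
    """
    for intent, required in context_rules:
        if all(semantics.get(cue) == value for cue, value in required.items()):
            return intent
    return "unknown"
```

The multimodal point is that neither cue alone resolves the intent: "turn it up" plus a pointing gesture at the TV disambiguates what a speech-only assistant could not.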
-
Publication number: 20170116989
Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
Type: Application
Filed: January 5, 2017
Publication date: April 27, 2017
Applicant: SRI International
Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
-
Patent number: 9575964
Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
Type: Grant
Filed: June 30, 2015
Date of Patent: February 21, 2017
Assignee: SRI International
Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
-
Publication number: 20160378861
Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and correlating of the visual features with the stored knowledge.
Type: Application
Filed: October 8, 2015
Publication date: December 29, 2016
Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
-
Publication number: 20150302003
Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
Type: Application
Filed: June 30, 2015
Publication date: October 22, 2015
Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
-
Patent number: 9082402
Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.
Type: Grant
Filed: December 8, 2011
Date of Patent: July 14, 2015
Assignee: SRI International
Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
-
Publication number: 20140310595
Abstract: A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real world view of the object, display virtual elements on the external representation of the object and/or display virtual elements on the real world view of the object, to provide virtual personal assistance in a multi-step activity or another activity that involves the observation or handling of an object and a reference document.
Type: Application
Filed: June 24, 2014
Publication date: October 16, 2014
Inventors: Girish Acharya, Rakesh Kumar, Supun Samarasekera, Louise Yarnall, Michael John Wolverton, Zhiwei Zhu, Ryan Villamil, Vlad Branzoi, James F. Carpenter, Glenn A. Murray
-
Publication number: 20140176603
Abstract: A method and apparatus for training and guiding users comprising generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene, correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of a current activity of the user, reasoning, based on the task understanding and a user's current state, a next step for advancing the user towards completing one of the one or more goals of the task understanding and overlaying the scene with an augmented reality view comprising one or more visual and audio representation of the next step to the user.
Type: Application
Filed: December 20, 2012
Publication date: June 26, 2014
Applicant: SRI International
Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil
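The reasoning step in this abstract can be pictured as plan following: compare what the scene understanding has confirmed against an ordered task model and surface the first incomplete step for the augmented-reality overlay. The plan format and the `next_step` helper are assumptions made for this sketch, not the patent's knowledge-base representation.

```python
def next_step(task_plan, steps_observed_done):
    """Return the first step in the ordered plan not yet seen as complete.

    task_plan: ordered list of step names (a stand-in for the knowledge
    base's task model). steps_observed_done: the set of steps the scene
    understanding has confirmed. Returns None once the goal is reached,
    at which point the guidance overlay would be cleared.
    """
    for step in task_plan:
        if step not in steps_observed_done:
            return step
    return None
```

A real system would also handle out-of-order and erroneous actions, but the loop of observe, diff against the plan, render the next step captures the guidance cycle the abstract describes.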