Patents by Inventor GIRISH ACHARYA

GIRISH ACHARYA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220310079
    Abstract: A conversational assistant for a conversational engagement platform can contain various modules, including a user-model augmentation module, a dialogue management module, and a user-state analysis input/output module. The dialogue management module receives metrics tied to a user from the other modules to understand a current topic and the user's emotions regarding that topic from the user-state analysis input/output module, and then adapts its dialogue to the user based on dialogue rules that factor in these different metrics. The dialogue rules also factor in both i) a duration of a conversational engagement with the user and ii) an attempt to maintain a positive experience for the user with the conversational engagement. A flexible ontology relationship representation about the user is built and stores learned metrics about the user over time with each conversational engagement; in combination with the dialogue rules, it drives the conversations with the user.
    Type: Application
    Filed: June 15, 2020
    Publication date: September 29, 2022
    Inventors: Edgar T. Kalns, Dimitra Vergyri, Girish Acharya, Andreas Kathol, Leonor Almada, Hyong-Gyun Kim, Nikoletta Baslou, Michael Wessel, Aaron Spaulding, Roland Heusser, James F. Carpenter, Min Yin
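    A minimal Python sketch of the dialogue-rule idea in the abstract above: weigh the user's learned emotion toward each topic against the running engagement duration when picking the next topic. The class, weights, and example topics are invented for illustration, not taken from the patent.

      # Hypothetical sketch: dialogue rules factoring in both the user's
      # emotion toward each topic and the engagement duration so far.
      from dataclasses import dataclass

      @dataclass
      class TopicState:
          name: str
          sentiment: float       # learned user emotion toward the topic, -1..1
          times_discussed: int   # learned metric stored in the user ontology

      def select_next_topic(topics, elapsed_minutes, max_minutes=20.0):
          """Pick the topic that best maintains a positive experience.

          As the conversation nears its duration budget, the rule shifts
          from exploring fresh topics to topics the user feels good about.
          """
          urgency = min(elapsed_minutes / max_minutes, 1.0)
          def score(topic):
              novelty = 1.0 / (1 + topic.times_discussed)
              return (1 - urgency) * novelty + urgency * topic.sentiment
          return max(topics, key=score)

      topics = [TopicState("family photos", 0.8, 5),
                TopicState("medication schedule", -0.2, 1),
                TopicState("gardening", 0.5, 0)]
      print(select_next_topic(topics, elapsed_minutes=18).name)  # family photos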
  • Patent number: 11397462
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter both the information retrieval and the correlation of the visual features with the stored knowledge.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: July 26, 2022
    Assignee: SRI International
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
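    A small Python sketch of how an analyzed user interaction, here a point gesture, might filter which scene features are correlated against stored knowledge; the feature format and radius are assumptions for illustration, not the patent's method.

      # Hypothetical sketch: keep only detected scene features near the
      # user's gesture, so the knowledge lookup runs on the region the
      # user is asking about.
      import math

      def filter_features_by_gesture(features, gesture_xy, radius_px=80):
          gx, gy = gesture_xy
          return [f for f in features
                  if math.hypot(f["x"] - gx, f["y"] - gy) <= radius_px]

      features = [{"label": "valve", "x": 100, "y": 120},
                  {"label": "gauge", "x": 400, "y": 300}]
      relevant = filter_features_by_gesture(features, gesture_xy=(110, 130))
      print([f["label"] for f in relevant])  # -> ['valve']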
  • Patent number: 11279279
    Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness of the driver of the vehicle. The driver drowsiness module evaluates drowsiness of the driver based on observed body language and facial analysis of the driver. The driver drowsiness module is configured to analyze live multi-modal sensor inputs against at least one of i) a trained artificial intelligence model and ii) a rules-based model while the driver is driving the vehicle, to produce an output comprising a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to return the driver to a state at or above the designated drowsiness level.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: March 22, 2022
    Assignees: SRI International, Toyota Motor Corporation
    Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
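    A rough Python sketch of the two-model evaluation described above: a trained model's score is blended with a hand-written rules-based score into a drowsiness-level estimation. The cue names, thresholds, and fusion weight are assumptions for illustration.

      def rule_based_score(blink_rate_hz, head_nods_per_min):
          # Hand-written rules over observed body language and facial cues.
          score = 0.0
          if blink_rate_hz < 0.1:        # long eye closures
              score += 0.5
          if head_nods_per_min > 3:      # repeated head nodding
              score += 0.5
          return min(score, 1.0)

      def estimate_drowsiness(model_score, blink_rate_hz, head_nods_per_min,
                              model_weight=0.7):
          """Blend the trained model's output with the rules-based model
          (0 = fully alert, 1 = asleep)."""
          rules = rule_based_score(blink_rate_hz, head_nods_per_min)
          return model_weight * model_score + (1 - model_weight) * rules

      level = estimate_drowsiness(model_score=0.6, blink_rate_hz=0.05,
                                  head_nods_per_min=4)
      if level > 0.5:
          print(f"drowsiness {level:.2f}: trigger positive assistance")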
  • Publication number: 20210192972
    Abstract: This disclosure describes machine learning techniques for capturing human knowledge for performing a task. In one example, a video device obtains video data of a first user performing the task and one or more sensors generate sensor data during performance of the task. An audio device obtains audio data describing performance of the task. A computation engine applies a machine learning system to correlate the video data to the audio data and sensor data to identify portions of the video, sensor, and audio data that depict a same step of a plurality of steps for performing the task. The machine learning system further processes the correlated data to update a domain model defining performance of the task. A training unit applies the domain model to generate training information for performing the task. An output device outputs the training information for use in training a second user to perform the task.
    Type: Application
    Filed: December 21, 2020
    Publication date: June 24, 2021
    Inventors: Girish Acharya, Louise Yarnall, Anirban Roy, Michael Wessel, Yi Yao, John J. Byrnes, Dayne Freitag, Zachary Weiler, Paul Kalmar
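    An illustrative Python sketch of one simple way to correlate time-stamped video, audio, and sensor observations into shared task steps, here by bucketing events into time windows; the event format and window size are assumptions, not the patent's machine learning method.

      from collections import defaultdict

      def correlate_streams(events, step_window_s=10.0):
          """Group (timestamp, modality, payload) events into task steps,
          treating events in the same time window as the same step."""
          steps = defaultdict(list)
          for ts, modality, payload in sorted(events):
              steps[int(ts // step_window_s)].append((modality, payload))
          return [steps[k] for k in sorted(steps)]

      events = [(1.2, "video", "hands pick up wrench"),
                (2.0, "audio", "now loosen the bolt"),
                (3.5, "sensor", "torque spike"),
                (14.0, "video", "part removed")]
      for i, step in enumerate(correlate_streams(events)):
          print(f"step {i}: {step}")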
  • Publication number: 20210129748
    Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness of the driver of the vehicle. The driver drowsiness module evaluates drowsiness of the driver based on observed body language and facial analysis of the driver. The driver drowsiness module is configured to analyze live multi-modal sensor inputs against at least one of i) a trained artificial intelligence model and ii) a rules-based model while the driver is driving the vehicle, to produce an output comprising a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to return the driver to a state at or above the designated drowsiness level.
    Type: Application
    Filed: December 19, 2017
    Publication date: May 6, 2021
    Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
  • Patent number: 10984027
    Abstract: Disclosed techniques can generate content object summaries. Content of a content object can be parsed into a set of word groups. For each word group, at least one topic to which the word group pertains can be identified, and at least one weight corresponding to the topic(s) can be determined via a user model. For each word group, a score can be determined based on the weight(s). A subset of the set of word groups can be selected based on those scores. A summary of the content object can be generated that includes the subset but does not include the other word groups in the set that fall outside the subset. At least part of the summary of the content object can be output.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: April 20, 2021
    Assignee: SRI International
    Inventors: Girish Acharya, John Niekrasz, John Byrnes, Chih-Hung Yeh
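    A compact Python sketch of the scoring-and-selection flow in the abstract above: each word group is scored by summing the user model's weights for its topics, and the top-scoring groups form the summary. The example topics and weights are invented.

      def summarize(word_groups, user_weights, keep=2):
          """word_groups: list of (text, topics); user_weights: topic -> weight."""
          indexed = list(enumerate(word_groups))
          indexed.sort(key=lambda pair: sum(user_weights.get(t, 0.0)
                                            for t in pair[1][1]),
                       reverse=True)
          top = sorted(indexed[:keep])  # restore original document order
          return " ".join(text for _, (text, topics) in top)

      groups = [("Quarterly revenue rose 8%.", ["finance"]),
                ("The cafeteria menu changed.", ["facilities"]),
                ("Headcount grew in the ML team.", ["hiring", "ml"])]
      weights = {"finance": 0.9, "ml": 0.8, "hiring": 0.3, "facilities": 0.1}
      print(summarize(groups, weights))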
  • Patent number: 10977452
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 13, 2021
    Assignee: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
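    A condensed Python sketch of the pipeline this abstract describes, ASR to machine translation to natural language processing, with each stage's confidence gating what happens next; the stand-in engines and the threshold are invented for the demo.

      def understand(audio, asr, mt, nlu, min_conf=0.6):
          text_l1, asr_conf = asr(audio)     # first natural language
          if asr_conf < min_conf:
              return {"action": "ask_user_to_repeat"}
          text_l2, mt_conf = mt(text_l1)     # second natural language
          if mt_conf < min_conf:
              return {"action": "confirm_translation", "text": text_l2}
          return nlu(text_l2)                # computer-based representation

      # Toy stand-in engines, just for the demo.
      asr = lambda audio: ("¿Dónde está la estación?", 0.9)
      mt = lambda text: ("Where is the station?", 0.8)
      nlu = lambda text: {"intent": "find_place", "object": "station"}
      print(understand(b"...", asr, mt, nlu))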
  • Publication number: 20210081056
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
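    A toy Python sketch of the two determinations named in the abstract above: picking a current intent from semantic information plus a context-specific framework, and a current input state from a simple behavioral model over prior turns. All structures here are invented for illustration.

      def determine_intent(semantics, framework):
          """framework maps trigger words to intents valid in this context."""
          for token in semantics["speech_tokens"]:
              if token in framework:
                  return framework[token]
          return "unknown"

      def determine_input_state(semantics, history):
          """Crude behavioral model: flag frustration when negative affect
          recurs across recent turns."""
          recent = history[-2:] + [semantics["affect"]]
          return "frustrated" if recent.count("negative") >= 2 else "neutral"

      banking_framework = {"balance": "check_balance", "transfer": "make_transfer"}
      semantics = {"speech_tokens": ["my", "balance", "please"],
                   "affect": "negative"}          # e.g., from tone of voice
      print(determine_intent(semantics, banking_framework))   # check_balance
      print(determine_input_state(semantics, ["negative", "neutral"]))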
  • Patent number: 10884503
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: January 5, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 10824310
    Abstract: A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real world view of the object, display virtual elements on the external representation of the object and/or display virtual elements on the real world view of the object, to provide virtual personal assistance in a multi-step activity or another activity that involves the observation or handling of an object and a reference document.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: November 3, 2020
    Assignee: SRI International
    Inventors: Girish Acharya, Rakesh Kumar, Supun Samarasekera, Louise Yarnall, Michael John Wolverton, Zhiwei Zhu, Ryan Villamil, Vlad Branzoi, James F. Carpenter, Glenn A. Murray
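    For planar content, correlating an external representation with the real-world view amounts to registering the two and projecting points through a homography. A small numpy sketch under that assumption; the matrix values are made up, and a real system would estimate them by matching features between the document and the video frame.

      import numpy as np

      H = np.array([[1.1, 0.02, 30.0],     # assumed document->frame homography
                    [0.01, 1.05, 12.0],
                    [0.0001, 0.0, 1.0]])

      def project(point_xy, H):
          """Map a 2D point through a 3x3 homography (homogeneous coords)."""
          x, y, w = H @ np.array([point_xy[0], point_xy[1], 1.0])
          return (x / w, y / w)

      # A virtual callout anchored at (200, 150) on the reference document
      # lands here in the live camera view:
      print(project((200, 150), H))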
  • Publication number: 20200341114
    Abstract: An identification system includes a radar sensor configured to generate a time-domain or frequency-domain signal representative of electromagnetic waves reflected from one or more objects within a three-dimensional space over a period of time and a computation engine executing on one or more processors. The computation engine is configured to process the time-domain or frequency-domain signal to generate range and velocity data indicating motion by a living subject within the three-dimensional space. The computation engine is further configured to identify, based at least on the range and velocity data indicating the motion by the living subject, the living subject and output an indication of an identity of the living subject.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 29, 2020
    Applicant: SRI International
    Inventors: Girish Acharya, Douglas Bercow, John Brian Burns, Bradley J. Clymer, Aaron J. Heller, Jeffrey Lubin, Bhaskar Ramamurthy, David Watters, Aravind Sundaresan
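    The range and velocity data the abstract refers to are conventionally read off a range-Doppler map built with two FFTs. A minimal numpy sketch using a synthetic one-reflector chirp block; the waveform, sizes, and bin values are invented for the demo.

      import numpy as np

      n_chirps, n_samples = 64, 128
      t = np.arange(n_samples)
      # Synthetic beat signal: one reflector at a fixed range bin, with a
      # chirp-to-chirp Doppler phase shift (an assumption for the demo).
      chirps = np.array([np.exp(2j * np.pi * (10 * t / n_samples + 0.05 * k))
                         for k in range(n_chirps)])

      range_fft = np.fft.fft(chirps, axis=1)                # fast time -> range
      rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
      doppler_bin, range_bin = np.unravel_index(np.abs(rd_map).argmax(),
                                                rd_map.shape)
      print(f"strongest return: range bin {range_bin}, "
            f"Doppler bin {doppler_bin - n_chirps // 2}")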
  • Patent number: 10755713
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in a computer processing system. A set of run-time specifications, comprising one or more models specific to a domain, is provided to the generic language understanding module and the generic task reasoning module. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks in accordance with the intention of the user.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: August 25, 2020
    Assignee: SRI International
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
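    A short Python sketch of the split the abstract describes: a generic language-understanding module that stays domain-neutral until a run-time specification loads domain-specific models. The flight-booking content is an invented example domain.

      class GenericLanguageUnderstanding:
          def __init__(self):
              self.domain_model = {}

          def load_runtime_spec(self, spec):
              # The module itself stays generic; only the spec changes per domain.
              self.domain_model = spec

          def intention(self, utterance):
              for phrase, task in self.domain_model.items():
                  if phrase in utterance.lower():
                      return task
              return None

      nlu = GenericLanguageUnderstanding()
      nlu.load_runtime_spec({"book a flight": "flight_booking",
                             "cancel my trip": "trip_cancellation"})
      print(nlu.intention("Please book a flight to Boston"))  # flight_booking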
  • Patent number: 10726846
    Abstract: An electronic device for providing health information or assistance includes an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof, a communication module configured to send information related to the at least one user and his/her environment to a remote device, including the sound signals, non-verbal signals, and verbal signals, the remote device being configured to analyze a condition of the at least one user and communicate condition signals to the electronic device, a processing module configured to receive the condition signals and to cause the electronic device to engage in a passive monitoring mode or an active engagement and monitoring mode, the active engagement and monitoring mode including, but not limited to, verbal communication with the at least one user, and an output configured to engage the at least one user in verbal communication.
    Type: Grant
    Filed: June 3, 2017
    Date of Patent: July 28, 2020
    Assignee: SRI International
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
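    A minimal Python sketch of the mode switching the abstract describes, with the device moving between passive monitoring and active engagement based on condition signals from the remote device. The signal names and escalation rule are assumptions.

      class HealthAssistant:
          def __init__(self):
              self.mode = "passive"

          def on_condition_signal(self, signal):
              if signal in {"distress_detected", "no_response", "abnormal_vitals"}:
                  self.mode = "active"
                  return "Are you feeling alright? I can call someone for you."
              self.mode = "passive"
              return None  # keep listening silently

      assistant = HealthAssistant()
      print(assistant.on_condition_signal("abnormal_vitals"))
      print(assistant.mode)  # -> active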
  • Patent number: 10573037
    Abstract: A method and apparatus for training and guiding users, comprising: generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene; correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of a current activity of the user; reasoning, based on the task understanding and the user's current state, a next step for advancing the user towards completing one of the one or more goals of the task understanding; and overlaying the scene with an augmented reality view comprising one or more visual and audio representations of the next step to the user.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: February 25, 2020
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil
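    A toy Python sketch of the reasoning step in the abstract above: given a task understanding from the knowledge base and the steps already observed as complete, pick the next step to render in the AR overlay. The task content is invented.

      TASK_KB = {"replace_filter": ["remove cover", "pull old filter",
                                    "insert new filter", "replace cover"]}

      def next_step(task, completed_steps):
          """Return the first goal step the user has not yet completed."""
          for step in TASK_KB[task]:
              if step not in completed_steps:
                  return step
          return None  # task complete

      observed = {"remove cover", "pull old filter"}  # from video/audio analysis
      step = next_step("replace_filter", observed)
      if step:
          print(f"AR overlay: highlight the parts and say '{step}'")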
  • Publication number: 20190356844
    Abstract: Embodiments of the disclosed technologies include a method of capturing, using a mobile device, a best-focused image of a skin surface of a subject, the method including: setting a camera of the mobile device to a fixed focal length; capturing, using the camera, a current image of a plurality of images of the skin surface, the plurality of images having a sequence and including a first previous image captured, using the camera, previously to the current image and a second previous image captured, using the camera, previously to the first previous image; producing a modified image from the current image; transforming the modified image, using a Laplacian pyramid, to produce a plurality of first luminance values from the modified image and a plurality of second luminance values from the plurality of first luminance values; averaging a plurality of first squared values, each including a square of a corresponding first luminance value of the plurality of first luminance values, to produce a first energy value; av
    Type: Application
    Filed: August 2, 2019
    Publication date: November 21, 2019
    Inventors: David Chao Zhang, John Benjamin Southall, Michael Anthony Isnardi, Michael Raymond Piacentino, David Christopher Berends, Girish Acharya, Douglas A. Bercow, Aaron Spaulding, Sek Chai
  • Patent number: 10477095
    Abstract: Device logic in a mobile device configures a processor to capture a series of images, such as a video, using a consumer-grade camera, and to analyze the images to determine the best-focused image, of the series of images, that captures a region of interest. The images may be of a textured surface, such as facial skin of a mobile device user. The processor sets a focal length of the camera to a fixed position for collecting the images. The processor may guide the user to position the mobile device for capturing the images, using audible cues. For each image, the processor crops the image to the region of interest, extracts luminance information, and determines one or more energy levels of the luminance via a Laplacian pyramid. The energy levels may be filtered, and then are compared to energy levels of the other images to determine the best-focused image.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: November 12, 2019
    Assignee: SRI International
    Inventors: David Chao Zhang, John Benjamin Southall, Michael Anthony Isnardi, Michael Raymond Piacentino, David Christopher Berends, Girish Acharya, Douglas A. Bercow, Aaron Spaulding, Sek Chai
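    A numpy sketch of the focus measure behind both of the two preceding entries: per-level Laplacian-pyramid energy, with the best-focused frame maximizing the energy. This uses a crude decimate/repeat pyramid rather than a proper Gaussian-filtered one, and the test frames are synthetic.

      import numpy as np

      def laplacian_energy(img, levels=2):
          """Average squared Laplacian-level luminance per pyramid level."""
          energies = []
          for _ in range(levels):
              small = img[::2, ::2]                          # crude downsample
              up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
              lap = img - up[:img.shape[0], :img.shape[1]]   # detail layer
              energies.append(float(np.mean(lap ** 2)))      # squares, averaged
              img = small
          return energies

      def best_focused(frames):
          # Rank frames by finest-level energy; highest wins.
          return max(range(len(frames)),
                     key=lambda i: laplacian_energy(frames[i])[0])

      rng = np.random.default_rng(0)
      sharp = rng.normal(size=(64, 64))
      blurry = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
      print(best_focused([blurry, sharp]))  # -> 1, the sharper frame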
  • Publication number: 20190332680
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Application
    Filed: July 11, 2019
    Publication date: October 31, 2019
    Applicant: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
  • Patent number: 10402501
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: September 3, 2019
    Assignee: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
  • Publication number: 20190130912
    Abstract: A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in a computer processing system. A set of run-time specifications, comprising one or more models specific to a domain, is provided to the generic language understanding module and the generic task reasoning module. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks in accordance with the intention of the user.
    Type: Application
    Filed: December 21, 2018
    Publication date: May 2, 2019
    Inventors: Osher Yadgar, Neil Yorke-Smith, Bart Peintner, Gokhan Tur, Necip Fazil Ayan, Michael J. Wolverton, Girish Acharya, Venkatarama Satyanarayana Parimi, William S. Mark, Wen Wang, Andreas Kathol, Regis Vincent, Horacio E. Franco
  • Publication number: 20190114298
    Abstract: Disclosed techniques can generate content object summaries. Content of a content object can be parsed into a set of word groups. For each word group, at least one topic to which the word group pertains can be identified, and at least one weight corresponding to the topic(s) can be determined via a user model. For each word group, a score can be determined based on the weight(s). A subset of the set of word groups can be selected based on those scores. A summary of the content object can be generated that includes the subset but does not include the other word groups in the set that fall outside the subset. At least part of the summary of the content object can be output.
    Type: Application
    Filed: November 11, 2016
    Publication date: April 18, 2019
    Inventors: Girish Acharya, John Niekrasz, John Byrnes, Chih-Hung Yeh