Patents by Inventor Ashwin Dharne

Ashwin Dharne is a named inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Short, hypothetical code sketches illustrating several of the techniques described in the abstracts follow the listing.

  • Publication number: 20230018473
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. Sensor data is received at a device, including an utterance representing a speech of a user engaged in a dialogue with the device. The speech of the user is determined based on the utterance and a response to the user is searched by a local dialogue manager residing on the device against a sub-dialogue tree stored on the device. The response, if identified from the sub-dialogue tree, is rendered to the user in response to the speech. A request is sent to a server for the response, if the response is not available in the sub-dialogue tree.
    Type: Application
    Filed: September 26, 2022
    Publication date: January 19, 2023
    Inventor: Ashwin Dharne
  • Patent number: 11468885
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. A request is received by a server from a device for a response to be directed to a user engaged in a dialogue with the device. The request includes information related to a current state of the dialogue. The response is determined based on a dialogue tree and the information related to the current state of the dialogue. A sub-dialogue tree, which corresponds to a portion of the dialogue tree, is then created based on the response and the dialogue tree and is then used to generate a local dialogue manager for the device. The response, the sub-dialogue tree, and the local dialogue manager are then sent to the device, wherein the local dialogue manager, once deployed on the device, is capable of driving the dialogue with the user based on the sub-dialogue tree on the device.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: October 11, 2022
    Assignee: DMAI, INC.
    Inventor: Ashwin Dharne
  • Patent number: 11455986
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. Sensor data is received at a device, including an utterance representing a speech of a user engaged in a dialogue with the device. The speech of the user is determined based on the utterance and a response to the user is searched by a local dialogue manager residing on the device against a sub-dialogue tree stored on the device. The response, if identified from the sub-dialogue tree, is rendered to the user in response to the speech. A request is sent to a server for the response, if the response is not available in the sub-dialogue tree.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: September 27, 2022
    Assignee: DMAI, INC.
    Inventor: Ashwin Dharne
  • Publication number: 20220101856
    Abstract: The present teaching relates to method, system, medium, and implementations for detecting a source of speech sound in a dialogue. A visual signal acquired from a dialogue scene is first received, where the visual signal captures a person present in the dialogue scene. A human lip associated with the person is detected from the visual signal and tracked to detect whether lip movement is observed. If lip movement is detected, a first candidate source of sound is generated corresponding to an area in the dialogue scene where the lip movement occurred.
    Type: Application
    Filed: December 13, 2021
    Publication date: March 31, 2022
    Inventors: Nishant Shukla, Ashwin Dharne
  • Patent number: 11200902
    Abstract: The present teaching relates to method, system, medium, and implementations for detecting a source of speech sound in a dialogue. A visual signal acquired from a dialogue scene is first received, where the visual signal captures a person present in the dialogue scene. A human lip associated with the person is detected from the visual signal and tracked to detect whether lip movement is observed. If lip movement is detected, a first candidate source of sound is generated corresponding to an area in the dialogue scene where the lip movement occurred.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: December 14, 2021
    Assignee: DMAI, INC.
    Inventors: Nishant Shukla, Ashwin Dharne
  • Patent number: 11017779
    Abstract: The present teaching relates to method, system, medium, and implementations for speech recognition. An audio signal is received that represents a speech of a user engaged in a dialogue. A visual signal is received that captures the user uttering the speech. A first speech recognition result is obtained by performing audio based speech recognition based on the audio signal. Based on the visual signal, lip movement of the user is detected and a second speech recognition result is obtained by performing lip reading based speech recognition. The first and the second speech recognition results are then integrated to generate an integrated speech recognition result.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: May 25, 2021
    Assignee: DMAI, INC.
    Inventors: Nishant Shukla, Ashwin Dharne
  • Publication number: 20190279642
    Abstract: The present teaching relates to method, system, medium, and implementations for speech recognition. An audio signal is received that represents a speech of a user engaged in a dialogue. A visual signal is received that captures the user uttering the speech. A first speech recognition result is obtained by performing audio based speech recognition based on the audio signal. Based on the visual signal, lip movement of the user is detected and a second speech recognition result is obtained by performing lip reading based speech recognition. The first and the second speech recognition results are then integrated to generate an integrated speech recognition result.
    Type: Application
    Filed: February 15, 2019
    Publication date: September 12, 2019
    Inventors: Nishant Shukla, Ashwin Dharne
  • Publication number: 20190251966
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. A request is received at a server from a device for a response to be directed to a user engaged in a dialogue between the user and the device, where the request includes information related to a current state of the dialogue. The response is identified based on a predicted dialogue path and predicted responses in accordance with the information related to the current state of the dialogue, where the predicted dialogue path and the predicted responses have been preemptively generated previously based on a dialogue tree. The response, once identified from the predicted dialogue path and the predicted responses, is sent to the device.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventor: Ashwin Dharne
  • Publication number: 20190251957
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. A request is received from a device for a response to be directed to a user engaged in a dialogue between the user and the device, where the request includes information related to a current state of the dialogue. The response is identified from a dialogue tree based on the information related to the current state of the dialogue. A local predicted dialogue path is preemptively generated based on the response, the dialogue tree, and/or the information related to the current state of the dialogue. Local predicted responses are preemptively generated based on the predicted dialogue path. A local dialogue manager is generated based on the local predicted dialogue path and the local predicted responses. The local predicted dialogue path, the local predicted responses, and the local dialogue manager are then sent to the device.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventor: Ashwin Dharne
  • Publication number: 20190251350
    Abstract: The present teaching relates to method, system, medium, and implementations for determining a type of a scene. Image data acquired by a camera with respect to a scene are received and one or more objects present in the scene are detected therefrom. The detected objects are recognized based on object recognition models. The spatial relationships among the detected objects are then determined based on the image data. The recognized objects and their spatial relationships are then used to infer a type of the scene in accordance with at least one scene context-free grammar model.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventors: Nishant Shukla, Ashwin Dharne
  • Publication number: 20190251970
    Abstract: The present teaching relates to method, system, medium, and implementations for detecting a source of speech sound in a dialogue. A visual signal acquired from a dialogue scene is first received, where the visual signal captures a person present in the dialogue scene. A human lip associated with the person is detected from the visual signal and tracked to detect whether lip movement is observed. If lip movement is detected, a first candidate source of sound is generated corresponding to an area in the dialogue scene where the lip movement occurred.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventors: Nishant Shukla, Ashwin Dharne
  • Publication number: 20190251956
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. Information related to a dialogue is received at a device, where a user is engaged in the dialogue with the device. A local dialogue manager residing on the device searches a response to be directed to the user with respect to predicted responses associated with a predicted dialogue path stored on the device based on the information related to the dialogue. The predicted dialogue path, the predicted responses, and the local dialogue manager are preemptively generated based on a dialogue tree residing on a server. If the response is identified by the local dialogue manager, the response is transmitted to the device. If the response is not identified by the local dialogue manager, the device sends a request to the server for the response.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventor: Ashwin Dharne
  • Publication number: 20190251965
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. A request is received by a server from a device for a response to be directed to a user engaged in a dialogue with the device. The request includes information related to a current state of the dialogue. The response is determined based on a dialogue tree and the information related to the current state of the dialogue. A sub-dialogue tree, which corresponds to a portion of the dialogue tree, is then created based on the response and the dialogue tree and is then used to generate a local dialogue manager for the device. The response, the sub-dialogue tree, and the local dialogue manager are then sent to the device, wherein the local dialogue manager, once deployed on the device, is capable of driving the dialogue with the user based on the sub-dialogue tree on the device.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventor: Ashwin Dharne
  • Publication number: 20190251964
    Abstract: The present teaching relates to method, system, medium, and implementations for managing a user machine dialogue. Sensor data is received at a device, including an utterance representing a speech of a user engaged in a dialogue with the device. The speech of the user is determined based on the utterance and a response to the user is searched by a local dialogue manager residing on the device against a sub-dialogue tree stored on the device. The response, if identified from the sub-dialogue tree, is rendered to the user in response to the speech. A request is sent to a server for the response, if the response is not available in the sub-dialogue tree.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventor: Ashwin Dharne
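
Several of the abstracts above describe the same device-side pattern: a local dialogue manager answers from a cached sub-dialogue tree and only contacts the server when the cached tree has no suitable response (publications 20230018473 and 20190251964, and patent 11455986; the predicted-path variant in publication 20190251956 works analogously). The sketch below is a minimal, hypothetical reconstruction of that flow; the class name, the tree layout, and the request_from_server stub are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch of a device-side dialogue manager that answers from a
# locally cached sub-dialogue tree and falls back to a server when it cannot.
# All names and the tree layout are illustrative assumptions.

def request_from_server(state: str, utterance: str) -> str:
    """Stand-in for the network call the device makes when the cached
    sub-dialogue tree has no response for the current state/utterance."""
    return f"[server response for '{utterance}' in state '{state}']"


class LocalDialogueManager:
    def __init__(self, sub_dialogue_tree: dict):
        # sub_dialogue_tree maps (dialogue state, recognized utterance) -> response
        self.tree = sub_dialogue_tree
        self.state = "start"

    def respond(self, utterance: str) -> str:
        key = (self.state, utterance.lower().strip())
        response = self.tree.get(key)
        if response is None:
            # Not covered locally: ask the server, as the abstracts describe.
            response = request_from_server(self.state, utterance)
        else:
            # Advance the local dialogue state along the cached branch.
            self.state = f"{self.state}/{utterance.lower().strip()}"
        return response


if __name__ == "__main__":
    tree = {
        ("start", "hello"): "Hi! What would you like to practice today?",
        ("start/hello", "math"): "Great, let's start with addition.",
    }
    dm = LocalDialogueManager(tree)
    print(dm.respond("hello"))        # served from the local sub-dialogue tree
    print(dm.respond("math"))         # still local
    print(dm.respond("tell a joke"))  # not in the tree -> simulated server call
```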
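
On the server side (patent 11468885 and publication 20190251965), the response is chosen from the full dialogue tree and a sub-dialogue tree around it is carved out and shipped to the device. A minimal sketch of that step, again assuming a nested-dict tree layout that the patents do not actually publish:

```python
# Hypothetical sketch of the server-side step: pick a response from the full
# dialogue tree, then carve out the sub-tree rooted at that response so the
# device can keep driving the dialogue locally.

def find_response(dialogue_tree: dict, state_path: list) -> dict:
    """Walk the nested dialogue tree along the reported dialogue state."""
    node = dialogue_tree
    for step in state_path:
        node = node["children"][step]
    return node


def carve_sub_tree(node: dict, depth: int = 2) -> dict:
    """Copy the next `depth` levels below the chosen node; this is the
    portion of the dialogue tree the device receives."""
    if depth == 0:
        return {"response": node["response"], "children": {}}
    return {
        "response": node["response"],
        "children": {k: carve_sub_tree(v, depth - 1)
                     for k, v in node.get("children", {}).items()},
    }


if __name__ == "__main__":
    dialogue_tree = {
        "response": "Welcome!",
        "children": {
            "math": {
                "response": "Let's do addition.",
                "children": {
                    "correct": {"response": "Nice work!", "children": {}},
                    "wrong": {"response": "Try again.", "children": {}},
                },
            }
        },
    }
    node = find_response(dialogue_tree, ["math"])
    payload = {"response": node["response"], "sub_tree": carve_sub_tree(node)}
    print(payload["response"])                    # sent to the device immediately
    print(list(payload["sub_tree"]["children"]))  # cached on the device
```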
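
Publications 20190251966 and 20190251957 describe a preemptive variant: rather than carving a sub-tree, the server predicts the likely next turns of the dialogue and pre-generates their responses. Assuming each branch carries a probability (the abstracts do not say how likelihood is estimated), the prediction step might look like the following sketch:

```python
# Hypothetical sketch of preemptive prediction: from the node that produced
# the current response, pick the most likely next branches and pre-generate
# their responses so they can be pushed to the device before the user speaks
# again. Branch probabilities and the one-turn horizon are assumptions.

def predict_dialogue_path(node: dict, top_k: int = 2):
    """node["children"] maps user input -> {"prob": float, "response": str}.
    Return the top_k most probable next turns with their canned responses."""
    ranked = sorted(node.get("children", {}).items(),
                    key=lambda kv: kv[1]["prob"], reverse=True)
    return [(user_input, child["response"]) for user_input, child in ranked[:top_k]]


if __name__ == "__main__":
    current = {
        "response": "What is 2 + 3?",
        "children": {
            "5": {"prob": 0.7, "response": "Correct, well done!"},
            "4": {"prob": 0.2, "response": "Close, try again."},
            "i don't know": {"prob": 0.1, "response": "Let's count together."},
        },
    }
    for user_input, reply in predict_dialogue_path(current):
        print(f"if user says {user_input!r} -> {reply!r}")
```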
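
Patent 11200902 and publications 20220101856 and 20190251970 use lip movement as a visual cue for locating the source of speech sound. The sketch below assumes a mouth region has already been detected and tracked by some upstream face or landmark model, and simply thresholds frame-to-frame pixel change inside that region; the threshold value is arbitrary and the whole pipeline is an illustrative guess, not the patented detector.

```python
# Hypothetical sketch of the lip-movement cue: given a tracked mouth region in
# consecutive video frames, flag the region as a candidate sound source when
# the pixels there change enough between frames. numpy is used only for the
# frame arithmetic.

import numpy as np


def lip_movement_candidate(prev_frame: np.ndarray,
                           curr_frame: np.ndarray,
                           mouth_box: tuple,
                           threshold: float = 8.0):
    """Return the mouth box as a candidate sound-source area if the mean
    absolute pixel change inside it exceeds `threshold`, else None."""
    x, y, w, h = mouth_box
    prev_patch = prev_frame[y:y + h, x:x + w].astype(np.float32)
    curr_patch = curr_frame[y:y + h, x:x + w].astype(np.float32)
    motion = float(np.abs(curr_patch - prev_patch).mean())
    return mouth_box if motion > threshold else None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_a = rng.integers(0, 255, (240, 320), dtype=np.uint8)
    frame_b = frame_a.copy()
    frame_b[100:120, 150:190] = rng.integers(0, 255, (20, 40), dtype=np.uint8)
    print(lip_movement_candidate(frame_a, frame_b, (150, 100, 40, 20)))  # candidate area
    print(lip_movement_candidate(frame_a, frame_a, (150, 100, 40, 20)))  # None (no movement)
```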
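
Patent 11017779 and publication 20190279642 integrate an audio-based recognition result with a lip-reading result. The abstracts do not specify how the integration is performed; the sketch below uses a simple confidence-based word-level vote purely for illustration.

```python
# Hypothetical sketch of combining an audio-based transcript with a
# lip-reading transcript: prefer, word by word, whichever hypothesis carries
# the higher confidence, falling back to the overall more confident
# transcript when the two differ in length.

def integrate(audio_hyp, lip_hyp):
    """Each hypothesis is (list_of_words, list_of_per_word_confidences)."""
    audio_words, audio_conf = audio_hyp
    lip_words, lip_conf = lip_hyp
    if len(audio_words) != len(lip_words):
        return audio_words if sum(audio_conf) >= sum(lip_conf) else lip_words
    return [a if ca >= cl else l
            for a, l, ca, cl in zip(audio_words, lip_words, audio_conf, lip_conf)]


if __name__ == "__main__":
    audio = (["please", "repeat", "the", "word"], [0.9, 0.4, 0.8, 0.9])
    lips = (["please", "read", "the", "word"], [0.6, 0.7, 0.5, 0.6])
    print(" ".join(integrate(audio, lips)))  # -> "please read the word"
```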
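
Publication 20190251350 infers a scene type from recognized objects and their spatial relationships using a scene context-free grammar. The rules below are a toy stand-in (a flat relation-matching table rather than a real grammar and parser), with made-up object names, positions, and distance thresholds.

```python
# Hypothetical sketch of scene-type inference from recognized objects and
# their spatial relationships. The rule set is a simplified stand-in for the
# scene context-free grammar the abstract refers to.

def spatial_relations(objects):
    """objects: dict of name -> (x, y). Emit simple pairwise relations."""
    rels = set()
    names = sorted(objects)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = objects[a]
            bx, by = objects[b]
            if abs(ax - bx) < 1.0 and ay > by:
                rels.add((a, "above", b))
            if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < 2.0:
                rels.add((a, "near", b))
    return rels


# Toy "grammar": a scene type is supported when all of its required
# relations are present among the detected ones.
SCENE_RULES = {
    "office": {("chair", "near", "desk"), ("desk", "near", "monitor")},
    "classroom": {("board", "near", "desk"), ("chair", "near", "desk")},
}


def infer_scene(objects):
    rels = spatial_relations(objects)
    for scene, required in SCENE_RULES.items():
        if required <= rels:
            return scene
    return "unknown"


if __name__ == "__main__":
    detected = {"chair": (0.0, 0.0), "desk": (1.0, 0.5), "monitor": (1.2, 1.5)}
    print(infer_scene(detected))  # -> "office"
```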