Patents by Inventor Vinodth Kumar Mohanam

Vinodth Kumar Mohanam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of the two techniques described in the abstracts appear after the listing.

  • Publication number: 20220215837
    Abstract: This disclosure describes, in part, context-based device arbitration techniques to select a voice-enabled device from multiple voice-enabled devices to provide a response to a command included in a speech utterance of a user. In some examples, the context-driven arbitration techniques may include determining a ranked list of voice-enabled devices that are ranked based on audio signal metric values for audio signals generated by each voice-enabled device, and iteratively moving through the list to determine, based on device states of the voice-enabled devices, whether one of the voice-enabled devices can perform an action responsive to the command. If the voice-enabled devices that detected the speech utterance are unable to perform the action responsive to the command, all other voice-enabled devices associated with an account may be analyzed to determine whether one of the other voice-enabled devices can perform the action responsive to the command in the speech utterance.
    Type: Application
    Filed: March 22, 2022
    Publication date: July 7, 2022
    Inventors: Joseph White, Ravi Kiran Rachakonda, Vinodth Kumar Mohanam, Lalithkumar Rajendran, Deepak Uttam Shah, Maziyar Khorasani, Venkata Snehith Cherukuri
  • Patent number: 11289087
    Abstract: This disclosure describes, in part, context-based device arbitration techniques to select a voice-enabled device from multiple voice-enabled devices to provide a response to a command included in a speech utterance of a user. In some examples, the context-driven arbitration techniques may include determining a ranked list of voice-enabled devices that are ranked based on audio signal metric values for audio signals generated by each voice-enabled device, and iteratively moving through the list to determine, based on device states of the voice-enabled devices, whether one of the voice-enabled devices can perform an action responsive to the command. If the voice-enabled devices that detected the speech utterance are unable to perform the action responsive to the command, all other voice-enabled devices associated with an account may be analyzed to determine whether one of the other voice-enabled devices can perform the action responsive to the command in the speech utterance.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: March 29, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Joseph White, Lalithkumar Rajendran, Ravi Kiran Rachakonda, Venkata Snehith Cherukuri, Deepak Uttam Shah, Maziyar Khorasani, Vinodth Kumar Mohanam
  • Patent number: 11138977
    Abstract: This disclosure describes, in part, techniques for determining device groupings, or clusters, for multiple voice-enabled devices. The device clusters may be determined based on metadata for audio signals (or audio data) generated by each of the multiple voice-enabled devices. For example, a remote system may analyze timestamp data for the audio signals received from the devices, and determine that the devices detected the same voice command of a user based on the timestamp data indicating that the audio signals were received within a threshold period of time from each other. Additionally, the remote system may analyze other metadata of the audio data, such as signal-to-noise ratio (SNR) values, and determine that the SNR values are within a threshold value of each other. The remote system may determine device clusters for the voice-enabled devices of a user based on these, and potentially other, types of metadata of the audio signals.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: October 5, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Venkata Snehith Cherukuri, Joseph White, Vinodth Kumar Mohanam, Rami Habal, Menghan Li
  • Publication number: 20200211554
    Abstract: This disclosure describes, in part, context-based device arbitration techniques to select a voice-enabled device from multiple voice-enabled devices to provide a response to a command included in a speech utterance of a user. In some examples, the context-driven arbitration techniques may include determining a ranked list of voice-enabled devices that are ranked based on audio signal metric values for audio signals generated by each voice-enabled device, and iteratively moving through the list to determine, based on device states of the voice-enabled devices, whether one of the voice-enabled devices can perform an action responsive to the command. If the voice-enabled devices that detected the speech utterance are unable to perform the action responsive to the command, all other voice-enabled devices associated with an account may be analyzed to determine whether one of the other voice-enabled devices can perform the action responsive to the command in the speech utterance.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 2, 2020
    Inventors: Joseph White, Lalithkumar Rajendran, Ravi Kiran Rachakonda, Venkata Snehith Cherukuri, Deepak Uttam Shah, Maziyar Khorasani, Vinodth Kumar Mohanam
  • Patent number: 10685652
    Abstract: This disclosure describes, in part, techniques for determining device groupings, or clusters, for multiple voice-enabled devices. The device clusters may be determined based on metadata for audio signals (or audio data) generated by each of the multiple voice-enabled devices. For example, a remote system may analyze timestamp data for the audio signals received from the devices, and determine that the devices detected the same voice command of a user based on the timestamp data indicating that the audio signals were received within a threshold period of time from each other. Additionally, the remote system may analyze other metadata of the audio data, such as signal-to-noise ratio (SNR) values, and determine that the SNR values are within a threshold value of each other. The remote system may determine device clusters for the voice-enabled devices of a user based on these, and potentially other, types of metadata of the audio signals.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: June 16, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Venkata Snehith Cherukuri, Joseph White, Vinodth Kumar Mohanam, Rami Habal, Menghan Li
  • Patent number: 10546583
    Abstract: This disclosure describes, in part, context-based device arbitration techniques to select a voice-enabled device from multiple voice-enabled devices to provide a response to a command included in a speech utterance of a user. In some examples, the context-driven arbitration techniques may include determining a ranked list of voice-enabled devices that are ranked based on audio signal metric values for audio signals generated by each voice-enabled device, and iteratively moving through the list to determine, based on device states of the voice-enabled devices, whether one of the voice-enabled devices can perform an action responsive to the command. If the voice-enabled devices that detected the speech utterance are unable to perform the action responsive to the command, all other voice-enabled devices associated with an account may be analyzed to determine whether one of the other voice-enabled devices can perform the action responsive to the command in the speech utterance.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: January 28, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Joseph White, Ravi Kiran Rachakonda, Vinodth Kumar Mohanam, Lalithkumar Rajendran, Deepak Uttam Shah, Maziyar Khorasani, Venkata Snehith Cherukuri
  • Publication number: 20190066670
    Abstract: This disclosure describes, in part, context-based device arbitration techniques to select a voice-enabled device from multiple voice-enabled devices to provide a response to a command included in a speech utterance of a user. In some examples, the context-driven arbitration techniques may include determining a ranked list of voice-enabled devices that are ranked based on audio signal metric values for audio signals generated by each voice-enabled device, and iteratively moving through the list to determine, based on device states of the voice-enabled devices, whether one of the voice-enabled devices can perform an action responsive to the command. If the voice-enabled devices that detected the speech utterance are unable to perform the action responsive to the command, all other voice-enabled devices associated with an account may be analyzed to determine whether one of the other voice-enabled devices can perform the action responsive to the command in the speech utterance.
    Type: Application
    Filed: August 30, 2017
    Publication date: February 28, 2019
    Inventors: Joseph White, Ravi Kiran Rachakonda, Vinodth Kumar Mohanam, Lalithkumar Rajendran, Deepak Uttam Shah, Maziyar Khorasani, Venkata Snehith Cherukuri
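
The abstracts shared by publications 20220215837, 20200211554, and 20190066670 and patents 11,289,087 and 10,546,583 describe a context-based arbitration flow: rank the devices that detected the utterance by an audio signal metric, walk the ranked list checking each device's state, and fall back to the other devices on the account if none of the detecting devices can handle the action. The sketch below is a minimal, hypothetical rendering of that flow in Python; the Device class, the signal_metric field, and the can_perform check are illustrative assumptions, not the claimed implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Device:
        device_id: str
        signal_metric: float           # e.g. SNR of the audio signal the device generated (assumed metric)
        supported_actions: tuple = ()  # actions the device's current state allows (assumed representation)

    def can_perform(device: Device, action: str) -> bool:
        # Placeholder for the device-state check described in the abstract.
        return action in device.supported_actions

    def arbitrate(detecting: list[Device],
                  account_devices: list[Device],
                  action: str) -> Optional[Device]:
        # Rank the devices that detected the utterance by their audio signal metric.
        ranked = sorted(detecting, key=lambda d: d.signal_metric, reverse=True)
        # Iterate through the ranked list and return the first device whose state
        # allows it to perform the action responsive to the command.
        for device in ranked:
            if can_perform(device, action):
                return device
        # None of the detecting devices could act: consider the other devices
        # associated with the same account.
        detecting_ids = {d.device_id for d in detecting}
        for device in account_devices:
            if device.device_id not in detecting_ids and can_perform(device, action):
                return device
        return None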
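
Patents 11,138,977 and 10,685,652 share an abstract describing device clustering from audio metadata: audio signals received within a threshold period of time of each other, and with comparable SNR values, are treated as captures of the same voice command, so their source devices are grouped into one cluster. The sketch below illustrates that grouping rule under assumed field names and thresholds (AudioSignal, time_window, snr_window); it is not drawn from the patents' actual implementation.

    from dataclasses import dataclass

    @dataclass
    class AudioSignal:
        device_id: str
        timestamp: float   # when the remote system received the audio signal, in seconds (assumed)
        snr: float         # signal-to-noise ratio reported as metadata with the audio data

    def cluster_devices(signals: list[AudioSignal],
                        time_window: float = 0.5,
                        snr_window: float = 10.0) -> list[list[str]]:
        # Group devices whose audio signals appear to capture the same voice command:
        # received within time_window seconds of each other and with SNR values
        # within snr_window of each other. Threshold values are illustrative.
        clusters: list[list[AudioSignal]] = []
        for sig in sorted(signals, key=lambda s: s.timestamp):
            for cluster in clusters:
                anchor = cluster[0]
                if (abs(sig.timestamp - anchor.timestamp) <= time_window
                        and abs(sig.snr - anchor.snr) <= snr_window):
                    cluster.append(sig)
                    break
            else:
                clusters.append([sig])
        return [[s.device_id for s in cluster] for cluster in clusters]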