Patents by Inventor Todd F. Mozer

Todd F. Mozer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10880833
    Abstract: Smart listening modes for supporting quasi always-on listening on an electronic device are provided. In one embodiment, the electronic device can determine that a user is likely to utter a voice trigger in order to access the always-on listening functionality of the electronic device. In response to this determination, the electronic device can automatically enable the always-on listening functionality. Similarly, the electronic device can determine that a user is no longer likely to utter the voice trigger in order to access the always-on listening functionality of the electronic device. In response to this second determination, the electronic device can automatically disable the always-on listening functionality.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: December 29, 2020
    Assignee: Sensory, Incorporated
    Inventors: Todd F. Mozer, Bryan Pellom
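
A minimal sketch of the kind of on/off logic this abstract describes, assuming purely hypothetical context signals (motion, time since last use, quiet hours) and thresholds that are not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Context:
    motion_detected: bool          # e.g., from a low-power motion sensor
    seconds_since_last_use: float  # time since the user last interacted
    is_quiet_hours: bool           # e.g., a user-configured do-not-disturb window

def trigger_likelihood(ctx: Context) -> float:
    """Rough score in [0, 1] that the user is about to say the voice trigger."""
    score = 0.0
    if ctx.motion_detected:
        score += 0.5
    if ctx.seconds_since_last_use < 300:   # interacted within the last 5 minutes
        score += 0.4
    if ctx.is_quiet_hours:
        score -= 0.6
    return max(0.0, min(1.0, score))

class ListeningController:
    """Enables always-on listening when a trigger is likely, disables it when not."""
    def __init__(self, on_threshold=0.5, off_threshold=0.2):
        self.enabled = False
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold   # hysteresis avoids rapid toggling

    def update(self, ctx: Context) -> bool:
        p = trigger_likelihood(ctx)
        if not self.enabled and p >= self.on_threshold:
            self.enabled = True
        elif self.enabled and p <= self.off_threshold:
            self.enabled = False
        return self.enabled
```
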
  • Patent number: 10705789
    Abstract: Techniques for implementing dynamic volume adjustment by a virtual assistant are provided. In one embodiment, the virtual assistant can receive a voice query or command from a user, recognize the content of the voice query or command, process the voice query or command based on the recognized content, and determine an auditory response to be output to the user. The virtual assistant can then identify a plurality of criteria for automatically determining an output volume level for the response, where the plurality of criteria includes content-based criteria and environment-based criteria, calculate values for the plurality of criteria, and combine the values to determine the output volume level. The virtual assistant can subsequently cause the auditory response to be output to the user at the determined output volume level.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: July 7, 2020
    Assignee: Sensory, Incorporated
    Inventor: Todd F. Mozer
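
The abstract above boils down to scoring several criteria and combining them into one volume level. Below is a short, hypothetical sketch of such a combination; the criterion names, weights, and normalization are illustrative assumptions, not the patented formula:

```python
def output_volume(criteria: dict, weights: dict,
                  min_vol: float = 0.1, max_vol: float = 1.0) -> float:
    """Weighted combination of normalized criterion values (each in [0, 1])."""
    total_w = sum(weights.values())
    combined = sum(weights[k] * criteria.get(k, 0.0) for k in weights) / total_w
    return min_vol + combined * (max_vol - min_vol)

volume = output_volume(
    criteria={
        "content_urgency": 0.8,   # content-based: an alarm should be louder than small talk
        "ambient_noise": 0.6,     # environment-based: a louder room calls for a louder reply
        "user_distance": 0.4,     # environment-based: a farther user calls for a louder reply
    },
    weights={"content_urgency": 0.5, "ambient_noise": 0.3, "user_distance": 0.2},
)
print(f"output volume: {volume:.2f}")
```
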
  • Patent number: 10582167
    Abstract: Techniques for automatically triggering video surveillance using embedded voice, speech, or sound recognition are provided. In one embodiment, a computer system can receive an audio signal captured from an area to be monitored via video surveillance. The computer system can further recognize, via an embedded recognition component, a voice, speech phrase, or environmental sound in the audio signal, and can determine that the recognized voice, speech phrase, or environmental sound corresponds to a predefined trigger condition. The computer system can then automatically transmit a signal to one or more video capture devices to begin video recording of the area.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: March 3, 2020
    Assignee: SENSORY, INC.
    Inventor: Todd F. Mozer
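
A hypothetical sketch of the trigger-to-camera flow the abstract describes; the trigger labels and the camera interface are stand-ins, not APIs from the patent:

```python
TRIGGER_CONDITIONS = {"glass break", "dog bark", "help"}   # predefined trigger sounds/phrases

def on_audio_event(recognized_label: str, cameras) -> bool:
    """Start recording on every camera if the recognized sound matches a trigger."""
    if recognized_label.lower() in TRIGGER_CONDITIONS:
        for cam in cameras:
            cam.start_recording()
        return True
    return False

class FakeCamera:
    def __init__(self, name):
        self.name = name
        self.recording = False
    def start_recording(self):
        self.recording = True
        print(f"{self.name}: recording started")

cameras = [FakeCamera("front door"), FakeCamera("garage")]
on_audio_event("glass break", cameras)
```
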
  • Publication number: 20200034108
    Abstract: Techniques for implementing dynamic volume adjustment by a virtual assistant are provided. In one embodiment, the virtual assistant can receive a voice query or command from a user, recognize the content of the voice query or command, process the voice query or command based on the recognized content, and determine an auditory response to be output to the user. The virtual assistant can then identify a plurality of criteria for automatically determining an output volume level for the response, where the plurality of criteria includes content-based criteria and environment-based criteria, calculate values for the plurality of criteria, and combine the values to determine the output volume level. The virtual assistant can subsequently cause the auditory response to be output to the user at the determined output volume level.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 30, 2020
    Inventor: Todd F. Mozer
  • Patent number: 10248770
    Abstract: Techniques for unobtrusively verifying the identity of a user of a computing device are provided. In one embodiment, the computing device can establish one or more verification models for verifying the user's identity, where at least a subset of the one or more verification models is based on enrollment data that is collected in an unobtrusive manner from the user. The computing device can then verify the user's identity using the one or more verification models.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: April 2, 2019
    Assignee: Sensory, Incorporated
    Inventors: John-Paul Hosom, Todd F. Mozer, Pieter J. Vermeulen, Bryan L. Pellom
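
One way to picture this is a verifier that accumulates biometric features during normal use and later compares new samples against them. The feature vectors and the cosine-similarity model below are illustrative assumptions, not the patented verification models:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class UnobtrusiveVerifier:
    def __init__(self, threshold=0.85):
        self.enrollment = []        # feature vectors gathered during ordinary use
        self.threshold = threshold

    def observe(self, features):
        """Called opportunistically, e.g. whenever the user speaks a command."""
        self.enrollment.append(features)

    def verify(self, features) -> bool:
        if not self.enrollment:
            return False            # no model yet; fall back to another factor
        # Average similarity against all enrolled samples serves as a simple model.
        score = sum(cosine(features, e) for e in self.enrollment) / len(self.enrollment)
        return score >= self.threshold
```
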
  • Patent number: 10235573
    Abstract: Techniques for performing low-fidelity always-on A/V monitoring are provided. In one embodiment, an always-on A/V monitoring system can record audio or video footage of an area of interest on a continuous basis while operating in a low-fidelity recording mode, where the recorded audio or video footage has a quality level that is sufficient to detect one or more events that have meaning to the system or a user, but is insufficient to recognize details with respect to the area of interest that would be considered private to an individual appearing in, or associated with, the recorded audio or video footage.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: March 19, 2019
    Assignee: Sensory, Incorporated
    Inventors: Bryan Pellom, Todd F. Mozer
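
A hypothetical illustration of "low fidelity": keep only a coarse loudness envelope that can flag events but cannot reconstruct speech. The window size and threshold are illustrative assumptions:

```python
def loudness_envelope(samples, rate=16000, window_s=1.0):
    """Collapse raw audio into one RMS value per window; the raw samples are discarded."""
    window = int(rate * window_s)
    env = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        if chunk:
            env.append((sum(s * s for s in chunk) / len(chunk)) ** 0.5)
    return env

def detect_events(envelope, threshold=5000):
    """Return the indices of windows loud enough to count as an event."""
    return [i for i, v in enumerate(envelope) if v > threshold]
```
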
  • Patent number: 10152974
    Abstract: Techniques for implementing unobtrusive training for speaker verification are provided. In one embodiment, an electronic device can receive a plurality of voice samples uttered by one or more users as they interact with a voice command-and-control feature of the electronic device and, for each voice sample, assign the voice sample to one of a plurality of voice type categories. The electronic device can further group the voice samples assigned to each voice type category into one or more user sets, where each user set comprises voice samples likely to have been uttered by a unique user. The electronic device can then, for each user set: (1) generate a voice model, (2) issue, to the unique user, a request to provide an identity or name, and (3) label the voice model with the identity or name provided by the unique user.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: December 11, 2018
    Assignee: Sensory, Incorporated
    Inventors: Todd F. Mozer, Bryan Pellom
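
A rough sketch of the grouping-then-labeling flow in the abstract; the greedy clustering, similarity threshold, and callback names are illustrative assumptions rather than the patented method:

```python
def group_into_user_sets(samples, similar, threshold=0.8):
    """Greedy grouping: a sample joins the first set whose representative it resembles."""
    user_sets = []                      # each entry: samples believed to share a speaker
    for s in samples:
        for user_set in user_sets:
            if similar(s, user_set[0]) >= threshold:
                user_set.append(s)
                break
        else:
            user_sets.append([s])
    return user_sets

def build_and_label_models(user_sets, train_model, ask_user_for_name):
    """Train one voice model per user set, then label it with the name the user provides."""
    models = {}
    for user_set in user_sets:
        models[ask_user_for_name(user_set)] = train_model(user_set)
    return models

# Toy usage: samples sharing a first letter are treated as the same speaker.
sets = group_into_user_sets(["a1", "a2", "b1"],
                            similar=lambda x, y: 1.0 if x[0] == y[0] else 0.0)
print(sets)   # [['a1', 'a2'], ['b1']]
```
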
  • Patent number: 10037756
    Abstract: Techniques for analyzing long-term audio recordings are provided. In one embodiment, a computing device can record audio captured from an environment of a user on a long-term basis (e.g., on the order of weeks, months, or years). The computing device can store the recorded audio on a local or remote storage device. The computing device can then analyze the recorded audio based on one or more predefined rules and can enable one or more actions based on that analysis.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: July 31, 2018
    Assignee: Sensory, Incorporated
    Inventors: Bryan Pellom, Todd F. Mozer
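
The "predefined rules" step lends itself to a small rule engine over an archive of recognized events. Everything below (event labels, rule format, actions) is an illustrative assumption:

```python
from datetime import datetime, timedelta

archive = [
    # (timestamp, recognized event or phrase)
    (datetime(2016, 3, 1, 7, 30), "coffee grinder"),
    (datetime(2016, 3, 1, 22, 15), "smoke alarm"),
]

rules = [
    {"event": "smoke alarm", "window": timedelta(days=365),
     "action": lambda hits: print(f"smoke alarm heard {len(hits)} time(s) in the past year")},
]

def analyze(archive, rules, now):
    for rule in rules:
        hits = [t for t, event in archive
                if event == rule["event"] and now - t <= rule["window"]]
        if hits:
            rule["action"](hits)

analyze(archive, rules, now=datetime(2016, 3, 2))
```
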
  • Publication number: 20180084228
    Abstract: Techniques for performing low-fidelity always-on A/V monitoring are provided. In one embodiment, an always-on A/V monitoring system can record audio or video footage of an area of interest on a continuous basis while operating in a low-fidelity recording mode, where the recorded audio or video footage has a quality level that is sufficient to detect one or more events that have meaning to the system or a user, but is insufficient to recognize details with respect to the area of interest that would be considered private to an individual appearing in, or associated with, the recorded audio or video footage.
    Type: Application
    Filed: September 20, 2016
    Publication date: March 22, 2018
    Inventors: Bryan Pellom, Todd F. Mozer
  • Patent number: 9916832
    Abstract: Techniques for leveraging a combination of audio-based and vision-based cues for voice command-and-control are provided. In one embodiment, an electronic device can identify one or more audio-based cues in a received audio signal that pertain to a possible utterance of a predefined trigger phrase, and identify one or more vision-based cues in a received video signal that pertain to a possible utterance of the predefined trigger phrase. The electronic device can further determine a degree of synchronization or correspondence between the one or more audio-based cues and the one or more vision-based cues. The electronic device can then conclude, based on the one or more audio-based cues, the one or more vision-based cues, and the degree of synchronization or correspondence, whether the predefined trigger phrase was actually spoken.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 13, 2018
    Assignee: Sensory, Incorporated
    Inventor: Todd F. Mozer
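
A compact, hypothetical way to picture the fusion step: combine an audio trigger score, a visual (lip-movement) score, and how well their timing lines up. The weights, skew window, and threshold are illustrative assumptions, not the patented decision rule:

```python
def trigger_detected(audio_score: float, vision_score: float,
                     audio_time: float, vision_time: float,
                     max_skew_s: float = 0.3, threshold: float = 0.75) -> bool:
    """Scores are in [0, 1]; times are the estimated onsets of the phrase in seconds."""
    skew = abs(audio_time - vision_time)
    sync_score = max(0.0, 1.0 - skew / max_skew_s)     # 1.0 means perfectly aligned
    combined = 0.5 * audio_score + 0.3 * vision_score + 0.2 * sync_score
    return combined >= threshold

print(trigger_detected(audio_score=0.9, vision_score=0.8,
                       audio_time=12.10, vision_time=12.05))   # True
```
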
  • Publication number: 20170311261
    Abstract: Smart listening modes for supporting quasi always-on listening on an electronic device are provided. In one embodiment, the electronic device can determine that a user is likely to utter a voice trigger in order to access the always-on listening functionality of the electronic device. In response to this determination, the electronic device can automatically enable the always-on listening functionality. Similarly, the electronic device can determine that a user is no longer likely to utter the voice trigger in order to access the always-on listening functionality of the electronic device. In response to this second determination, the electronic device can automatically disable the always-on listening functionality.
    Type: Application
    Filed: March 20, 2017
    Publication date: October 26, 2017
    Inventors: Todd F. Mozer, Bryan Pellom
  • Publication number: 20170301353
    Abstract: Techniques for implementing unobtrusive training for speaker verification are provided. In one embodiment, an electronic device can receive a plurality of voice samples uttered by one or more users as they interact with a voice command-and-control feature of the electronic device and, for each voice sample, assign the voice sample to one of a plurality of voice type categories. The electronic device can further group the voice samples assigned to each voice type category into one or more user sets, where each user set comprises voice samples likely to have been uttered by a unique user. The electronic device can then, for each user set: (1) generate a voice model, (2) issue, to the unique user, a request to provide an identity or name, and (3) label the voice model with the identity or name provided by the unique user.
    Type: Application
    Filed: March 13, 2017
    Publication date: October 19, 2017
    Inventors: Todd F. Mozer, Bryan Pellom
  • Publication number: 20170287470
    Abstract: Techniques for analyzing long-term audio recordings are provided. In one embodiment, a computing device can record audio captured from an environment of a user on a long-term basis (e.g., on the order of weeks, months, or years). The computing device can store the recorded audio on a local or remote storage device. The computing device can then analyze the recorded audio based on one or more predefined rules and can enable one or more actions based on that analysis.
    Type: Application
    Filed: March 29, 2016
    Publication date: October 5, 2017
    Inventors: Bryan Pellom, Todd F. Mozer
  • Publication number: 20170243581
    Abstract: Techniques for leveraging a combination of audio-based and vision-based cues for voice command-and-control are provided. In one embodiment, an electronic device can identify one or more audio-based cues in a received audio signal that pertain to a possible utterance of a predefined trigger phrase, and identify one or more vision-based cues in a received video signal that pertain to a possible utterance of the predefined trigger phrase. The electronic device can further determine a degree of synchronization or correspondence between the one or more audio-based cues and the one or more vision-based cues. The electronic device can then conclude, based on the one or more audio-based cues, the one or more vision-based cues, and the degree of synchronization or correspondence, whether the predefined trigger phrase was actually spoken.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 24, 2017
    Inventor: Todd F. Mozer
  • Patent number: 9716593
    Abstract: Techniques for leveraging multiple biometrics for enabling user access to security metadata are provided. In one embodiment, a computing device can receive first and second biometric identifiers from a user. The computing device can further determine, via a multi-biometric authentication system, that the user's identity can be verified using the first biometric identifier, but cannot be, or has not been, verified using the second biometric identifier. In response to this determination, the computing device can provide information to the user for facilitating verification of the user's identity using the second biometric identifier.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: July 25, 2017
    Assignee: Sensory, Incorporated
    Inventor: Todd F. Mozer
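
A minimal sketch of the flow this abstract describes: when only one of two biometric factors verifies, the device coaches the user on the one that failed. The factor names and hint text are illustrative assumptions:

```python
HINTS = {
    "face": "Hold the device at eye level in better lighting and try again.",
    "voice": "Move somewhere quieter and repeat the passphrase clearly.",
}

def authenticate(factors: dict) -> bool:
    """factors maps a biometric name to whether it verified on this attempt."""
    if all(factors.values()):
        return True                        # both factors verified; grant access
    for name, verified in factors.items():
        if not verified:
            print(f"{name} not verified: {HINTS.get(name, 'Please try again.')}")
    return False

authenticate({"face": True, "voice": False})
```
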
  • Publication number: 20170064262
    Abstract: Techniques for automatically triggering video surveillance using embedded voice, speech, or sound recognition are provided. In one embodiment, a computer system can receive an audio signal captured from an area to be monitored via video surveillance. The computer system can further recognize, via an embedded recognition component, a voice, speech phrase, or environmental sound in the audio signal, and can determine that the recognized voice, speech phrase, or environmental sound corresponds to a predefined trigger condition. The computer system can then automatically transmit a signal to one or more video capture devices to begin video recording of the area.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventor: Todd F. Mozer
  • Patent number: 9484028
    Abstract: In one embodiment, the present invention includes a method comprising receiving an acoustic input signal and processing the acoustic input signal with a plurality of acoustic recognition processes configured to recognize the same target sound. Different acoustic recognition processes start processing different segments of the acoustic input signal at different time points in the acoustic input signal. In one embodiment, initial states in the recognition processes may be configured on each time step.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: November 1, 2016
    Assignee: Sensory, Incorporated
    Inventors: Pieter J. Vermeulen, Jonathan Shaw, Todd F. Mozer
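
A toy sketch of running several recognition processes for the same target sound, each starting at a different point in the input, so a sound that straddles one process's start is still caught by another. The recognizer stub, stagger, and process count are illustrative assumptions:

```python
def run_staggered_recognizers(frames, recognize, stagger=10, num_procs=4):
    """
    frames:    a sequence of acoustic feature frames
    recognize: callable(frames_slice) -> score for the target sound
    Returns the (start_frame, score) pair with the best score.
    """
    results = []
    for p in range(num_procs):
        start = p * stagger
        if start < len(frames):
            results.append((start, recognize(frames[start:])))
    return max(results, key=lambda r: r[1]) if results else None

# Toy recognizer that just averages frame "energies".
best = run_staggered_recognizers(list(range(100)),
                                 recognize=lambda seg: sum(seg) / len(seg))
print(best)
```
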
  • Patent number: 9432193
    Abstract: Techniques for implementing face-based authentication with situational adaptivity are provided. In one embodiment, a computing device can create an enrollment template for a user, the enrollment template being derived from one or more enrollment images of the user's face and being usable by a face-based authentication system to authenticate the user's identity. The computing device can further determine a first set of metadata associated with the enrollment image(s) and can store the first set of metadata with the enrollment template. At a later time (e.g., an authentication event), the computing device can capture an input image of the user's face, determine a second set of metadata associated with the input image, and calculate a computational distance between the input image and the enrollment template, the calculating taking into account a degree of difference between the first and second sets of metadata. Finally, the user can be authenticated based on the distance.
    Type: Grant
    Filed: February 5, 2015
    Date of Patent: August 30, 2016
    Assignee: Sensory, Incorporated
    Inventors: Todd F. Mozer, Bryan Pellom
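
A hypothetical sketch of a distance check that becomes more forgiving when the capture conditions (metadata) differ from the enrollment conditions, since some of the image difference is then explained by conditions rather than identity. The metadata fields, slack term, and thresholds are illustrative assumptions, not the patented computation:

```python
import math

def face_distance(template, probe) -> float:
    """Euclidean distance between two face feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(template, probe)))

def metadata_gap(enroll_meta: dict, probe_meta: dict) -> float:
    """Fraction of capture conditions (lighting, pose, camera, ...) that differ."""
    keys = set(enroll_meta) | set(probe_meta)
    return sum(enroll_meta.get(k) != probe_meta.get(k) for k in keys) / max(len(keys), 1)

def authenticate(template, enroll_meta, probe, probe_meta,
                 base_threshold=0.6, slack_per_gap=0.2) -> bool:
    # A larger metadata gap earns a more forgiving threshold.
    threshold = base_threshold + slack_per_gap * metadata_gap(enroll_meta, probe_meta)
    return face_distance(template, probe) <= threshold

print(authenticate(template=[0.1, 0.4, 0.9], enroll_meta={"lighting": "indoor"},
                   probe=[0.2, 0.5, 0.8], probe_meta={"lighting": "outdoor"}))
```
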
  • Publication number: 20160234024
    Abstract: Techniques for leveraging multiple biometrics for enabling user access to security metadata are provided. In one embodiment, a computing device can receive first and second biometric identifiers from a user. The computing device can further determine, via a multi-biometric authentication system, that the user's identity can be verified using the first biometric identifier, but cannot be, or has not been, verified using the second biometric identifier. In response to this determination, the computing device can provide information to the user for facilitating verification of the user's identity using the second biometric identifier.
    Type: Application
    Filed: February 11, 2015
    Publication date: August 11, 2016
    Inventor: Todd F. Mozer
  • Publication number: 20160234023
    Abstract: Techniques for implementing face-based authentication with situational adaptivity are provided. In one embodiment, a computing device can create an enrollment template for a user, the enrollment template being derived from one or more enrollment images of the user's face and being usable by a face-based authentication system to authenticate the user's identity. The computing device can further determine a first set of metadata associated with the enrollment image(s) and can store the first set of metadata with the enrollment template. At a later time (e.g., an authentication event), the computing device can capture an input image of the user's face, determine a second set of metadata associated with the input image, and calculate a computational distance between the input image and the enrollment template, the calculating taking into account a degree of difference between the first and second sets of metadata. Finally, the user can be authenticated based on the distance.
    Type: Application
    Filed: February 5, 2015
    Publication date: August 11, 2016
    Inventors: Todd F. Mozer, Bryan Pellom