Patents by Inventor Richard Mitic

Richard Mitic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200135224
    Abstract: An audio cancellation system includes a voice enabled computing system that is connected to an audio output device using a wired or wireless communication network. The voice enabled computing system can provide media content to a user and receive a voice command from the user. The connection between the voice enabled computing system and the audio output device introduces a time delay between the media content being generated at the voice enabled computing system and the media content being reproduced at the audio output device. The system operates to determine a calibration value adapted for the voice enabled computing system and the audio output device. The system uses the calibration value to filter the user's voice command from a recording of ambient sound that includes the media content, without requiring significant memory or computing resources.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 30, 2020
    Applicant: Spotify AB
    Inventors: Daniel Bromand, Richard Mitic
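The delay-calibrated cancellation described in this abstract could be sketched roughly as follows. This is a toy illustration, not the patented implementation: it estimates the playback delay by cross-correlation and subtracts the delay-aligned media signal from the microphone recording, leaving the voice command. All function names are hypothetical.

```python
import numpy as np

def estimate_delay(reference, recording):
    """Estimate the playback delay (in samples) between the reference
    media signal and the microphone recording via cross-correlation."""
    corr = np.correlate(recording, reference, mode="full")
    # index of the correlation peak, converted to a lag value
    return int(np.argmax(corr)) - (len(reference) - 1)

def cancel_media(reference, recording, delay):
    """Subtract the delay-aligned media signal from the recording,
    leaving (approximately) the user's voice."""
    aligned = np.zeros_like(recording)
    if delay >= 0:
        n = min(len(reference), len(recording) - delay)
        aligned[delay:delay + n] = reference[:n]
    else:
        n = min(len(reference) + delay, len(recording))
        aligned[:n] = reference[-delay:-delay + n]
    return recording - aligned
```

In practice the delay would be measured once as a calibration value for a given device pairing and then reused, which is what avoids heavier echo-cancellation machinery at query time.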
  • Patent number: 10629204
    Abstract: Utterance-based user interfaces can include activation trigger processing techniques for detecting activation triggers and causing execution of certain commands associated with particular command pattern activation triggers without waiting for output from a separate speech processing engine. The activation trigger processing techniques can also detect speech analysis patterns and selectively activate a speech processing engine.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: April 21, 2020
    Assignee: Spotify AB
    Inventor: Richard Mitic
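The activation-trigger dispatch in this abstract could be sketched as a small lookup: command-pattern triggers execute immediately without the speech engine, while a bare activation trigger hands off to it. A toy illustration under assumed names; the trigger phrases and token matching are hypothetical stand-ins for detected audio patterns.

```python
# Hypothetical trigger patterns; token tuples stand in for detected audio patterns
COMMAND_TRIGGERS = {
    ("hey", "spotify", "next"): "skip_track",
    ("hey", "spotify", "pause"): "pause_playback",
}
SPEECH_TRIGGER = ("hey", "spotify")

def process_activation(tokens):
    """Run command-pattern triggers immediately; otherwise decide whether
    to activate the full speech processing engine."""
    key = tuple(t.lower() for t in tokens)
    if key in COMMAND_TRIGGERS:
        return COMMAND_TRIGGERS[key]          # executed without the speech engine
    if key[:len(SPEECH_TRIGGER)] == SPEECH_TRIGGER:
        return "activate_engine"              # hand off to the speech engine
    return None                               # not an activation trigger
```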
  • Patent number: 10622007
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 14, 2020
    Assignee: Spotify AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
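The asynchronous emotion/command split described in this family of abstracts could be sketched as follows. A toy illustration only: the "emotion features" here are trivial text stand-ins for acoustic/prosodic features, and the two processing paths run concurrently before the acknowledgement is adapted to the detected emotion.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_emotion_features(utterance):
    # toy stand-in for acoustic/prosodic emotion features
    return {"excitement": utterance.count("!")}

def map_emotion(utterance):
    features = extract_emotion_features(utterance)
    return "excited" if features["excitement"] > 0 else "neutral"

def process_command(utterance):
    return "play" if "play" in utterance.lower() else "unknown"

def respond(utterance):
    # emotion processing and command processing run asynchronously
    with ThreadPoolExecutor() as pool:
        emotion_future = pool.submit(map_emotion, utterance)
        command_future = pool.submit(process_command, utterance)
        emotion = emotion_future.result()
        command = command_future.result()
    # the acknowledgement is adapted to the detected emotion
    ack = {"excited": "You got it!", "neutral": "OK."}[emotion]
    return ack, command
```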
  • Patent number: 10621983
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 14, 2020
    Assignee: Spotify AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Patent number: 10566010
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: February 18, 2020
    Assignee: Spotify AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Publication number: 20200026489
    Abstract: Systems, methods, and devices for human-machine interfaces for utterance-based playlist selection are disclosed. In one method, a list of playlists is traversed and a portion of each is audibly output until a playlist command is received. Based on the playlist command, the traversing is stopped and a playlist is selected for playback. In examples, the list of playlists is modified based on a modification input.
    Type: Application
    Filed: July 8, 2019
    Publication date: January 23, 2020
    Applicant: Spotify AB
    Inventors: Daniel Bromand, Richard Mitic, Horia Jurcut, Henriette Susanne Martine Cramer, Ruth Brillman
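The traverse-and-select interaction in this abstract could be sketched as a loop that announces a portion of each playlist until a playlist command stops the traversal. A toy illustration: the command stream here is a plain iterator standing in for asynchronous voice input, and all names are hypothetical.

```python
def traverse_playlists(playlists, commands):
    """Announce a portion of each playlist in turn until a 'select'
    command stops the traversal; any other command skips ahead."""
    announced = []
    for name in playlists:
        announced.append(f"Preview of {name}")
        if next(commands, None) == "select":
            return name, announced
    return None, announced           # traversal exhausted without a selection
```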
  • Publication number: 20190342357
    Abstract: A system is provided for streaming media content in a vehicle. The system includes a personal media streaming appliance system configured to connect to a media delivery system and receive media content from the media delivery system at least via a cellular network. The personal media streaming appliance system includes one or more preset buttons for playing media content associated with the preset buttons. Data about the preset buttons and the media content associated with the preset buttons can be stored in the media delivery system.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Richard Mitic, Horia Jurcut, Daniel Bromand, David Gustafsson
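The server-side preset storage this abstract describes could be sketched as a simple mapping from a device's buttons to content. A toy illustration under an assumed data model; the class, identifiers, and URI format are hypothetical.

```python
class PresetStore:
    """Server-side storage mapping a device's preset buttons to content
    (hypothetical data model)."""
    def __init__(self):
        self._presets = {}                    # (device_id, button) -> content URI

    def assign(self, device_id, button, content_uri):
        self._presets[(device_id, button)] = content_uri

    def resolve(self, device_id, button):
        return self._presets.get((device_id, button))

store = PresetStore()
store.assign("appliance-1", 1, "spotify:playlist:morning-drive")
```

Keeping the mapping in the media delivery system, rather than on the appliance, lets a button press resolve to up-to-date content from any connected device.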
  • Publication number: 20190341038
    Abstract: A system and method for voice control of a media playback device is disclosed. The method includes receiving a voice command, converting the voice command to text, transmitting the text command to the playback device, and having the playback device execute the command. An instruction may include a command to play a set of audio tracks, and the media playback device plays the set of audio tracks upon receiving the instruction.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Daniel Bromand, Richard Mitic, Horia Jurcut, Jennifer Thom-Santelli, Henriette Cramer, Karl Humphreys, Bo Williams, Kurt Jacobson, Henrik Lindström
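The voice-to-text-to-playback pipeline in this abstract could be sketched end to end as below. A toy illustration: `transcribe` is a stand-in for a real speech-to-text service, and the command grammar and class names are hypothetical.

```python
def transcribe(audio):
    # stand-in for a speech-to-text service (hypothetical)
    return audio["transcript"]

class PlaybackDevice:
    """Receives text commands and executes them, e.g. playing a set of tracks."""
    def __init__(self):
        self.queue = []

    def execute(self, text_command):
        parts = text_command.split()
        if parts and parts[0] == "play":
            self.queue = parts[1:]            # the set of audio tracks to play
            return "playing"
        return "ignored"

def handle_voice_command(audio, device):
    text = transcribe(audio)                  # convert the voice command to text
    return device.execute(text)               # transmit the text command and execute
```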
  • Publication number: 20190342600
    Abstract: A system is provided for streaming media content in a vehicle. The system includes a personal media streaming appliance system configured to connect to a media delivery system and receive media content from the media delivery system at least via a cellular network. The personal media streaming appliance system includes one or more preset buttons for playing media content associated with the preset buttons. The media content associated with the preset buttons is automatically determined so that it is personalized to the user of the system.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Daniel Bromand, Richard Mitic, David Gustafsson, Horia Jurcut
  • Publication number: 20190341037
    Abstract: A system and method for voice control of a media playback device is disclosed. The method includes receiving a voice command, converting the voice command to text, transmitting the text command to the playback device, and having the playback device execute the command. An instruction may include a command to play a set of audio tracks, and the media playback device plays the set of audio tracks upon receiving the instruction.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Daniel Bromand, Richard Mitic, Horia Jurcut, Jennifer Thom-Santelli, Henriette Cramer, Karl Humphreys, Bo Williams, Kurt Jacobson, Henrik Lindström
  • Publication number: 20190332350
    Abstract: A system is provided for streaming media content in a vehicle. The system includes a personal media streaming appliance system configured to connect to a media delivery system and receive media content from the media delivery system at least via a cellular network. The personal media streaming appliance system operates to transmit a media signal representative of the received media content to a vehicle media playback system so that the vehicle media playback system operates to play the media content in the vehicle. Various types of rotations of a knob part of the personal media streaming appliance system result in different media playback actions.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 31, 2019
    Applicant: Spotify AB
    Inventors: Daniel Bromand, Richard Mitic, Johan Oskarsson
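The rotation-to-action mapping this abstract describes could be sketched as a small dispatcher. A toy illustration: the particular mapping (plain turns adjust volume, pressed turns seek) is a hypothetical example, not the mapping claimed in the patent.

```python
def knob_action(direction, pressed=False, turns=1):
    """Map a knob rotation to a playback action: plain turns adjust volume,
    pressed turns seek within the track (a hypothetical mapping)."""
    sign = 1 if direction == "cw" else -1
    if pressed:
        return ("seek_seconds", 10 * turns * sign)
    return ("volume_step", turns * sign)
```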
  • Publication number: 20190325869
    Abstract: Utterance-based user interfaces can include activation trigger processing techniques for detecting activation triggers and causing execution of certain commands associated with particular command pattern activation triggers without waiting for output from a separate speech processing engine. The activation trigger processing techniques can also detect speech analysis patterns and selectively activate a speech processing engine.
    Type: Application
    Filed: October 3, 2018
    Publication date: October 24, 2019
    Applicant: Spotify AB
    Inventor: Richard Mitic
  • Publication number: 20190325867
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Publication number: 20190325870
    Abstract: Utterance-based user interfaces can include activation trigger processing techniques for detecting activation triggers and causing execution of certain commands associated with particular command pattern activation triggers without waiting for output from a separate speech processing engine. The activation trigger processing techniques can also detect speech analysis patterns and selectively activate a speech processing engine.
    Type: Application
    Filed: October 3, 2018
    Publication date: October 24, 2019
    Applicant: Spotify AB
    Inventor: Richard Mitic
  • Publication number: 20190325895
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Publication number: 20190325896
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Publication number: 20190325120
    Abstract: A server has a pool data store that stores ambient sound recordings for matching. A match engine finds matches between ambient sound recordings from devices in the pool data store. The matching ambient sound recordings and their respective devices are then analyzed to determine which device is a source device that provides credentials and which device is a target device that receives credentials. The server then obtains or generates credentials associated with the source device and provides the credentials to the target device. The target device accesses content or services of an account using the credentials.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Thorbiörn Fritzon, Richard Mitic
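The pool-and-match flow this abstract describes could be sketched as below. A toy illustration: the fingerprint is simplified to an exact-match key (real systems use robust audio fingerprinting), and the source/target decision here is simply "which device already has an account", one plausible reading of the analysis step.

```python
class CredentialPool:
    """Server-side pool of ambient sound recordings awaiting a match
    (hypothetical data model)."""
    def __init__(self):
        self.pool = {}            # fingerprint -> list of (device_id, has_account)

    def submit(self, device_id, fingerprint, has_account):
        self.pool.setdefault(fingerprint, []).append((device_id, has_account))
        return self._try_match(fingerprint)

    def _try_match(self, fingerprint):
        devices = self.pool[fingerprint]
        sources = [d for d, acct in devices if acct]
        targets = [d for d, acct in devices if not acct]
        if sources and targets:
            # the source provides credentials; the target receives them
            return {"source": sources[0], "target": targets[0]}
        return None
```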
  • Publication number: 20190325866
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Publication number: 20190318070
    Abstract: A source device associated with an account uses playback of a media content item to cause a target device to become associated with the account. The target device enters an association mode and records a portion of the playing content. The target device provides the recording to a server that identifies the song (e.g., using a music fingerprint service) and uses the identification of the song to find the account that caused playback of the identified song. With the account identified, the server provides credentials of the account to the target device. The target device accesses content or services using the account. As confirmation of receiving the credentials, the server causes playback of the content to transition from the source device to the target device.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 17, 2019
    Applicant: Spotify AB
    Inventors: Richard Mitic, Göran Edling
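The song-based account association in this abstract could be sketched as a server-side lookup from an identified song back to the account playing it. A toy illustration: `identify_song` is a stand-in for a real music fingerprinting service, and the table, token format, and names are hypothetical.

```python
# Hypothetical server-side table: which account is currently playing which song
NOW_PLAYING = {"acct-42": "song-abc"}

def identify_song(recording):
    # stand-in for a music fingerprinting service (hypothetical)
    return recording["fingerprint"]

def associate_target(recording):
    """Identify the recorded song, find the account that caused its playback,
    and return that account's credentials for the target device."""
    song = identify_song(recording)
    for account, playing in NOW_PLAYING.items():
        if playing == song:
            return {"account": account, "token": f"token-for-{account}"}
    return None
```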
  • Publication number: 20190318069
    Abstract: A source device associated with an account uses playback of a media content item to cause a target device to become associated with the account. The target device enters an association mode and records a portion of the playing content. The target device provides the recording to a server that identifies the song (e.g., using a music fingerprint service) and uses the identification of the song to find the account that caused playback of the identified song. With the account identified, the server provides credentials of the account to the target device. The target device accesses content or services using the account. As confirmation of receiving the credentials, the server causes playback of the content to transition from the source device to the target device.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: Richard Mitic, Göran Edling