Patents by Inventor David Steinwedel

David Steinwedel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11997459
    Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and the estimates would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: May 28, 2024
    Assignee: Smule, Inc.
    Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
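
The crowdsourced approach in the abstract above reduces, in essence, to collecting round-trip measurements from users' devices and aggregating them per device model. A minimal sketch in Python, assuming a hypothetical in-memory registry; the class, method, and field names are illustrative, not Smule's actual implementation:

```python
from collections import defaultdict
from statistics import median

class LatencyRegistry:
    """Hypothetical store of crowdsourced round-trip latency reports,
    keyed by device model string."""

    def __init__(self, default_ms: float = 120.0, min_reports: int = 25):
        self._reports = defaultdict(list)  # device model -> list of ms values
        self._default_ms = default_ms      # fallback for unseen device models
        self._min_reports = min_reports    # require enough data to trust a model

    def report(self, device_model: str, round_trip_ms: float) -> None:
        """Record one user-reported round-trip measurement."""
        self._reports[device_model].append(round_trip_ms)

    def estimate(self, device_model: str) -> float:
        """Per-model estimate; the median damps outliers from noisy reports."""
        samples = self._reports.get(device_model, [])
        if len(samples) < self._min_reports:
            return self._default_ms
        return median(samples)
```

Keying on device model means a newly released handset starts from the conservative default and converges to a model-specific estimate as reports accumulate, without any hand-calibration.
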
  • Patent number: 11972748
    Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed that spans much or all of a pre-existing audio (or audiovisual) work and mixes in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: April 30, 2024
    Assignee: Smule, Inc.
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
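
A seed as described above can be modeled as a time-bounded slice of a pre-existing work plus the seeding user's captured media and the parts left open to joiners. A minimal sketch; all field names and the 60-second short-form threshold are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Seed:
    """A full-length or short-form seed over a pre-existing work."""
    work_id: str          # identifier of the pre-existing audio(visual) work
    start_s: float        # seed start within the work, in seconds
    end_s: float          # seed end within the work, in seconds
    seed_media_url: str   # the seeding user's captured/mixed media content
    open_parts: List[str] = field(
        default_factory=lambda: ["sing along", "harmony", "duet part", "rap", "video"]
    )

    @property
    def is_short_form(self) -> bool:
        # Assumed heuristic: a verse/chorus-sized chunk rather than the whole work.
        return (self.end_s - self.start_s) < 60.0

# A chorus-only short seed that joiners can respond to.
chorus_seed = Seed("work-123", 52.0, 78.5, "https://example.com/seed.m4a")
assert chorus_seed.is_short_form
```
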
  • Publication number: 20230335094
    Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
    Type: Application
    Filed: October 31, 2022
    Publication date: October 19, 2023
    Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li
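
The mood-denominated schedules described above can be read as lookups from a musical-structure label to an effect preset, applied segment by segment. A minimal sketch, where the segment labels stand in for segmentation output and every preset name is invented for illustration:

```python
# Hypothetical mood-denominated visual effects schedules: each mood maps
# section labels (from audio segmentation) to an effect preset name.
SCHEDULES = {
    "dreamy":    {"verse": "soft_bloom", "chorus": "slow_particles", "bridge": "lens_haze"},
    "energetic": {"verse": "strobe_low", "chorus": "strobe_high",    "bridge": "color_pulse"},
}

def effects_for_performance(segments, mood):
    """Pair each (label, start_s, end_s) segment with a preset from the
    selected mood's schedule; unknown labels get no effect."""
    schedule = SCHEDULES[mood]
    return [
        (start_s, end_s, schedule.get(label, "none"))
        for (label, start_s, end_s) in segments
    ]

# Example: a performer selects the "energetic" schedule for their performance.
segments = [("verse", 0.0, 22.5), ("chorus", 22.5, 41.0), ("verse", 41.0, 63.0)]
print(effects_for_performance(segments, "energetic"))
```

Selecting a different mood swaps every preset at once while preserving the alignment to musical structure, which matches the abstract's framing of the schedule as a unit of visual expression.
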
  • Patent number: 11693616
    Abstract: User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and may, in some cases, mix in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed or short segment may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: July 4, 2023
    Assignee: Smule, Inc.
    Inventors: Jon Moldover, David Steinwedel, Jeffrey C. Smith, Perry R. Cook
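
The abstract's closing sentence points to automatic identification of suitable short seeds. One plausible heuristic, assumed here purely for illustration, scores segmentation output by how often a section repeats and how energetic it is, which tends to surface choruses and hooks:

```python
def pick_short_seed(sections):
    """Return the most promising short seed from candidate sections.

    Each section is a dict with assumed keys: 'label', 'start_s', 'end_s',
    'repeat_count' (occurrences of the section within the work) and
    'mean_energy' (normalized 0..1). The weights and the 10-45 s duration
    window are illustrative, not taken from the patent.
    """
    def score(s):
        duration = s["end_s"] - s["start_s"]
        if not 10.0 <= duration <= 45.0:   # must be a short, seed-sized chunk
            return float("-inf")
        return 2.0 * s["repeat_count"] + s["mean_energy"]

    return max(sections, key=score)

sections = [
    {"label": "verse",  "start_s": 0.0,  "end_s": 22.0, "repeat_count": 2, "mean_energy": 0.4},
    {"label": "chorus", "start_s": 22.0, "end_s": 43.0, "repeat_count": 3, "mean_energy": 0.8},
]
print(pick_short_seed(sections)["label"])   # -> chorus
```
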
  • Publication number: 20230005462
    Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed that spans much or all of a pre-existing audio (or audiovisual) work and mixes in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc.
    Type: Application
    Filed: February 14, 2022
    Publication date: January 5, 2023
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
  • Patent number: 11488569
    Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 1, 2022
    Assignee: Smule, Inc.
    Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li
  • Publication number: 20220122573
    Abstract: Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
    Type: Application
    Filed: June 3, 2021
    Publication date: April 21, 2022
    Inventors: David Steinwedel, Anton Holmberg, Javier Villegas, Paul T. Chi, David Young, Perry R. Cook
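
The feature-driven parameterization above can be sketched as extracting a frame-level audio feature and mapping it onto an effect parameter. The sketch below uses RMS energy and a linear mapping to a hypothetical particle-density parameter; both choices are assumptions, since the patent covers computationally determined audio features generally:

```python
import numpy as np

def rms_envelope(samples: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Frame-level RMS energy of a mono signal: one simple example of a
    computationally determined audio feature."""
    n = len(samples) - len(samples) % frame
    frames = samples[:n].reshape(-1, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def particle_density(rms: np.ndarray, lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    """Map normalized RMS onto a per-frame effect parameter in [lo, hi].
    The linear mapping is assumed for illustration."""
    peak = float(rms.max()) or 1.0     # avoid division by zero on silence
    return lo + (rms / peak) * (hi - lo)

# 440 Hz test tone with a rising loudness envelope: the effect parameter
# tracks the loudness frame by frame.
t = np.linspace(0, 1, 44100)
signal = np.sin(2 * np.pi * 440 * t) * np.linspace(0, 1, 44100)
print(particle_density(rms_envelope(signal))[:3])
```
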
  • Publication number: 20220103958
    Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and the estimates would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
    Type: Application
    Filed: October 11, 2021
    Publication date: March 31, 2022
    Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
  • Publication number: 20220051448
    Abstract: Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
    Type: Application
    Filed: December 3, 2019
    Publication date: February 17, 2022
    Inventors: David Steinwedel, Anton Holmberg, Javier Villegas, Paul T. Chi, David Young, Perry R. Cook
  • Patent number: 11250825
    Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed that spans much or all of a pre-existing audio (or audiovisual) work and mixes in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: February 15, 2022
    Assignee: Smule, Inc.
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
  • Publication number: 20220028362
    Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
    Type: Application
    Filed: March 8, 2021
    Publication date: January 27, 2022
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
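
Direct scrolling, as described above, requires mapping an arbitrary scrubber position onto the item active at that time in each synchronized stream (lyric lines, pitch cues, and so on). A minimal sketch using a binary search over start times; the data layout is an assumption:

```python
import bisect

class SyncedTrack:
    """One temporally-synchronized stream (lyric lines, pitch cues, ...).
    Items are (start_s, payload) pairs kept sorted by start time."""

    def __init__(self, items):
        self.items = sorted(items)
        self._starts = [t for t, _ in self.items]

    def at(self, position_s: float):
        """Item active at the scrubber position (binary search, O(log n))."""
        i = bisect.bisect_right(self._starts, position_s) - 1
        return self.items[i][1] if i >= 0 else None

# Example: scrubbing to 12.3 s lands on the second lyric line.
lyrics = SyncedTrack([(0.0, "Line 1"), (10.0, "Line 2"), (20.0, "Line 3")])
print(lyrics.at(12.3))   # -> "Line 2"
```

Because lookups are positional rather than sequential, the same mapping serves record-time, edit, and playback traversal in either direction.
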
  • Patent number: 11146901
    Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and the estimates would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: October 12, 2021
    Assignee: Smule, Inc.
    Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
  • Patent number: 10943574
    Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: March 9, 2021
    Assignee: Smule, Inc.
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
  • Publication number: 20210055905
    Abstract: User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and may, in some cases, mix in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed or short segment may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments.
    Type: Application
    Filed: August 25, 2020
    Publication date: February 25, 2021
    Inventors: Jon Moldover, David Steinwedel, Jeffrey C. Smith, Perry R. Cook
  • Publication number: 20190354272
    Abstract: Embodiments described herein relate generally to graphical user interfaces for display screens of a musical composition authoring system presented on a display of a computing device.
    Type: Application
    Filed: May 21, 2018
    Publication date: November 21, 2019
    Inventors: David Steinwedel, Andrea Slobodien, Paul T. Chi, Perry R. Cook
  • Publication number: 20190355336
    Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed that spans much or all of a pre-existing audio (or audiovisual) work and mixes in a user's captured media content for at least some portions of the work, to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc.
    Type: Application
    Filed: July 1, 2019
    Publication date: November 21, 2019
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
  • Publication number: 20190355337
    Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 21, 2019
    Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
  • Publication number: 20190335283
    Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and the estimates would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
    Type: Application
    Filed: May 6, 2019
    Publication date: October 31, 2019
    Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
  • Patent number: 10284985
    Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and the estimates would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: May 7, 2019
    Assignee: Smule, Inc.
    Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
  • Publication number: 20180374462
    Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
    Type: Application
    Filed: August 21, 2018
    Publication date: December 27, 2018
    Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li