Patents by Inventor David Steinwedel
David Steinwedel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11997459
Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
Type: Grant
Filed: October 11, 2021
Date of Patent: May 28, 2024
Assignee: Smule, Inc.
Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
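The abstract does not disclose the measurement method, but one plausible sketch of round-trip latency estimation is to play a known calibration click, record it back through the microphone, locate the click by brute-force cross-correlation, and then aggregate many users' estimates per device model (the crowdsourcing step). Everything below — function names, the median aggregation, the synthetic signal — is a hypothetical illustration, not the patented implementation:

```python
def estimate_round_trip_latency(played, recorded, sample_rate):
    # Brute-force cross-correlation: find the shift at which the played
    # calibration signal best matches the recorded capture.
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(played) + 1):
        score = sum(p * recorded[lag + i] for i, p in enumerate(played))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / sample_rate  # seconds of round-trip delay

def aggregate_estimates(estimates_by_model):
    # Crowdsourced aggregation: the per-model median resists outliers
    # from noisy individual measurements.
    out = {}
    for model, vals in estimates_by_model.items():
        s = sorted(vals)
        n = len(s)
        out[model] = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return out

# Synthetic demo: a click played back ~50 ms late at 44.1 kHz.
sr = 44100
played = [1.0] + [0.0] * 99
recorded = [0.0] * 2205 + played + [0.0] * 100
print(estimate_round_trip_latency(played, recorded, sr) * 1000)  # 50.0 ms
```

The median step is why crowdsourcing helps: any single user's estimate may be corrupted by room noise, but the per-model distribution converges on a usable default for devices no engineer has hand-measured.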
-
Patent number: 11972748
Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc.
Type: Grant
Filed: February 14, 2022
Date of Patent: April 30, 2024
Assignee: Smule, Inc.
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20230335094
Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
Type: Application
Filed: October 31, 2022
Publication date: October 19, 2023
Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li
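The abstract names segmentation of a vocal or backing track as the source of musical structure but does not say how it is computed. One hypothetical reading (an assumption, not the claimed method): compute frame-level RMS energy and treat large jumps between adjacent frames as section boundaries (e.g., a quiet verse giving way to a loud chorus), to which mood-denominated effects schedules could then be mapped:

```python
def frame_rms(samples, frame_len):
    # Frame-level RMS energy of a mono track.
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [(sum(x * x for x in f) / frame_len) ** 0.5 for f in frames]

def segment_boundaries(energies, jump=0.5):
    # A frame starts a new section when its energy changes by more than
    # `jump` relative to the previous frame (verse -> chorus, etc.).
    return [i for i in range(1, len(energies))
            if abs(energies[i] - energies[i - 1]) > jump]

# Synthetic track: a quiet "verse" followed by a loud "chorus".
verse = [0.1] * 400
chorus = [0.9] * 400
energies = frame_rms(verse + chorus, frame_len=100)
print(segment_boundaries(energies))  # [4] -- boundary at frame 4
```

A real segmenter would use spectral features and repetition structure rather than raw energy, but the output shape is the same: a list of boundary positions against which a visual effects schedule can be aligned.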
-
Patent number: 11693616
Abstract: User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and in some cases may mix, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed or short segment may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments.
Type: Grant
Filed: August 25, 2020
Date of Patent: July 4, 2023
Assignee: Smule, Inc.
Inventors: Jon Moldover, David Steinwedel, Jeffrey C. Smith, Perry R. Cook
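The abstract mentions computational identification of suitable short seeds without disclosing the selection criteria. A hypothetical sketch of one such technique (an assumption for illustration only): slide a fixed-length window over the track and pick the highest-energy position as the short-seed candidate, since high-energy regions often correspond to hooks or choruses:

```python
def best_short_segment(samples, seg_len):
    # Score each fixed-length window by total energy; keep the best.
    window = sum(x * x for x in samples[:seg_len])
    best_energy, best_start = window, 0
    for start in range(1, len(samples) - seg_len + 1):
        # Rolling update: subtract the sample leaving the window,
        # add the sample entering it.
        window += samples[start + seg_len - 1] ** 2 - samples[start - 1] ** 2
        if window > best_energy:
            best_energy, best_start = window, start
    return best_start

# Quiet intro, loud hook, quiet outro: the hook should be selected.
track = [0.1] * 300 + [0.8] * 200 + [0.1] * 300
print(best_short_segment(track, seg_len=200))  # 300 -- start of the hook
```

The rolling-window update keeps the scan linear in track length, which matters when candidate segments must be found on-device for arbitrary songs.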
-
Publication number: 20230005462
Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc.
Type: Application
Filed: February 14, 2022
Publication date: January 5, 2023
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Patent number: 11488569
Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
Type: Grant
Filed: August 21, 2018
Date of Patent: November 1, 2022
Assignee: Smule, Inc.
Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li
-
Publication number: 20220122573
Abstract: Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
Type: Application
Filed: June 3, 2021
Publication date: April 21, 2022
Inventors: David Steinwedel, Anton Holmberg, Javier Villegas, Paul T. Chi, David Young, Perry R. Cook
-
Publication number: 20220103958
Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
Type: Application
Filed: October 11, 2021
Publication date: March 31, 2022
Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
-
Publication number: 20220051448
Abstract: Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
Type: Application
Filed: December 3, 2019
Publication date: February 17, 2022
Inventors: David Steinwedel, Anton Holmberg, Javier Villegas, Paul T. Chi, David Young, Perry R. Cook
-
Patent number: 11250825
Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc.
Type: Grant
Filed: July 1, 2019
Date of Patent: February 15, 2022
Assignee: Smule, Inc.
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20220028362
Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
Type: Application
Filed: March 8, 2021
Publication date: January 27, 2022
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Patent number: 11146901
Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
Type: Grant
Filed: May 6, 2019
Date of Patent: October 12, 2021
Assignee: Smule, Inc.
Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
-
Patent number: 10943574
Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
Type: Grant
Filed: May 21, 2019
Date of Patent: March 9, 2021
Assignee: Smule, Inc.
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20210055905
Abstract: User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and in some cases may mix, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed or short segment may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments.
Type: Application
Filed: August 25, 2020
Publication date: February 25, 2021
Inventors: Jon Moldover, David Steinwedel, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20190354272
Abstract: Embodiments described herein relate generally to graphical user interfaces for display screens of a musical composition authoring system presented on a display of a computing device.
Type: Application
Filed: May 21, 2018
Publication date: November 21, 2019
Inventors: David Steinwedel, Andrea Slobodien, Paul T. Chi, Perry R. Cook
-
Publication number: 20190355336
Abstract: User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc.
Type: Application
Filed: July 1, 2019
Publication date: November 21, 2019
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20190355337
Abstract: User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
Type: Application
Filed: May 21, 2019
Publication date: November 21, 2019
Inventors: David Steinwedel, Andrea Slobodien, Jeffrey C. Smith, Perry R. Cook
-
Publication number: 20190335283
Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
Type: Application
Filed: May 6, 2019
Publication date: October 31, 2019
Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
-
Patent number: 10284985
Abstract: Latency on different devices (e.g., devices of differing brand, model, vintage, etc.) can vary significantly, and tens of milliseconds can affect human perception of lagging and leading components of a performance. As a result, use of a uniform latency estimate across a wide variety of devices is unlikely to provide good results, and hand-estimating round-trip latency across a wide variety of devices is costly and would constantly need to be updated for new devices. Instead, a system has been developed for crowdsourcing latency estimates.
Type: Grant
Filed: June 9, 2016
Date of Patent: May 7, 2019
Assignee: Smule, Inc.
Inventors: Amanda Chaudhary, David Steinwedel, John Shimmin, Lance Jabr, Randal Leistikow
-
Publication number: 20180374462
Abstract: Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
Type: Application
Filed: August 21, 2018
Publication date: December 27, 2018
Inventors: David Steinwedel, Perry R. Cook, Paul T. Chi, Wei Zhou, Jon Moldover, Anton Holmberg, Jingxi Li