Patents by Inventor Jeannie Yang

Jeannie Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170011724
    Abstract: Notwithstanding practical limitations imposed by mobile device platforms and applications, truly captivating musical instruments may be synthesized in ways that allow musically expressive performances to be captured and rendered in real-time. Synthetic musical instruments that provide a game, grading or instructional mode are described in which one or more qualities of a user's performance are assessed relative to a musical score. By providing a range of modes (from score-assisted to fully user-expressive), user interactions with synthetic musical instruments are made more engaging and tend to capture user interest over generally longer periods of time. Synthetic musical instruments are described in which force dynamics of user gestures (such as finger contact forces applied to a multi-touch sensitive display or surface and/or the temporal extent and applied pressure of sustained contact thereon) are captured and drive the digital synthesis in ways that enhance expressiveness of user performances.
    Type: Application
    Filed: September 26, 2016
    Publication date: January 12, 2017
    Inventors: Perry R. Cook, Jeannie Yang, Yar Woo, John Shimmin, Randal Leistikow, Michael Berger, Jeff Smith
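    The grading mode described above assesses "one or more qualities of a user's performance … relative to a musical score." As a purely illustrative sketch (not Smule's actual implementation; all names and the scoring rule are hypothetical), one simple quality metric matches captured note events against score events by onset time and pitch:

    ```python
    # Hypothetical sketch: grade a captured performance against a musical
    # score by matching note onsets and pitches within a tolerance window.

    def grade_performance(score, performance, onset_tol=0.1):
        """score, performance: lists of (onset_seconds, midi_pitch) tuples.
        Returns the fraction of score notes matched within tolerance."""
        matched = 0
        used = set()
        for s_onset, s_pitch in score:
            for i, (p_onset, p_pitch) in enumerate(performance):
                if i in used:
                    continue
                if abs(p_onset - s_onset) <= onset_tol and p_pitch == s_pitch:
                    matched += 1
                    used.add(i)
                    break
        return matched / len(score) if score else 1.0

    score = [(0.0, 60), (0.5, 62), (1.0, 64)]
    performance = [(0.02, 60), (0.55, 62), (1.4, 65)]  # last note late and wrong
    print(grade_performance(score, performance))  # 2 of 3 notes matched
    ```

    A real game or instructional mode would of course weigh more dimensions (timing error magnitude, contact force, sustain), but the score-relative matching step looks broadly like this.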
  • Publication number: 20160358595
    Abstract: Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
    Type: Application
    Filed: June 3, 2016
    Publication date: December 8, 2016
    Inventors: Kevin Sung, Bona Kim, Jon Moldover, John Shimmin, Jeannie Yang, Perry R. Cook
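    The abstract describes a "visual progression that codes a sequence of visual layouts" along the performance timeline. As a hypothetical sketch (section names, part labels, and the data layout are invented for illustration), such a progression can be modeled as time-stamped sections, each selecting which contributors' performance-synchronized video panes are shown:

    ```python
    # Hypothetical sketch: a visual progression mapping timeline sections
    # to window-pane layouts, selecting which singers' panes are visible.
    sections = [
        (0.0, "solo", ["vocal1"]),
        (15.0, "duet", ["vocal1", "vocal2"]),
        (45.0, "chorus", ["vocal1", "vocal2", "vocal3"]),
    ]

    def layout_at(t):
        """Return (layout_name, visible_parts) active at time t seconds."""
        current = sections[0]
        for section in sections:
            if section[0] <= t:
                current = section  # last section whose start time has passed
        return current[1], current[2]

    print(layout_at(20.0))  # ('duet', ['vocal1', 'vocal2'])
    ```

    In the patent's terms, the section boundaries would come from other coded aspects of the performance score (vocal parts, lyrics sections), so layout switches stay synchronized with the music.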
  • Publication number: 20150379974
    Abstract: Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
    Type: Application
    Filed: July 3, 2014
    Publication date: December 31, 2015
    Inventors: Ge Wang, Jeannie Yang, Jieun Oh, Tom Lieber
  • Publication number: 20150120308
    Abstract: In an application that manipulates audio (or audiovisual) content, automated music creation technologies may be employed to generate new musical content using digital signal processing software hosted on handheld and/or server (or cloud-based) compute platforms to intelligently process and combine a set of audio content captured and submitted by users of modern mobile phones or other handheld compute platforms. The user-submitted recordings may contain speech, singing, musical instruments, or a wide variety of other sound sources, and the recordings may optionally be preprocessed by the handheld devices prior to submission.
    Type: Application
    Filed: December 31, 2014
    Publication date: April 30, 2015
    Inventors: Randal Leistikow, Mark Godfrey, Ian S. Simon, Jeannie Yang
  • Publication number: 20140318347
    Abstract: Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
    Type: Application
    Filed: July 3, 2014
    Publication date: October 30, 2014
    Inventors: Ge Wang, Jeannie Yang, Jieun Oh, Tom Lieber
  • Patent number: 8772621
    Abstract: Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
    Type: Grant
    Filed: November 9, 2011
    Date of Patent: July 8, 2014
    Assignee: Smule, Inc.
    Inventors: Ge Wang, Jeannie Yang, Jieun Oh, Tom Lieber
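    The multi-string instrument abstracts above emphasize that user gestures, not the score, drive digital synthesis and audible rendering. One classic way a pluck gesture (string selection plus velocity) can drive a string model is Karplus-Strong synthesis; the sketch below is a generic textbook version, offered only as an illustration of the kind of synthesis such gestures might drive, not as the method claimed in the patent:

    ```python
    # Illustrative Karplus-Strong plucked-string synthesis: a touch gesture
    # supplies the string's frequency and a pluck velocity.
    import random

    def pluck(frequency, duration, sample_rate=44100, velocity=1.0, decay=0.996):
        period = int(sample_rate / frequency)
        # Excite the delay line with noise scaled by pluck velocity.
        buf = [velocity * (2 * random.random() - 1) for _ in range(period)]
        out = []
        for n in range(int(duration * sample_rate)):
            out.append(buf[n % period])
            # Averaging adjacent samples acts as a lossy low-pass filter,
            # so the tone decays like a damped vibrating string.
            buf[n % period] = decay * 0.5 * (buf[n % period] + buf[(n + 1) % period])
        return out

    samples = pluck(440.0, 0.5)  # half a second of an A4 string pluck
    ```

    Sustained-contact gestures (temporal extent, applied pressure) could then modulate parameters such as `decay` over the note's lifetime.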
  • Patent number: 8682653
    Abstract: Techniques have been developed to facilitate the capture of vocal performances on handheld or other portable computing devices and, in some cases, the pitch-correction and mixing of such vocal performances with backing tracks for audible rendering on such devices. Captivating visual animations and/or facilities for listener comment and ranking are provided in association with an audible rendering of a performance, e.g., a vocal performance captured and pitch-corrected at another similarly configured mobile device and mixed with backing instrumentals and/or vocals. Geocoding of captured vocal performances and/or listener feedback may facilitate animations or display artifacts in ways that are suggestive of a performance or endorsement emanating from a particular geographic locale on a user manipulable globe. In this way, implementations of the described functionality can transform otherwise mundane mobile devices into social instruments that foster a unique sense of global connectivity and community.
    Type: Grant
    Filed: September 4, 2010
    Date of Patent: March 25, 2014
    Assignee: Smule, Inc.
    Inventors: Spencer Salazar, Rebecca A. Fiebrink, Ge Wang, Mattias Ljungström, Jeffrey C. Smith, Jeannie Yang
  • Publication number: 20140039883
    Abstract: Vocal musical performances may be captured and, in some cases or embodiments, pitch-corrected and/or processed in accord with a user selectable vocal effects schedule for mixing and rendering with backing tracks in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured on mobile devices in the context of a karaoke-style presentation of lyrics in correspondence with audible renderings of a backing track. Such performances can be pitch-corrected in real-time at the mobile device in accord with pitch correction settings. Vocal effects schedules may also be selectively applied to such performances. In these ways, even amateur user/performers with imperfect pitch are encouraged to take a shot at “stardom” and/or take part in a game play, social network or vocal achievement application architecture that facilitates musical collaboration on a global scale and/or, in some cases or embodiments, to initiate revenue generating in-application transactions.
    Type: Application
    Filed: August 6, 2013
    Publication date: February 6, 2014
    Applicant: Smule, Inc.
    Inventors: Jeannie Yang, Nicholas M. Kruge, Gregory C. Thompson, Perry R. Cook
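    Several of the abstracts above describe real-time pitch correction "in accord with pitch correction settings." As a hypothetical sketch (the scale choice, `strength` parameter, and function name are invented for illustration), the core of such a setting can be modeled as snapping a detected vocal frequency toward the nearest note of a chosen scale:

    ```python
    # Hypothetical sketch: snap a detected frequency toward the nearest
    # in-scale note, with a strength knob for hard vs. partial correction.
    import math

    A4 = 440.0
    MAJOR = {0, 2, 4, 5, 7, 9, 11}  # C-major pitch classes

    def correct_pitch(freq_hz, scale=MAJOR, strength=1.0):
        """Return freq_hz snapped toward the nearest in-scale semitone.
        strength=1.0 is hard correction; lower values correct partially."""
        midi = 69 + 12 * math.log2(freq_hz / A4)
        # Nearest MIDI note whose pitch class belongs to the scale.
        best = min(
            (n for n in range(int(midi) - 2, int(midi) + 3) if n % 12 in scale),
            key=lambda n: abs(n - midi),
        )
        corrected_midi = midi + strength * (best - midi)
        return A4 * 2 ** ((corrected_midi - 69) / 12)

    print(round(correct_pitch(450.0)))  # a slightly sharp A snaps to 440 Hz
    ```

    A production vocal effects schedule would vary such settings (scale, strength, added effects) over the course of the song; this shows only the per-frame correction step.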
  • Publication number: 20120174736
    Abstract: Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
    Type: Application
    Filed: November 9, 2011
    Publication date: July 12, 2012
    Applicant: Smule, Inc.
    Inventors: Ge Wang, Jeannie Yang, Jieun Oh, Tom Lieber
  • Publication number: 20120089390
    Abstract: Vocal musical performances may be captured and pitch corrected and supplied to telephony targets such as conventional voice terminal equipment (telephone handsets, answering machines, etc.), wireless telephony devices and information services wherein particular device or subscriber targets are identifiable using telephone numbers or alphanumeric IDs (e.g., mobile phones with or without text/multimedia messaging support, VoIP terminals, answering or voicemail services, ASP-based telephony services, etc.) and/or telco or premises-based telephony equipment, such as switches, with support for customizable ringback tones.
    Type: Application
    Filed: August 27, 2011
    Publication date: April 12, 2012
    Applicant: Smule, Inc.
    Inventors: Jeannie Yang, Michael Wang
  • Publication number: 20110144983
    Abstract: Techniques have been developed to facilitate the capture of vocal performances on handheld or other portable computing devices and, in some cases, the pitch-correction and mixing of such vocal performances with backing tracks for audible rendering on such devices. Captivating visual animations and/or facilities for listener comment and ranking are provided in association with an audible rendering of a performance, e.g., a vocal performance captured and pitch-corrected at another similarly configured mobile device and mixed with backing instrumentals and/or vocals. Geocoding of captured vocal performances and/or listener feedback may facilitate animations or display artifacts in ways that are suggestive of a performance or endorsement emanating from a particular geographic locale on a user manipulable globe. In this way, implementations of the described functionality can transform otherwise mundane mobile devices into social instruments that foster a unique sense of global connectivity and community.
    Type: Application
    Filed: September 4, 2010
    Publication date: June 16, 2011
    Inventors: Spencer Salazar, Rebecca A. Fiebrink, Ge Wang, Mattias Ljungström, Jeffrey C. Smith, Jeannie Yang