Patents by Inventor Yuancheng Luo

Yuancheng Luo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11830471
    Abstract: Disclosed are techniques for performing ray-based acoustic modeling that models scattering of acoustic waves by the surface of a device. The acoustic modeling uses two parameters: a room response representing the acoustics and geometry of a room, and a device response representing the acoustics and geometry of the device. The room response is determined using ray-based acoustic modeling, such as ray tracing. The device response can be measured in an actual environment or simulated, and represents an acoustic response of the device to individual acoustic plane waves. The device applies a superposition of the room response and the plane-wave scattering from the device response to determine acoustic pressure values and generate microphone audio data. The device can estimate room impulse response (RIR) data using data from the microphones, and can use the RIR data to perform audio processing such as sound equalization, acoustic echo cancellation, audio beamforming, and the like (a code sketch of this superposition appears after this listing).
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 28, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Mohamed Mansour, Wontak Kim, Yuancheng Luo
  • Patent number: 11128953
    Abstract: A system configured to improve spatial coverage of output audio and the corresponding user experience by applying upmixing and loudspeaker beamforming to stereo input signals. The system can upmix the stereo (e.g., two-channel) input signal to extract a center channel and generate three-channel audio data. The system may then apply loudspeaker beamforming to the three-channel audio data to enable two loudspeakers to generate output audio having three distinct beams. The user may interpret the three distinct beams as originating from three separate locations, resulting in the user perceiving a wide virtual sound stage despite the loudspeakers being spaced close together on the device (a code sketch of this upmix-and-beamform pipeline appears after this listing).
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: September 21, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Yuancheng Luo, Wontak Kim, Mihir Dhananjay Shetye
  • Publication number: 20210120332
    Abstract: A system configured to improve spatial coverage of output audio and the corresponding user experience by applying upmixing and loudspeaker beamforming to stereo input signals. The system can upmix the stereo (e.g., two-channel) input signal to extract a center channel and generate three-channel audio data. The system may then apply loudspeaker beamforming to the three-channel audio data to enable two loudspeakers to generate output audio having three distinct beams. The user may interpret the three distinct beams as originating from three separate locations, resulting in the user perceiving a wide virtual sound stage despite the loudspeakers being spaced close together on the device.
    Type: Application
    Filed: August 25, 2020
    Publication date: April 22, 2021
    Inventors: Yuancheng Luo, Wontak Kim, Mihir Dhananjay Shetye
  • Patent number: 10764676
    Abstract: A system configured to improve spatial coverage of output audio and the corresponding user experience by applying upmixing and loudspeaker beamforming to stereo input signals. The system can upmix the stereo (e.g., two-channel) input signal to extract a center channel and generate three-channel audio data. The system may then apply loudspeaker beamforming to the three-channel audio data to enable two loudspeakers to generate output audio having three distinct beams. The user may interpret the three distinct beams as originating from three separate locations, resulting in the user perceiving a wide virtual sound stage despite the loudspeakers being spaced close together on the device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: September 1, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Yuancheng Luo, Wontak Kim, Mihir Dhananjay Shetye
  • Patent number: 10237676
    Abstract: This application describes methods of signal processing and spatial audio synthesis. One such method includes accepting an auditory signal and generating an impression of auditory virtual reality by processing the auditory signal to impart a spatial characteristic to it via convolution with a plurality of head-related impulse responses. The processing is performed in a series of steps, including performing a first convolution of the auditory signal with a characteristic-independent, mixed-sign filter and performing a second convolution of the result of the first convolution with a characteristic-dependent, sparse, non-negative filter. In some described methods, the first convolution can be pre-computed and the second convolution can be performed in real time, thereby reducing the computational complexity of these methods of signal processing and spatial audio synthesis (a code sketch of this two-stage convolution appears after this listing).
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: March 19, 2019
    Assignee: University of Maryland, College Park
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
  • Publication number: 20180367936
    Abstract: This application describes methods of signal processing and spatial audio synthesis. One such method includes accepting an auditory signal and generating an impression of auditory virtual reality by processing the auditory signal to impart a spatial characteristic to it via convolution with a plurality of head-related impulse responses. The processing is performed in a series of steps, including performing a first convolution of the auditory signal with a characteristic-independent, mixed-sign filter and performing a second convolution of the result of the first convolution with a characteristic-dependent, sparse, non-negative filter. In some described methods, the first convolution can be pre-computed and the second convolution can be performed in real time, thereby reducing the computational complexity of these methods of signal processing and spatial audio synthesis.
    Type: Application
    Filed: June 15, 2018
    Publication date: December 20, 2018
    Applicant: University of Maryland, College Park
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
  • Patent number: 10015616
    Abstract: This application describes methods of signal processing and spatial audio synthesis. One such method includes accepting an auditory signal and generating an impression of auditory virtual reality by processing the auditory signal to impart a spatial characteristic to it via convolution with a plurality of head-related impulse responses. The processing is performed in a series of steps, including performing a first convolution of the auditory signal with a characteristic-independent, mixed-sign filter and performing a second convolution of the result of the first convolution with a characteristic-dependent, sparse, non-negative filter. In some described methods, the first convolution can be pre-computed and the second convolution can be performed in real time, thereby reducing the computational complexity of these methods of signal processing and spatial audio synthesis.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: July 3, 2018
    Assignee: University of Maryland, College Park
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
  • Patent number: 9681250
    Abstract: A system for generating and outputting three-dimensional audio data using head-related transfer functions (HRTFs) includes a processor configured to perform operations comprising: using a collection of previously measured HRTFs for audio signals corresponding to multiple directions for at least one subject; performing non-parametric Gaussian process hyper-parameter training on the collection of previously measured HRTFs to generate one or more predicted HRTFs that are different from the previously measured HRTFs; and generating and outputting three-dimensional audio data based on at least the one or more predicted HRTFs (a code sketch of this prediction step appears after this listing).
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: June 13, 2017
    Assignee: University of Maryland, College Park
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
  • Publication number: 20150358755
    Abstract: This application describes methods of signal processing and spatial audio synthesis. One such method includes accepting an auditory signal and generating an impression of auditory virtual reality by processing the auditory signal to impart a spatial characteristic to it via convolution with a plurality of head-related impulse responses. The processing is performed in a series of steps, including performing a first convolution of the auditory signal with a characteristic-independent, mixed-sign filter and performing a second convolution of the result of the first convolution with a characteristic-dependent, sparse, non-negative filter. In some described methods, the first convolution can be pre-computed and the second convolution can be performed in real time, thereby reducing the computational complexity of these methods of signal processing and spatial audio synthesis.
    Type: Application
    Filed: June 8, 2015
    Publication date: December 10, 2015
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
  • Publication number: 20150055783
    Abstract: A system is disclosed for statistical modelling, interpolation, and user-feedback-based inference of head-related transfer functions (HRTFs); the system includes a processor performing operations that include using a collection of previously measured head-related transfer functions for audio signals corresponding to multiple directions for at least one subject, and performing Gaussian process hyper-parameter training on the collection of audio signals. A method is also disclosed for statistical modelling, interpolation, and measurement- and anthropometry-based prediction of head-related transfer functions for a virtual audio system; the method includes collecting audio signals in a transform domain for at least one subject, applying head-related transfer function (HRTF) measurement directions in multiple directions to the collected audio signals, and performing Gaussian process hyper-parameter training on the collection of audio signals to generate at least one predicted HRTF.
    Type: Application
    Filed: May 27, 2014
    Publication date: February 26, 2015
    Inventors: Yuancheng Luo, Ramani Duraiswami, Dmitry N. Zotkin
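
The abstracts above describe the techniques only at a high level, so the following sketches are purely illustrative and are not the patented implementations. For the room/device superposition of patent 11830471, here is a minimal sketch: the ray tuple format, the shape of the device-response array, and the simulate_mic_rir helper are assumptions made for illustration, and the random placeholder data stands in for a real ray tracer and a measured or simulated device response.

```python
import numpy as np

def simulate_mic_rir(rays, device_response, fs, rir_len):
    """Superpose ray-traced room arrivals with per-direction device responses."""
    n_dirs, n_mics, ir_len = device_response.shape
    rir = np.zeros((n_mics, rir_len))
    for delay_s, amplitude, d in rays:
        start = int(round(delay_s * fs))   # ray arrival time in samples
        if start >= rir_len:
            continue                       # arrives after the modeled window
        stop = min(rir_len, start + ir_len)
        # Scaled, delayed copy of the device's plane-wave scattering response
        # for this ray's arrival direction, added by superposition.
        rir[:, start:stop] += amplitude * device_response[d, :, : stop - start]
    return rir

# Toy usage with made-up numbers: 8 plane-wave directions, 2 microphones.
fs = 16000
device_response = 0.01 * np.random.randn(8, 2, 64)
rays = [(0.005, 1.0, 3), (0.012, 0.4, 6), (0.020, 0.2, 1)]   # (delay s, amplitude, direction)
rir = simulate_mic_rir(rays, device_response, fs, rir_len=4000)
print(rir.shape)   # (2, 4000)
```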
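
For the upmix-and-beamform pipeline of patents 11128953 and 10764676 (publication 20210120332), the rough sketch below assumes a plain mid/side split as the center extraction and a simple delay-and-gain steerer in place of the real loudspeaker beamforming filters; the driver spacing and beam angles are illustrative values, not parameters from the patents.

```python
import numpy as np

def upmix_and_beamform(left, right, fs, spacing_m=0.08, speed_of_sound=343.0):
    """Extract a center channel, then steer three channels through two drivers."""
    # Upmix: a plain mid/side split as a stand-in for real center extraction.
    center = 0.5 * (left + right)
    side_left = left - center
    side_right = right - center

    def steer(signal, angle_deg):
        # Delay one driver relative to the other to aim a beam at angle_deg.
        tau = spacing_m * np.sin(np.deg2rad(angle_deg)) / speed_of_sound
        shift = int(round(abs(tau) * fs))
        delayed = np.concatenate([np.zeros(shift), signal])[: len(signal)]
        return (delayed, signal) if tau >= 0 else (signal, delayed)

    driver1 = np.zeros_like(left)
    driver2 = np.zeros_like(left)
    for channel, angle in ((side_left, -45.0), (center, 0.0), (side_right, 45.0)):
        d1, d2 = steer(channel, angle)
        driver1 += d1
        driver2 += d2
    return driver1, driver2

# Toy usage: one second of two different tones as a "stereo" input.
fs = 16000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440.0 * t)
right = np.sin(2 * np.pi * 550.0 * t)
driver1, driver2 = upmix_and_beamform(left, right, fs)
print(driver1.shape, driver2.shape)
```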
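
For the two-stage convolution of patents 10237676 and 10015616 (publications 20180367936 and 20150358755), the sketch below covers only the rendering side: it assumes each head-related impulse response factors approximately as a short, sparse, non-negative filter composed with a shared mixed-sign filter, so the expensive convolution can be pre-computed once while the cheap sparse convolution runs per direction. The filters here are random placeholders; fitting such a factorization is not shown.

```python
import numpy as np

def render_direction(x, shared_filter, sparse_filter):
    """Two-stage convolution: shared mixed-sign filter, then sparse non-negative filter."""
    # Stage 1: direction-independent convolution (could be pre-computed once).
    stage1 = np.convolve(x, shared_filter)
    # Stage 2: direction-dependent sparse convolution, touching only non-zero taps.
    out = np.zeros(len(stage1) + len(sparse_filter) - 1)
    for k in np.flatnonzero(sparse_filter):
        out[k : k + len(stage1)] += sparse_filter[k] * stage1
    return out

# Toy usage with random placeholder filters.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)             # input audio block
shared = rng.standard_normal(64)          # mixed-sign, shared across directions
sparse = np.zeros(32)
sparse[[2, 7, 19]] = [0.9, 0.3, 0.1]      # sparse, non-negative, per direction
y = render_direction(x, shared, sparse)
# Matches a single convolution with the composed filter (convolution is associative).
assert np.allclose(y, np.convolve(x, np.convolve(shared, sparse)))
print(len(y))
```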
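
For the Gaussian-process HRTF prediction of patent 9681250 and publication 20150055783, the sketch below interpolates a made-up HRTF magnitude across azimuth with a squared-exponential kernel and treats a crude grid search over the kernel lengthscale as the "hyper-parameter training" step; a real pipeline would use a spherical direction parameterization, richer kernels, and proper marginal-likelihood optimization over all hyper-parameters.

```python
import numpy as np

def sq_exp_kernel(a, b, lengthscale, signal_var):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def log_marginal_likelihood(x, y, lengthscale, signal_var, noise_var):
    """Standard GP log marginal likelihood, used here to score hyper-parameters."""
    K = sq_exp_kernel(x, x, lengthscale, signal_var) + noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * len(x) * np.log(2 * np.pi)

def gp_predict(x, y, x_star, lengthscale, signal_var, noise_var):
    """Posterior mean at new directions given measured values."""
    K = sq_exp_kernel(x, x, lengthscale, signal_var) + noise_var * np.eye(len(x))
    k_star = sq_exp_kernel(x_star, x, lengthscale, signal_var)
    return k_star @ np.linalg.solve(K, y)

# Toy data: made-up "HRTF magnitudes" at one frequency bin versus azimuth.
azimuths = np.deg2rad(np.array([-90.0, -45.0, 0.0, 45.0, 90.0]))
magnitudes = np.array([0.2, 0.6, 1.0, 0.7, 0.3])

# "Hyper-parameter training": crude grid search maximizing the marginal likelihood.
grid = np.linspace(0.1, 2.0, 40)
best_ls = max(grid, key=lambda ls: log_marginal_likelihood(azimuths, magnitudes, ls, 1.0, 1e-3))

# Predict (interpolate) the response at directions that were never measured.
query = np.deg2rad(np.linspace(-90.0, 90.0, 19))
predicted = gp_predict(azimuths, magnitudes, query, best_ls, 1.0, 1e-3)
print(f"lengthscale={best_ls:.2f}")
print(np.round(predicted, 2))
```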