Patents by Inventor Hiroyuki Ehara

Hiroyuki Ehara has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250150772
    Abstract: An acoustic signal processing method includes: obtaining: object information including first position information indicating a position of an object in a virtual space, first sound data indicating a first sound caused by the object, and first identification information indicating a processing method for the first sound data; and second position information indicating a position of a listener; calculating a distance between the object and the listener based on the first position information included in the object information obtained and the second position information obtained; determining, based on the first identification information included in the object information obtained, to process the first sound data using a first processing method for processing a loudness according to the distance calculated or a second processing method for processing the same in a different manner; processing the first sound data using the processing method determined; and outputting the first sound data processed.
    Type: Application
    Filed: January 8, 2025
    Publication date: May 8, 2025
    Inventors: Hikaru USAMI, Tomokazu Ishikawa, Seigo Enomoto, Kota Nakahashi, Hiroyuki Ehara, Mariko Yamada, Shuji Miyasaka
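As a rough illustration of the flow in the abstract above, the sketch below computes the object–listener distance and selects a loudness-processing method from the identification information. All names and both attenuation laws (1/d and 1/√d) are illustrative assumptions, not taken from the patent.

```python
import math

def distance(obj_pos, listener_pos):
    """Euclidean distance between the object and listener positions."""
    return math.dist(obj_pos, listener_pos)

def process_first_method(sample, dist):
    """First method (assumed): inverse-distance loudness attenuation."""
    return sample / max(dist, 1.0)

def process_second_method(sample, dist):
    """Second method (assumed): a gentler 1/sqrt(d) loudness falloff."""
    return sample / math.sqrt(max(dist, 1.0))

def process_sound(sample, obj_pos, listener_pos, identification):
    """Pick the processing method from identification info, then apply it."""
    d = distance(obj_pos, listener_pos)
    if identification == "method1":
        return process_first_method(sample, d)
    return process_second_method(sample, d)
```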
  • Publication number: 20250150771
    Abstract: An information generation method including: obtaining a generation position of a first wind blowing in a virtual space, a first wind direction of the first wind, and a first assumed wind speed which is a speed of the first wind; generating fourth object audio information in which the obtained generation position, first wind direction, and first assumed wind speed are associated; storing aerodynamic sound core information including a representative wind speed and aerodynamic sound data indicating aerodynamic sound generated by wind blowing at the representative wind speed reaching an ear of a listener in the virtual space; and outputting the generated fourth object audio information and the stored aerodynamic sound core information.
    Type: Application
    Filed: January 10, 2025
    Publication date: May 8, 2025
    Inventors: Hikaru USAMI, Tomokazu ISHIKAWA, Seigo ENOMOTO, Kota NAKAHASHI, Hiroyuki EHARA, Mariko YAMADA, Shuji MIYASAKA
  • Publication number: 20250150777
    Abstract: An information generation method includes: obtaining first sound data and first position information, the first sound data indicating a first sound, the first position information indicating a position of an object in the virtual space; and generating, from the obtained first sound data and first position information, first object audio information including (i) information related to the object that reproduces the first sound generated at a position of a listener due to the object, and (ii) the first position information.
    Type: Application
    Filed: January 10, 2025
    Publication date: May 8, 2025
    Inventors: Hikaru USAMI, Tomokazu ISHIKAWA, Seigo ENOMOTO, Kota NAKAHASHI, Hiroyuki EHARA, Mariko YAMADA, Shuji MIYASAKA
  • Publication number: 20250150770
    Abstract: An information generation method includes: obtaining a second wind direction of a second wind blowing in a virtual space and a second assumed wind speed which is a speed of the second wind; generating fifth object audio information in which the obtained second wind direction and second assumed wind speed are associated; storing aerodynamic sound core information including a representative wind speed and aerodynamic sound data indicating aerodynamic sound generated by wind blowing at the representative wind speed reaching an ear of a listener in the virtual space; and outputting the generated fifth object audio information and the stored aerodynamic sound core information.
    Type: Application
    Filed: January 10, 2025
    Publication date: May 8, 2025
    Inventors: Hikaru USAMI, Tomokazu Ishikawa, Seigo Enomoto, Kota Nakahashi, Hiroyuki Ehara, Mariko Yamada, Shuji Miyasaka
  • Publication number: 20250150776
    Abstract: An acoustic signal processing method includes: obtaining first position information indicating a position of an object that is a moving object in a virtual space, and second position information indicating a position of a listener in the virtual space; calculating a moving speed of the object based on the first position information obtained; calculating a distance between the object and the listener based on the first position information obtained and the second position information obtained; generating, based on the moving speed calculated and the distance calculated, an aerodynamic sound signal indicating an aerodynamic sound generated when wind caused by movement of the object reaches an ear of the listener; and outputting the aerodynamic sound signal generated.
    Type: Application
    Filed: January 8, 2025
    Publication date: May 8, 2025
    Inventors: Hikaru USAMI, Tomokazu Ishikawa, Seigo Enomoto, Kota Nakahashi, Hiroyuki Ehara, Mariko Yamada, Shuji Miyasaka
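The abstract above can be pictured with a minimal sketch that derives the object's moving speed from successive sampled positions and turns speed and listener distance into a wind-noise gain. The gain model (speed-proportional, inverse-distance) is an assumption for illustration only.

```python
import math

def speed_from_positions(pos_prev, pos_curr, dt):
    """Moving speed estimated from two sampled positions and the time step."""
    return math.dist(pos_prev, pos_curr) / dt

def aerodynamic_gain(speed, dist, speed_ref=10.0):
    """Assumed wind-noise gain: grows with speed, decays with distance."""
    return (speed / speed_ref) / max(dist, 1.0)
```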
  • Patent number: 12230287
    Abstract: This quantization scale factor determination device is provided with a correction circuit which corrects an initial value of a quantization scale factor on the basis of whether or not an audio signal spectrum is sparse, and a search circuit which searches for a quantization scale factor on the basis of the initial value.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: February 18, 2025
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Akira Harada, Hiroyuki Ehara
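A minimal sketch of the idea in the abstract above: detect whether the spectrum is sparse, and correct the initial scale factor before the search starts. The sparseness measure (fraction of near-zero bins) and the correction offset are illustrative assumptions, not the patented criteria.

```python
def is_sparse(spectrum, threshold=1e-3, ratio=0.5):
    """Assumed sparseness test: most bins are near zero."""
    near_zero = sum(1 for x in spectrum if abs(x) < threshold)
    return near_zero / len(spectrum) > ratio

def corrected_initial_scale(initial, spectrum, offset=2):
    """Shift the starting scale factor for sparse spectra so the
    subsequent search begins closer to the final value."""
    return initial - offset if is_sparse(spectrum) else initial
```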
  • Publication number: 20250039629
    Abstract: A three-dimensional audio processing method for use in reproducing three-dimensional audio using an augmented reality (AR) device includes: obtaining change information indicating change occurring in a space in which the AR device is located when content that includes a sound is being output in the AR device; selecting, based on the change information, one or more audio processes among a plurality of audio processes for rendering sound information indicating the sound; executing only the one or more audio processes selected among the plurality of audio processes; and rendering the sound information based on a first processing result of each of the one or more audio processes executed.
    Type: Application
    Filed: October 8, 2024
    Publication date: January 30, 2025
    Inventors: Mariko YAMADA, Tomokazu ISHIKAWA, Seigo ENOMOTO, Hikaru USAMI, Kota NAKAHASHI, Hiroyuki EHARA, Ko MIZUNO
  • Publication number: 20250031005
    Abstract: An information processing method includes: obtaining a position of a user within the three-dimensional sound field; determining a virtual boundary that includes two or more lattice points surrounding the user, based on the position of the user which has been obtained, the two or more lattice points being among a plurality of lattice points set at predetermined intervals within the three-dimensional sound field; reading propagation characteristics of the sound from the sound source to the two or more lattice points included in the virtual boundary determined; calculating transfer functions of the sound from the two or more lattice points included in the virtual boundary determined to the position of the user; and generating the output sound signal by processing the sound information using the propagation characteristics read and the transfer functions calculated.
    Type: Application
    Filed: October 2, 2024
    Publication date: January 23, 2025
    Inventors: Seigo ENOMOTO, Hikaru USAMI, Kota NAKAHASHI, Hiroyuki EHARA, Mariko YAMADA, Ko MIZUNO, Tomokazu ISHIKAWA
  • Publication number: 20250031006
    Abstract: In an acoustic processing method, (i) sound information related to a sound including a predetermined sound and (ii) metadata including information related to a space in which the predetermined sound is reproduced are obtained; based on the sound information and the metadata, sound image localization enhancement processing of generating a first sound signal expressing a sound including a sound image localization enhancement reflected sound for localization as a sound arriving from a predetermined direction is performed; based on the sound information and the metadata, acoustic processing of generating a second sound signal expressing a sound including a sound other than a direct sound that reaches a user directly from a sound source object is performed; and an output sound signal obtained by compositing the first sound signal and the second sound signal is output.
    Type: Application
    Filed: October 4, 2024
    Publication date: January 23, 2025
    Inventors: Kota NAKAHASHI, Seigo ENOMOTO, Hikaru USAMI, Mariko YAMADA, Hiroyuki EHARA, Ko MIZUNO, Tomokazu ISHIKAWA
  • Publication number: 20250028500
    Abstract: A sound signal processing method includes: obtaining a sound signal; determining, for each of a plurality of sound processes executed in a pipeline, whether to execute the sound process on the sound signal, based on priority information indicating a priority associated with the sound signal; and executing each sound process determined to be executed in the determining, on the sound signal.
    Type: Application
    Filed: October 1, 2024
    Publication date: January 23, 2025
    Inventors: Seigo ENOMOTO, Hikaru USAMI, Kota NAKAHASHI, Hiroyuki EHARA, Mariko YAMADA, Tomokazu ISHIKAWA, Ko MIZUNO
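The priority-gated pipeline in the abstract above might be sketched as follows: each stage declares a minimum priority, and only stages whose threshold the signal's priority meets are executed, in pipeline order. The stage representation and comparison rule are illustrative assumptions.

```python
def run_pipeline(signal, priority, stages):
    """Run a sound-processing pipeline, skipping stages whose minimum
    priority exceeds the priority associated with the signal.

    stages: list of (min_priority, function) pairs, executed in order.
    """
    for min_priority, stage in stages:
        if priority >= min_priority:
            signal = stage(signal)
    return signal
```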
  • Publication number: 20250031007
    Abstract: An acoustic processing method includes: obtaining (i) sound information related to a sound including a predetermined sound and (ii) metadata including information related to a space in which the predetermined sound is reproduced; performing, based on the sound information and the metadata, acoustic processing of generating a sound signal expressing a sound including an early reflection that reaches a user after a direct sound that reaches the user directly from a sound source object; and outputting an output sound signal including the sound signal. The acoustic processing includes: determining parameters for generating the early reflection, the parameters including a position, in the space, of a virtual sound source object that generates the early reflection; and generating the early reflection based on the parameters determined. The parameters include at least a parameter that varies over time according to a predetermined condition.
    Type: Application
    Filed: October 7, 2024
    Publication date: January 23, 2025
    Inventors: Kota NAKAHASHI, Seigo ENOMOTO, Hikaru USAMI, Mariko YAMADA, Hiroyuki EHARA, Ko MIZUNO, Tomokazu ISHIKAWA
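One way to picture a "parameter that varies over time" for the virtual early-reflection source, as in the abstract above, is a slow periodic drift of its position. The drift model below is purely illustrative and not taken from the patent.

```python
import math

def virtual_source_position(base_pos, t, amplitude=0.1, period=2.0):
    """Drift the virtual source position sinusoidally over time t,
    so the generated early reflection varies under a fixed condition."""
    offset = amplitude * math.sin(2.0 * math.pi * t / period)
    return tuple(c + offset for c in base_pos)
```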
  • Publication number: 20250024221
    Abstract: Each of one or more audio objects includes: sound data of a sound emitted from an object that corresponds to the audio object; and metadata that includes position information indicating a position of the object in a virtual sound space. A sound signal processing device includes: a selector that selects an audio object as a conversion target from among the one or more audio objects; and a fluctuation imparter that converts the audio object selected as the conversion target to impart, to the audio object selected, a fluctuation effect of fluctuating a sound emitted from an object that corresponds to the audio object converted when the sound signal is reproduced. The selector does not select an audio object that corresponds to an object whose position is moving in the virtual sound space based on a transition over time of the position information included in the metadata.
    Type: Application
    Filed: September 23, 2024
    Publication date: January 16, 2025
    Inventors: Seigo Enomoto, Hikaru Usami, Kota Nakahashi, Hiroyuki Ehara, Mariko Yamada, Tomokazu Ishikawa, Ko Mizuno
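The selector rule in the abstract above (skip objects whose position changes over time, so the fluctuation effect is applied only to stationary sources) could be sketched as follows; the data layout is an illustrative assumption.

```python
def is_moving(position_history, eps=1e-6):
    """An object counts as moving if any two consecutive sampled
    positions differ beyond a small tolerance."""
    return any(
        any(abs(a - b) > eps for a, b in zip(p0, p1))
        for p0, p1 in zip(position_history, position_history[1:])
    )

def select_fluctuation_targets(objects):
    """objects: dict mapping object name -> list of positions over time.
    Returns the names of stationary objects eligible for the effect."""
    return [name for name, hist in objects.items() if not is_moving(hist)]
```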
  • Publication number: 20250022478
    Abstract: An encoding device comprising: a quantization circuit that generates a quantization parameter that includes information about a vector quantization codebook; and a control circuit that sets the number of available bits according to conditions for encoding based on the difference between the number of bits available for encoding of the target sub-vector and the number of bits for the quantization parameter of the target sub-vector.
    Type: Application
    Filed: October 14, 2022
    Publication date: January 16, 2025
    Applicant: Panasonic Intellectual Property Corporation of America
    Inventors: Srikanth NAGISETTY, Chong Soon LIM, Hiroyuki EHARA, Akira HARADA
  • Patent number: 12062378
    Abstract: This encoding device is provided with a control circuit that, on the basis of information relating to the capability to convert the signal form of a sound signal in a decoding device for decoding encoded data of the sound signal, controls the conversion of the signal form of the sound signal, and an encoding circuit that encodes the sound signal in accordance with the conversion control.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: August 13, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Akira Harada, Hiroyuki Ehara, Toshiaki Sakurai
  • Publication number: 20240177723
    Abstract: An encoding apparatus includes: a quantization circuit that generates a quantization parameter including first information related to a codebook of vector quantization and second information related to code vectors included in the codebook; and a control circuit that determines which one of first encoding of the first information for the target sub-vector and second encoding of a second number of bits based on the difference between an allocated number of bits for vector quantization and the number of bits of the quantization parameter is to be executed, in accordance with the number of bits available for encoding sub-vectors including at least a target sub-vector among a plurality of sub-vectors in the vector quantization.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 30, 2024
    Applicant: Panasonic Intellectual Property Corporation of America
    Inventors: Srikanth NAGISETTY, Chong Soon LIM, Hiroyuki EHARA, Akira HARADA
  • Patent number: 11994605
    Abstract: Provided is a direction of arrival estimation device wherein: a calculation circuit calculates a frequency weighting factor for each of a plurality of frequency components of signals recorded in a microphone array, on the basis of the differences among unit vectors indicating the directions of the sound sources of each of the plurality of frequency components; and an estimation circuit estimates the direction of arrival of a signal from the sound source, on the basis of the frequency weighting factors.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: May 28, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Rohith Mars, Srikanth Nagisetty, Chong Soon Lim, Hiroyuki Ehara, Akihisa Kawamura
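A toy sketch of the weighting idea in the abstract above: weight each frequency component's direction estimate (a unit vector) by its agreement with the consensus direction, then form a weighted average. The specific weighting rule below is an assumption, not the patented formula.

```python
import math

def normalize(v):
    """Scale a 3-D vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def frequency_weights(unit_vectors):
    """Assumed rule: weight is the inverse of each vector's distance
    to the mean direction, so outlier frequencies count less."""
    k = len(unit_vectors)
    mean = normalize(tuple(sum(v[i] for v in unit_vectors) / k for i in range(3)))
    return [1.0 / (1e-6 + math.dist(v, mean)) for v in unit_vectors]

def estimate_doa(unit_vectors):
    """Weighted average of per-frequency direction estimates."""
    w = frequency_weights(unit_vectors)
    total = sum(w)
    return normalize(tuple(
        sum(wi * v[i] for wi, v in zip(w, unit_vectors)) / total
        for i in range(3)
    ))
```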
  • Publication number: 20240127830
    Abstract: This encoding device comprises: a downmix circuit that switches mixing processing according to the characteristic of an input stereo signal to generate either a first stereo signal or a second stereo signal obtained by mixing processing of a left channel signal and a right channel signal; a first encoding circuit that encodes the first stereo signal; and a second encoding circuit that encodes two signals included in the second stereo signal. The second encoding circuit performs monaural encoding on the basis of the encoding mode of the first encoding circuit in a first section in which switching from the first stereo signal to the second stereo signal is performed and/or a second section in which switching from the second stereo signal to the first stereo signal is performed.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 18, 2024
    Applicant: Panasonic Intellectual Property Corporation of America
    Inventors: Yuichi KAMIYA, Takuya KAWASHIMA, Akira HARADA, Hiroyuki EHARA
  • Publication number: 20240064483
    Abstract: This signal processing device is provided with a detection circuit for detecting a temporal variation in a time difference between channels of a stereo signal, and a control circuit for controlling the degree of smoothing of an inter-channel cross correlation function on the basis of the temporal variation in the time difference between the channels.
    Type: Application
    Filed: October 15, 2021
    Publication date: February 22, 2024
    Applicant: Panasonic Intellectual Property Corporation of America
    Inventors: Akira HARADA, Hiroyuki EHARA
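The control idea in the abstract above can be sketched as an exponential smoother whose coefficient shrinks when the inter-channel time difference fluctuates: a stable delay gets heavy smoothing, a fluctuating delay gets light smoothing. The mapping below is an illustrative assumption.

```python
def smoothing_coefficient(delay_variation, max_alpha=0.9):
    """Assumed mapping: smaller temporal variation of the channel time
    difference yields a larger smoothing coefficient."""
    return max_alpha / (1.0 + delay_variation)

def smooth_correlation(prev, curr, alpha):
    """First-order recursive smoothing of a cross-correlation vector."""
    return [alpha * p + (1.0 - alpha) * c for p, c in zip(prev, curr)]
```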
  • Publication number: 20230306978
    Abstract: A coding apparatus includes: a first coding circuit that codes an input signal selectively using coding in a time domain or a frequency domain according to the characteristic of the input signal in a core layer; and a second coding circuit that codes an error in coding by the first coding circuit using a coding method corresponding to the domain type of coding used in the core layer in an extension layer for the core layer.
    Type: Application
    Filed: April 22, 2021
    Publication date: September 28, 2023
    Applicant: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Yuichi KAMIYA, Takuya KAWASHIMA, Hiroyuki EHARA, Akira HARADA
  • Publication number: 20230238012
    Abstract: An encoding device is provided with: a quantizing circuit which generates quantization parameters including first information on a vector quantization codebook, and second information on code vectors included in the codebook; and a control circuit which employs the second number of bits based on the difference between the first number of bits available for encoding of a sub-vector in the vector quantization, and the number of bits for the sub-vector quantization parameters, to control encoding of the first information with respect to the sub-vector.
    Type: Application
    Filed: April 22, 2021
    Publication date: July 27, 2023
    Applicant: Panasonic Intellectual Property Corporation of America
    Inventors: Srikanth NAGISETTY, Hiroyuki EHARA, Akira HARADA, Chong Soon LIM
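The bit-budget relation in the abstract above could be sketched as taking the surplus between the bits available for vector quantization of a sub-vector and the bits consumed by its quantization parameters; the function below is an illustrative reading, not the claimed method.

```python
def second_bit_count(available_bits, parameter_bits):
    """Assumed rule: the second number of bits is the non-negative
    surplus left after the sub-vector's quantization parameters."""
    return max(available_bits - parameter_bits, 0)
```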