Patents by Inventor Takuya Yoshioka

Takuya Yoshioka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220010414
    Abstract: A high-strength member having excellent delayed fracture resistance, a method for manufacturing the high-strength member, and a method for manufacturing a steel sheet for the high-strength member. The high-strength member has a bent ridge portion obtained by using a steel sheet having a tensile strength of 1470 MPa or more, an edge surface of the bent ridge portion has a residual stress of 800 MPa or less, and the longest crack among cracks that extend from the edge surface of the bent ridge portion in a bent ridge direction D1 has a length of 10 μm or less.
    Type: Application
    Filed: September 25, 2019
    Publication date: January 13, 2022
    Applicant: JFE STEEL CORPORATION
    Inventors: Takuya HIRASHIMA, Shimpei YOSHIOKA, Shinjiro KANEKO
  • Patent number: 11220529
    Abstract: A method of producing a transgenic silkworm that spins bagworm silks and producing a large quantity of bagworm silks by transgenic technology is developed and provided. A gene encoding a modified bagworm Fib H and a transgenic silkworm in which the gene is introduced, wherein the gene is obtained by cloning a gene fragment encoding a bagworm Fib H-like polypeptide comprising a partial amino acid sequence of bagworm Fib H, and fusing the gene fragment to a gene fragment encoding silkworm-derived Fib H, are provided.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: January 11, 2022
    Assignee: NATIONAL AGRICULTURE AND FOOD RESEARCH ORGANIZATION
    Inventors: Naoyuki Yonemura, Tetsuya Iizuka, Kenichi Nakajima, Takuya Tsubota, Takao Suzuki, Hideki Sezutsu, Tsunenori Kameda, Taiyo Yoshioka
  • Publication number: 20220002827
    Abstract: A high-strength steel sheet having high delayed fracture resistance and a method for manufacturing the high-strength steel sheet. The high-strength steel sheet has a specified chemical composition. Relative to the whole microstructure of the steel sheet, the total area fraction of at least one of (i) bainite containing carbide grains having an average grain size of 50 nm or less and (ii) martensite containing carbide grains having an average grain size of 50 nm or less is 90% or more. The average number of inclusions having an average grain size of 5 μm or more that are present in a section of the steel sheet perpendicular to a rolling direction is 5.0/mm² or less.
    Type: Application
    Filed: September 25, 2019
    Publication date: January 6, 2022
    Applicant: JFE STEEL CORPORATION
    Inventors: Takuya HIRASHIMA, Shimpei YOSHIOKA, Shinjiro KANEKO
  • Publication number: 20210407516
    Abstract: A computer implemented method includes receiving audio signals representative of speech via multiple audio streams transmitted from corresponding multiple distributed devices, performing, via a neural network model, continuous speech separation for one or more of the received audio signals having overlapped speech, and providing the separated speech on a fixed number of separate output audio channels.
    Type: Application
    Filed: September 13, 2021
    Publication date: December 30, 2021
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20210309776
    Abstract: Provided is a fluorinated polymer that can impart excellent washing durability and water- and oil-repellency to fibers, said fluorinated polymer having a repeating unit derived from a fluorinated monomer (a) that comprises a first fluorinated monomer (a1) represented by the formula: CH2=C(—X1)—C(=O)—Y1—Z1—Rf1 [wherein X1 represents a halogen atom; Y1 represents —O— or —NH—; Z1 represents a direct bond or a bivalent organic group; and Rf1 represents a fluoroalkyl group having 1 to 20 carbon atoms] and a second fluorinated monomer (a2) represented by the formula: CH2=C(—X2)—C(=O)—Y2—Z2—Rf2 [wherein X2 represents a monovalent organic group or a hydrogen atom; Y2 represents —O— or —NH—; Z2 represents a direct bond or a bivalent organic group; and Rf2 represents a fluoroalkyl group having 1 to 20 carbon atoms].
    Type: Application
    Filed: June 21, 2021
    Publication date: October 7, 2021
    Applicant: DAIKIN INDUSTRIES, LTD.
    Inventors: Shinichi MINAMI, Masaki Fukumori, Takashi Enomoto, Takuya Yoshioka, Ikuo Yamamoto, Bin Zhou, Min Zhu
  • Patent number: 11138980
    Abstract: A computer implemented method includes receiving audio signals representative of speech via multiple audio streams transmitted from corresponding multiple distributed devices, performing, via a neural network model, continuous speech separation for one or more of the received audio signals having overlapped speech, and providing the separated speech on a fixed number of separate output audio channels.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 5, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
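The separation idea described in the abstract above, mapping separated speech onto a fixed number of output channels, can be illustrated with a toy sketch. This is a hypothetical example, not the patented neural model: two sources occupying disjoint frequency bands stand in for overlapped speakers, and hand-built complementary spectral masks stand in for masks a neural network would estimate.

```python
import numpy as np

# Mix two "speakers": tones in disjoint frequency bands.
fs = 8000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 200 * t)    # "speaker 1": 200 Hz tone
high = np.sin(2 * np.pi * 2000 * t)  # "speaker 2": 2 kHz tone
mixture = low + high

# Apply complementary spectral masks (stand-ins for learned masks) and
# route each masked estimate to one of a fixed number of output channels.
spec = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), d=1 / fs)
mask_low = (freqs < 1000).astype(float)
mask_high = 1.0 - mask_low           # masks sum to one in every bin

channels = [np.fft.irfft(spec * m, n=len(mixture)) for m in (mask_low, mask_high)]

# Each output channel should closely match one original source.
err1 = np.max(np.abs(channels[0] - low))
err2 = np.max(np.abs(channels[1] - high))
```

Because the toy sources sit in disjoint bands, the masked reconstruction is exact up to floating-point error; real overlapped speech shares time-frequency bins, which is why the patent relies on a trained separation model rather than fixed masks.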
  • Publication number: 20210193161
    Abstract: To begin with, an acoustic model training apparatus extracts speech features representing speech characteristics, and calculates an acoustic-condition feature representing a feature of an acoustic condition of the speech data using an acoustic-condition calculation model that is represented as a neural network, based on an acoustic-condition calculation model parameter characterizing the acoustic-condition calculation model. The acoustic model training apparatus then generates an adjusted parameter that is an acoustic model parameter adjusted based on the acoustic-condition feature, the acoustic model parameter characterizing an acoustic model represented as a neural network to which an output layer of the acoustic-condition calculation model is coupled. The acoustic model training apparatus then updates the acoustic model parameter based on the adjusted parameter and the speech features, and updates the acoustic-condition calculation model parameters based on the adjusted parameter and the speech features.
    Type: Application
    Filed: January 26, 2017
    Publication date: June 24, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Marc DELCROIX, Keisuke KINOSHITA, Atsunori OGAWA, Takuya YOSHIOKA, Tomohiro NAKATANI
  • Patent number: 11023690
    Abstract: Systems and methods for providing customized output based on a user preference in a distributed system are provided. In example embodiments, a meeting server or system receives audio streams from a plurality of distributed devices involved in an intelligent meeting. The meeting system identifies a user corresponding to a distributed device of the plurality of distributed devices and determines a preferred language of the user. A transcript from the received audio streams is generated. The meeting system translates the transcript into the preferred language of the user to form a translated transcript. The translated transcript is provided to the distributed device of the user.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Patent number: 10957337
    Abstract: This document relates to separation of audio signals into speaker-specific signals. One example obtains features reflecting mixed speech signals captured by multiple microphones. The features can be input to a neural network, and masks can be obtained from the neural network. The masks can be applied to one or more of the mixed speech signals captured by one or more of the microphones to obtain two or more separate speaker-specific speech signals, which can then be output.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zhuo Chen, Hakan Erdogan, Takuya Yoshioka, Fileno A. Alleva, Xiong Xiao
  • Publication number: 20210076129
    Abstract: A system and method include reception of a first plurality of audio signals, generation of a second plurality of beamformed audio signals based on the first plurality of audio signals, each of the second plurality of beamformed audio signals associated with a respective one of a second plurality of beamformer directions, generation of a first TF mask for a first output channel based on the first plurality of audio signals, determination of a first beamformer direction associated with a first target sound source based on the first TF mask, generation of first features based on the first beamformer direction and the first plurality of audio signals, determination of a second TF mask based on the first features, and application of the second TF mask to one of the second plurality of beamformed audio signals associated with the first beamformer direction.
    Type: Application
    Filed: November 17, 2020
    Publication date: March 11, 2021
    Inventors: Zhuo CHEN, Changliang LIU, Takuya YOSHIOKA, Xiong XIAO, Hakan ERDOGAN, Dimitrios Basile DIMITRIADIS
  • Patent number: 10856076
    Abstract: A system and method include reception of a first plurality of audio signals, generation of a second plurality of beamformed audio signals based on the first plurality of audio signals, each of the second plurality of beamformed audio signals associated with a respective one of a second plurality of beamformer directions, generation of a first TF mask for a first output channel based on the first plurality of audio signals, determination of a first beamformer direction associated with a first target sound source based on the first TF mask, generation of first features based on the first beamformer direction and the first plurality of audio signals, determination of a second TF mask based on the first features, and application of the second TF mask to one of the second plurality of beamformed audio signals associated with the first beamformer direction.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: December 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Zhuo Chen, Changliang Liu, Takuya Yoshioka, Xiong Xiao, Hakan Erdogan, Dimitrios Basile Dimitriadis
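The beamformer-direction idea in the abstract above can be sketched in miniature. The following is a hypothetical narrowband delay-and-sum example, not the patented TF-mask system: a 4-microphone linear array receives a 2 kHz tone from 30 degrees, and steering over candidate directions and picking the highest output power recovers the source direction (all positions, frequencies, and angles are invented for illustration).

```python
import numpy as np

fs, f0, c = 16000, 2000.0, 343.0
mic_x = np.array([0.0, 0.05, 0.10, 0.15])  # mic positions along the array (m)
true_angle = np.deg2rad(30.0)

# Complex (analytic) tone at each mic, delayed by its arrival time.
t = np.arange(1024) / fs
arrival = mic_x * np.sin(true_angle) / c
mics = np.exp(2j * np.pi * f0 * (t[None, :] - arrival[:, None]))

def beam_power(angle):
    """Delay-and-sum for a narrowband signal: a steering delay is a phase
    shift, so phase-align the channels, average them, and return the power."""
    steer = np.exp(2j * np.pi * f0 * mic_x * np.sin(angle) / c)
    return np.mean(np.abs(np.mean(steer[:, None] * mics, axis=0)) ** 2)

candidates = np.deg2rad(np.arange(-60.0, 61.0, 10.0))
powers = [beam_power(a) for a in candidates]
best = float(np.rad2deg(candidates[int(np.argmax(powers))]))
```

At the correct steering angle all channels align coherently (unit output power); mismatched angles leave residual phase spread and lower power, which is what makes the argmax informative.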
  • Patent number: 10839822
    Abstract: Representative embodiments disclose mechanisms to separate and recognize multiple audio sources (e.g., picking out individual speakers) in an environment where they overlap and interfere with each other. The architecture uses a microphone array to spatially separate out the audio signals. The spatially filtered signals are then input into a plurality of separators, so that each signal is input into a corresponding separator. The separators use neural networks to separate out audio sources. The separators typically produce multiple output signals for each input signal. A post selection processor then assesses the separator outputs to pick the signals with the highest quality output. These signals can be used in a variety of systems such as speech recognition, meeting transcription and enhancement, hearing aids, music information retrieval, speech enhancement and so forth.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: November 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zhuo Chen, Jinyu Li, Xiong Xiao, Takuya Yoshioka, Huaming Wang, Zhenghao Wang, Yifan Gong
  • Publication number: 20200349954
    Abstract: A computer implemented method includes receiving audio signals representative of speech via multiple audio streams transmitted from corresponding multiple distributed devices, performing, via a neural network model, continuous speech separation for one or more of the received audio signals having overlapped speech, and providing the separated speech on a fixed number of separate output audio channels.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20200349230
    Abstract: Systems and methods for providing customized output based on a user preference in a distributed system are provided. In example embodiments, a meeting server or system receives audio streams from a plurality of distributed devices involved in an intelligent meeting. The meeting system identifies a user corresponding to a distributed device of the plurality of distributed devices and determines a preferred language of the user. A transcript from the received audio streams is generated. The meeting system translates the transcript into the preferred language of the user to form a translated transcript. The translated transcript is provided to the distributed device of the user.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20200349953
    Abstract: A computer implemented method includes receiving information streams on a meeting server from a set of multiple distributed devices included in a meeting, receiving audio signals representative of speech by at least two users in at least two of the information streams, receiving at least one video signal of at least one user in the information streams, associating a specific user with speech in the received audio signals as a function of the received audio and video signals, and generating a transcript of the meeting with an indication of the specific user associated with the speech.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Lijuan Qin, Nanshan Zeng, Dimitrios Basile Dimitriadis, Zhuo Chen, Andreas Stolcke, Takuya Yoshioka, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20200351603
    Abstract: A computer implemented method includes receiving multiple channels of audio from three or more microphones detecting speech from a meeting of multiple users, localizing speech sources to determine an approximate direction of arrival of speech from a user, using a speech unmixing model to select two channels corresponding to a primary and a secondary microphone, and sending the two selected channels to a meeting server for generation of a speaker attributed meeting transcript.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: William Isaac Hinthorn, Lijuan Qin, Nanshan Zeng, Dimitrios Basile Dimitriadis, Zhuo Chen, Andreas Stolcke, Takuya Yoshioka, Xuedong Huang
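The channel-selection step described above, choosing a primary and a secondary microphone from several channels, can be sketched with a simple energy criterion. This is a hypothetical stand-in for the patented speech unmixing model: microphones nearer the talker capture more speech energy, and the two strongest channels are forwarded.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1600) / 16000.0
speech = np.sin(2 * np.pi * 300 * t)  # stand-in for a talker's speech

# Simulate 4 microphones at different distances from the talker: nearer
# microphones pick up the speech more strongly; all add a little noise.
gains = np.array([0.2, 1.0, 0.6, 0.1])
channels = gains[:, None] * speech + 0.01 * rng.standard_normal((4, t.size))

# Pick the two highest-energy channels as primary and secondary.
energy = np.sum(channels ** 2, axis=1)
primary, secondary = np.argsort(energy)[::-1][:2]
```

Energy ranking is a crude proxy; the patent instead localizes speech sources by direction of arrival and applies a learned unmixing model before selecting channels.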
  • Publication number: 20200349950
    Abstract: A computer implemented method processes audio streams recorded during a meeting by a plurality of distributed devices.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20200349949
    Abstract: A computer implemented method includes receiving audio streams at a meeting server from two distributed devices that are streaming audio captured during an ad-hoc meeting between at least two users, comparing the received audio streams to determine that the received audio streams are representative of sound from the ad-hoc meeting, generating a meeting instance to process the audio streams in response to the comparing determining that the audio streams are representative of sound from the ad-hoc meeting, and processing the received audio streams to generate a transcript of the ad-hoc meeting.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
  • Publication number: 20200335119
    Abstract: Embodiments are associated with determination of a first plurality of multi-dimensional vectors, each of the first plurality of multi-dimensional vectors representing speech of a target speaker, determination of a multi-dimensional vector representing a speech signal of two or more speakers, determination of a weighted vector representing speech of the target speaker based on the first plurality of multi-dimensional vectors and on similarities between the multi-dimensional vector and each of the first plurality of multi-dimensional vectors, and extraction of speech of the target speaker from the speech signal based on the weighted vector and the speech signal.
    Type: Application
    Filed: June 7, 2019
    Publication date: October 22, 2020
    Inventors: Xiong XIAO, Zhuo CHEN, Takuya YOSHIOKA, Changliang LIU, Hakan ERDOGAN, Dimitrios Basile DIMITRIADIS, Yifan GONG, James Garnet Droppo, III
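The weighted-vector construction in the abstract above can be illustrated with a small sketch. All dimensions and values here are invented for illustration: several enrollment vectors for the target speaker are combined using softmax-normalized dot-product similarity to a vector computed from the mixed signal, an assumed similarity measure since the abstract does not specify one.

```python
import numpy as np

def weighted_speaker_vector(profiles, mixture_vec):
    """Weight each target-speaker profile vector by its (softmax-normalized)
    dot-product similarity to the mixture vector, then combine them."""
    sims = profiles @ mixture_vec              # similarity per profile vector
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()                   # softmax over profiles
    return weights @ profiles                  # similarity-weighted combination

# Three toy 2-D profile vectors for the target speaker, and a mixture-derived
# vector that resembles the first and third profiles.
profiles = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
mixture_vec = np.array([1.0, 0.0])
v = weighted_speaker_vector(profiles, mixture_vec)
```

The resulting vector leans toward the profiles that best match the current mixture, which is the property the extraction stage would then exploit.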
  • Patent number: 10812921
    Abstract: A computer implemented method includes receiving multiple channels of audio from three or more microphones detecting speech from a meeting of multiple users, localizing speech sources to determine an approximate direction of arrival of speech from a user, using a speech unmixing model to select two channels corresponding to a primary and a secondary microphone, and sending the two selected channels to a meeting server for generation of a speaker attributed meeting transcript.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 20, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: William Isaac Hinthorn, Lijuan Qin, Nanshan Zeng, Dimitrios Basile Dimitriadis, Zhuo Chen, Andreas Stolcke, Takuya Yoshioka, Xuedong Huang