Patents by Inventor Zhifeng Chen
Zhifeng Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200380952
Abstract: A method includes receiving an input text sequence to be synthesized into speech in a first language and obtaining a speaker embedding, the speaker embedding specifying specific voice characteristics of a target speaker for synthesizing the input text sequence into speech that clones a voice of the target speaker. The target speaker includes a native speaker of a second language different than the first language. The method also includes generating, using a text-to-speech (TTS) model, an output audio feature representation of the input text by processing the input text sequence and the speaker embedding. The output audio feature representation includes the voice characteristics of the target speaker specified by the speaker embedding.
Type: Application
Filed: April 22, 2020
Publication date: December 3, 2020
Applicant: Google LLC
Inventors: Yu Zhang, Ron J. Weiss, Byungha Chun, Yonghui Wu, Zhifeng Chen, Russell John Wyatt Skerry-Ryan, Ye Jia, Andrew M. Rosenberg, Bhuvana Ramabhadran
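The conditioning step this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: it shows the common pattern of concatenating a fixed speaker embedding to every encoder timestep before decoding, with made-up names and toy dimensions.

```python
# Hypothetical sketch: condition a TTS decoder on a target speaker by
# appending the same speaker embedding to each encoder output frame.

def condition_on_speaker(encoder_outputs, speaker_embedding):
    """Concatenate the speaker embedding onto every encoder timestep."""
    return [frame + speaker_embedding for frame in encoder_outputs]

# Toy example: 3 encoder frames of width 2, speaker embedding of width 2.
frames = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
spk = [0.9, -0.9]
conditioned = condition_on_speaker(frames, spk)
```

The decoder then consumes the conditioned frames, so every output frame carries the target speaker's voice characteristics regardless of the input language.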
-
Publication number: 20200311038
Abstract: An example distributed database includes a first instance and a second instance. The first instance is configured to: responsive to performing, within a scope of a database update transaction, a first database update operation, invalidate a cache entry residing in the first database cache maintained by the first instance, wherein the first database update operation is reflected by a transaction log maintained by the first instance; perform, within the scope of the database update transaction, a second database update operation to insert an identifier of the cache entry into a predetermined table of the distributed database, wherein the second database update operation is reflected by the transaction log; and responsive to committing the database update transaction, transmit the transaction log to the second instance.
Type: Application
Filed: April 25, 2019
Publication date: October 1, 2020
Inventors: Xinfeng Zhang, Mengxin Ye, Zhifeng Chen, Xiaokai Wu
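The pattern in this abstract is that cache invalidations ride in the same transaction log as the data change, so a replica can replay the log and drop its own stale entries. A minimal sketch under hypothetical class and method names (not the actual distributed-database code):

```python
# Sketch: record cache invalidations in the transaction log alongside the
# update, then ship the log to a second instance on commit.

class Instance:
    def __init__(self):
        self.cache = {}
        self.log = []  # transaction log for the currently open transaction

    def update_row(self, key, value):
        self.log.append(("update", key, value))
        self.cache.pop(key, None)              # invalidate the local cache entry
        self.log.append(("invalidate", key))   # record it in the log/table

    def commit_to(self, replica):
        replica.apply(self.log)                # transmit the log on commit
        self.log = []

    def apply(self, log):
        for entry in log:
            if entry[0] == "invalidate":
                self.cache.pop(entry[1], None)  # replica drops its stale copy

primary, replica = Instance(), Instance()
primary.cache["user:1"] = "old"
replica.cache["user:1"] = "old"
primary.update_row("user:1", "new")
primary.commit_to(replica)
```

Because the invalidation is part of the same transaction, a replica never applies the update without also learning which cache entry it made stale.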
-
Publication number: 20200299194
Abstract: Provided are a granulated blast-furnace slag activator and a method of manufacturing the same. The granulated blast-furnace slag activator includes, in percent by weight, the following raw materials: 62% to 95% of gypsum and 5% to 38% of high belite sulfoaluminate cement clinker. Also provided is a method of manufacturing cement by mixing the granulated blast-furnace slag activator with granulated blast-furnace slag at a certain ratio.
Type: Application
Filed: March 21, 2019
Publication date: September 24, 2020
Applicant: Tangshan Polar Bear Building Materials Co., Ltd.
Inventors: Jian ZHOU, Zhifeng CHEN, Zhenqiu ZHANG, Zhongxi GE, Shujuan ZHANG, Qiao CHEN, Chengjian LIU
-
Patent number: 10713593
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
Type: Grant
Filed: December 29, 2016
Date of Patent: July 14, 2020
Assignee: Google LLC
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
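The augmentation step this abstract claims is simple to illustrate: a task-identifier token is prepended to the model input so a single multi-task model knows which type of output to produce. A hedged sketch (the token format and task name below are made up for illustration):

```python
# Sketch: augment a model input with a task-identifier token so one model
# can serve many tasks (e.g. translation into different target languages).

def augment(model_input_tokens, task_id):
    """Prepend an identifier token for the requested task."""
    return [f"<{task_id}>"] + model_input_tokens

augmented = augment(["Hello", "world"], "to_es")
```

The trained multi-task model then processes `augmented` exactly as it would any other input sequence; the leading token alone steers it toward the requested output type.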
-
Patent number: 10679148
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
Type: Grant
Filed: May 3, 2019
Date of Patent: June 9, 2020
Assignee: Google LLC
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
-
Publication number: 20200160836
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer-readable medium, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
Type: Application
Filed: November 14, 2019
Publication date: May 21, 2020
Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
-
Publication number: 20200139462
Abstract: A miter saw that includes a base, a worktable arranged on the base and defining a worktable plane, and a cutting head formed with or connected to an operating member operable by a user. The cutting head further includes a circular saw blade operative to rotate around a first axis and a motor operative to drive the circular saw blade. A fence is arranged on the worktable. The cutting head is further connected to a first guiding member configured for guiding chips to be discharged. The fence is formed with a guiding portion. The cutting head is operative to rotate around a second axis parallel to the worktable plane and, when the cutting head rotates around the second axis, the guiding portion is operative to guide the first guiding member to cross the fence.
Type: Application
Filed: September 17, 2019
Publication date: May 7, 2020
Inventors: Zhifeng Chen, Yinglu Ai, Guigong Ni
-
Publication number: 20200133217
Abstract: A control method includes sending, by a controller, a created context-aware model to a context-aware engine. The context-aware model is used to define a preset control performed when target data meets a trigger condition and to instruct the context-aware engine to send indication information to the controller when the context-aware engine determines that the target data meets the trigger condition. The preset control is used to implement a context-aware function. The indication information is used to indicate that the target data meets the trigger condition. The method also includes receiving, by the controller, the indication information. The method further includes performing, by the controller, the preset control based on the indication information.
Type: Application
Filed: December 30, 2019
Publication date: April 30, 2020
Inventors: Xiaotao DENG, Qing LUO, Zhifeng CHEN, Tao HAN, Junfei ZENG
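The control flow this abstract describes can be reduced to a small rule-engine sketch. All names below are hypothetical stand-ins: the controller registers a model (trigger condition plus preset control) with the engine, the engine watches the target data, and when the condition is met it signals back so the control is performed.

```python
# Sketch: a context-aware engine that notifies its controller when
# target data meets a registered trigger condition.

class Engine:
    def __init__(self):
        self.models = []

    def register(self, condition, on_trigger):
        """Accept a context-aware model: a condition plus a callback."""
        self.models.append((condition, on_trigger))

    def observe(self, data):
        for condition, on_trigger in self.models:
            if condition(data):
                on_trigger(data)  # indication information back to the controller

performed = []
engine = Engine()
engine.register(lambda d: d["temp"] > 30,            # trigger condition
                lambda d: performed.append("cool"))  # preset control
engine.observe({"temp": 25})   # below threshold: no indication
engine.observe({"temp": 35})   # meets condition: control is performed
```

Keeping the condition check in the engine and the control in the controller matches the split of responsibilities the abstract describes.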
-
Publication number: 20200098350
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
Type: Application
Filed: November 26, 2019
Publication date: March 26, 2020
Inventors: Samuel Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
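At the interface level, the subsystem in this abstract simply feeds a character sequence to the sequence-to-sequence model and receives a spectrogram back. A hedged wiring sketch, with a trivial stand-in for the trained recurrent network (the function names and the one-frame-per-character shape are assumptions, not the patented model):

```python
# Sketch: a subsystem that passes characters to a seq2seq model and
# returns the spectrogram the model produces.

def dummy_seq2seq(chars):
    """Stand-in for the trained RNN: emits one 4-bin frame per character."""
    return [[float(ord(c))] * 4 for c in chars]

def synthesize(text, model):
    chars = list(text)            # receive the sequence of characters
    spectrogram = model(chars)    # obtain the spectrogram as output
    return spectrogram

spec = synthesize("hi", dummy_seq2seq)
```

In the described system the spectrogram would then be converted to a waveform by a vocoder stage; the sketch stops at the model boundary the abstract defines.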
-
Publication number: 20200099585
Abstract: An SDN orchestration method includes: obtaining a first request for creating a first logical switch; creating a control plane instance of the first logical switch, and sending first configuration information to instruct the first forwarding device to configure the data plane instance of the first logical switch; obtaining a second request for connecting the first logical switch to a first logical router; sending second configuration information to instruct the first forwarding device to configure a first port of the data plane instance of the first logical switch to be communicatively connected to a second port of a data plane instance of the first logical router on the second forwarding device configured with the data plane instance of the first logical router; and sending third configuration information to instruct the second forwarding device to configure the second port to be communicatively connected to the first port.
Type: Application
Filed: November 27, 2019
Publication date: March 26, 2020
Inventors: Zhifeng CHEN, Xuefeng WU, Weisheng WANG, Chenghao LI
-
Patent number: 10573293
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
Type: Grant
Filed: June 20, 2019
Date of Patent: February 25, 2020
Assignee: Google LLC
Inventors: Samuel Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
-
Publication number: 20200043483
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
Type: Application
Filed: August 1, 2019
Publication date: February 6, 2020
Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
-
Publication number: 20200034436
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
Type: Application
Filed: July 25, 2019
Publication date: January 30, 2020
Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
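The multi-headed attention mentioned in this abstract can be sketched over toy numbers: each head scores the encoder vectors with its own query and yields its own context vector, so every step gets multiple context vectors rather than one. This is a generic illustration (no learned projections, tiny dimensions), not the patented model.

```python
import math

# Sketch: multi-headed attention where each head produces its own
# context vector over the same encoder outputs.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    """One head: dot-product scores, softmax weights, weighted sum."""
    scores = softmax([sum(q * k for q, k in zip(query, key)) for key in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(scores, values)) for i in range(dim)]

def multi_head(queries, keys, values):
    """Multiple heads, each with its own query: one context vector per head."""
    return [attend(q, keys, values) for q in queries]

enc = [[1.0, 0.0], [0.0, 1.0]]                      # two encoder vectors
heads = multi_head([[10.0, 0.0], [0.0, 10.0]], enc, enc)
```

With these sharply peaked queries the first head's context vector concentrates on the first encoder vector and the second head's on the second, showing how different heads can attend to different positions.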
-
Publication number: 20200034435
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token, an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
Type: Application
Filed: September 25, 2017
Publication date: January 30, 2020
Inventors: Mohammad Norouzi, Zhifeng Chen, Yonghui Wu, Michael Schuster, Quoc V. Le
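The bidirectional encoding in this abstract (forward pass, backward pass, combined representation per token) can be shown with a deliberately trivial recurrence. A running sum stands in for the LSTM cell here; this is an assumption for illustration, not the claimed network.

```python
# Sketch: bidirectional encoding. A forward pass and a backward pass each
# produce one state per token; the two are combined (concatenated) into
# the representation the hidden layers would consume.

def running_states(tokens):
    """Trivial recurrence standing in for an LSTM layer."""
    state, out = 0.0, []
    for t in tokens:
        state += t
        out.append(state)
    return out

def bidirectional_encode(tokens):
    fwd = running_states(tokens)
    bwd = list(reversed(running_states(list(reversed(tokens)))))
    return [[f, b] for f, b in zip(fwd, bwd)]

combined = bidirectional_encode([1.0, 2.0, 3.0])
```

Each combined representation carries context from both directions: the forward component summarizes everything up to the token, the backward component everything after it.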
-
Publication number: 20200027444
Abstract: Methods, systems, and apparatus, including computer-readable media, for performing speech recognition using sequence-to-sequence models. An automated speech recognition (ASR) system receives audio data for an utterance and provides features indicative of acoustic characteristics of the utterance as input to an encoder. The system processes an output of the encoder using an attender to generate a context vector and generates speech recognition scores using the context vector and a decoder trained using a training process that selects at least one input to the decoder with a predetermined probability. An input to the decoder during training is selected between input data based on a known value for an element in a training example, and input data based on an output of the decoder for the element in the training example. A transcription is generated for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
Type: Application
Filed: July 19, 2019
Publication date: January 23, 2020
Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A.U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
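The training-time input selection this abstract describes, choosing with a predetermined probability between the ground-truth element and the decoder's own previous output, is commonly known as scheduled sampling. A minimal sketch with illustrative names:

```python
import random

# Sketch: during training, pick the decoder's next input between the
# ground-truth token and the decoder's own previous output, with a
# predetermined probability of using the ground truth.

def pick_decoder_input(ground_truth, model_output, p_ground_truth, rng):
    return ground_truth if rng.random() < p_ground_truth else model_output

rng = random.Random(0)
choices = [pick_decoder_input("the", "a", 0.75, rng) for _ in range(1000)]
```

Feeding the model its own outputs part of the time exposes it during training to the kinds of errors it will make at inference, which tends to reduce the mismatch between training and decoding.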
-
Publication number: 20200002407
Abstract: The present invention relates to monoclonal antibodies which have high anti-RSV neutralizing titers. The invention further provides for isolated nucleic acids encoding the antibodies of the invention and host cells transformed therewith. The invention yet further provides for diagnostic, prophylactic and therapeutic methods employing the antibodies and nucleic acids of the invention, particularly as a passive immunotherapy agent in infants and the elderly.
Type: Application
Filed: June 7, 2019
Publication date: January 2, 2020
Applicant: Merck Sharp & Dohme Corp.
Inventors: Kalpit A. Vora, Kara S. Cox, Aimin Tang, Zhifeng Chen, Daniel DiStefano, Lan Zhang, Hua-Poo Su
-
Publication number: 20190311708
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
Type: Application
Filed: June 20, 2019
Publication date: October 10, 2019
Inventors: Samy Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
-
Publication number: 20190258961
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
Type: Application
Filed: May 3, 2019
Publication date: August 22, 2019
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
-
Patent number: 10358480
Abstract: The present invention relates to monoclonal antibodies which have high anti-RSV neutralizing titers. The invention further provides for isolated nucleic acids encoding the antibodies of the invention and host cells transformed therewith. The invention yet further provides for diagnostic, prophylactic and therapeutic methods employing the antibodies and nucleic acids of the invention, particularly as a passive immunotherapy agent in infants and the elderly.
Type: Grant
Filed: March 22, 2018
Date of Patent: July 23, 2019
Assignee: Merck Sharp & Dohme Corp.
Inventors: Kalpit A. Vora, Kara S. Cox, Aimin Tang, Zhifeng Chen, Daniel DiStefano, Lan Zhang, Hua-Poo Su
-
Patent number: 10348559
Abstract: A method for creating a port group, includes generating, by a first software defined network (SDN) controller, the specified identifier according to a device identifier of a first forwarding device in a preset path, a device identifier of a last forwarding device in the preset path, and a device identifier of a specified forwarding device in the preset path; and sending, by the first SDN controller, the specified identifier to a second SDN controller, so that the second SDN controller creates, on the specified forwarding device, a specified port group corresponding to the specified identifier. Therefore, the first SDN controller needs to interact with the second SDN controller only once, to create a port group on the specified forwarding device that is directly controlled by the second SDN controller, and thus the port group creation process is simple.
Type: Grant
Filed: May 18, 2017
Date of Patent: July 9, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Zhifeng Chen
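The identifier generation this abstract describes, deriving one "specified identifier" from the device identifiers of the first, last, and specified forwarding devices in a path, could be sketched as below. The patent does not prescribe a derivation function; hashing the three identifiers is purely an assumption for illustration, as are all names here.

```python
import hashlib

# Hypothetical sketch: derive a deterministic identifier from the device
# identifiers of the first, last, and specified forwarding devices, so
# both SDN controllers can agree on it after a single exchange.

def specified_identifier(first_dev, last_dev, specified_dev):
    material = "|".join([first_dev, last_dev, specified_dev])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

ident = specified_identifier("dev-A", "dev-Z", "dev-M")
```

Because the derivation is deterministic, sending the identifier once is enough for the second controller to create the matching port group on the specified forwarding device.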