Patents by Inventor Michael A. Schuster
Michael A. Schuster has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11113480
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token; an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token; and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
Type: Grant
Filed: September 25, 2017
Date of Patent: September 7, 2021
Assignee: Google LLC
Inventors: Mohammad Norouzi, Zhifeng Chen, Yonghui Wu, Michael Schuster, Quoc V. Le
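The bidirectional encoder this abstract describes can be illustrated with a toy sketch: a forward pass and a backward pass each produce one representation per token, and the two are combined into the token's encoded representation. This is not the patented implementation; a simple tanh recurrence stands in for the LSTM cells, and all sizes and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy embedding / hidden size

# Toy recurrent cell standing in for an LSTM step: h' = tanh(W @ [x; h]).
W_fwd = rng.normal(size=(D, 2 * D)) * 0.1
W_bwd = rng.normal(size=(D, 2 * D)) * 0.1

def rnn_step(W, x, h):
    return np.tanh(W @ np.concatenate([x, h]))

def encode(tokens):
    """Return one combined (forward + backward) representation per token."""
    # Forward layer: process input tokens in forward order.
    h = np.zeros(D)
    fwd = []
    for x in tokens:
        h = rnn_step(W_fwd, x, h)
        fwd.append(h)
    # Backward layer: process input tokens in backward order.
    h = np.zeros(D)
    bwd = [None] * len(tokens)
    for i in reversed(range(len(tokens))):
        h = rnn_step(W_bwd, tokens[i], h)
        bwd[i] = h
    # Combine the two representations (here by concatenation); in the
    # abstract, further hidden layers then process this combined sequence
    # in forward order.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

tokens = [rng.normal(size=D) for _ in range(3)]
encoded = encode(tokens)
```

One combined vector comes out per input token, so a decoder can attend over the whole sequence.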
-
Patent number: 11041070
Abstract: Polymer compositions comprising styrene-butadiene block copolymers (SBC) can be used for shrink-wrap films. A stiff star-shaped SBC block copolymer A, having two short branches of a single copolymer block (B/S)Ai and two long branches of the structure St-[(B/S)A]n-(B/S)Ai or [(B/S)A]n-(B/S)Ai, is used in said polymer composition. The production of shrink-wrap films and of multilayer films is also described.
Type: Grant
Filed: December 14, 2017
Date of Patent: June 22, 2021
Assignee: INEOS Styrolution Group GmbH
Inventors: Norbert Niessner, Michael Schuster, Daniel Wagner, Konrad Knoll
-
Patent number: 10971170
Abstract: Methods, systems, and computer program products for generating, from an input character sequence, an output sequence of audio data representing the input character sequence. The output sequence of audio data includes a respective audio output sample for each of a number of time steps. One example method includes, for each of the time steps: generating a mel-frequency spectrogram for the time step by processing a representation of a respective portion of the input character sequence using a decoder neural network; generating a probability distribution over a plurality of possible audio output samples for the time step by processing the mel-frequency spectrogram for the time step using a vocoder neural network; and selecting the audio output sample for the time step from the possible audio output samples in accordance with the probability distribution.
Type: Grant
Filed: August 8, 2018
Date of Patent: April 6, 2021
Assignee: Google LLC
Inventors: Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Michael Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Russell John Wyatt Skerry-Ryan, Ryan M. Rifkin, Ioannis Agiomyrgiannakis
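The per-time-step pipeline in this abstract (decoder network → mel-frequency spectrogram → vocoder network → distribution over possible audio samples → sampled output) can be sketched as below. The two networks are replaced by hypothetical stand-in functions with arbitrary weights; only the data flow matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MELS = 8       # mel-spectrogram bins per time step
N_SAMPLES = 256  # number of possible audio output sample values

def decoder_step(char_repr):
    """Stand-in decoder network: character representation -> mel frame."""
    return np.tanh(char_repr[:N_MELS])

def vocoder_step(mel_frame, W):
    """Stand-in vocoder network: mel frame -> distribution over samples."""
    logits = W @ mel_frame
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probability distribution over N_SAMPLES values

W = rng.normal(size=(N_SAMPLES, N_MELS))
text_reprs = [rng.normal(size=16) for _ in range(5)]  # one per time step

audio = []
for r in text_reprs:
    mel = decoder_step(r)          # mel-frequency spectrogram for the step
    p = vocoder_step(mel, W)       # distribution over possible samples
    audio.append(rng.choice(N_SAMPLES, p=p))  # select per the distribution
```

Sampling in accordance with the distribution (rather than always taking the argmax) is what the abstract's final step describes.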
-
Patent number: 10940405
Abstract: The invention relates to a filter element for filtering a fluid passing through the filter element. The filter element has a group of first channels, in which each first channel extends from a first end to a second end; each first channel has at its first end an inlet opening through which the fluid to be filtered can flow into the respective first channel, and is closed at its second end. It also has a group of second channels, in which each second channel extends from a first end to a second end; each second channel has at its second end an outlet opening through which the filtered fluid can flow out of the respective second channel, and is closed at its first end. At least one first channel is arranged adjacent to a second channel and is separated from it by a partition wall. The partition wall is formed of a filter medium through which the fluid to be filtered can flow from the first channel into the second channel, wherein the filter medium is a coalescence filter medium.
Type: Grant
Filed: April 7, 2017
Date of Patent: March 9, 2021
Assignee: Donaldson Filtration Deutschland GmbH
Inventors: Hans-Michael Schuster, Abdelkhalic Rbayti
-
Publication number: 20200410396
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
Type: Application
Filed: July 13, 2020
Publication date: December 31, 2020
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
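The augmentation step this abstract describes amounts to prepending a task identifier to the model input before a single multi-task model processes it. A minimal sketch, assuming a hypothetical `<2es>`-style token format (the abstract does not specify how the identifier is encoded):

```python
def augment(model_input, task_id):
    """Augment a model input with an identifier for the task to perform.

    The token format "<2{task_id}>" is illustrative only; any reserved
    symbol the model was trained with would serve the same role.
    """
    return [f"<2{task_id}>"] + model_input

# One shared model sees the identifier as the first token and produces
# output of the type that task requires (e.g. a Spanish translation).
augmented = augment(["Hello", "world"], "es")
```

Because the task is carried in the input itself, the same trained parameters handle every task in the training mixture.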
-
Patent number: 10848861
Abstract: A loudspeaker assembly for a vehicle includes an enclosure arranged to be positioned outside of a passenger compartment of the vehicle adjacent a first vehicle body panel. The enclosure has a front side, a rear side, a top portion and a substantially enclosed bottom portion, where the top portion includes an outer wall structure. A loudspeaker is mounted in the enclosure and configured to generate an acoustic signal to be radiated into the passenger compartment through an opening in a second vehicle body panel. The enclosure may include a plurality of ribs extending between the loudspeaker and the outer wall structure. A barrier may be disposed within the top portion inward of the outer wall structure and substantially surrounding the loudspeaker. The ribs and the barrier damp high frequencies to reduce propagation of high-frequency noise toward the passenger compartment.
Type: Grant
Filed: April 16, 2019
Date of Patent: November 24, 2020
Assignee: Harman Becker Automotive Systems GmbH
Inventors: Joerg Prokisch, Michael Schuster, Andreas Pfeffer
-
Publication number: 20200336825
Abstract: A loudspeaker assembly for a vehicle includes an enclosure arranged to be positioned outside of a passenger compartment of the vehicle adjacent a first vehicle body panel. The enclosure has a front side, a rear side, a top portion and a substantially enclosed bottom portion, where the top portion includes an outer wall structure. A loudspeaker is mounted in the enclosure and configured to generate an acoustic signal to be radiated into the passenger compartment through an opening in a second vehicle body panel. The enclosure may include a plurality of ribs extending between the loudspeaker and the outer wall structure. A barrier may be disposed within the top portion inward of the outer wall structure and substantially surrounding the loudspeaker. The ribs and the barrier damp high frequencies to reduce propagation of high-frequency noise toward the passenger compartment.
Type: Application
Filed: April 16, 2019
Publication date: October 22, 2020
Inventors: Joerg Prokisch, Michael Schuster, Andreas Pfeffer
-
Publication number: 20200329301
Abstract: A loudspeaker arrangement having a first loudspeaker comprising a first sound radiating surface and a first loudspeaker basket, and a second loudspeaker comprising a second sound radiating surface and a second loudspeaker basket. The first loudspeaker and the second loudspeaker are arranged opposite each other in a first direction, a cavity is formed between a front side of the first loudspeaker and a front side of the second loudspeaker, and the first loudspeaker basket is directly coupled to the second loudspeaker basket.
Type: Application
Filed: March 26, 2020
Publication date: October 15, 2020
Applicant: Harman Becker Automotive Systems GmbH
Inventors: Joerg Prokisch, Michael Schuster, Andreas Pfeffer, Manfred Aigner
-
Publication number: 20200222858
Abstract: The present disclosure relates to a microporous hollow fiber filter membrane having a large inner diameter and a thin wall. The fiber can be used for sterile filtration of liquids or removal of particles from liquids. The disclosure further relates to a method for producing the membrane and a filter device comprising the membrane.
Type: Application
Filed: July 18, 2018
Publication date: July 16, 2020
Inventors: Ralf Menda, Evelyn Hartmann, Carina Zweigart, Bernd Bauer, Michael Schuster
-
Patent number: 10713593
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
Type: Grant
Filed: December 29, 2016
Date of Patent: July 14, 2020
Assignee: Google LLC
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
-
Patent number: 10679148
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
Type: Grant
Filed: May 3, 2019
Date of Patent: June 9, 2020
Assignee: Google LLC
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
-
Patent number: 10635977
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing multi-task learning. In one method a system obtains a respective set of training data for each of multiple machine learning tasks. For each of the machine learning tasks, the system configures a respective teacher machine learning model to perform the machine learning task by training the teacher machine learning model on the training data. The system trains a single student machine learning model to perform the multiple machine learning tasks using (i) the configured teacher machine learning models, and (ii) the obtained training data.
Type: Grant
Filed: July 1, 2019
Date of Patent: April 28, 2020
Assignee: Google LLC
Inventors: Junyoung Chung, Melvin Jose Johnson Premkumar, Michael Schuster, Wolfgang Macherey
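The teacher-student setup this abstract describes can be sketched with a toy distillation objective: a single student is trained to match each trained teacher's output distribution on that teacher's task. Here linear models stand in for the teacher and student networks, and the loss shown (cross-entropy against the teacher's soft targets) is one common choice, not necessarily the patented one; all dimensions and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 4, 3  # toy input and output dimensions

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-ins: one teacher per task, each "already trained" on its own task.
teachers = {task: rng.normal(size=(C, D)) for task in ("task_a", "task_b")}

def distillation_loss(student_W, x, task):
    """Cross-entropy of the single student against one teacher's soft targets."""
    teacher_p = softmax(teachers[task] @ x)   # teacher's output distribution
    student_p = softmax(student_W @ x)        # student's output distribution
    return -float(teacher_p @ np.log(student_p + 1e-12))

# A freshly initialized student; training would minimize this loss over
# examples drawn from all tasks' training sets.
student_W = np.zeros((C, D))
x = rng.normal(size=D)
loss = distillation_loss(student_W, x, "task_a")
```

Summing this loss across tasks is what lets one student model absorb multiple teachers.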
-
Publication number: 20200095416
Abstract: Polymer compositions comprising styrene-butadiene block copolymers (SBC) can be used for shrink-wrap films. A stiff star-shaped SBC block copolymer A, having two short branches of a single copolymer block (B/S)Ai and two long branches of the structure St-[(B/S)A]n-(B/S)Ai or [(B/S)A]n-(B/S)Ai, is used in said polymer composition. The production of shrink-wrap films and of multilayer films is also described.
Type: Application
Filed: December 14, 2017
Publication date: March 26, 2020
Inventors: Norbert Niessner, Michael Schuster, Daniel Wagner, Konrad Knoll
-
Publication number: 20200072551
Abstract: A drying system having a container, an air system, a sensor, and a control system. The control system is programmed to receive a measurement from the sensor and generate an air system instruction based on the measurement. The air system instruction corresponds with at least one of the temperature, flow rate, and/or pressure of airflow directed to the container by the air system. The air system adjusts the temperature, flow rate, and/or pressure of the airflow based on the air system instruction generated by the control system. Also disclosed are a process for drying an agricultural product, and a drying system having a floor with apertures that are sized and spaced so that a volumetric flow rate of the airflow through the apertures is between 75% and 100% of a maximum volumetric flow rate of the air system.
Type: Application
Filed: August 27, 2019
Publication date: March 5, 2020
Inventors: James Boire, Tyler Calow, Jason Green, Tyler James Johnson, Matthew Johnston, Jayson Koroll, Frank Monsman, Myles Nemetchek, Leon Pratchler, Michael Schuster, John Warner
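The control loop in this abstract (sensor measurement → control system → air-system instruction) can be sketched as follows. The proportional rule, target, and gain are hypothetical illustrations; the abstract does not specify a control law. The second helper simply checks the stated 75%-100% aperture-flow condition.

```python
def air_system_instruction(measured_temp, target_temp=60.0, gain=0.5):
    """Generate a hypothetical air-system instruction from a sensor reading.

    Illustrative proportional rule: request a temperature change equal to a
    fraction of the error between target and measurement. The instruction
    could equally address flow rate or pressure per the abstract.
    """
    error = target_temp - measured_temp
    return {"temperature_delta": gain * error}

def aperture_flow_ok(aperture_flow, max_flow):
    """Check that volumetric flow through the floor apertures is between
    75% and 100% of the air system's maximum volumetric flow rate."""
    return 0.75 * max_flow <= aperture_flow <= max_flow
```

For example, a reading of 50.0 against the 60.0 target yields a +5.0 temperature adjustment.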
-
Publication number: 20200034436
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
Type: Application
Filed: July 25, 2019
Publication date: January 30, 2020
Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
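The multi-headed attention module this abstract mentions produces multiple attention context vectors, one per head, for a given query. A minimal numpy sketch with toy dimensions and random weights standing in for trained projections; this illustrates scaled dot-product attention generally, not the specific patented module.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 2    # model width, number of attention heads
Dh = D // H    # per-head width

# One (query, key, value) projection triple per head.
Wq = rng.normal(size=(H, Dh, D)) * 0.1
Wk = rng.normal(size=(H, Dh, D)) * 0.1
Wv = rng.normal(size=(H, Dh, D)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_head_context(query_vec, encodings):
    """Return one attention context vector per head for one decoder query."""
    contexts = []
    for h in range(H):
        q = Wq[h] @ query_vec
        keys = np.stack([Wk[h] @ e for e in encodings])
        vals = np.stack([Wv[h] @ e for e in encodings])
        weights = softmax(keys @ q / np.sqrt(Dh))  # attention over encodings
        contexts.append(weights @ vals)            # weighted sum of values
    return contexts

encodings = [rng.normal(size=D) for _ in range(4)]  # encoder outputs
contexts = multi_head_context(rng.normal(size=D), encodings)
```

Each head attends over the same encodings with its own projections, so the decoder receives several complementary context vectors per step.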
-
Publication number: 20200034435
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token; an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token; and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
Type: Application
Filed: September 25, 2017
Publication date: January 30, 2020
Inventors: Mohammad Norouzi, Zhifeng Chen, Yonghui Wu, Michael Schuster, Quoc V. Le
-
Publication number: 20190325308
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing multi-task learning. In one method a system obtains a respective set of training data for each of multiple machine learning tasks. For each of the machine learning tasks, the system configures a respective teacher machine learning model to perform the machine learning task by training the teacher machine learning model on the training data. The system trains a single student machine learning model to perform the multiple machine learning tasks using (i) the configured teacher machine learning models, and (ii) the obtained training data.
Type: Application
Filed: July 1, 2019
Publication date: October 24, 2019
Inventors: Junyoung Chung, Melvin Jose Johnson Premkumar, Michael Schuster, Wolfgang Macherey
-
Publication number: 20190258961
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
Type: Application
Filed: May 3, 2019
Publication date: August 22, 2019
Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
-
Publication number: 20190188566
Abstract: A method includes obtaining data identifying a machine learning model to be trained to perform a machine learning task, the machine learning model being configured to receive an input example and to process the input example in accordance with current values of a plurality of model parameters to generate a model output for the input example; obtaining initial training data for training the machine learning model, the initial training data comprising a plurality of training examples and, for each training example, a ground truth output that should be generated by the machine learning model by processing the training example; generating modified training data from the initial training data; and training the machine learning model on the modified training data.
Type: Application
Filed: August 25, 2017
Publication date: June 20, 2019
Inventors: Michael Schuster, Samuel Bengio, Navdeep Jaitly, Zhifeng Chen, Dale Eric Schuurmans, Mohammad Norouzi, Yonghui Wu
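The "generating modified training data from the initial training data" step is not specified in detail in this abstract. One hedged sketch is a generic perturbation pass in which some ground-truth outputs are replaced before training; `replace_fn`, the replacement probability, and the replacement itself are hypothetical stand-ins, not the patented procedure.

```python
import random

def modify_training_data(examples, replace_fn, replace_prob=0.25, seed=0):
    """Generate modified training data by perturbing some ground-truth outputs.

    `examples` is a list of (input, ground_truth_output) pairs. With
    probability `replace_prob`, an example's output is replaced by
    `replace_fn(output)` (e.g., a model-sampled output in some schemes);
    inputs are left unchanged.
    """
    rng = random.Random(seed)
    modified = []
    for x, y in examples:
        if rng.random() < replace_prob:
            y = replace_fn(y)
        modified.append((x, y))
    return modified

data = [("in1", "out1"), ("in2", "out2"), ("in3", "out3"), ("in4", "out4")]
# Illustrative replacement only: upper-case the ground-truth string.
modified = modify_training_data(data, replace_fn=lambda y: y.upper())
```

The model would then be trained on `modified` rather than on the initial data, per the abstract's final step.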
-
Publication number: 20190151775
Abstract: The invention relates to a filter element for filtering a fluid passing through the filter element. The filter element has a group of first channels, in which each first channel extends from a first end to a second end; each first channel has at its first end an inlet opening through which the fluid to be filtered can flow into the respective first channel, and is closed at its second end. It also has a group of second channels, in which each second channel extends from a first end to a second end; each second channel has at its second end an outlet opening through which the filtered fluid can flow out of the respective second channel, and is closed at its first end. At least one first channel is arranged adjacent to a second channel and is separated from it by a partition wall. The partition wall is formed of a filter medium through which the fluid to be filtered can flow from the first channel into the second channel, wherein the filter medium is a coalescence filter medium.
Type: Application
Filed: April 7, 2017
Publication date: May 23, 2019
Inventors: Hans-Michael Schuster, Abdelkhalic Rbayti