Patents by Inventor Dehao Chen
Dehao Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11964220
Abstract: The present invention provides a hydrophilic/oleophobic sponge, a preparation method and use thereof, and belongs to the technical field of functional material preparation. In the present invention, a modified solution is obtained by mixing a nanoparticle suspension with a modifier solution; the nanoparticle suspension includes silica-encapsulated Fe3O4 nanoparticle suspension and/or nano-silica ethanol suspension; the modifier solution includes chitosan-acetic acid aqueous solution and polyvinyl alcohol (PVA) aqueous solution. The sponge is soaked in the modified solution, then mixed and crosslinked with glutaraldehyde aqueous solution to obtain the hydrophilic/oleophobic sponge, conferring good oil-water separation ability on the sponge. The sponge effectively separates a heavy water layer from oil-water mixtures containing light oils such as lubricating oil, engine oil, pump oil, crude oil, gasoline, and sunflower seed oil in a simple gravity-driven manner.
Type: Grant
Filed: June 18, 2019
Date of Patent: April 23, 2024
Assignee: Guangdong University of Petrochemical Technology
Inventors: Fu'an He, Wenxu He, Bo Lin, Dehao Li, Zengtian Li, Wanyi Chen
-
Publication number: 20240118875
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for feedback-directed optimization. One of the methods includes maintaining a data store comprising a plurality of optimization profiles that are used by a compiler to compile respective computer programs. The computer programs can be invoked by a set of executing workloads. Operations are repeatedly performed that include, for each optimization profile in at least a subset of the optimization profiles: determining or predicting whether the optimization profile is a valid optimization profile for a current software version of the compiler, and in response to determining or predicting that the optimization profile is not a valid optimization profile for the current software version of the compiler, removing the optimization profile from the data store.
Type: Application
Filed: October 6, 2023
Publication date: April 11, 2024
Inventors: Yu Wang, Dehao Chen, Phitchaya Mangpo Phothilimthana
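The maintenance loop in this abstract can be sketched as follows; `OptimizationProfile`, `is_valid_for`, and the version-match validity check are illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OptimizationProfile:
    program: str           # program the profile was collected for
    compiler_version: str  # compiler version the profile was built against

def is_valid_for(profile: OptimizationProfile, current_version: str) -> bool:
    """Predict validity: treat a profile as stale once the compiler
    version it was built against no longer matches the current one."""
    return profile.compiler_version == current_version

def prune_profiles(store, current_version):
    """Remove every profile determined or predicted invalid for the
    current compiler version, mirroring the repeated maintenance step."""
    return {name: p for name, p in store.items()
            if is_valid_for(p, current_version)}

store = {
    "workload_a": OptimizationProfile("workload_a", "v2"),
    "workload_b": OptimizationProfile("workload_b", "v1"),  # stale profile
}
pruned = prune_profiles(store, current_version="v2")
```

In practice the validity test would be a prediction over richer signals (source drift, profile age) rather than an exact version match.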
-
Publication number: 20240076805
Abstract: A cam-driven wire mesh-type electrospinning apparatus and a method for its use are provided. The cam-driven wire mesh-type electrospinning apparatus includes the following components: a high-voltage (HV) power supply unit, a wire mesh-type spinneret, a fiber receiving device, a cam unit, and a solution supply unit. The positions and connection relationships of the components are as follows: one end of the wire mesh-type spinneret is connected to the HV power supply unit, and the other end of the wire mesh-type spinneret is connected to the cam unit; the wire mesh-type spinneret is installed on the solution supply unit; and the fiber receiving device is positioned directly above the wire mesh-type spinneret. The cam-driven wire mesh-type electrospinning apparatus can induce a solution to be uniformly distributed on the wire mesh plane, and can adjust the density and distribution position of the jet flow.
Type: Application
Filed: August 15, 2023
Publication date: March 7, 2024
Applicant: Guangdong University of Petrochemical Technology
Inventors: Xiaoqing Chen, Jiahao Liang, Wenyu Xie, Min Huang, Yebin Cai, Changgang Li, Dehao Li
-
Publication number: 20230222318
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes an attention neural network configured to perform the machine learning task, the attention neural network including one or more attention layers, each attention layer comprising an attention sub-layer and a feed-forward sub-layer. Some or all of the attention layers have a feed-forward sub-layer that applies conditional computation to the inputs to the sub-layer.
Type: Application
Filed: June 30, 2021
Publication date: July 13, 2023
Inventors: Dmitry Lepikhin, Yanping Huang, Orhan Firat, Maxim Krikun, Dehao Chen, Noam M. Shazeer, HyoukJoong Lee, Yuanzhong Xu, Zhifeng Chen
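A minimal sketch of conditional computation in a feed-forward sub-layer, assuming top-1 expert routing (the abstract does not specify the routing rule); all names and shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
num_experts, d_model, d_hidden = 4, 8, 16

# Per-expert feed-forward weights and a shared router (assumed setup).
experts = [(rng.standard_normal((d_model, d_hidden)),
            rng.standard_normal((d_hidden, d_model)))
           for _ in range(num_experts)]
router_w = rng.standard_normal((d_model, num_experts))

def conditional_ffn(x):
    """Top-1 routed feed-forward: each token is processed only by the
    expert its router score selects, instead of by every expert."""
    scores = x @ router_w                # (tokens, num_experts)
    choice = scores.argmax(axis=-1)      # chosen expert per token
    out = np.empty_like(x)
    for e, (w1, w2) in enumerate(experts):
        mask = choice == e               # tokens routed to expert e
        if mask.any():
            h = np.maximum(x[mask] @ w1, 0.0)   # ReLU hidden layer
            out[mask] = h @ w2
    return out

tokens = rng.standard_normal((5, d_model))
y = conditional_ffn(tokens)
```

The point of the conditioning is that per-token compute stays roughly constant while total parameter count scales with the number of experts.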
-
Publication number: 20220121945
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Application
Filed: January 3, 2022
Publication date: April 21, 2022
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
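The micro-batch schedule in this abstract can be sketched with toy scalar "composite layers" so the forward activations and backward gradients are easy to check by hand; real systems place the stages on separate devices and overlap their execution, which this sequential sketch omits:

```python
def forward(weights, micro_batch):
    """Forward pass through the sequence of composite layers (each one
    modeled as a scalar scaling), keeping activations at every stage."""
    acts = [list(micro_batch)]
    for w in weights:
        acts.append([w * a for a in acts[-1]])
    return acts

def backward(weights, grad_out):
    """Backward pass: chain rule through the scalar layers, producing
    the gradient at the input of the first composite layer."""
    grad = list(grad_out)
    for w in reversed(weights):
        grad = [w * g for g in grad]
    return grad

def train_step(weights, mini_batch, num_micro_batches):
    """One pipeline step: split the mini-batch into micro-batches, run
    all forward passes to the final composite layer, then all backward
    passes to the first composite layer."""
    size = len(mini_batch) // num_micro_batches
    micro = [mini_batch[i * size:(i + 1) * size]
             for i in range(num_micro_batches)]
    outputs = [forward(weights, mb)[-1] for mb in micro]
    input_grads = [backward(weights, [1.0] * len(o)) for o in outputs]
    return outputs, input_grads
```

With two composite layers scaling by 2 and 3, each input x comes out as 6x and each unit output gradient flows back as 6, which makes the two-phase schedule easy to verify.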
-
Patent number: 11232356
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 25, 2022
Assignee: Google LLC
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
-
Publication number: 20210042620
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Application
Filed: August 10, 2020
Publication date: February 11, 2021
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
-
Patent number: 9348566
Abstract: A system and method for improving the performance of applications are disclosed. Production profile data may be collected about each application while the application is executing. The production profile data may be converted into symbolized profiles and stored in a database. The symbolized profiles may be aggregated into a single aggregated profile. This aggregated profile may be used as a compilation input when compiling new versions of an application's binary to improve the application's performance for observed application behavior.
Type: Grant
Filed: January 2, 2014
Date of Patent: May 24, 2016
Assignee: Google Inc.
Inventors: Tipp Moseley, Dehao Chen, Xinliang David Li
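The symbolize-then-aggregate flow might be sketched like this; `symbolize` and `aggregate` are hypothetical names, and real production profiles carry far richer records than a per-function sample count:

```python
from collections import Counter

def symbolize(raw_profile, symbol_table):
    """Turn a raw production profile (address -> sample count) into a
    symbolized profile (function name -> sample count)."""
    symbolized = Counter()
    for addr, count in raw_profile.items():
        name = symbol_table.get(addr)
        if name is not None:           # drop samples we cannot symbolize
            symbolized[name] += count
    return symbolized

def aggregate(profiles):
    """Merge many symbolized profiles into a single aggregated profile
    by summing per-function counts; the result can then be fed to the
    compiler as feedback-directed-optimization input."""
    total = Counter()
    for p in profiles:
        total.update(p)
    return total

table = {0x400100: "parse", 0x400200: "render"}
runs = [symbolize({0x400100: 10, 0x400200: 4}, table),
        symbolize({0x400100: 6}, table)]
combined = aggregate(runs)
```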
-
Patent number: 9009691
Abstract: A system and method for using inline stacks to improve the performance of application binaries are disclosed. While executing a first application binary, profile data may be collected about the application that includes which callee functions are called from the application's callsites and the number of times each inline stack is executed. A context summary map may be created from the collected profile data which shows a summary of the total execution count of all instructions in the callee function for each callsite inlined in the application's normal binary. Using the context summary map, each function callsite's execution count may be compared with a predetermined threshold to determine if the function should be inlined. Then the application's profile may be annotated and a second application binary, an optimized binary, may be generated using the annotated profile.
Type: Grant
Filed: July 12, 2013
Date of Patent: April 14, 2015
Assignee: Google Inc.
Inventors: Dehao Chen, Xinliang David Li
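The context summary map and threshold test might look like this in miniature; the two-frame inline stacks and all names here are illustrative assumptions:

```python
def build_context_summary(samples):
    """samples: iterable of (inline_stack, count) pairs, where an inline
    stack is a tuple of frames ending at the callee. Totals execution
    counts per (callsite, callee) context."""
    summary = {}
    for stack, count in samples:
        key = (stack[-2], stack[-1])     # (caller frame, callee function)
        summary[key] = summary.get(key, 0) + count
    return summary

def inlining_decisions(summary, threshold):
    """Select the contexts whose execution count meets the threshold;
    these callsites become inlining candidates for the optimized binary."""
    return {ctx for ctx, count in summary.items() if count >= threshold}

samples = [(("main", "foo"), 100),
           (("main", "foo"), 50),
           (("main", "bar"), 3)]
summary = build_context_summary(samples)
to_inline = inlining_decisions(summary, threshold=100)
```

The key idea the map captures is that inlining decisions are made per calling context, not per function: the same callee can be hot at one callsite and cold at another.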
-
Patent number: 8423980
Abstract: While optimizing executable code, compilers traditionally make static determinations about whether or not to inline functions. Embodiments of the invention convert dynamic hardware-event sampling information into context-specific edge frequencies, which can be used to make inlining decisions for functions.
Type: Grant
Filed: December 30, 2008
Date of Patent: April 16, 2013
Assignee: Google Inc.
Inventors: Vinodha Ramasamy, Dehao Chen, Peng Yuan
-
Patent number: 8387026
Abstract: Traditional feedback-directed optimization (FDO) is not widely used due to the significant computational overhead involved in using instrumented binaries. The described embodiments provide methods that eliminate the need for instrumented binaries by permitting the conversion of hardware-event sampling information into edge frequencies usable by FDO compilers. Some advantages include: the ability to collect feedback data on production systems; the ability to perform FDO on the OS kernel; and the ability to avoid disrupting timing paths from instrumented binaries.
Type: Grant
Filed: December 24, 2008
Date of Patent: February 26, 2013
Assignee: Google Inc.
Inventors: Robert Hundt, Vinodha Ramasamy, Dehao Chen
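One way to turn block-level sample counts into edge frequencies is to split each block's count across its outgoing edges; this proportional heuristic is only a stand-in for the conversion the patent actually describes:

```python
def estimate_edge_frequencies(successors, samples):
    """successors: basic block -> list of successor blocks in the CFG.
    samples: basic block -> hardware-event sample count.
    Distributes each block's count over its outgoing edges in proportion
    to the successors' own counts (a simple smoothing heuristic)."""
    freqs = {}
    for block, succs in successors.items():
        total = sum(samples.get(s, 0) for s in succs)
        for s in succs:
            # Fall back to an even split when no successor was sampled.
            share = samples.get(s, 0) / total if total else 1.0 / len(succs)
            freqs[(block, s)] = samples.get(block, 0) * share
    return freqs

# A tiny diamond-free CFG: one branch block with two successors.
cfg = {"entry": ["then", "else"], "then": [], "else": []}
counts = {"entry": 100, "then": 75, "else": 25}
edge_freqs = estimate_edge_frequencies(cfg, counts)
```

Real conversions must also correct for sampling skew and enforce flow conservation (incoming flow equals outgoing flow at each block), which this sketch does not attempt.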