Patents by Inventor Hongbin Zheng
Hongbin Zheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12198041
Abstract: Generating instructions for programming a processing element array to implement a convolution operation can include determining that the convolution operation under-utilizes the processing element array. The convolution operation involves using the processing element array to perform a series of matrix multiplications between a set of filters and a set of input matrices. Each filter comprises a weight matrix. Each input matrix is assigned to a respective row in the processing element array. Under-utilization can be determined through detecting that less than a threshold number of rows would be used concurrently. In response to determining that the convolution operation under-utilizes the processing element array, instructions can be added for modifying the convolution operation to increase the number of rows used concurrently. The added instructions are executable to cause at least one input matrix to be processed in parallel across more rows compared to processing without modifying the convolution operation.
Type: Grant
Filed: July 14, 2023
Date of Patent: January 14, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Jeffrey T. Huynh, Ron Diamant, Hongbin Zheng, Yizhi Liu, Animesh Jain, Yida Wang, Vinod Sharma, Richard John Heaton, Randy Renfu Huang, Sundeep Amirineni, Drazen Borkovic
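A minimal sketch of the general idea described in this abstract: detect that a convolution would keep too few rows of the processing element array busy and choose a factor for spreading each input matrix across more rows. The array size, utilization threshold, and function name are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of detecting PE-array row under-utilization for a
# convolution and choosing a row-splitting factor; sizes are illustrative.

PE_ROWS = 128                 # assumed number of rows in the processing element array
UTILIZATION_THRESHOLD = 0.5   # assumed minimum fraction of rows that should be busy


def plan_row_splitting(num_input_matrices: int) -> int:
    """Return how many rows each input matrix should be spread across.

    Each input matrix normally occupies one PE row. If fewer than the
    threshold number of rows would be used concurrently, spread each
    input matrix over more rows so the array is better utilized.
    """
    rows_used = min(num_input_matrices, PE_ROWS)
    if rows_used >= UTILIZATION_THRESHOLD * PE_ROWS:
        return 1  # utilization is acceptable; no modification needed

    # Largest split factor that still fits in the array.
    return max(1, PE_ROWS // num_input_matrices)


if __name__ == "__main__":
    # A 3-channel input would use only 3 of 128 rows, so the planner
    # spreads each channel across many rows.
    print(plan_row_splitting(3))    # -> 42
    print(plan_row_splitting(100))  # -> 1
```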
-
Patent number: 12182688
Abstract: Methods and apparatuses for hierarchical partitioning of operators of a neural network for execution on an acceleration engine are provided. Neural networks are built in machine learning frameworks using neural network operators. The neural network operators are compiled into executable code for the acceleration engine. Development of new framework-level operators can outpace the compiler's ability to map them onto the acceleration engine. To enable such neural networks to be executed on an acceleration engine, hierarchical partitioning can be used to partition the operators of the neural network. The hierarchical partitioning can identify operators that are supported by a compiler for execution on the acceleration engine, operators to be compiled for execution on a host processor, and operators to be executed on the machine learning framework.
Type: Grant
Filed: November 27, 2019
Date of Patent: December 31, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Animesh Jain, Yizhi Liu, Hongbin Zheng, Jeffrey T. Huynh, Haichen Li, Drazen Borkovic, Jindrich Zejda, Richard John Heaton, Randy Renfu Huang, Zhi Chen, Yida Wang
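A hedged sketch of the partitioning idea: operators are sorted into those the compiler can run on the accelerator, those compiled for the host processor, and those left to the framework. The operator names and supported sets below are made-up examples, not the patent's actual partitioning rules.

```python
# Illustrative partitioning of neural-network operators into execution
# targets. The supported sets are assumptions for the sketch.

SUPPORTED_BY_ACCELERATOR = {"conv2d", "matmul", "relu", "add"}
SUPPORTED_ON_HOST = {"softmax", "transpose"}


def partition_operators(operators):
    """Split a list of operator names into three execution targets."""
    accelerator, host, framework = [], [], []
    for op in operators:
        if op in SUPPORTED_BY_ACCELERATOR:
            accelerator.append(op)
        elif op in SUPPORTED_ON_HOST:
            host.append(op)
        else:
            framework.append(op)  # newly developed operator the compiler cannot map
    return accelerator, host, framework


if __name__ == "__main__":
    ops = ["conv2d", "relu", "softmax", "custom_new_op", "matmul"]
    print(partition_operators(ops))
    # (['conv2d', 'relu', 'matmul'], ['softmax'], ['custom_new_op'])
```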
-
Patent number: 12175222
Abstract: A computer-implemented method includes generating, based on a representation of a tensor mapping between an input tensor and an output tensor, a list of mappings from elements of the input tensor to elements of the output tensor, and generating groups of mappings from the list of mappings, where each of the groups of mappings corresponds to a respective set of matrix multiplications, a matrix transpose, or both. The computer-implemented method also includes generating a respective expression for each of the groups of mappings and generating code for summing results of the respective expressions, where each respective expression includes the respective set of matrix multiplications, the matrix transpose, or both.
Type: Grant
Filed: November 20, 2020
Date of Patent: December 24, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Michael Ray Benfield, Hongbin Zheng, Thomas Robert Norell
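A toy illustration of the first step described here: listing element-to-element mappings between an input and an output tensor and grouping them so that each group can later be expressed as a matrix operation. The grouping key (the output row) is an assumption; the patent's grouping criteria are more general.

```python
# Group (input_index, output_index) pairs of a tensor mapping so each group
# can be expressed as one matrix multiplication or transpose. Illustrative only.

from collections import defaultdict


def group_mappings(mappings):
    """Group (src, dst) index pairs by the output row they write to."""
    groups = defaultdict(list)
    for src, dst in mappings:
        groups[dst[0]].append((src, dst))  # dst[0] is the output row
    return dict(groups)


if __name__ == "__main__":
    # Mapping that scatters a 2x2 input into a 2x2 output (a transpose).
    mapping = [((0, 0), (0, 0)), ((0, 1), (1, 0)),
               ((1, 0), (0, 1)), ((1, 1), (1, 1))]
    for row, group in group_mappings(mapping).items():
        print(row, group)
```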
-
Patent number: 12159218
Abstract: A single instruction multiple data (SIMD) processor is used to implement a dropout layer between a first layer and a second layer of a neural network. The SIMD processor can implement the dropout layer by setting one or more elements in an output tensor of the first layer to zero before providing it as an input tensor to the second layer. Setting the one or more elements to zero is based on a dropout rate and on pseudo-random numbers generated by a random number generator in the SIMD processor.
Type: Grant
Filed: July 27, 2020
Date of Patent: December 3, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Jiading Gai, Hongbin Zheng, Animesh Jain, Randy Renfu Huang, Vignesh Vivekraja
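A simple NumPy sketch of the dropout behaviour described above: elements of the first layer's output tensor are zeroed based on a dropout rate and pseudo-random numbers before the tensor is fed to the second layer. This mimics the effect in plain Python and is not the accelerator's SIMD implementation; the survivor scaling by 1/(1 - rate) is a common convention assumed here.

```python
import numpy as np


def dropout(output_tensor: np.ndarray, rate: float, seed: int = 0) -> np.ndarray:
    """Zero each element with probability `rate`, scaling survivors."""
    rng = np.random.default_rng(seed)             # pseudo-random number generator
    keep_mask = rng.random(output_tensor.shape) >= rate
    return output_tensor * keep_mask / (1.0 - rate)


if __name__ == "__main__":
    x = np.ones((2, 4), dtype=np.float32)         # output tensor of the first layer
    print(dropout(x, rate=0.5, seed=42))          # input tensor to the second layer
```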
-
Patent number: 12148894
Abstract: A battery module includes: a battery cell component, including at least two battery cells arranged in a preset direction; a conductive element, wherein adjacent battery cells are electrically connected through the conductive element; and a protection circuit board, wherein the battery cells located at two ends of the battery cell component are electrically connected to the protection circuit board through corresponding tabs, respectively.
Type: Grant
Filed: May 26, 2021
Date of Patent: November 19, 2024
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventors: Liangliang Xu, Xuewen Wei, Hongbin Zheng, Zeng Gao
-
Patent number: 12087971
Abstract: The present disclosure relates to a power supply assembly and a method for manufacturing the same. The power supply assembly includes a negative electrode sheet, a separator, and a positive electrode sheet. The negative electrode sheet includes a negative tab, and the positive electrode sheet includes a positive tab, a positive electrode material covering a first foil, a first recess and a second recess. The first recess is formed by removing a positive electrode material covering a first region of the first foil and configured for receiving the positive tab, and a size of the first recess is larger than a size of the positive tab. The second recess is formed by removing a positive electrode material covering a second region of the first foil and configured for receiving the negative tab, and a size of the second recess is larger than a size of the negative tab.
Type: Grant
Filed: May 31, 2021
Date of Patent: September 10, 2024
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventors: Longfei Du, Hongbin Zheng, Xuewen Wei, Zongqiang Wang
-
Patent number: 12079734
Abstract: Techniques for reducing a compilation time for compiling a neural network are disclosed. A description of a neural network is received by a compiler. A plurality of operators are identified based on the description of the neural network. A plurality of subgraphs are formed, each including one or more operators. For each subgraph, a performance factor is calculated based on a compute usage and a memory usage associated with the operators included in the subgraph. The performance factor is compared to a threshold. Based on the comparison, either the subgraph is classified as a compute bound subgraph and a set of memory optimizations are suppressed or the subgraph is classified as a memory bound subgraph and a set of compute optimizations are suppressed.
Type: Grant
Filed: August 1, 2022
Date of Patent: September 3, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Hongbin Zheng, Randy Renfu Huang, Richard John Heaton
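A hedged sketch of the classification step described in this abstract: compute a per-subgraph performance factor from compute usage and memory usage, compare it to a threshold, and decide which optimization set to suppress. The factor definition (a simple ratio) and the threshold value are assumptions for illustration.

```python
# Illustrative classification of a subgraph as compute bound or memory bound.

def classify_subgraph(compute_usage: float, memory_usage: float,
                      threshold: float = 1.0) -> str:
    """Return which optimization set to suppress for the subgraph."""
    performance_factor = compute_usage / max(memory_usage, 1e-9)
    if performance_factor > threshold:
        return "compute bound: suppress memory optimizations"
    return "memory bound: suppress compute optimizations"


if __name__ == "__main__":
    print(classify_subgraph(compute_usage=5.0e9, memory_usage=1.0e9))
    print(classify_subgraph(compute_usage=1.0e8, memory_usage=2.0e9))
```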
-
Patent number: 12045611
Abstract: In one example, a method comprises: receiving input codes, wherein the input codes represent a computational dataflow graph; traversing the computational dataflow graph to identify single-entry-single-exit (SESE) subgraphs of the computational dataflow graph, wherein each SESE subgraph has a sequence of nodes comprising a root node and a child node and representing a sequence of element-wise operators, wherein the root node receives a single input tensor, and wherein the child node outputs a single output tensor; determining a merged operator for each SESE subgraph; and generating executable instructions for the computational dataflow graph to be executed by a hardware accelerator having a first execution unit and a second execution unit, wherein the executable instructions comprise first executable instructions for the merged operators targeted at the first execution unit, and second executable instructions for other operators of the computational dataflow graph targeted at the second execution unit.
Type: Grant
Filed: August 7, 2023
Date of Patent: July 23, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Ron Diamant, Hongbin Zheng, Drazen Borkovic, Haichen Li
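A toy sketch of the merging idea: runs of element-wise operators (single-entry, single-exit chains with one input tensor and one output tensor) are collapsed into a merged operator for one execution unit, while other operators are left for a second unit. The element-wise operator set and the output format are assumptions, not the patent's representation.

```python
# Collapse chains of element-wise operators into merged operators.

ELEMENTWISE = {"add", "mul", "relu", "exp", "sigmoid"}


def merge_elementwise_chains(op_sequence):
    """Return a list of ('merged', ops) and ('plain', op) entries."""
    merged, chain = [], []
    for op in op_sequence:
        if op in ELEMENTWISE:
            chain.append(op)
        else:
            if chain:
                merged.append(("merged", tuple(chain)))  # target: first execution unit
                chain = []
            merged.append(("plain", op))                 # target: second execution unit
    if chain:
        merged.append(("merged", tuple(chain)))
    return merged


if __name__ == "__main__":
    print(merge_elementwise_chains(["matmul", "add", "relu", "matmul", "exp"]))
    # [('plain', 'matmul'), ('merged', ('add', 'relu')),
    #  ('plain', 'matmul'), ('merged', ('exp',))]
```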
-
Patent number: 11941383
Abstract: Techniques to speed up code compilation may include caching code analysis results such that the analysis of subsequent code having a similar structure can be omitted. For example, a loop-nest construct in the code can be parsed, and an execution statement in the loop-nest construct can be analyzed by a compiler to generate an analysis result indicating a set of execution conditions for the execution statement. A lookup key can be generated from the control statements bounding the execution statement, and the analysis result can be stored with the lookup key in a cache entry of the cache. The execution statement is then modified according to the analysis result for optimization. Instead of having to analyze a subsequent execution statement bounded by the same control statements, the analysis result of the subsequent execution statement can be retrieved from the cache and be used to modify the subsequent execution statement.
Type: Grant
Filed: March 8, 2022
Date of Patent: March 26, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Hongbin Zheng, Pushkar Ratnalikar
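A hedged sketch of the caching scheme: execution conditions derived from the control statements bounding a statement are computed once, stored under a lookup key built from those control statements, and reused for later statements bounded by the same control statements. The analysis function here is a stand-in; the real compiler analysis is far richer.

```python
# Memoize code-analysis results keyed by the bounding control statements.

analysis_cache = {}


def analyze_execution_conditions(control_statements):
    """Stand-in analysis: derive execution conditions from loop bounds."""
    return tuple(f"0 <= {var} < {bound}" for var, bound in control_statements)


def get_conditions(control_statements):
    key = tuple(control_statements)        # lookup key from the control statements
    if key not in analysis_cache:
        analysis_cache[key] = analyze_execution_conditions(control_statements)
    return analysis_cache[key]             # a cache hit skips re-analysis


if __name__ == "__main__":
    loops = [("i", 128), ("j", 64)]
    print(get_conditions(loops))           # analyzed and cached
    print(get_conditions(loops))           # served from the cache
```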
-
Publication number: 20230359876
Abstract: Generating instructions for programming a processing element array to implement a convolution operation can include determining that the convolution operation under-utilizes the processing element array. The convolution operation involves using the processing element array to perform a series of matrix multiplications between a set of filters and a set of input matrices. Each filter comprises a weight matrix. Each input matrix is assigned to a respective row in the processing element array. Under-utilization can be determined through detecting that less than a threshold number of rows would be used concurrently. In response to determining that the convolution operation under-utilizes the processing element array, instructions can be added for modifying the convolution operation to increase the number of rows used concurrently. The added instructions are executable to cause at least one input matrix to be processed in parallel across more rows compared to processing without modifying the convolution operation.
Type: Application
Filed: July 14, 2023
Publication date: November 9, 2023
Inventors: Jeffrey T. Huynh, Ron Diamant, Hongbin Zheng, Yizhi Liu, Animesh Jain, Yida Wang, Vinod Sharma, Richard John Heaton, Randy Renfu Huang, Sundeep Amirineni, Drazen Borkovic
-
Patent number: 11809849
Abstract: In one example, a method performed by a compiler comprises: receiving a dataflow graph of a neural network, the neural network comprising a neural network operator; receiving information of computation resources and memory resources of a neural network hardware accelerator intended to execute the neural network operator; determining, based on the dataflow graph, iterations of an operation on elements of a tensor included in the neural network operator; determining, based on the information, a mapping of the elements of the tensor to addresses in a portion of the local memory, and a number of the iterations of the operation to be included in a batch, wherein the iterations in the batch are to be executed in parallel by the neural network hardware accelerator; and generating a schedule of execution of the batches of the iterations of the operation.
Type: Grant
Filed: May 20, 2021
Date of Patent: November 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Hongbin Zheng, Randy Renfu Huang, Robert Geva
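An illustrative calculation of the batching decision described above: given the local memory available on the accelerator and the footprint of one iteration, determine how many iterations fit in a batch and produce a schedule of batches. The sizes, function name, and schedule format are assumptions for the sketch.

```python
# Plan how many iterations of a tensor operation can run in parallel per batch.

def plan_batches(num_iterations: int, bytes_per_iteration: int,
                 local_memory_bytes: int):
    """Return (batch_size, schedule) for executing the iterations in batches."""
    batch_size = max(1, local_memory_bytes // bytes_per_iteration)
    schedule = [list(range(start, min(start + batch_size, num_iterations)))
                for start in range(0, num_iterations, batch_size)]
    return batch_size, schedule


if __name__ == "__main__":
    size, schedule = plan_batches(num_iterations=10,
                                  bytes_per_iteration=64 * 1024,
                                  local_memory_bytes=256 * 1024)
    print(size)       # 4 iterations per batch
    print(schedule)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```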
-
Publication number: 20230348234
Abstract: The present invention discloses a pressing and stretching structure and a clamping structure of a container spreader, and belongs to the technical field of crane accessories. The pressing and stretching structure comprises a telescopic rod, a rotating mechanism, and a vertically arranged guiding cylinder; a radially telescopic pin is arranged in the guiding cylinder, and stoppers are arranged at a lower end; a first annular lug boss and a second annular lug boss are arranged at an interval from top to bottom on an outer wall of the telescopic rod; a sliding bush is sleeved between the first lug boss and the second lug boss; and the rotating mechanism is used to move the telescopic rod and the guiding cylinder relative to each other so that the limiting rod on the telescopic rod reaches the lower end surface and the upper end surface of the stoppers.
Type: Application
Filed: August 12, 2022
Publication date: November 2, 2023
Inventors: Zeqiang ZHANG, Yanqing ZENG, Hongbin ZHENG, Silu LIU, Tengfei WU, Wenming CHENG
-
Patent number: 11782706
Abstract: In one example, a method comprises: receiving input codes, wherein the input codes represent a computational dataflow graph; traversing the computational dataflow graph to identify single-entry-single-exit (SESE) subgraphs of the computational dataflow graph, wherein each SESE subgraph has a sequence of nodes comprising a root node and a child node and representing a sequence of element-wise operators, wherein the root node receives a single input tensor, and wherein the child node outputs a single output tensor; determining a merged operator for each SESE subgraph; and generating executable instructions for the computational dataflow graph to be executed by a hardware accelerator having a first execution unit and a second execution unit, wherein the executable instructions comprise first executable instructions for the merged operators targeted at the first execution unit, and second executable instructions for other operators of the computational dataflow graph targeted at the second execution unit.
Type: Grant
Filed: June 29, 2021
Date of Patent: October 10, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Ron Diamant, Hongbin Zheng, Drazen Borkovic, Haichen Li
-
Patent number: 11741350
Abstract: A computer-implemented method includes receiving a neural network model for implementation using a processing element array, where the neural network model includes a convolution operation on a set of input feature maps and a set of filters. The method also includes determining, based on the neural network model, that the convolution operation utilizes less than a threshold number of rows in the processing element array for applying a set of filter elements to the set of input feature maps, where the set of filter elements includes one filter element in each filter of the set of filters. The method further includes generating, for the convolution operation and based on the neural network model, a first instruction and a second instruction for execution by respective rows in the processing element array, where the first instruction and the second instruction use different filter elements of a filter in the set of filters.
Type: Grant
Filed: November 27, 2019
Date of Patent: August 29, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Jeffrey T. Huynh, Ron Diamant, Hongbin Zheng, Yizhi Liu, Animesh Jain, Yida Wang, Vinod Sharma, Richard John Heaton, Randy Renfu Huang, Sundeep Amirineni, Drazen Borkovic
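A toy sketch of the instruction-generation idea in this abstract: when too few rows would be busy, emit per-row instructions that apply different filter elements of the same filter to the input feature maps so more rows work concurrently. The array size, row threshold, and pseudo-instruction format are assumptions.

```python
# Emit (row, input_map, filter_element) pseudo-instructions for a convolution.

PE_ROWS = 128        # assumed array height
ROW_THRESHOLD = 64   # assumed minimum number of concurrently used rows


def emit_row_instructions(num_input_maps: int, filter_elements):
    """Return a list of (row, input_map, filter_element) pseudo-instructions."""
    if num_input_maps >= ROW_THRESHOLD:
        # Enough rows are busy: one filter element per pass, one map per row.
        return [(row, row, filter_elements[0]) for row in range(num_input_maps)]

    instructions = []
    row = 0
    for elem in filter_elements:            # spread filter elements over rows
        for in_map in range(num_input_maps):
            if row >= PE_ROWS:
                return instructions
            instructions.append((row, in_map, elem))
            row += 1
    return instructions


if __name__ == "__main__":
    # 3 input feature maps and the 9 elements of a 3x3 filter -> 27 busy rows.
    print(len(emit_row_instructions(3, [f"w{i}" for i in range(9)])))
```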
-
Publication number: 20230165214
Abstract: A pet safety bag for a vehicle includes an extendable enclosure and a supporting frame structure. The extendable enclosure includes an inner space defined by a bottom wall and side walls extending upwardly from the bottom wall along its edges to define bottom joint edges, and terminating at a top edge, wherein the side walls are joined along respective edges thereof to define side joint edges. Further, the supporting frame structure may be hingedably coupled along the top edge, the side joint edges, and the bottom joint edges to provide support to the extendable enclosure and prevent deformation when a pet lies along the top edge of the extendable enclosure.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventor: Hongbin Zheng
-
Patent number: 11498773
Abstract: A material layered conveying device based on a disassembly line, including a front section conveyor belt, a rear section conveyor belt, and a lower conveyor belt. An interval between the front and rear section conveyor belts is provided with two sets of movable conveyor belts arranged side by side in parallel, a lifting conveyor belt is provided below the interval, adjacent to and level with the lower conveyor belt, and a lifting conveyor belt elevator is provided under the lifting conveyor belt. The lifting conveyor belt is driven vertically upward and becomes level with the front and rear section conveyor belts when raised to its maximum travel. The device adopts movable dual conveyor belts at the upper and lower conveyor belt levels and can transfer material synchronously between the high and low levels as the material is conveyed.
Type: Grant
Filed: July 19, 2022
Date of Patent: November 15, 2022
Assignee: Southwest Jiaotong University
Inventors: Zeqiang Zhang, Hongbin Zheng, Yanqing Zeng, Silu Liu, Dan Ji, Xiaoyue Fang
-
Patent number: 11494321
Abstract: A computer-implemented method includes identifying, from instruction code for executing by a computing system to implement a neural network, a first instruction for allocating a first region of a local memory of an accelerator of the computing system to a tensor, and a first direct memory access (DMA) load instruction for loading the tensor from a location of a system memory of the computing system to a second region of the local memory; adding a first tensor copy instruction in the instruction code to save the tensor in the first region of the local memory to a third region of the local memory that has dimensions different from dimensions of the first region; and replacing the first DMA load instruction with a second tensor copy instruction for saving data in the third region of the local memory to the second region of the local memory.
Type: Grant
Filed: September 30, 2021
Date of Patent: November 8, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Yunxuan Yu, Hongbin Zheng, Qingrui Liu
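A simplified, hedged sketch of the instruction-rewriting pass described above: when a tensor already allocated in local memory would later be re-loaded from system memory by DMA, the pass adds a tensor copy into a spare local region and replaces the DMA load with a local-to-local copy. The instruction tuple format and region names are made up for illustration.

```python
# Rewrite DMA loads of locally resident tensors into tensor copy instructions.

def rewrite_dma_loads(instructions):
    """Replace DMA loads of locally resident tensors with tensor copies."""
    resident = {}          # tensor name -> local region it was allocated in
    rewritten = []
    for instr in instructions:
        op = instr[0]
        if op == "alloc_local":                         # ("alloc_local", tensor, region)
            _, tensor, region = instr
            resident[tensor] = region
            rewritten.append(instr)
        elif op == "dma_load" and instr[1] in resident:  # ("dma_load", tensor, dst)
            _, tensor, dst_region = instr
            spill_region = f"spill_{tensor}"            # assumed spare local region
            rewritten.append(("tensor_copy", resident[tensor], spill_region))
            rewritten.append(("tensor_copy", spill_region, dst_region))
        else:
            rewritten.append(instr)
    return rewritten


if __name__ == "__main__":
    code = [("alloc_local", "t0", "region_a"),
            ("dma_load", "t0", "region_b")]
    for instr in rewrite_dma_loads(code):
        print(instr)
```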
-
Patent number: 11461662
Abstract: Techniques for reducing a compilation time for compiling a neural network are disclosed. A description of a neural network is received by a compiler. A plurality of operators are identified based on the description of the neural network. A plurality of subgraphs are formed, each including one or more operators. For each subgraph, a performance factor is calculated based on a compute usage and a memory usage associated with the operators included in the subgraph. The performance factor is compared to a threshold. Based on the comparison, either the subgraph is classified as a compute bound subgraph and a set of memory optimizations are suppressed or the subgraph is classified as a memory bound subgraph and a set of compute optimizations are suppressed.
Type: Grant
Filed: March 25, 2020
Date of Patent: October 4, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Hongbin Zheng, Randy Renfu Huang, Richard John Heaton
-
Publication number: 20220209306
Abstract: A battery module includes: a battery cell component, including at least two battery cells arranged in a preset direction; a conductive element, wherein adjacent battery cells are electrically connected through the conductive element; and a protection circuit board, wherein the battery cells located at two ends of the battery cell component are electrically connected to the protection circuit board through corresponding tabs, respectively.
Type: Application
Filed: May 26, 2021
Publication date: June 30, 2022
Applicant: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventors: Liangliang XU, Xuewen WEI, Hongbin ZHENG, Zeng GAO
-
Patent number: D1057302
Type: Grant
Filed: May 21, 2024
Date of Patent: January 7, 2025
Inventor: Hongbin Zheng