Patents by Inventor Nangeng ZHANG
Nangeng ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12026105
Abstract: An image recognition processing method and apparatus.
Type: Grant
Filed: July 10, 2019
Date of Patent: July 2, 2024
Assignee: CANAAN BRIGHT SIGHT CO., LTD
Inventors: Minli Liu, Nangeng Zhang
-
Publication number: 20240185042
Abstract: In one aspect, an operation method based on a neural network includes: calculating, according to the sizes of a convolution kernel and an original image, a total number of operation cycles and an image matrix corresponding to each operation cycle; for each image matrix, having a plurality of operation units concurrently acquire the image data and perform a product operation on the image data and pre-stored weight data to obtain intermediate data; summing the intermediate data to obtain an operation result; and compiling statistics on all the operation results to obtain a target operation result. The overall operation speed per unit time is increased, the data read logic is simplified, and the bandwidth requirement of a single operation unit is reduced. A convolution operation of any size can be performed, and the convolution operation efficiency is improved, thereby increasing the image processing speed.
Type: Application
Filed: January 20, 2022
Publication date: June 6, 2024
Inventors: Zhaofei PU, Nangeng ZHANG
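As an illustration of the staged operation described in this abstract, the following Python/NumPy sketch tiles a 2D convolution into one image matrix per operation cycle and splits each tile across several simulated operation units. The number of units, the per-unit weight split, and the one-output-element-per-cycle schedule are assumptions introduced for clarity; this is not the patented hardware design.

```python
import numpy as np

def tiled_convolution(image, kernel, num_units=4):
    # Per the abstract: the cycle count and per-cycle image matrix are derived
    # from the kernel and image sizes, several operation units each multiply
    # their slice of the tile by pre-stored weights, and the partial products
    # (intermediate data) are summed into one operation result.
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1                              # output size (valid mode)
    total_cycles = oh * ow                                         # one cycle per output element (assumption)
    unit_weights = np.array_split(kernel.reshape(-1), num_units)   # weights pre-stored per unit
    output = np.zeros((oh, ow))
    for cycle in range(total_cycles):
        r, c = divmod(cycle, ow)
        tile = image[r:r + kh, c:c + kw].reshape(-1)               # image matrix for this cycle
        tile_parts = np.array_split(tile, num_units)
        intermediates = [np.dot(p, w) for p, w in zip(tile_parts, unit_weights)]  # per-unit products
        output[r, c] = sum(intermediates)                          # summed operation result
    return output

# usage: convolve (cross-correlate) a 6x6 image with a 3x3 averaging kernel
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0
print(tiled_convolution(image, kernel))
```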
-
Patent number: 11979150
Abstract: A leakage compensation dynamic register, a data operation unit, a chip, a hash board, and a computing apparatus are provided. The leakage compensation dynamic register comprises: an input terminal, an output terminal, a clock signal terminal, and an analog switch unit; a data latch unit for latching the data under control of the clock signal; and an output drive unit for inverting and outputting the data received from the data latch unit. The analog switch unit, the data latch unit, and the output drive unit are sequentially connected in series between the input terminal and the output terminal, and the analog switch unit and the data latch unit have a node therebetween. The leakage compensation dynamic register further comprises a leakage compensation unit electrically connected between the node and the output terminal.
Type: Grant
Filed: June 29, 2020
Date of Patent: May 7, 2024
Assignee: Hangzhou Canaan Intelligence Information Technology Co, Ltd
Inventors: Jian Zhang, Nangeng Zhang, Jinhua Bao, Jieyao Liu, Jingjie Wu, Shenghou Ma
-
Publication number: 20240121910
Abstract: A computational heat dissipation structure includes a circuit board including a plurality of heating components, and a radiator provided corresponding to the circuit board, wherein the spacing between adjacent heating components is negatively correlated with the heat dissipation efficiency of the region where those components are located. Because the spacing is negatively correlated with the local heat dissipation efficiency, i.e., the higher the heat dissipation efficiency of a region, the smaller the spacing between the adjacent heating components in that region, the heat dissipation of the heating components is balanced and the load on the fan is reduced.
Type: Application
Filed: December 18, 2023
Publication date: April 11, 2024
Inventors: Ning ZHANG, Nangeng ZHANG
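The abstract only states that the spacing is negatively correlated with local heat dissipation efficiency. One hypothetical way to express that relationship is a simple inverse-proportional rule, sketched below; the formula and the numbers are illustrative assumptions, not values taken from the patent.

```python
def component_spacing(base_spacing_mm, region_efficiency, reference_efficiency=1.0):
    # Hypothetical rule: spacing shrinks in proportion to how well a region
    # dissipates heat, so well-cooled regions pack components more tightly.
    return base_spacing_mm * (reference_efficiency / region_efficiency)

# usage: a region near the fan inlet (1.5x reference efficiency) vs. a poorly
# ventilated corner (0.75x reference efficiency)
print(component_spacing(10.0, 1.5))   # ~6.7 mm -> components placed closer together
print(component_spacing(10.0, 0.75))  # ~13.3 mm -> components spread further apart
```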
-
Patent number: 11895802
Abstract: A computational heat dissipation structure includes a circuit board including a plurality of heating components, and a radiator provided corresponding to the circuit board, wherein the spacing between adjacent heating components is negatively correlated with the heat dissipation efficiency of the region where those components are located. Because the spacing is negatively correlated with the local heat dissipation efficiency, i.e., the higher the heat dissipation efficiency of a region, the smaller the spacing between the adjacent heating components in that region, the heat dissipation of the heating components is balanced and the load on the fan is reduced.
Type: Grant
Filed: July 25, 2022
Date of Patent: February 6, 2024
Assignee: Canaan Creative Co., LTD.
Inventors: Ning Zhang, Nangeng Zhang
-
Patent number: 11882669
Abstract: A computational heat dissipation structure includes a circuit board including a plurality of heating components, and a radiator provided corresponding to the circuit board, wherein the spacing between adjacent heating components is negatively correlated with the heat dissipation efficiency of the region where those components are located. Because the spacing is negatively correlated with the local heat dissipation efficiency, i.e., the higher the heat dissipation efficiency of a region, the smaller the spacing between the adjacent heating components in that region, the heat dissipation of the heating components is balanced and the load on the fan is reduced.
Type: Grant
Filed: April 17, 2023
Date of Patent: January 23, 2024
Assignee: Canaan Creative Co., LTD.
Inventors: Ning Zhang, Nangeng Zhang
-
Patent number: 11870467
Abstract: A data compression method, comprising: obtaining a plurality of values of a parameter and an occurrence probability of each of the plurality of values (S101); comparing each occurrence probability with a predetermined threshold, wherein values whose occurrence probability is less than the predetermined threshold form a first set of values, and values whose occurrence probability is greater than or equal to the predetermined threshold form a second set of values (S102); performing pretreatment on the first set of values (S103); and encoding the second set of values and the pretreated first set of values (S104). By means of this data compression method, the maximum codeword length can be effectively reduced, thereby reducing the storage space required by the code table.
Type: Grant
Filed: June 21, 2022
Date of Patent: January 9, 2024
Inventors: Bing Xu, Nangeng Zhang
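A common way to bound the maximum codeword length along these lines is to fold all low-probability values into a single escape symbol before building the code table. The sketch below illustrates that idea in Python; the escape-symbol pretreatment, the threshold value, and the Huffman code construction are assumptions used for illustration rather than the patented encoding.

```python
import heapq
from itertools import count

def huffman_code_lengths(prob_table):
    # Standard Huffman merging; returns {value: codeword length in bits}.
    tiebreak = count()
    heap = [(p, next(tiebreak), {value: 0}) for value, p in prob_table.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        merged = {v: depth + 1 for v, depth in {**left, **right}.items()}
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

def lengths_with_pretreatment(prob_table, threshold=0.05, escape="ESC"):
    # Pretreatment (assumed form): fold every value whose probability is below
    # the threshold into one escape symbol; rare values would then be sent as
    # ESC plus a raw fixed-length code, so no rare value gets its own long codeword.
    frequent = {v: p for v, p in prob_table.items() if p >= threshold}
    rare_mass = sum(p for v, p in prob_table.items() if p < threshold)
    if rare_mass > 0:
        frequent[escape] = rare_mass
    return huffman_code_lengths(frequent)

# usage: without pretreatment the rarest values get 6-bit codewords;
# with pretreatment the maximum codeword length drops to 3 bits.
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.04, "e": 0.03, "f": 0.02, "g": 0.01}
print(huffman_code_lengths(probs))
print(lengths_with_pretreatment(probs, threshold=0.05))
```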
-
Patent number: 11799456
Abstract: A clock generation circuit, a latch using same, and a computing device are provided. The clock generation circuit includes an input end, configured to input a pulse signal; a first output end, configured to output a first clock signal; a second output end, configured to output a second clock signal; and an input drive circuit, a latch circuit, an edge shaping circuit, a feedback delay circuit, and an output drive circuit, where the input drive circuit, the latch circuit, the edge shaping circuit, the feedback delay circuit, and the output drive circuit are sequentially connected between the input end and the first output end as well as the second output end in series. A clock pulse can be effectively shaped, the use of a clock buffer can be reduced, and the correctness and accuracy of data transmission and latching can be improved.
Type: Grant
Filed: June 30, 2022
Date of Patent: October 24, 2023
Assignee: Canaan Creative (SH) Co., LTD.
Inventors: Jieyao Liu, Nangeng Zhang, Jingjie Wu, Shenghou Ma
-
Publication number: 20230288972
Abstract: A series circuit and a computing device are provided, including: a power supply terminal for providing voltage for a plurality of chips disposed on the computing device; a ground terminal disposed at one end of each of the plurality of chips relative to the power supply terminal; and a first connection line for separately connecting a first predetermined number of chips of the plurality of chips in series, wherein a communication line is connected between adjacent chips of the first predetermined number of chips, a portion of the communication line is connected, via a third connection line, to a target connection point that is disposed on the first connection line and adapted to the adjacent chips, and the voltage at the target connection point is greater than or equal to the minimum voltage required for communication between the adjacent chips.
Type: Application
Filed: May 22, 2023
Publication date: September 14, 2023
Inventors: Nangeng ZHANG, Min CHEN
-
Publication number: 20230289230
Abstract: A method and apparatus for accelerating a convolutional neural network. The method comprises: splitting a weight matrix of a convolutional layer by rows into a plurality of weight segments, and respectively caching the plurality of weight segments in a plurality of calculation units of a calculation unit array (step 301); reading a plurality of input data streams respectively corresponding to the plurality of weight segments, and inputting the plurality of input data streams in parallel into the plurality of calculation units (step 302), wherein the input data streams are formed by splicing a plurality of rows of data in an input feature map of the convolutional layer; and, within each calculation unit, performing a sliding window operation and a multiply-accumulate computation on the input data streams on the basis of the cached weight segments, so as to obtain an output feature map of the convolutional layer (step 303).
Type: Application
Filed: November 3, 2020
Publication date: September 14, 2023
Inventors: Bing XU, Nangeng ZHANG
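The row-splitting scheme in steps 301-303 can be illustrated with a short Python/NumPy sketch in which each simulated calculation unit caches one kernel row, streams the matching input rows, and accumulates its sliding-window products into the output feature map. The one-unit-per-kernel-row mapping and the sequential simulation of parallel units are assumptions made for readability.

```python
import numpy as np

def row_split_conv2d(feature_map, kernel):
    # Each 'calculation unit' caches one row of the kernel (a weight segment),
    # receives the matching rows of the input feature map as its input data
    # stream, slides a window over that stream, and its multiply-accumulate
    # results are summed into the shared output feature map.
    kh, kw = kernel.shape
    ih, iw = feature_map.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    weight_segments = [kernel[r] for r in range(kh)]     # one segment per unit
    output = np.zeros((oh, ow))
    for r, segment in enumerate(weight_segments):        # conceptually parallel units
        for out_r in range(oh):
            stream = feature_map[out_r + r]              # the unit's input row for this output row
            for out_c in range(ow):                      # sliding window + MAC
                output[out_r, out_c] += np.dot(stream[out_c:out_c + kw], segment)
    return output

# usage: the result matches a direct 'valid' 2D cross-correlation of the same inputs
fm = np.random.rand(8, 8)
k = np.random.rand(3, 3)
direct = np.array([[np.sum(fm[i:i + 3, j:j + 3] * k) for j in range(6)] for i in range(6)])
print(np.allclose(row_split_conv2d(fm, k), direct))
```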
-
Publication number: 20230284413
Abstract: Disclosed is an immersed liquid cooling heat dissipation system, which comprises a liquid cooling module, an oil path circulation device, and a plurality of computing devices to undergo heat dissipation. Each computing device comprises a frame, a control module, a power module, and computing modules. The liquid cooling module comprises a first device slot tank, a second device slot tank, a return flow slot tank, and a flow-equalizing plate. The return flow slot tank is located between the first device slot tank and the second device slot tank, the flow-equalizing plate is disposed in the first device slot tank and the second device slot tank, and the computing devices are disposed on the flow-equalizing plate. The frame is internally provided with a power module accommodating region for accommodating the power module and a computing module accommodating region for accommodating the computing modules.
Type: Application
Filed: July 12, 2021
Publication date: September 7, 2023
Applicant: CANAAN CREATIVE CO., LTD.
Inventors: Yanbin ZHU, Nangeng ZHANG
-
Publication number: 20230280802
Abstract: Disclosed is a frame, comprising a frame body and a control module. The frame body comprises a bottom plate, a top plate and a side plate, wherein the side plate is supported between the bottom plate and the top plate; a power source module accommodation area and a computing module accommodation area are defined between the bottom plate and the top plate; the bottom plate is provided with a frame slideway; and a power source module and a computing module respectively enter and exit by means of the frame slideway, and are fixed in the power source module accommodation area and the computing module accommodation area. The control module is connected to the top plate. The aim of the present invention is to provide a frame for an integrated computing unit that has a small and lightweight structure, a rational layout, and a low cost.
Type: Application
Filed: July 12, 2021
Publication date: September 7, 2023
Applicant: CANAAN CREATIVE CO., LTD.
Inventors: Shaohua ZHANG, Nangeng ZHANG
-
Publication number: 20230273829
Abstract: A dilated convolution acceleration calculation method and apparatus.
Type: Application
Filed: November 3, 2020
Publication date: August 31, 2023
Inventors: Bing XU, Nangeng ZHANG
-
Publication number: 20230266068
Abstract: A waste heat utilization system of an immersed liquid cooling heat dissipation device. The immersed liquid cooling heat dissipation device (100) comprises a liquid cooling tank (110). The liquid cooling tank (110) comprises an oil tank inlet (111) and an oil tank outlet (112). The system further comprises a waste heat utilization device (200). The waste heat utilization device (200) comprises a waste heat utilization body (210), a cold oil outlet (220) and a hot oil inlet (230), the cold oil outlet (220) and the hot oil inlet (230) being connected to the waste heat utilization body (210); the cold oil outlet (220) is connected with the oil tank inlet (111); the hot oil inlet (230) is connected to the oil tank outlet (112); the waste heat utilization body (210) is connected to a heat utilization end (300).
Type: Application
Filed: July 8, 2021
Publication date: August 24, 2023
Applicant: CANAAN CREATIVE CO., LTD.
Inventors: Huanlai ZHU, Nangeng ZHANG
-
Publication number: 20230254991
Abstract: A computational heat dissipation structure includes a circuit board including a plurality of heating components, and a radiator provided corresponding to the circuit board, wherein the spacing between adjacent heating components is negatively correlated with the heat dissipation efficiency of the region where those components are located. Because the spacing is negatively correlated with the local heat dissipation efficiency, i.e., the higher the heat dissipation efficiency of a region, the smaller the spacing between the adjacent heating components in that region, the heat dissipation of the heating components is balanced and the load on the fan is reduced.
Type: Application
Filed: April 17, 2023
Publication date: August 10, 2023
Inventors: Ning ZHANG, Nangeng ZHANG
-
Publication number: 20230229215
Abstract: Provided are a cloud computing power allocation method, a user terminal, a cloud computing power platform, and a system. The method includes: generating a computing power request including a computing power demand and account information of a computing power scheduling center; sending the computing power request to a cloud computing power platform, so that the cloud computing power platform sends a configuration instruction to a computing device cluster according to the computing power request, where the configuration instruction is used to allocate to the user terminal a target computing device meeting the computing power demand from the computing device cluster and to configure, based on the account information, the target computing device to execute a computing task issued by the computing power scheduling center; and acquiring, by using the account information, computing power information from the computing power scheduling center, the computing power information being determined according to a computing result from the target computing device.
Type: Application
Filed: May 24, 2021
Publication date: July 20, 2023
Inventors: Suncheng Gu, Nangeng Zhang
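The request/allocation flow described here can be sketched as a small Python model; the class names, fields, and the first-fit allocation policy below are all hypothetical stand-ins for illustration, not the claimed system.

```python
from dataclasses import dataclass

@dataclass
class ComputingPowerRequest:
    demand: float            # requested computing power (unit is an assumption)
    scheduling_account: str  # account info of the computing power scheduling center

@dataclass
class ConfigInstruction:
    target_device_id: str
    scheduling_account: str  # the target device submits results under this account

class CloudComputingPowerPlatform:
    # Toy stand-in for the platform: on receiving a request it picks a device
    # from the cluster that meets the demand and binds it to the account.
    def __init__(self, cluster):
        self.cluster = cluster  # {device_id: available computing power}

    def allocate(self, request: ComputingPowerRequest) -> ConfigInstruction:
        for device_id, capacity in self.cluster.items():
            if capacity >= request.demand:
                return ConfigInstruction(device_id, request.scheduling_account)
        raise RuntimeError("no device in the cluster meets the computing power demand")

# usage: the user terminal generates a request; the platform allocates a target device
platform = CloudComputingPowerPlatform({"dev-a": 50.0, "dev-b": 120.0})
request = ComputingPowerRequest(demand=100.0, scheduling_account="acct-001")
print(platform.allocate(request))
```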
-
Patent number: 11698670
Abstract: The present invention discloses a series circuit and a computing device, including: a power supply terminal for providing voltage for a plurality of chips disposed on the computing device; a ground terminal disposed at one end of each of the plurality of chips relative to the power supply terminal; and a first connection line for separately connecting a first predetermined number of chips of the plurality of chips in series, wherein a communication line is connected between adjacent chips of the first predetermined number of chips, a portion of the communication line is connected to a target connection point, which is disposed on the first connection line and adapted to the adjacent chips, via a third connection line, and the voltage at the target connection point is greater than or equal to the minimum voltage required for communication between the adjacent chips.
Type: Grant
Filed: December 28, 2021
Date of Patent: July 11, 2023
Assignee: HANGZHOU CANAAN INTELLIGENCE INFORMATION TECHNOLOGY CO, LTD
Inventors: Nangeng Zhang, Min Chen
-
Publication number: 20230135400
Abstract: Provided in the present disclosure are a method and system for facial recognition. The method for facial recognition comprises: acquiring an image of an unshielded area of a face; and utilizing the image of the unshielded area of the face for facial recognition.
Type: Application
Filed: November 27, 2020
Publication date: May 4, 2023
Inventors: Xingang ZHAI, Nangeng ZHANG
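One hypothetical reading of these two steps is to crop the unshielded (e.g. unmasked) region of an aligned face and match its embedding against a gallery of embeddings of the same region. Everything in the sketch below, including the upper-half crop, the placeholder embed function, and the cosine-similarity matching, is an assumption for illustration only.

```python
import numpy as np

def crop_unshielded_region(face_image: np.ndarray) -> np.ndarray:
    # Hypothetical pretreatment: if the lower face is shielded (e.g. by a mask),
    # keep only the upper half (eye/forehead region) of the aligned face crop.
    height = face_image.shape[0]
    return face_image[: height // 2]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognize(face_image, gallery, embed):
    # `embed` is a placeholder for any feature extractor that handles partial
    # faces; `gallery` maps identity -> stored embedding of the same unshielded
    # region. The identity with the closest gallery embedding wins.
    query = embed(crop_unshielded_region(face_image))
    return max(gallery, key=lambda identity: cosine_similarity(query, gallery[identity]))

# toy usage with a flatten-and-normalize "embedding" purely for illustration
toy_embed = lambda img: img.reshape(-1) / (np.linalg.norm(img) + 1e-12)
gallery = {"alice": toy_embed(np.ones((4, 8))), "bob": toy_embed(np.eye(8)[:4])}
probe = np.vstack([np.ones((4, 8)), np.zeros((4, 8))])   # lower half "shielded"
print(recognize(probe, gallery, toy_embed))               # -> "alice"
```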
-
Publication number: 20230085383
Abstract: Disclosed is a computing device, comprising a frame, a computing apparatus, a controller, a power source and a heat dissipation device, wherein the computing apparatus is arranged in the frame and comprises a power source interface; the power source comprises a power source connection end, and also comprises a connection box connected to the frame; the power source interface extends from the frame into the connection box; the power source connection end extends into the connection box; and the power source interface and the power source connection end are connected in the connection box via one or more conducting bars.
Type: Application
Filed: November 21, 2022
Publication date: March 16, 2023
Inventors: Shaohua ZHANG, Nangeng ZHANG
-
Patent number: D1019629
Type: Grant
Filed: July 1, 2021
Date of Patent: March 26, 2024
Assignee: CANAAN CREATIVE CO., LTD.
Inventors: Shaohua Zhang, Nangeng Zhang