Patents by Inventor Kun Ling

Kun Ling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11981616
    Abstract: A method for preparing 3,3′-diaminobenzidine, the method comprising the following steps: subjecting 4,4′-biphenol and N,N-dimethylsulfamoyl chloride to an esterification reaction in a specified solvent at 40-70° C. to obtain 4,4′-biphenyl bis(N,N-dimethylaminosulfonate) as a first intermediate; subjecting the 4,4′-biphenyl bis(N,N-dimethylaminosulfonate) to a chlorination reaction with a chlorinating reagent under acidic conditions to obtain 3,3′-dichloro-4,4′-biphenyl bis(N,N-dimethylaminosulfonate) as a second intermediate; subjecting the second intermediate 3,3′-dichloro-4,4′-biphenyl bis(N,N-dimethylaminosulfonate) to an ammonolysis reaction with an ammoniation reagent in the presence of a combined catalyst to obtain a crude product of 3,3′,4,4′-tetraaminobiphenyl, wherein the combined catalyst is a mixture of proline, a cuprous salt and a phase transfer catalyst; and subjecting the crude product of 3,3′,4,4′-tetraaminobiphenyl to a post-treatment to obtain a purified 3,3′,4,4′-tetraaminobiphenyl product.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: May 14, 2024
    Assignees: Hubei Huida High-Tech Co., Ltd., Borun High-Tech Co., Ltd.
    Inventors: Yun Ling, Yongfang Li, Kun Wang, Lizhu Chen, Wei Yin, Jinying Zhang
  • Publication number: 20240109835
    Abstract: A method for preparing 3,3′-diaminobenzidine, the method comprising the following steps: subjecting 4,4′-biphenol and N,N-dimethylsulfamoyl chloride to an esterification reaction in a specified solvent at 40-70° C. to obtain 4,4′-biphenyl bis(N,N-dimethylaminosulfonate) as a first intermediate; subjecting the 4,4′-biphenyl bis(N,N-dimethylaminosulfonate) to a chlorination reaction with a chlorinating reagent under acidic conditions to obtain 3,3′-dichloro-4,4′-biphenyl bis(N,N-dimethylaminosulfonate) as a second intermediate; subjecting the second intermediate 3,3′-dichloro-4,4′-biphenyl bis(N,N-dimethylaminosulfonate) to an ammonolysis reaction with an ammoniation reagent in the presence of a combined catalyst to obtain a crude product of 3,3′,4,4′-tetraaminobiphenyl, wherein the combined catalyst is a mixture of proline, a cuprous salt and a phase transfer catalyst; and subjecting the crude product of 3,3′,4,4′-tetraaminobiphenyl to a post-treatment to obtain a purified 3,3′,4,4′-tetraaminobiphenyl product.
    Type: Application
    Filed: January 12, 2022
    Publication date: April 4, 2024
    Applicants: HUBEI HUIDA HIGH-TECH CO., LTD., BORUN HIGH-TECH CO., LTD.
    Inventors: Yun LING, Yongfang LI, Kun WANG, Lizhu CHEN, Wei YIN, Jinying ZHANG
  • Publication number: 20240070403
    Abstract: An information-seeking dialogue system can be trained using a pipeline process having stages, or components, of passage retrieval (selecting passages relevant to a query from a corpus or knowledge base), re-ranking, and generating a response to the query based on one or more of the re-ranked passages. Each stage, or component, of the pipeline can be individually optimized based on ground truth data.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 29, 2024
    Inventors: Mei Ling Helen MENG, Xixin WU, Kun LI, Tianhua ZHANG, Liping TANG, Junan LI, Hongyuan LU
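    An illustrative note on the entry above: the described pipeline has three stages whose interfaces are simple enough to sketch in code. The Python sketch below is a toy software analogue under stated assumptions (overlap-based scoring, a three-passage corpus, and made-up function names); it is not the patented system, only a picture of the retrieve → re-rank → generate data flow.
    ```python
    # Toy retrieve -> re-rank -> generate pipeline; every heuristic here is an
    # illustrative assumption, not the disclosed training or inference method.
    def retrieve(query, corpus, k=5):
        """Score passages by token overlap with the query and keep the top k."""
        q_tokens = set(query.lower().split())
        scored = [(len(q_tokens & set(p.lower().split())), p) for p in corpus]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [p for _, p in scored[:k]]

    def rerank(query, passages):
        """Re-order retrieved passages by a crude length-normalized overlap."""
        q_tokens = set(query.lower().split())
        def score(p):
            toks = p.lower().split()
            return len(q_tokens & set(toks)) / max(len(toks), 1)
        return sorted(passages, key=score, reverse=True)

    def generate(query, passages):
        """Produce a response conditioned on the top-ranked passage (stub)."""
        top = passages[0] if passages else ""
        return f"Q: {query}\nA (based on retrieved passage): {top}"

    corpus = [
        "Passage retrieval selects candidate passages from a knowledge base.",
        "Re-ranking orders the retrieved passages by estimated relevance.",
        "A generator produces the final response from the best passages.",
    ]
    query = "how is the response generated"
    print(generate(query, rerank(query, retrieve(query, corpus))))
    ```
    In the patented pipeline each stage is individually optimized against ground truth data; here each stage is just a pure function, so the staged structure is easy to see.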
  • Patent number: 11822616
    Abstract: Disclosed are a method and an apparatus for performing an operation of a convolutional layer in a convolutional neural network.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: November 21, 2023
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Delin Li, Kun Ling, Liang Chen, Jianjun Li
  • Patent number: 11574031
    Abstract: Disclosed is a method for convolution calculation in a neural network, comprising: reading an input feature map, depthwise convolution kernels and pointwise convolution kernels from a dynamic random access memory (DRAM); performing depthwise convolution calculations and pointwise convolution calculations according to the input feature map, the depthwise convolution kernels and the pointwise convolution kernels to obtain output feature values of a first predetermined number p of points on all pointwise convolution output channels; storing the output feature values of the first predetermined number p of points on all pointwise convolution output channels into an on-chip memory, wherein the first predetermined number p is determined according to at least one of available space in the on-chip memory, a number of the depthwise convolution calculation units, and width, height and channel dimensions of the input feature map; and repeating the above operation to obtain output feature values of all points on all pointwise convolution output channels.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: February 7, 2023
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Liang Chen, Chang Huang, Kun Ling, Jianjun Li, Delin Li, Heng Luo
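    A hedged sketch of the tiling idea in the preceding entry (11574031): compute the depthwise convolution and the pointwise (1x1) convolution for only p output points at a time, so only a p-sized slice of the intermediate depthwise result ever needs to be buffered. The shapes, the row-major ordering of output points, and the NumPy formulation are assumptions for illustration, not the disclosed hardware flow.
    ```python
    import numpy as np

    def depthwise_pointwise_tiled(x, dw, pw, p=4):
        # x:  (C, H, W) input feature map
        # dw: (C, kH, kW) depthwise kernels, one per input channel
        # pw: (Cout, C) pointwise (1x1) kernels
        C, H, W = x.shape
        _, kH, kW = dw.shape
        Ho, Wo = H - kH + 1, W - kW + 1             # valid convolution, stride 1
        out = np.zeros((pw.shape[0], Ho, Wo))
        points = [(i, j) for i in range(Ho) for j in range(Wo)]
        for start in range(0, len(points), p):      # p output points per pass
            tile = points[start:start + p]
            buf = np.zeros((C, len(tile)))          # small intermediate buffer
            for t, (i, j) in enumerate(tile):
                patch = x[:, i:i + kH, j:j + kW]
                buf[:, t] = (patch * dw).sum(axis=(1, 2))   # depthwise values
            res = pw @ buf                          # pointwise mixes channels
            for t, (i, j) in enumerate(tile):
                out[:, i, j] = res[:, t]
        return out

    x, dw, pw = np.random.rand(3, 6, 6), np.random.rand(3, 3, 3), np.random.rand(8, 3)
    print(depthwise_pointwise_tiled(x, dw, pw, p=5).shape)   # (8, 4, 4)
    ```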
  • Patent number: 11568216
    Abstract: A method and an apparatus for adapting feature data in a convolutional neural network. The method includes selecting a plurality of consecutive layers; determining an expected number of subdata blocks and a layout position, width and height of each subdata block in an output feature data of a last layer; determining, for each current layer, a layout position, width, and height of each subdata block of an input feature data for the current layer according to the layout position, width, and height of each subdata block of the output feature data for the current layer; determining an actual position of each subdata block of the input feature data for a first layer in the input feature data for the first layer; and obtaining the expected number of subdata blocks of the input feature data for the first layer according to the actual position, width and height of each subdata block of the input feature data for the first layer.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: January 31, 2023
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Jianjun Li, Chang Huang, Liang Chen, Kun Ling, Delin Li
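    The core calculation in the preceding entry (11568216) is working backwards from sub-blocks of the last layer's output to the regions of the first layer's input they depend on. Below is a minimal 1-D sketch under assumed (kernel, stride, padding) values; the helper names are hypothetical, and the patent additionally tracks width and height together plus layout positions, which this sketch omits.
    ```python
    def input_extent(out_start, out_size, kernel, stride, pad):
        """Region of the (padded) input that one output sub-block depends on."""
        in_start = out_start * stride - pad          # may be negative: padded border
        in_size = (out_size - 1) * stride + kernel
        return in_start, in_size

    def back_propagate_tiles(tiles, layers):
        """tiles: list of (start, size) sub-blocks of the last layer's output.
        layers: list of (kernel, stride, pad) for the consecutive layers, first layer first."""
        for kernel, stride, pad in reversed(layers):
            tiles = [input_extent(s, n, kernel, stride, pad) for s, n in tiles]
        return tiles

    layers = [(3, 1, 1), (3, 2, 1)]      # two consecutive convolutional layers (assumed)
    out_tiles = [(0, 4), (4, 4)]         # expected sub-blocks of the final output
    print(back_propagate_tiles(out_tiles, layers))   # extents needed in the first layer's input
    ```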
  • Patent number: 11500958
    Abstract: Disclosed are a method and an apparatus for performing convolution operation on folded feature data. The method comprises: reading the folded feature data provided to a convolution layer and an original convolution kernel from a dynamic random access memory (DRAM); pre-processing the folded feature data and the original convolution kernel; storing the pre-processed folded feature data into a static random-access memory (SRAM); folding the pre-processed original convolution kernel in at least one dimension of width or height according to a folding manner of the folded feature data to generate one or more folded convolution kernels corresponding to the original convolution kernel; storing the one or more folded convolution kernels in the SRAM; and reading the pre-processed folded feature data and the one or more folded convolution kernels from the SRAM into a calculation unit for convolving the pre-processed folded feature data with the one or more folded convolution kernels.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: November 15, 2022
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Delin Li, Kun Ling, Liang Chen, Jianjun Li
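    "Folding" in the preceding entry (11500958) rearranges feature data so that a wide, shallow map becomes narrower and deeper, which keeps a fixed-width calculation unit busy. The sketch below shows one plausible width-folding layout (neighboring columns stacked into extra channels); the actual on-chip layout and the accompanying kernel folding are not reproduced here, and the interleave order is an assumption.
    ```python
    import numpy as np

    def fold_width(x, nx=2):
        # x: (C, H, W). Pad W to a multiple of nx, then stack each group of nx
        # neighboring columns along the channel dimension: (C, H, W) -> (nx*C, H, W/nx).
        C, H, W = x.shape
        pad = (-W) % nx
        x = np.pad(x, ((0, 0), (0, 0), (0, pad)))
        W += pad
        return x.reshape(C, H, W // nx, nx).transpose(3, 0, 1, 2).reshape(nx * C, H, W // nx)

    x = np.arange(2 * 4 * 6, dtype=float).reshape(2, 4, 6)
    print(x.shape, "->", fold_width(x, nx=2).shape)   # (2, 4, 6) -> (4, 4, 3)
    ```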
  • Patent number: 11468301
    Abstract: Disclosed are a method and an apparatus for performing an operation of a convolutional layer in a convolutional neural network. The method includes reading unfolded-feature-data and an original convolution kernel from DRAM, padding the unfolded-feature-data, folding the padded unfolded-feature-data in at least one dimension to generate folded feature data, storing the folded feature data into an SRAM, folding the original convolution kernel in the at least one dimension to generate one or more folded convolution kernels, storing the one or more folded convolution kernels in the SRAM, and reading the folded feature data and the one or more folded convolution kernels from the SRAM into a calculation circuit for performing a convolution operation on the folded feature data by using the one or more folded convolution kernels.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Delin Li, Kun Ling, Liang Chen, Jianjun Li
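    To see why folding the kernel alongside the feature data (as in the preceding entry, 11468301) preserves the convolution result, a 1-D single-channel toy helps: fold the signal by 2 into two channels, derive one folded kernel per output phase, and check the folded convolution against the direct one. Everything below is an illustrative construction under those simplifying assumptions, not the patented circuit.
    ```python
    import numpy as np

    def fold_signal(x, nx=2):
        pad = (-len(x)) % nx
        x = np.pad(x, (0, pad))
        return x.reshape(-1, nx).T                   # X[ch, t] = x[nx*t + ch]

    def fold_kernel(k, nx=2):
        """folded[phase][ch, m] multiplies X[ch, t + m] when computing output nx*t + phase."""
        width = (len(k) + nx - 1) // nx + 1
        folded = np.zeros((nx, nx, width))
        for phase in range(nx):
            for j, kj in enumerate(k):
                ch, m = (phase + j) % nx, (phase + j) // nx
                folded[phase, ch, m] += kj
        return folded

    def conv_folded(X, folded, n_out):
        nx, _, width = folded.shape
        y = np.zeros(n_out)
        for o in range(n_out):
            t, phase = divmod(o, nx)
            patch = X[:, t:t + width]
            y[o] = (patch * folded[phase][:, :patch.shape[1]]).sum()
        return y

    x, k = np.random.rand(12), np.random.rand(3)
    n_out = len(x) - len(k) + 1
    direct = np.array([np.dot(x[o:o + len(k)], k) for o in range(n_out)])
    print(np.allclose(conv_folded(fold_signal(x), fold_kernel(k), n_out), direct))  # True
    ```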
  • Patent number: 11461632
    Abstract: Disclosed are a method and an apparatus for adapting parameters of a neural network. The method includes selecting one or more dimensions for a weight parameter of each of at least one layer of the neural network, determining a dimension value and a corresponding target value in each dimension of the weight parameter, and padding the weight parameter in a case where the dimension value in at least one dimension of the weight parameter is less than the corresponding target value, the dimension value in each dimension of the weight parameter after the padding being equal to the corresponding target value.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: October 4, 2022
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Kun Ling, Liang Chen, Jianjun Li, Delin Li, Chang Huang
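    The padding step in the preceding entry (11461632) is straightforward to illustrate: when a weight tensor's size in a chosen dimension falls short of a target value (for example, a channel count that a multiply-accumulate array expects), pad that dimension with zeros up to the target. The target values and the convention of padding at the end are assumptions in this sketch.
    ```python
    import numpy as np

    def pad_weight_to_targets(w, targets):
        """targets: {axis: target_size}; zero-pad each listed axis up to its target."""
        pad = [(0, 0)] * w.ndim
        for axis, target in targets.items():
            if w.shape[axis] < target:
                pad[axis] = (0, target - w.shape[axis])
        return np.pad(w, pad)

    w = np.random.rand(10, 3, 3, 3)                 # (out_channels, in_channels, kH, kW)
    padded = pad_weight_to_targets(w, {0: 16, 1: 4})
    print(w.shape, "->", padded.shape)              # (10, 3, 3, 3) -> (16, 4, 3, 3)
    ```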
  • Patent number: 11429836
    Abstract: Disclosed is an apparatus for performing a convolution operation in a convolutional neural network. The apparatus may comprise a selector for selecting one or more nonzero elements of a weight parameter, a selector for selecting the data items corresponding to the selected nonzero elements in input feature data, and a calculation unit for performing the operation. The apparatus may realize the convolution operation in a sparsified convolutional neural network efficiently in hardware.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: August 30, 2022
    Assignee: NANJING HORIZON ROBOTICS TECHNOLOGY CO., LTD.
    Inventors: Chang Huang, Liang Chen, Heng Luo, Kun Ling, Honghe Tan
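    A software analogue of the selection idea in the preceding entry (11429836): keep only the nonzero weight elements, gather just the matching input values for each output position, and multiply-accumulate over those pairs, so work on zero weights is skipped entirely. The NumPy gather below stands in for the hardware selectors and is an assumption for illustration.
    ```python
    import numpy as np

    def sparse_conv2d_single(x, w):
        # x: (C, H, W) input; w: (C, kH, kW) kernel for one output channel
        C, kH, kW = w.shape
        nz = np.argwhere(w != 0)                    # "selector" over nonzero weights
        vals = w[w != 0]
        Ho, Wo = x.shape[1] - kH + 1, x.shape[2] - kW + 1
        out = np.zeros((Ho, Wo))
        for i in range(Ho):
            for j in range(Wo):
                gathered = x[nz[:, 0], i + nz[:, 1], j + nz[:, 2]]   # matching inputs
                out[i, j] = np.dot(gathered, vals)                   # MACs on nonzeros only
        return out

    x, w = np.random.rand(3, 6, 6), np.random.rand(3, 3, 3)
    w[w < 0.7] = 0.0                                # sparsify the kernel
    ref = np.array([[(x[:, i:i + 3, j:j + 3] * w).sum() for j in range(4)] for i in range(4)])
    print(np.allclose(sparse_conv2d_single(x, w), ref))   # True
    ```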
  • Patent number: 11360818
    Abstract: A method for data management is provided. The method comprises: storing a plurality of items in a contiguous space within a main memory; executing an instruction containing an address and a size that together identify the contiguous space to transmit the plurality of items from the main memory to a random-access memory (RAM) on a chip, wherein the chip includes a computing unit comprising a plurality of multipliers; and instructing the computing unit on the chip to: retrieve multiple of the plurality of items from the RAM, and perform a plurality of parallel operations using the plurality of multipliers with the multiple items to yield output data.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: June 14, 2022
    Assignee: BEIJING HORIZON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Chang Huang, Liang Chen, Kun Ling, Feng Zhou
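    A loose software analogue of the preceding entry (11360818), with every name invented for illustration: items are packed back-to-back so that a single (address, size) pair describes the whole transfer to on-chip RAM, and a vectorized multiply stands in for the bank of parallel multipliers. The actual instruction format and chip interface are not reproduced here.
    ```python
    import numpy as np

    main_memory = np.zeros(1024, dtype=np.float32)   # stand-in for main memory

    def pack_items(items, base):
        """Store items contiguously starting at `base`; return (address, size)."""
        flat = np.concatenate([np.ravel(it) for it in items]).astype(np.float32)
        main_memory[base:base + flat.size] = flat
        return base, flat.size

    def transfer_to_ram(address, size):
        """Single instruction-like call: copy the contiguous span to 'on-chip RAM'."""
        return main_memory[address:address + size].copy()

    addr, size = pack_items([np.random.rand(4, 4), np.random.rand(16)], base=128)
    ram = transfer_to_ram(addr, size)
    weights = np.random.rand(size).astype(np.float32)
    products = ram * weights        # elementwise products, as if on parallel multipliers
    print(products.shape)
    ```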
  • Patent number: 11360819
    Abstract: A method for data management is provided. The method comprises: storing a plurality of items in a contiguous space within a main memory; executing an instruction containing an address and a size that together identify the contiguous space to transmit the plurality of items from the main memory to a random-access memory (RAM) on a chip, wherein the chip includes a computing unit comprising a plurality of multipliers; and instructing the computing unit on the chip to: retrieve multiple of the plurality of items from the RAM, and perform a plurality of parallel operations using the plurality of multipliers with the multiple items to yield output data.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: June 14, 2022
    Assignee: BEIJING HORIZON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Chang Huang, Liang Chen, Kun Ling, Feng Zhou
  • Patent number: 11163686
    Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include determining a first row address in a first memory at which one or more first data items to be accessed in a logical structure of the tensor data are stored, copying data items at the first row address in the first memory to a first buffer row of a first buffer, moving each first data item in the first buffer row of the first buffer to a corresponding location at least in a first buffer row of a second buffer, and storing data items in the first buffer row of the second buffer into corresponding target locations in the second memory.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 2, 2021
    Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
    Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
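    The row-and-buffer choreography in the preceding entry (11163686) can be pictured with two small 2-D arrays standing in for the memories: read a full row of the source into buffer A, place the wanted items at their new offsets in buffer B, then write buffer B's row into the target row. The row width, index layout, and helper name below are assumptions.
    ```python
    import numpy as np

    ROW = 8                                         # items per memory row (assumed)
    src = np.arange(4 * ROW).reshape(4, ROW)        # "first memory"
    dst = np.zeros((4, ROW), dtype=src.dtype)       # "second memory"

    def move_row_items(src_row, src_cols, dst_row, dst_cols):
        buf_a = src[src_row].copy()                 # copy the whole source row once
        buf_b = np.zeros(ROW, dtype=src.dtype)      # stage items at their new offsets
        buf_b[dst_cols] = buf_a[src_cols]
        dst[dst_row, dst_cols] = buf_b[dst_cols]    # aligned write to the target row

    move_row_items(src_row=1, src_cols=[0, 2, 4, 6], dst_row=3, dst_cols=[4, 5, 6, 7])
    print(dst[3])    # items 8, 10, 12, 14 land in the last four slots of row 3
    ```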
  • Publication number: 20200192803
    Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include determining a first row address in a first memory at which one or more first data items to be accessed in a logical structure of the tensor data are stored, copying data items at the first row address in the first memory to a first buffer row of a first buffer, moving each first data item in the first buffer row of the first buffer to a corresponding location at least in a first buffer row of a second buffer, and storing data items in the first buffer row of the second buffer into corresponding target locations in the second memory.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 18, 2020
    Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
  • Publication number: 20200065154
    Abstract: A method for data management is provided. The method comprises: storing a plurality of items in a contiguous space within a main memory; executing an instruction containing an address and a size that together identify the contiguous space to transmit the plurality of items from the main memory to a random-access memory (RAM) on a chip, wherein the chip includes a computing unit comprising a plurality of multipliers; and instructing the computing unit on the chip to: retrieve multiple of the plurality of items from the RAM, and perform a plurality of parallel operations using the plurality of multipliers with the multiple items to yield output data.
    Type: Application
    Filed: March 6, 2019
    Publication date: February 27, 2020
    Inventors: Chang Huang, Liang Chen, Kun Ling, Feng Zhou
  • Publication number: 20190294438
    Abstract: Systems and methods of data processing are provided. The method comprises receiving input data to be processed by a series of operations, identifying a first operation from the series of operations, selecting at least one second operation from the series of operations to be grouped with the first operation based at least in part on the amount of input data and output data of the grouped operations and the capacity of the memory unit, and processing a portion of the input data of the grouped operations. The efficiency of the series of operations can be improved by ensuring that the input data and output data of any operation are both stored in the memory unit.
    Type: Application
    Filed: March 22, 2019
    Publication date: September 26, 2019
    Inventors: Zhenjiang WANG, Jianjun LI, Liang CHEN, Kun LING, Delin LI, Chen SUN
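    The grouping criterion in the preceding entry can be sketched as a greedy scan: starting from a first operation, keep absorbing the next operation while the group's external input plus the candidate's output still fits the memory unit, so that intermediate results never need to leave on-chip memory. The sizes, the specific fit test, and the greedy strategy below are illustrative assumptions rather than the claimed selection logic.
    ```python
    def group_operations(op_sizes, capacity):
        """op_sizes: list of (input_size, output_size) per operation, in execution order.
        Returns groups of operation indices whose data fit within `capacity`."""
        groups, i = [], 0
        while i < len(op_sizes):
            group = [i]
            group_input = op_sizes[i][0]            # data the group reads from memory
            j = i + 1
            while j < len(op_sizes) and group_input + op_sizes[j][1] <= capacity:
                group.append(j)                     # intermediate results stay grouped
                j += 1
            groups.append(group)
            i = j
        return groups

    ops = [(64, 32), (32, 48), (48, 96), (96, 16), (16, 16)]   # made-up sizes
    print(group_operations(ops, capacity=128))      # [[0, 1], [2, 3, 4]]
    ```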
  • Publication number: 20190197083
    Abstract: Disclosed is a method for convolution calculation in a neural network, comprising: reading an input feature map, depthwise convolution kernels and pointwise convolution kernels from a dynamic random access memory (DRAM); performing depthwise convolution calculations and pointwise convolution calculations according to the input feature map, the depthwise convolution kernels and the pointwise convolution kernels to obtain output feature values of a first predetermined number p of points on all pointwise convolution output channels; storing the output feature values of the first predetermined number p of points on all pointwise convolution output channels into an on-chip memory, wherein the first predetermined number p is determined according to at least one of available space in the on-chip memory, a number of the depthwise convolution calculation units, and width, height and channel dimensions of the input feature map; and repeating the above operation to obtain output feature values of all points on all pointwise convolution output channels.
    Type: Application
    Filed: December 17, 2018
    Publication date: June 27, 2019
    Inventors: Liang CHEN, Chang HUANG, Kun LING, Jianjun LI, Delin LI, Heng LUO
  • Publication number: 20190188237
    Abstract: Disclosed is a method for convolution calculation in a neural network, comprising: reading an input feature map, depthwise convolution kernels and pointwise convolution kernels from a dynamic random access memory (DRAM); performing depthwise convolution calculations and pointwise convolution calculations by depthwise convolution calculation units and pointwise convolution calculation units, according to the input feature map, the depthwise convolution kernels and the pointwise convolution kernels, to obtain output feature values of a first predetermined number p of points on all pointwise convolution output channels; storing the output feature values of the first predetermined number p of points on all pointwise convolution output channels into an on-chip memory; and repeating the above operation to obtain output feature values of all points on all pointwise convolution output channels. Therefore, the storage space for storing intermediate results may be reduced.
    Type: Application
    Filed: December 17, 2018
    Publication date: June 20, 2019
    Inventors: Liang CHEN, Chang HUANG, Kun LING, Jianjun LI, Delin LI, Heng LUO
  • Publication number: 20190179674
    Abstract: A method for data management is provided. The method comprises: storing a plurality of items in a contiguous space within a main memory; executing an instruction containing an address and a size that together identify the contiguous space to transmit the plurality of items from the main memory to a random-access memory (RAM) on a chip, wherein the chip includes a computing unit comprising a plurality of multipliers; and instructing the computing unit on the chip to: retrieve multiple of the plurality of items from the RAM, and perform a plurality of parallel operations using the plurality of multipliers with the multiple items to yield output data.
    Type: Application
    Filed: January 10, 2019
    Publication date: June 13, 2019
    Inventors: Chang Huang, Liang Chen, Kun Ling, Feng Zhou
  • Patent number: D967546
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: October 18, 2022
    Inventor: Kun Ling