Patents by Inventor Jiangming JIN

Jiangming JIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230367509
    Abstract: The present disclosure relates to a system and a method for transmitting data between a plurality of modules. The system comprises: a first storage unit storing data to be transmitted between the plurality of modules; a second storage unit storing identity information of the plurality of modules and permission information for read and/or write operations of the plurality of modules on the first storage unit; and a control unit, connected to the first storage unit, the second storage unit and the plurality of modules, that controls the read and/or write operations of the plurality of modules on the first storage unit according to the identity information and the permission information stored in the second storage unit. The plurality of modules transmit data by performing write and/or read operations on the first storage unit under the control of the control unit.
    Type: Application
    Filed: May 11, 2023
    Publication date: November 16, 2023
    Inventors: Ziyue JIANG, Jiangming JIN
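The control flow this abstract describes, with identity and permission checks gating every read and write on the shared store, can be sketched in plain Python. The class and method names, and the dict-based storage units, are illustrative assumptions rather than details from the filing:

```python
class PermissionedStore:
    """Sketch: a data store (first storage unit), a permission table
    (second storage unit), and a control unit that checks module identity
    and permissions before every read or write."""

    def __init__(self):
        self._data = {}          # first storage unit: data to be transmitted
        self._permissions = {}   # second storage unit: module id -> allowed ops

    def register(self, module_id, can_read=False, can_write=False):
        # Record identity and permission information for a module.
        self._permissions[module_id] = {"read": can_read, "write": can_write}

    def write(self, module_id, key, value):
        # Control unit: allow the write only if this module may write.
        if not self._permissions.get(module_id, {}).get("write"):
            raise PermissionError(f"{module_id} may not write")
        self._data[key] = value

    def read(self, module_id, key):
        # Control unit: allow the read only if this module may read.
        if not self._permissions.get(module_id, {}).get("read"):
            raise PermissionError(f"{module_id} may not read")
        return self._data[key]
```

A producer module registered with write permission and a consumer registered with read permission then exchange data entirely through the store, under the control unit's checks.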
  • Publication number: 20230325512
    Abstract: The present application provides a method for invoking a graphics processing unit, a central processing unit and an apparatus. The method is applied to the central processing unit, which has a first process and a second process running therein. The method comprises: in response to an invoking instruction for invoking a programming interface corresponding to an execution task of the first process, invoking, by the first process, hijacking code corresponding to the programming interface; running, by the first process, the hijacking code to send a running request to the second process, wherein the running request instructs the second process to invoke the programming interface; and invoking, by the second process in response to the running request, a graphics processing unit through the programming interface, so that the execution task is then processed by the graphics processing unit.
    Type: Application
    Filed: March 22, 2023
    Publication date: October 12, 2023
    Inventors: Pangbo SUN, Hao WU, Jiangming JIN
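The hijack-and-forward pattern in this abstract can be sketched with two queues, a thread standing in for the second process, and a squaring function standing in for the GPU programming interface; every name here is an assumption for illustration, not taken from the patent:

```python
import queue
import threading

def gpu_interface(x):
    """Hypothetical stand-in for the GPU programming interface
    (e.g. a kernel launch); here it just squares its input."""
    return x * x

def serve(requests, results):
    """Second process (simulated by a thread): invokes the real
    interface in response to running requests from the first process."""
    while True:
        item = requests.get()
        if item is None:          # shutdown sentinel
            break
        results.put(gpu_interface(item))

def make_hijack(requests, results):
    """Hijacking code the first process invokes instead of the real
    interface: it sends a running request and waits for the result."""
    def hijacked(x):
        requests.put(x)
        return results.get()
    return hijacked
```

The first process calls `hijacked(...)` exactly as it would call the original interface; only the forwarding layer knows the work actually runs in the second process.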
  • Publication number: 20230177336
    Abstract: The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along the channel direction to obtain compressed input data; binarizing and bit-packing each convolution kernel of the convolution layer along the channel direction to obtain a corresponding compressed convolution kernel; dividing the compressed input data, sequentially in convolutional computation order, into blocks of the same size as each compressed convolution kernel, wherein the data input to one convolutional computation forms a data block; and performing a convolutional computation on each block of the compressed input data and each compressed convolution kernel sequentially to obtain convolutional result data, and obtaining multiple output data of the convolution layer from the convolutional result data.
    Type: Application
    Filed: February 1, 2023
    Publication date: June 8, 2023
    Inventors: Yuwei HU, Jiangming JIN, Lei SU, Dinghua LI
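The binarize, bit-pack, block-divide, and convolve steps in this abstract match the well-known XNOR/popcount trick: for two {+1, -1} vectors packed into bit words, the dot product is n - 2·popcount(a XOR b). A minimal 1-D sketch, with sign-based binarization assumed (the abstract does not fix the binarization rule):

```python
def bitpack(values):
    """Binarize and bit-pack one channel vector: bit i is 1 iff values[i] >= 0."""
    word = 0
    for i, v in enumerate(values):
        if v >= 0:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n_channels):
    """Dot product of two packed {+1,-1} vectors: matching bits contribute +1,
    differing bits -1, so dot = n - 2 * popcount(a XOR b)."""
    mask = (1 << n_channels) - 1
    return n_channels - 2 * bin((a_bits ^ b_bits) & mask).count("1")

def binary_conv1d(inputs, kernel, n_channels):
    """inputs: per-position channel vectors; kernel: channel vectors.
    Pack both along the channel direction, then slide the packed kernel
    over same-size blocks of the packed input, as in the abstract."""
    packed_in = [bitpack(x) for x in inputs]
    packed_k = [bitpack(w) for w in kernel]
    out = []
    for start in range(len(packed_in) - len(packed_k) + 1):
        block = packed_in[start:start + len(packed_k)]  # one data block
        out.append(sum(binary_dot(x, w, n_channels)
                       for x, w in zip(block, packed_k)))
    return out
```

Packing along the channel direction is what lets one machine word carry many channels, turning many multiply-adds into a single XOR plus popcount.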
  • Publication number: 20230153254
    Abstract: A communication method, a related computing system and a storage medium are described. The communication method is for a computing system running at least one process, wherein the at least one process comprises a plurality of modules, and the method comprises: acquiring attribute information of each of the plurality of modules, wherein the plurality of modules comprise at least a first module and a second module; in response to determining that data is to be transmitted from the first module to the second module, comparing the attribute information of the first module with the attribute information of the second module; and selecting a communication channel for the first module and the second module according to the comparison, so as to transmit the data from the first module to the second module through the selected communication channel.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 18, 2023
    Inventors: Yifan GONG, Jiangming JIN
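The attribute-comparison step can be sketched as a small selector. The specific attribute keys (`host`, `pid`) and the channel names are assumptions, since the abstract does not enumerate which attributes are compared or which channels exist:

```python
def select_channel(attrs_a, attrs_b):
    """Compare the two modules' attribute information and pick the
    cheapest channel that can connect them (illustrative policy)."""
    if attrs_a["host"] != attrs_b["host"]:
        return "network_socket"    # different machines: go over the network
    if attrs_a["pid"] != attrs_b["pid"]:
        return "shared_memory"     # same machine, different processes
    return "in_process_queue"      # same process: cheapest path
```

The point of comparing attributes first is that co-located modules can skip serialization and network stacks entirely.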
  • Publication number: 20230153324
    Abstract: The present disclosure provides a service discovery method and apparatus, a computing device, and a storage medium, to solve the problem in the prior art that node data is easily falsified or tampered with. The service discovery method comprises: in response to discovering that a target node is going online or offline, creating, by a first online node, a block for the target node, and sending a data synchronization request to a second online node; in response to determining that the block is the latest block, informing, by the second online node, a plurality of third online nodes to respectively authenticate the permission of the target node; and aggregating the permission authentication results for the target node from the plurality of third online nodes, and synchronizing the block to the blockchain maintained by each online node in response to the authentication passing rate satisfying a predetermined condition.
    Type: Application
    Filed: November 17, 2022
    Publication date: May 18, 2023
    Inventors: Yifan GONG, Jiangming JIN
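The create-authenticate-synchronize flow can be sketched as follows; the SHA-256 block hash, the boolean votes, and the 0.5 passing-rate threshold are all assumptions for illustration, not details from the filing:

```python
import hashlib

def make_block(prev_hash, target_node, event):
    """Block a first online node creates when it discovers the target
    node going online or offline."""
    payload = f"{prev_hash}:{target_node}:{event}"
    return {"prev": prev_hash, "node": target_node, "event": event,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def sync_if_authenticated(chains, block, votes, pass_rate=0.5):
    """Aggregate the third online nodes' permission-authentication
    results; append the block to every node's chain only if the
    passing rate meets the predetermined condition."""
    if sum(votes) / len(votes) >= pass_rate:
        for chain in chains:      # each online node's own blockchain copy
            chain.append(block)
        return True
    return False
```

Because each node keeps its own copy and the block is only appended after a quorum of independent authentications, a single node cannot unilaterally falsify discovery data.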
  • Patent number: 11580377
    Abstract: The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along the channel direction to obtain compressed input data; binarizing and bit-packing each convolution kernel of the convolution layer along the channel direction to obtain a corresponding compressed convolution kernel; dividing the compressed input data, sequentially in convolutional computation order, into blocks of the same size as each compressed convolution kernel, wherein the data input to one convolutional computation forms a data block; and performing a convolutional computation on each block of the compressed input data and each compressed convolution kernel sequentially to obtain convolutional result data, and obtaining multiple output data of the convolution layer from the convolutional result data.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: February 14, 2023
    Assignees: TUSIMPLE, INC., BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
    Inventors: Yuwei Hu, Jiangming Jin, Lei Su, Dinghua Li
  • Publication number: 20220413702
    Abstract: The present application provides a data communication method, a communication system and a computer-readable storage medium. The method comprises: acquiring, by a data production module, target data to be sent to a data consumption module; determining in a preset GPU shared memory, by the data production module, a target memory block into which the target data is to be written, wherein the GPU shared memory is a predetermined GPU memory for data communication between the data production module and the data consumption module; writing, by the data production module, the target data into the target memory block to obtain memory address information corresponding to the target data; and sending, by the data production module, the memory address information to the data consumption module so that the data consumption module is operable to access the target data based on the memory address information.
    Type: Application
    Filed: June 24, 2022
    Publication date: December 29, 2022
    Inventors: Wei LIU, Hao WU, Jiangming JIN
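The producer-side write-then-share-address flow can be sketched with a bytearray standing in for the GPU shared memory; a real system would use a GPU IPC mechanism, which the abstract does not specify, and the class and field names here are assumptions:

```python
class SharedBuffer:
    """Stand-in for the preset GPU shared memory region, with a trivial
    bump allocator choosing the target memory block."""

    def __init__(self, size):
        self.mem = bytearray(size)
        self.next_free = 0

    def write(self, data):
        """Data production module: pick a target memory block, write the
        target data, and return the memory address information."""
        offset = self.next_free
        self.mem[offset:offset + len(data)] = data
        self.next_free += len(data)
        return (offset, len(data))   # address info sent to the consumer

    def read(self, addr):
        """Data consumption module: access the target data using only
        the memory address information."""
        offset, length = addr
        return bytes(self.mem[offset:offset + length])
```

Only the small `(offset, length)` tuple crosses between the modules; the payload itself never leaves the shared region, which is the point of the scheme.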
  • Publication number: 20220414024
    Abstract: The present disclosure provides a communication method, a related communication apparatus, and a storage medium. The communication method includes: generating a first key by using a random sequence; encrypting data by using the first key to generate encrypted data; writing the encrypted data into a memory; encrypting the random sequence and a storage address of the encrypted data in the memory by using a public key; and sending the encrypted storage address and the encrypted random sequence to a second node from a first node.
    Type: Application
    Filed: June 24, 2022
    Publication date: December 29, 2022
    Inventors: Yifan GONG, Jiangming JIN
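The first-node flow can be sketched as below. The SHA-256 keystream, the XOR cipher, and the Python list standing in for the memory are stand-ins chosen for a self-contained example, and the public-key encryption of the random sequence and storage address (the last two steps of the abstract) is deliberately elided:

```python
import hashlib
import secrets

def derive_key(random_sequence, length):
    """First key derived from the random sequence; the abstract does not
    name a concrete cipher, so a SHA-256 keystream is assumed."""
    stream, counter = b"", 0
    while len(stream) < length:
        stream += hashlib.sha256(
            random_sequence + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

def send(memory, data):
    """First node: generate a random sequence, derive the first key,
    encrypt the data, write it into memory, and return what would then
    be public-key-encrypted and sent to the second node."""
    seq = secrets.token_bytes(16)                       # random sequence
    ciphertext = xor_bytes(data, derive_key(seq, len(data)))
    address = len(memory)                               # storage address
    memory.append(ciphertext)
    return seq, address   # both to be encrypted with the public key

def receive(memory, seq, address):
    """Second node: re-derive the key from the random sequence and
    decrypt the data read from the given address."""
    ciphertext = memory[address]
    return xor_bytes(ciphertext, derive_key(seq, len(ciphertext)))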
  • Publication number: 20220300356
    Abstract: The present disclosure relates to a method for inter-process communication, a related computing device and storage medium. The method comprises: receiving, by a first process, a request for a storage space of a first size; requesting, by the first process, a first number of shared memory blocks from an operating system, wherein the storage space of each shared memory block is not smaller than the first size; in response to the operating system allocating the first number of shared memory blocks, adding, by the first process, the first number of first nodes to a first linked list, wherein each of the first nodes corresponds to a respective one of the allocated shared memory blocks; and sending, by the first process, identifiers associated with the allocated shared memory blocks to a second process.
    Type: Application
    Filed: March 17, 2022
    Publication date: September 22, 2022
    Inventors: Hao WU, Jiangming JIN
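The allocate-link-share sequence can be sketched with a toy allocator. The fixed 4096-byte block size, the `Node` class, and the counter-based identifiers are assumptions for illustration; a real implementation would receive named shared-memory segments from the operating system:

```python
import itertools

_shm_ids = itertools.count(1)
BLOCK_SIZE = 4096   # assumed fixed OS block size for this sketch

class Node:
    """One node of the first linked list, per allocated shared memory block."""
    def __init__(self, shm_id, next_node=None):
        self.shm_id = shm_id
        self.next = next_node

def os_alloc(n_blocks):
    """Stand-in for the operating system allocating shared memory blocks;
    returns one identifier per block."""
    return [next(_shm_ids) for _ in range(n_blocks)]

def request_space(head, size, n_blocks):
    """First process: request n_blocks shared memory blocks (each not
    smaller than `size`), link one node per block onto the first linked
    list, and return the identifiers to send to the second process."""
    assert size <= BLOCK_SIZE, "each shared block must hold the requested size"
    ids = os_alloc(n_blocks)
    for shm_id in ids:
        head = Node(shm_id, head)   # add one first node per block
    return head, ids
```

The second process only needs the identifiers to attach to the same blocks; the linked list is the first process's private bookkeeping of what it has allocated.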
  • Patent number: 11055144
    Abstract: The present disclosure provides a method, an apparatus and a system for multi-module scheduling, capable of solving the problem of inconsistency in data input to a computing module that arises in related-art multi-module scheduling techniques.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: July 6, 2021
    Assignee: TUSIMPLE, INC.
    Inventors: Yifan Gong, Siyuan Liu, Dinghua Li, Jiangming Jin, Lei Su, Yixin Yang, Wei Liu, Zehua Huang
  • Patent number: 10942771
    Abstract: The present disclosure provides a method, an apparatus and a system for multi-module scheduling, capable of solving at least one of the problems of related-art multi-module scheduling techniques: inconsistency in data input to a computing module, and significant delay or low throughput in data transmission between computing modules.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: March 9, 2021
    Assignee: TUSIMPLE, INC.
    Inventors: Yifan Gong, Siyuan Liu, Dinghua Li, Jiangming Jin, Lei Su, YiXin Yang, Wei Liu, Zehua Huang
  • Publication number: 20190317804
    Abstract: The present disclosure provides a method, an apparatus and a system for multi-module scheduling, capable of solving at least one of the problems of related-art multi-module scheduling techniques: inconsistency in data input to a computing module, and significant delay or low throughput in data transmission between computing modules.
    Type: Application
    Filed: February 14, 2019
    Publication date: October 17, 2019
    Inventors: Yifan GONG, Zehua HUANG, Jiangming JIN, Dinghua LI, Siyuan LIU, Wei LIU, Lei SU, YiXin YANG
  • Publication number: 20190286489
    Abstract: The present disclosure provides a method, an apparatus and a system for multi-module scheduling, capable of solving the problem of inconsistency in data input to a computing module that arises in related-art multi-module scheduling techniques.
    Type: Application
    Filed: February 14, 2019
    Publication date: September 19, 2019
    Inventors: Yifan GONG, Zehua HUANG, Jiangming JIN, Dinghua LI, Siyuan LIU, Wei LIU, Lei SU, YiXin YANG
  • Publication number: 20180373981
    Abstract: The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along the channel direction to obtain compressed input data; binarizing and bit-packing each convolution kernel of the convolution layer along the channel direction to obtain a corresponding compressed convolution kernel; dividing the compressed input data, sequentially in convolutional computation order, into blocks of the same size as each compressed convolution kernel, wherein the data input to one convolutional computation forms a data block; and performing a convolutional computation on each block of the compressed input data and each compressed convolution kernel sequentially to obtain convolutional result data, and obtaining multiple output data of the convolution layer from the convolutional result data.
    Type: Application
    Filed: June 21, 2018
    Publication date: December 27, 2018
    Inventors: Yuwei HU, Jiangming JIN, Lei SU, Dinghua LI