Patents by Inventor HUAJIN SUN
HUAJIN SUN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12321304
Abstract: Disclosed are a data transmission method, system and apparatus, and a device and a computer-readable storage medium. The method is applied to a data transmission system, and the system comprises a host device, devices for transmission and a transmission initiation device, wherein the host device can hold an initial configuration table, which contains device information of the devices for transmission, and sends the table to the transmission initiation device for selection and completion; the transmission initiation device fills in, according to its own demand, an initial configuration table of a target device with necessary information for establishing transmission, and returns an obtained complete configuration table to the host device; and the host device then configures the target device by using a complete information table, and returns the complete information table to the transmission initiation device after completing the configuration.
Type: Grant
Filed: March 23, 2023
Date of Patent: June 3, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Xingyuan Li, Jian Cui, Jiang Wang, Huajin Sun, Shuqing Li
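The abstract above describes a configuration-table round trip: the host pre-fills device information, the transmission initiation device completes the entry for its chosen target device, and the host then applies the completed table. Below is a minimal C sketch of that round trip; the struct name, its fields, and the example values are hypothetical illustrations, not taken from the patent.

```c
#include <stdio.h>

/* Hypothetical view of one row in the initial configuration table:
 * the host pre-fills the device identity, and the transmission
 * initiation device later fills in what is needed to establish the link. */
struct config_entry {
    char     device_id[16];  /* filled by the host */
    unsigned queue_depth;    /* filled by the initiator */
    unsigned buffer_bytes;   /* filled by the initiator */
    int      completed;      /* set once the initiator has filled it in */
};

/* Initiator: pick a target device and complete its entry. */
static void complete_entry(struct config_entry *e,
                           unsigned queue_depth, unsigned buffer_bytes)
{
    e->queue_depth  = queue_depth;
    e->buffer_bytes = buffer_bytes;
    e->completed    = 1;
}

/* Host: apply the completed table to the target device (stubbed here). */
static void host_configure(const struct config_entry *e)
{
    if (e->completed)
        printf("configuring %s: qd=%u buf=%u\n",
               e->device_id, e->queue_depth, e->buffer_bytes);
}

int main(void)
{
    struct config_entry table[1] = {{ .device_id = "dev0" }};
    complete_entry(&table[0], 32, 4096);  /* initiator's round trip */
    host_configure(&table[0]);            /* host's round trip */
    return 0;
}
```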
-
Patent number: 12271302
Abstract: Disclosed are a method, system and apparatus for data transmission, and a storage medium, which relate to the field of data transmission, and are used for transmitting data. Address mapping logic is provided in a storage array card, and a host maps a storage address space thereof to the storage array card by the address mapping logic of the storage array card; and after a data transmission instruction is sent to an NVMe SSD by a hard disk control mapping address in the storage array card, the NVMe SSD can directly perform data transmission on the basis of the data transmission instruction and a host storage mapping address in the storage array card, that is, the NVMe SSD can directly perform data transmission with the storage address space inside the host.
Type: Grant
Filed: March 24, 2023
Date of Patent: April 8, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Xingyuan Li, Jiang Wang, Huajin Sun, Shuqing Li
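As described above, the storage array card holds address mapping logic so that an NVMe SSD can move data directly against the host's storage address space. The C sketch below models one mapping window and the card-to-host address translation; the names `host_map_window` and `map_to_host` and all addresses are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical address-mapping window on the storage array card: a host
 * physical range is exposed at a card-local base, so a data transfer
 * command aimed at the card can resolve back to host memory. */
struct host_map_window {
    uint64_t card_base;   /* address as seen on the card */
    uint64_t host_base;   /* corresponding host physical address */
    uint64_t length;
};

/* Translate a card-side address from a transfer command into the host
 * address the SSD should transfer to or from; 0 means "not mapped". */
static uint64_t map_to_host(const struct host_map_window *w, uint64_t card_addr)
{
    if (card_addr >= w->card_base && card_addr < w->card_base + w->length)
        return w->host_base + (card_addr - w->card_base);
    return 0;
}

int main(void)
{
    struct host_map_window w = { 0x80000000ULL, 0x100000000ULL, 1 << 20 };
    printf("host addr = 0x%llx\n",
           (unsigned long long)map_to_host(&w, 0x80000100ULL));
    return 0;
}
```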
-
Patent number: 12222859
Abstract: A method and system for high-speed caching of data writing, a device and a storage medium. The method includes: in response to receiving a data-writing operating instruction emitted by a host, creating a controlling page table and filling sequentially a plurality of control blocks into the controlling page table; submitting an entry pointer of a first instance of the control blocks to a work-queue scheduling engine, to execute tasks corresponding to the plurality of control blocks alternately in the work-queue scheduling engine; sending in advance a completion response to the host and notifying a firmware to perform subsequent processing and falling-into-disk of data; and in response to ending of execution of a task corresponding to a last one instance of the control blocks, releasing a used resource of the controlling page table.
Type: Grant
Filed: May 26, 2022
Date of Patent: February 11, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Jiang Wang, Shuqing Li, Huajin Sun
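The abstract above outlines the write path: build a controlling page table of control blocks, hand the first block's entry pointer to a work-queue scheduling engine, acknowledge the host early, and release the table once the last block finishes. A minimal C sketch of that flow follows; the linked-list representation, the `control_block` struct, and the print statements standing in for real hardware work are all assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical control block: one unit of work in the controlling page
 * table; blocks are chained so the scheduling engine can walk them. */
struct control_block {
    int                   id;
    struct control_block *next;
};

/* Build the controlling page table as a chain of n control blocks. */
static struct control_block *build_page_table(int n)
{
    struct control_block *head = NULL, **tail = &head;
    for (int i = 0; i < n; i++) {
        struct control_block *cb = calloc(1, sizeof(*cb));
        cb->id = i;
        *tail = cb;
        tail = &cb->next;
    }
    return head;
}

int main(void)
{
    struct control_block *head = build_page_table(4);

    /* Early completion response to the host, before the data has
     * actually been written to disk by firmware. */
    printf("completion sent to host\n");

    /* Work-queue engine: execute each block's task in turn, then
     * release the page table once the last block finishes. */
    for (struct control_block *cb = head; cb; ) {
        struct control_block *next = cb->next;
        printf("executing control block %d\n", cb->id);
        free(cb);
        cb = next;
    }
    printf("controlling page table released\n");
    return 0;
}
```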
-
Publication number: 20250004976
Abstract: Disclosed are a data transmission method, system and apparatus, and a device and a computer-readable storage medium. The method is applied to a data transmission system, and the system comprises a host device, devices for transmission and a transmission initiation device, wherein the host device can hold an initial configuration table, which contains device information of the devices for transmission, and sends the table to the transmission initiation device for selection and completion; the transmission initiation device fills in, according to its own demand, an initial configuration table of a target device with necessary information for establishing transmission, and returns an obtained complete configuration table to the host device; and the host device then configures the target device by using a complete information table, and returns the complete information table to the transmission initiation device after completing the configuration.
Type: Application
Filed: March 23, 2023
Publication date: January 2, 2025
Inventors: Xingyuan LI, Jian CUI, Jiang WANG, Huajin SUN, Shuqing LI
-
Publication number: 20240419586
Abstract: Disclosed are a method, system and apparatus for data transmission, and a storage medium, which relate to the field of data transmission, and are used for transmitting data. Address mapping logic is provided in a storage array card, and a host maps a storage address space thereof to the storage array card by the address mapping logic of the storage array card; and after a data transmission instruction is sent to an NVMe SSD by a hard disk control mapping address in the storage array card, the NVMe SSD can directly perform data transmission on the basis of the data transmission instruction and a host storage mapping address in the storage array card, that is, the NVMe SSD can directly perform data transmission with the storage address space inside the host.
Type: Application
Filed: March 24, 2023
Publication date: December 19, 2024
Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Xingyuan LI, Jiang WANG, Huajin SUN, Shuqing LI
-
Publication number: 20240264940
Abstract: A method and system for high-speed caching of data writing, a device and a storage medium. The method includes: in response to receiving a data-writing operating instruction emitted by a host, creating a controlling page table and filling sequentially a plurality of control blocks into the controlling page table; submitting an entry pointer of a first instance of the control blocks to a work-queue scheduling engine, to execute tasks corresponding to the plurality of control blocks alternately in the work-queue scheduling engine; sending in advance a completion response to the host and notifying a firmware to perform subsequent processing and falling-into-disk of data; and in response to ending of execution of a task corresponding to a last one instance of the control blocks, releasing a used resource of the controlling page table.
Type: Application
Filed: May 26, 2022
Publication date: August 8, 2024
Inventors: Jiang WANG, Shuqing LI, Huajin SUN
-
Publication number: 20240256477
Abstract: The present disclosure relates to the technical field of computers, and provides a method and apparatus for processing DMA, and a computer-readable storage medium. The method includes: acquiring a DMA task used for processing DMA, and acquiring state information of the task; judging, according to stages contained in the task, whether the state information meets a first preset condition of a descriptor DMA queue and/or a second preset condition of a data DMA queue, wherein the descriptor DMA queue is used for processing a first stage contained in the task, and the data DMA queue is used for processing a second stage contained in the task; and judging whether to perform subsequent processing on the task.
Type: Application
Filed: April 29, 2022
Publication date: August 1, 2024
Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Shuqing LI, Jiang WANG, Huajin SUN
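The publication above checks a DMA task's state against a condition on the descriptor DMA queue (serving the first stage) and/or the data DMA queue (serving the second stage) before letting the task proceed. The C sketch below shows one way such a per-stage check could look; the `dma_task_state` struct and the free-slot condition are assumptions, since the abstract does not spell out the actual preset conditions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical DMA task state: which stages the task contains and how
 * much room the two queues that serve those stages currently have. */
struct dma_task_state {
    bool has_descriptor_stage;  /* first stage: descriptor DMA */
    bool has_data_stage;        /* second stage: data DMA */
    int  desc_queue_free;       /* free slots in the descriptor DMA queue */
    int  data_queue_free;       /* free slots in the data DMA queue */
};

/* Decide whether the task may proceed: every stage it contains must meet
 * the (assumed) condition that its queue has at least one free slot. */
static bool may_process(const struct dma_task_state *s)
{
    if (s->has_descriptor_stage && s->desc_queue_free <= 0)
        return false;
    if (s->has_data_stage && s->data_queue_free <= 0)
        return false;
    return true;
}

int main(void)
{
    struct dma_task_state s = { true, true, 2, 0 };
    printf("process now: %s\n", may_process(&s) ? "yes" : "no");
    return 0;
}
```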
-
Publication number: 20240143392
Abstract: The present application discloses a task scheduling method, including: in response to receiving an issued task, dividing, by a parser, the task into sub-tasks and generating a sub-task list, wherein a task parameter corresponding to each sub-task is recorded in the sub-task list, and the task parameter includes a start phase of a next sub-task; sending, by a scheduler, the task parameter of a sub-task to be processed in the sub-task list to a corresponding sub-engine; executing, by the corresponding sub-engine, a corresponding sub-task to be processed; sending a notification to the scheduler in response to an operating phase when the corresponding sub-engine executes the corresponding sub-task to be processed being the same as the start phase in the received task parameter; and in response to the notification being detected by the scheduler, returning to the step of sending the task parameter of a sub-task to be processed to a corresponding sub-engine.
Type: Application
Filed: January 28, 2022
Publication date: May 2, 2024
Inventors: Shuqing LI, Jiang WANG, Huajin SUN
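The scheduling idea above is that each sub-task's parameters record the phase at which the next sub-task may start, so the scheduler can dispatch the next sub-engine before the current sub-task fully completes. The C sketch below walks that notification sequentially (real sub-engines would run concurrently); the sub-task names, phase counts, and `next_start_phase` values are made up for illustration.

```c
#include <stdio.h>

/* Hypothetical sub-task entry produced by the parser: each records the
 * phase at which the *next* sub-task may be started, so the scheduler
 * can overlap sub-tasks instead of waiting for full completion. */
struct sub_task {
    const char *name;
    int         phases;            /* total phases of this sub-task */
    int         next_start_phase;  /* phase that triggers the next one */
};

int main(void)
{
    struct sub_task list[] = {
        { "fetch",   4, 2 },   /* next sub-task may start at phase 2 */
        { "compute", 3, 1 },
        { "store",   2, 1 },
    };
    int n = sizeof(list) / sizeof(list[0]);

    for (int i = 0; i < n; i++) {
        for (int phase = 0; phase < list[i].phases; phase++) {
            printf("%s: phase %d\n", list[i].name, phase);
            /* Sub-engine notifies the scheduler once its operating phase
             * matches the start phase carried in the task parameter. */
            if (i + 1 < n && phase == list[i].next_start_phase)
                printf("  -> notify scheduler: dispatch %s\n",
                       list[i + 1].name);
        }
    }
    return 0;
}
```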
-
Patent number: 11809807
Abstract: A method for processing data overflow in a decompression process, includes: decompressing an original text, and detecting whether a data overflow event occurs in the decompression process; in response to detecting the data overflow event, storing first data obtained by decompression in a host cache into a target memory, and closing a data read-in port of a decoding engine; decompressing data which is being decompressed in the decoding engine to obtain second data, and storing the second data into a cache of the decoding engine; calculating a position of the decompressed data in the original text; obtaining, on the basis of the position, data which is not decompressed in the original text, re-decompressing the data which is not decompressed to obtain third data, and storing the third data into the target memory; and splicing the first data, the second data, and the third data to obtain complete decompressed data.
Type: Grant
Filed: January 26, 2022
Date of Patent: November 7, 2023
Assignee: SHANDONG YINGXIN COMPUTER TECHNOLOGIES CO., LTD.
Inventors: Shuqing Li, Jiang Wang, Huajin Sun
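The patent above recovers from a decompression overflow by assembling three pieces: data already held in the host cache, data drained from the decoding engine, and data produced by re-decompressing the remainder of the original text from the computed position. The C sketch below shows only the final splice step; the function name, buffer contents, and sizes are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical splice step after an overflow has been handled: first data
 * from the host cache, second data drained from the decoding engine, third
 * data from re-decompressing the rest of the original text. */
static size_t splice(char *out, size_t cap,
                     const char *first, size_t n1,
                     const char *second, size_t n2,
                     const char *third, size_t n3)
{
    size_t total = n1 + n2 + n3;
    if (total > cap)
        return 0;             /* caller must provide enough room */
    memcpy(out,           first,  n1);
    memcpy(out + n1,      second, n2);
    memcpy(out + n1 + n2, third,  n3);
    return total;
}

int main(void)
{
    char out[64];
    size_t n = splice(out, sizeof(out), "AAA", 3, "BB", 2, "CCCC", 4);
    printf("%zu bytes: %.*s\n", n, (int)n, out);
    return 0;
}
```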
-
Patent number: 10089263
Abstract: A processor is disclosed and includes at least one core including a first core, and interrupt delay logic. The interrupt delay logic is to receive a first interrupt at a first time and delay the first interrupt from being processed by a first time delay that begins at the first time, unless the first interrupt is pending at a second time when a second interrupt is processed by the first core. If the first interrupt is pending at the second time, the interrupt delay logic is to indicate to the first core to begin to process the first interrupt prior to completion of the first time delay. Other embodiments are disclosed and claimed.
Type: Grant
Filed: March 24, 2014
Date of Patent: October 2, 2018
Assignee: Intel Corporation
Inventors: Thiam Wah Loh, Gautham N. Chinya, Per Hammarlund, Reza Fortas, Hong Wang, Huajin Sun
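The interrupt delay logic above holds a first interrupt for a fixed delay, but releases it early if it is still pending when the core processes a second interrupt. A minimal C model of that decision follows; the tick-based timing, the `DELAY` constant, and the struct layout are assumptions rather than details from the patent.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of the interrupt delay logic: a first interrupt is
 * held for DELAY ticks, but if another interrupt is handled while the
 * first is still pending, the first is handed to the core immediately. */
#define DELAY 10

struct pending_irq {
    bool     pending;
    unsigned arrival;   /* tick at which the interrupt was received */
};

/* Returns true when the pending interrupt should be processed now. */
static bool should_process(const struct pending_irq *irq,
                           unsigned now, bool other_irq_processed)
{
    if (!irq->pending)
        return false;
    if (other_irq_processed)              /* piggy-back on the second interrupt */
        return true;
    return now - irq->arrival >= DELAY;   /* delay has fully elapsed */
}

int main(void)
{
    struct pending_irq irq = { true, 100 };
    printf("at 103, second irq handled: %d\n", should_process(&irq, 103, true));
    printf("at 105, no second irq:      %d\n", should_process(&irq, 105, false));
    return 0;
}
```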
-
Publication number: 20170161096
Abstract: A processor is disclosed and includes at least one core including a first core, and interrupt delay logic. The interrupt delay logic is to receive a first interrupt at a first time and delay the first interrupt from being processed by a first time delay that begins at the first time, unless the first interrupt is pending at a second time when a second interrupt is processed by the first core. If the first interrupt is pending at the second time, the interrupt delay logic is to indicate to the first core to begin to process the first interrupt prior to completion of the first time delay. Other embodiments are disclosed and claimed.
Type: Application
Filed: March 24, 2014
Publication date: June 8, 2017
Inventors: THIAM WAH LOH, GAUTHAM N. CHINYA, PER HAMMARLUND, REZA FORTAS, HONG WANG, HUAJIN SUN