Patents by Inventor Rui HAO
Rui HAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12218975
Abstract: The present disclosure relates to data processing, and in particular, to a system for processing a full-stack network card task based on FPGA. The system includes: a network interface controller, configured to receive to-be-processed data, and offload a TCP/IP task from the to-be-processed data by a built-in TCP offload engine, to obtain first processed data; an SSL/TLS protocol processing module, configured to receive the first processed data, and offload an SSL/TLS protocol task from the first processed data, to obtain second processed data; a PR region, configured to receive the second processed data; and a reconfiguration module, configured to acquire, by a host, dynamic configuration information of the PR region, and configure the PR region based on the dynamic configuration information, so that the PR region offloads and processes computation-intensive tasks in the second processed data.
Type: Grant
Filed: September 29, 2022
Date of Patent: February 4, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Linge Xiao, Rui Hao, Hongwei Kan
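To make the data path concrete, below is a minimal, purely illustrative Python model of the three-stage pipeline described in this abstract (TCP/IP offload, SSL/TLS offload, then a host-reconfigurable PR region). All class and function names and the header sizes are assumptions made for the sketch, not details taken from the patent.

```python
# Minimal software model of the offload pipeline the abstract describes:
# NIC TCP/IP offload -> SSL/TLS offload -> partially reconfigurable (PR) region.
# All names and header sizes are assumptions for the sketch.

from dataclasses import dataclass


@dataclass
class PRRegion:
    """Stand-in for the FPGA partial-reconfiguration region."""
    kernel: str = "none"  # name of the accelerator currently loaded

    def reconfigure(self, config: str) -> None:
        # The host would push a partial bitstream; here we only record its name.
        self.kernel = config

    def process(self, payload: bytes) -> bytes:
        # Placeholder for the computation-intensive task offloaded to the PR region.
        return payload[::-1]


def nic_offload_tcp(raw: bytes) -> bytes:
    """Model of the TCP offload engine: strip an assumed 40-byte TCP/IP header."""
    return raw[40:]  # yields the "first processed data"


def ssl_tls_offload(first: bytes) -> bytes:
    """Model of the SSL/TLS module: strip an assumed 5-byte TLS record header."""
    return first[5:]  # yields the "second processed data"


def full_stack_pipeline(raw: bytes, pr: PRRegion) -> bytes:
    return pr.process(ssl_tls_offload(nic_offload_tcp(raw)))


if __name__ == "__main__":
    pr = PRRegion()
    pr.reconfigure("aes-gcm-accel")  # host-driven dynamic configuration
    packet = bytes(40) + bytes(5) + b"application payload"
    print(full_stack_pipeline(packet, pr))
```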
-
Publication number: 20240393112
Abstract: Disclosed in the present application are a leveling method and a leveling system. The leveling method includes providing a leveling mechanism, setting up the plurality of the first containers and connecting two adjacent first containers to each other through one connecting pipe, filling each connecting pipe with observation liquid, measuring the same volume of the observation liquid using the second container and filling the observation liquid into the plurality of the first containers separately, maintaining the plurality of the first containers at the same horizontal height, defining a liquid level height in the first container as a height of the observation liquid level in the first container relative to the first container, comparing the liquid level heights in the plurality of the first containers, and determining that the target device is in a leveling state if the liquid level heights in the plurality of the first containers are the same.
Type: Application
Filed: January 16, 2024
Publication date: November 28, 2024
Inventors: XING-CHUAN LI, BIN-BIN YANG, LIANG GAO, FANG-XING YANG, RUI-HAO XIAO
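As a reading aid, here is a tiny Python sketch of the leveling criterion stated in the abstract: the target device is judged to be level when the observed liquid levels in all connected containers are the same. The tolerance parameter is an assumption added for the sketch; the abstract itself only requires the heights to be equal.

```python
# Sketch of the leveling criterion: the device is level when all observed
# liquid levels agree. The tolerance value is an assumption for the sketch.

def is_level(liquid_levels_mm: list[float], tolerance_mm: float = 0.5) -> bool:
    """Return True when every container reading agrees within the tolerance."""
    if not liquid_levels_mm:
        raise ValueError("at least one container reading is required")
    return max(liquid_levels_mm) - min(liquid_levels_mm) <= tolerance_mm


if __name__ == "__main__":
    print(is_level([120.1, 120.3, 119.9, 120.2]))  # True: readings agree
    print(is_level([120.1, 123.8, 119.9, 120.2]))  # False: one container differs
```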
-
Publication number: 20240381559
Abstract: An accommodating device and a data processing system are provided. The device includes a housing defining a first opening, a cover assembly covering the first opening, a first sealing member, and a second sealing member. The cover assembly includes a first cover and a second cover. The housing includes a first side and a second side. The first cover is rotatably connected to the first side, and the second cover is rotatably connected to the second side. The first sealing member is between the cover assembly and the housing. The second sealing member is connected to the first or the second cover. When each of the first cover and the second cover covers the first opening, the first sealing member abuts against the housing and each of the first cover and the second cover, and the second sealing member abuts against the first cover and the second cover.
Type: Application
Filed: August 30, 2023
Publication date: November 14, 2024
Inventors: FANG-XING YANG, Bin-Bin YANG, Liang GAO, Xing-Chuan LI, Rui-Hao XIAO
-
Publication number: 20240381562
Abstract: A connector for installing a device in a host includes a main body for connecting the device, a connecting portion retractably connected to the main body, and a pull ring connected to the connecting portion. The pull ring is configured to be connected to a lifting component. The connecting portion is configured to partially protrude from the main body to connect to a cabinet of the host when the pull ring is not pulled by the lifting component, and to retract into the main body to separate from the cabinet of the host when the pull ring is pulled by the lifting component. A host having the connector and data processing equipment having the host are also provided.
Type: Application
Filed: November 23, 2023
Publication date: November 14, 2024
Inventors: BIN-BIN YANG, FANG-XING YANG, XING-CHUAN LI, RUI-HAO XIAO, LIANG GAO
-
Publication number: 20240381563
Abstract: A structure for identifying the position of a server in a rack is provided; the structure includes a busbar, a shell, and a clip. The busbar and the shell are mounted in the rack. The shell defines several connecting positions, each connecting position defines several hole-points, the shell can define a hole at each hole-point, and the quantity and the locations of the holes in each connecting position are different. The clip is connected to the server and includes several springs. When the server is positioned at one of the connecting positions, each spring conductively contacts the shell through the corresponding hole-point without a hole or does not contact the shell through the corresponding hole-point with a hole. The springs are configured to identify the combination at the connecting position where the server is located, thereby identifying the position of the server. A server system using the structure is also disclosed.
Type: Application
Filed: November 13, 2023
Publication date: November 14, 2024
Inventors: BIN-BIN YANG, XING-CHUAN LI, RUI-HAO XIAO, LIANG GAO, FANG-XING YANG
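The abstract effectively describes a hole/contact encoding of rack positions: each connecting position drills a different combination of holes, so the pattern of springs that do or do not touch the shell forms a unique code. The following Python sketch shows how such a contact pattern could be decoded; the hole layouts and position names are invented for illustration and are not taken from the patent.

```python
# Hypothetical decoding of the position-identification scheme in the abstract.
# Per position: which of the 3 hole-points are drilled (True = hole present,
# so the spring at that point does NOT contact the shell). Layouts are invented.
HOLE_LAYOUT = {
    "U1": (False, False, True),
    "U2": (False, True, False),
    "U3": (True, False, False),
    "U4": (True, True, False),
}


def decode_position(spring_contacts: tuple[bool, bool, bool]) -> str:
    """Map a measured contact pattern back to a connecting position.

    spring_contacts[i] is True when spring i conductively touches the shell,
    i.e. when hole-point i was left undrilled.
    """
    expected_holes = tuple(not contact for contact in spring_contacts)
    for position, holes in HOLE_LAYOUT.items():
        if holes == expected_holes:
            return position
    raise LookupError("contact pattern does not match any known position")


if __name__ == "__main__":
    # Springs 1 and 2 touch the shell, spring 3 meets a hole -> position U1.
    print(decode_position((True, True, False)))
```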
-
Publication number: 20240333766
Abstract: The present disclosure relates to data processing, and in particular, to a system for processing a full-stack network card task based on FPGA. The system includes: a network interface controller, configured to receive to-be-processed data, and offload a TCP/IP task from the to-be-processed data by a built-in TCP offload engine, to obtain first processed data; an SSL/TLS protocol processing module, configured to receive the first processed data, and offload an SSL/TLS protocol task from the first processed data, to obtain second processed data; a PR region, configured to receive the second processed data; and a reconfiguration module, configured to acquire, by a host, dynamic configuration information of the PR region, and configure the PR region based on the dynamic configuration information, so that the PR region offloads and processes computation-intensive tasks in the second processed data.
Type: Application
Filed: September 29, 2022
Publication date: October 3, 2024
Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Linge XIAO, Rui HAO, Hongwei KAN
-
Publication number: 20240294518
Abstract: A compound as represented by formula I or a pharmaceutically acceptable form thereof, a pharmaceutical composition containing same, and the medical use thereof for preventing and/or treating HPK1-related diseases.
Type: Application
Filed: May 11, 2022
Publication date: September 5, 2024
Applicant: Evopoint Biosciences Co., Ltd.
Inventors: Yuchuan WU, Xiao LIU, Yonghua XIE, Xi CHEN, Rui HAO, Yonghan HU
-
Publication number: 20240281400
Abstract: Provided are a communication method and system for a distributed heterogeneous acceleration platform, a device and a medium. The method includes: after starting a collaborative acceleration task, determining, by a first target heterogeneous acceleration card in a distributed heterogeneous acceleration platform, a second target heterogeneous acceleration card from the distributed heterogeneous acceleration platform by querying an information table corresponding to the collaborative acceleration task; generating, by the first target heterogeneous acceleration card, a target data packet according to a predefined data packet format, and sending the target data packet to the second target heterogeneous acceleration card via a PCIE interface; and parsing, by the second target heterogeneous acceleration card, the target data packet according to the data packet format, and executing a corresponding read operation or write operation according to a parsing result, so as to complete the collaborative acceleration task.
Type: Application
Filed: June 1, 2022
Publication date: August 22, 2024
Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Hongwei KAN, Rengang LI, Yanwei WANG, Rui HAO, Jiangwei WANG, Dongdong SU, Kefeng ZHU, Le YANG
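As an illustration of the described flow (query an information table, build a packet in a predefined format, send it over PCIe, parse it, and perform the read or write), here is a minimal Python sketch. The packet layout, field names, and table contents are assumptions made for the sketch; the patent's actual packet format is not published in the abstract.

```python
# Illustrative card-to-card request flow with an assumed packet layout.
import struct

# Hypothetical layout: task id, source card, destination card, opcode, address, length.
PACKET_FMT = "<IHHBQI"           # little-endian, fixed-size header
OP_READ, OP_WRITE = 0, 1

TASK_INFO_TABLE = {              # task id -> participating card ids (invented)
    42: {"initiator": 0, "peer": 3},
}


def build_packet(task_id: int, opcode: int, address: int, length: int) -> bytes:
    entry = TASK_INFO_TABLE[task_id]                 # query the information table
    return struct.pack(PACKET_FMT, task_id, entry["initiator"],
                       entry["peer"], opcode, address, length)


def handle_packet(packet: bytes) -> str:
    task_id, src, dst, opcode, address, length = struct.unpack(PACKET_FMT, packet)
    action = "read" if opcode == OP_READ else "write"
    return f"card {dst}: task {task_id} -> {action} {length} bytes at 0x{address:x} for card {src}"


if __name__ == "__main__":
    pkt = build_packet(42, OP_READ, 0x8000_0000, 4096)   # sent over PCIe in practice
    print(handle_packet(pkt))
```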
-
Publication number: 20240276760
Abstract: A display substrate is provided, including: a base substrate including at least pixel and hole areas; sub-pixels arranged on the base substrate and in the pixel area; a hole in the hole area; a first barrier dam between the sub-pixels and the hole and at least partially surrounding the hole; a functional film layer, an organic material layer and an encapsulating structure sequentially arranged on the display substrate in a direction away from the base substrate, an orthographic projection of the organic material layer on the base substrate falling within the pixel area, the organic material layer including at least one film layer; and a filling structure, at least a portion of which is arranged between the hole and the first barrier dam. The filling structure and the at least one film layer of the organic material layer are in the same layer and include the same material.
Type: Application
Filed: April 25, 2024
Publication date: August 15, 2024
Applicants: Chengdu BOE Optoelectronics Technology Co., Ltd.; BOE Technology Group Co., Ltd.
Inventors: Xin Zhang, Yupeng He, Yang Zhou, Wei Wang, Xiaofeng Jiang, Yu Wang, Lulu Yang, Yiyang Zhang, Guanghui Yang, Jiaming Lu, Rui Hao, Qun Ma, Pu Liu, Liudong Zhu, Qiang Huang, Bin He, Dinan Duan, Haiyong Bai, Xin Li, Ruiqi Wei
-
Patent number: 12026037
Abstract: An information recording method, apparatus, and device, and a readable storage medium are provided. The method includes: when a server is started, determining a ring buffer in a Double Data Rate (DDR) of a Field-Programmable Gate Array (FPGA) acceleration card based on an OpenPower platform; determining a start address and an end address of the ring buffer and configuring the start address and the end address to the FPGA acceleration card; and during a running process of the server, recording preset debugging information to the ring buffer in real time, so as to perform fault location according to data in the ring buffer after a fault occurs in the server. According to the present application, during a running process of a server, preset debugging information is recorded using a DDR of an FPGA acceleration card.
Type: Grant
Filed: February 19, 2021
Date of Patent: July 2, 2024
Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
Inventors: Zhenhui Li, Rui Hao, Yanwei Wang
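The following is a minimal Python model of the recording scheme the abstract describes: a ring buffer bounded by a host-configured start and end address in the acceleration card's DDR, to which debugging records are appended in real time and read back after a fault. The addresses, record size, and class names are assumptions made for the sketch.

```python
# Software model of a debug ring buffer bounded by start/end addresses.
# Addresses, record size, and names are assumptions for the sketch.

class DebugRingBuffer:
    def __init__(self, start_addr: int, end_addr: int, record_size: int = 64):
        if end_addr <= start_addr:
            raise ValueError("end address must be above start address")
        self.start_addr = start_addr
        self.end_addr = end_addr
        self.record_size = record_size
        self.write_addr = start_addr
        self.storage: dict[int, bytes] = {}  # stands in for the card's DDR

    def record(self, info: bytes) -> None:
        """Append one fixed-size debug record, wrapping at the end address."""
        padded = info[: self.record_size].ljust(self.record_size, b"\0")
        self.storage[self.write_addr] = padded
        self.write_addr += self.record_size
        if self.write_addr >= self.end_addr:  # ring semantics: wrap around
            self.write_addr = self.start_addr

    def dump(self) -> list[bytes]:
        """Read every occupied slot back, in address order, for fault location."""
        return [self.storage[addr] for addr in sorted(self.storage)]


if __name__ == "__main__":
    ring = DebugRingBuffer(start_addr=0x1000, end_addr=0x1000 + 4 * 64)
    for i in range(6):  # more records than slots, so the oldest get overwritten
        ring.record(f"debug event {i}".encode())
    for rec in ring.dump():
        print(rec.rstrip(b"\0"))
```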
-
Patent number: 12004367
Abstract: A display substrate, a manufacturing method thereof, and a display device are provided. The display substrate includes: a base substrate at least including a pixel area and a hole area; a plurality of sub-pixels arranged on the base substrate and located in the pixel area; a hole in the hole area; a first barrier dam arranged between the sub-pixels and the hole and at least partially surrounding the hole; an organic material layer including at least one film layer, wherein an orthographic projection of the organic material layer on the base substrate falls within the pixel area; and a filling structure, wherein at least a portion of the filling structure is arranged between the hole and the first barrier dam, and the filling structure and the at least one film layer of the organic material layer are located in the same layer and include the same material.
Type: Grant
Filed: June 30, 2020
Date of Patent: June 4, 2024
Assignees: CHENGDU BOE OPTOELECTRONICS TECHNOLOGY CO., LTD.; BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Xin Zhang, Yupeng He, Yang Zhou, Wei Wang, Xiaofeng Jiang, Yu Wang, Lulu Yang, Yiyang Zhang, Guanghui Yang, Jiaming Lu, Rui Hao, Qun Ma, Pu Liu, Liudong Zhu, Qiang Huang, Bin He, Dinan Duan, Haiyong Bai, Xin Li, Ruiqi Wei
-
Patent number: 11928493
Abstract: A method, system and apparatus for the sharing of an FPGA board by multiple virtual machines. Specifically, in the present application, a PCIE virtual layer (comprising a plurality of PCIE virtual sub-layers) and a virtual PCIE device are created; one virtual machine corresponds to one virtual PCIE device, multiple virtual PCIE devices correspond to one PCIE virtual sub-layer, and one PCIE virtual sub-layer corresponds to one FPGA board, thus enabling multiple virtual machines to share and use the FPGA board through one PCIE virtual sub-layer (that is, the multiple virtual machines share one PCIE bus, and they all access the FPGA board through that PCIE bus), thereby solving the problem of some of the virtual machines being unable to be started at the same time, and enhancing the user experience.
Type: Grant
Filed: August 30, 2019
Date of Patent: March 12, 2024
Assignee: ZHENGZHOU YUNHAI INFORMATION TECHNOLOGY CO., LTD.
Inventors: Jiaheng Fan, Rui Hao
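A minimal Python sketch of the sharing topology the abstract describes: one virtual PCIE device per virtual machine, several virtual devices per PCIE virtual sub-layer, and one sub-layer per FPGA board. The class and method names are hypothetical; the sketch only mirrors the one-to-many mapping, not the actual driver implementation.

```python
# Model of the VM -> virtual PCIe device -> virtual sub-layer -> FPGA board mapping.
# All names are invented for the sketch.

class FPGABoard:
    def __init__(self, board_id: int):
        self.board_id = board_id

    def run(self, vm_name: str, job: str) -> str:
        return f"board {self.board_id} ran '{job}' for {vm_name}"


class PCIeVirtualSublayer:
    """One sub-layer arbitrates all accesses to a single FPGA board."""
    def __init__(self, board: FPGABoard):
        self.board = board
        self.devices: dict[str, "VirtualPCIeDevice"] = {}

    def attach(self, vm_name: str) -> "VirtualPCIeDevice":
        dev = VirtualPCIeDevice(vm_name, self)
        self.devices[vm_name] = dev          # many virtual devices, one board
        return dev

    def submit(self, vm_name: str, job: str) -> str:
        return self.board.run(vm_name, job)  # shared PCIe path to the board


class VirtualPCIeDevice:
    """What a single virtual machine sees as 'its' PCIe device."""
    def __init__(self, vm_name: str, sublayer: PCIeVirtualSublayer):
        self.vm_name = vm_name
        self.sublayer = sublayer

    def accelerate(self, job: str) -> str:
        return self.sublayer.submit(self.vm_name, job)


if __name__ == "__main__":
    sublayer = PCIeVirtualSublayer(FPGABoard(board_id=0))
    devices = [sublayer.attach(name) for name in ("vm-a", "vm-b", "vm-c")]
    for dev in devices:                      # all three VMs share one board
        print(dev.accelerate("matrix-multiply"))
```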
-
Patent number: 11868297
Abstract: A far-end data migration device and method based on a FPGA cloud platform. The device includes a server, a switch, and a plurality of FPGA acceleration cards. The server transmits data to be accelerated to the FPGA acceleration cards by means of the switch. The FPGA acceleration cards are configured to perform a primary and/or secondary acceleration on the data, and are configured to migrate the accelerated data. The method includes: transmitting data to be accelerated to a FPGA acceleration card from a server by means of a switch; performing, by the FPGA acceleration card, a primary and/or secondary acceleration on the data to be accelerated; and migrating, by the FPGA acceleration card, the accelerated data.
Type: Grant
Filed: August 25, 2020
Date of Patent: January 9, 2024
Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Jiangwei Wang, Rui Hao, Hongwei Kan
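A short, illustrative Python sketch of the data flow in the abstract: the server hands data to an FPGA acceleration card through a switch, the card applies a primary and optionally a secondary acceleration, and the accelerated data is then migrated onward. The function names and the stand-in "acceleration" operations are assumptions for the sketch, not the patent's method.

```python
# Illustrative server -> switch -> FPGA card -> migration flow with stand-in stages.
from typing import Callable


def primary_accelerate(data: bytes) -> bytes:
    return data.upper()                      # stand-in for the first offload stage


def secondary_accelerate(data: bytes) -> bytes:
    return data[::-1]                        # stand-in for the optional second stage


def fpga_card(data: bytes, use_secondary: bool,
              migrate: Callable[[bytes], None]) -> None:
    result = primary_accelerate(data)
    if use_secondary:
        result = secondary_accelerate(result)
    migrate(result)                          # push the accelerated data onward


def switch_forward(data: bytes, card: Callable[..., None], **kwargs) -> None:
    card(data, **kwargs)                     # the switch just routes the payload


if __name__ == "__main__":
    received = []
    switch_forward(b"payload from server", fpga_card,
                   use_secondary=True, migrate=received.append)
    print(received)                          # [b'REVRES MORF DAOLYAP']
```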
-
Publication number: 20230281385
Abstract: A field-programmable gate array (FPGA)-based FAST protocol decoding method, apparatus, and device, and a readable storage medium. The method acquires an actual XML template in real time and analyzes it, generates a FAST protocol intermediate representation, and determines, according to preset decoding parameters, the maximum number of fields which are read at a single time, so as to generate a field matching state machine. Thus, the present disclosure can support a dynamically updated XML template, allows the maximum number of fields to be set flexibly according to the actual network bandwidth, and is applicable to application scenarios with different network bandwidths. In the decoding process, the present disclosure realizes, by means of a field shift register and the field matching state machine, the function of reading and decoding a plurality of fields in parallel each time, significantly improving decoding efficiency.
Type: Application
Filed: February 19, 2021
Publication date: September 7, 2023
Inventors: Guoqiang MEI, Rui HAO, Wei GUO
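Below is a heavily simplified, software-only Python analogue of the decoding approach in the abstract: a template is parsed into an ordered field list, and the decoder then consumes up to a configurable number of stop-bit-encoded fields per step, mimicking the parallel field reads that the hardware field shift register and matching state machine perform. The stop-bit encoding is a property of FAST; the template content, field names, and the two-fields-per-step setting are assumptions for the sketch.

```python
# Simplified software analogue of template-driven FAST field decoding.
# Template content and field names are invented for the sketch.
import xml.etree.ElementTree as ET

TEMPLATE_XML = """
<template name="quote">
  <field name="price"/>
  <field name="size"/>
  <field name="orders"/>
</template>
"""

MAX_FIELDS_PER_STEP = 2          # "maximum number of fields read at a single time"


def parse_template(xml_text: str) -> list[str]:
    return [f.attrib["name"] for f in ET.fromstring(xml_text).findall("field")]


def read_stop_bit_field(stream: bytes, pos: int) -> tuple[int, int]:
    """FAST-style unsigned integer: 7 value bits per byte, high bit marks the end."""
    value = 0
    while True:
        byte = stream[pos]
        value = (value << 7) | (byte & 0x7F)
        pos += 1
        if byte & 0x80:
            return value, pos


def decode(stream: bytes, field_names: list[str]) -> dict[str, int]:
    decoded, pos, idx = {}, 0, 0
    while idx < len(field_names):
        # Take up to MAX_FIELDS_PER_STEP fields in one "step".
        for name in field_names[idx : idx + MAX_FIELDS_PER_STEP]:
            value, pos = read_stop_bit_field(stream, pos)
            decoded[name] = value
        idx += MAX_FIELDS_PER_STEP
    return decoded


if __name__ == "__main__":
    fields = parse_template(TEMPLATE_XML)
    # price=300 -> 0x02 0xAC, size=5 -> 0x85, orders=1 -> 0x81 (stop bit on last byte)
    message = bytes([0x02, 0xAC, 0x85, 0x81])
    print(decode(message, fields))
```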
-
Publication number: 20230214286
Abstract: An information recording method, apparatus, and device, and a readable storage medium are provided. The method includes: when a server is started, determining a ring buffer in a Double Data Rate (DDR) of a Field-Programmable Gate Array (FPGA) acceleration card based on an OpenPower platform; determining a start address and an end address of the ring buffer and configuring the start address and the end address to the FPGA acceleration card; and during a running process of the server, recording preset debugging information to the ring buffer in real time, so as to perform fault location according to data in the ring buffer after a fault occurs in the server. According to the present application, during a running process of a server, preset debugging information is recorded using a DDR of an FPGA acceleration card; therefore, when a down fault causes a Central Processing Unit (CPU) error of a server, recording of debugging information can also be ensured, thereby facilitating fault location.
Type: Application
Filed: February 19, 2021
Publication date: July 6, 2023
Inventors: Zhenhui LI, Rui HAO, Yanwei WANG
-
Patent number: 11687242
Abstract: The method includes: an FPGA board feeds back the quantity of controllers and the total quantity of DDR memories after receiving a hardware information acquisition request from a host; after a data space application request is received from the host, data slice processing is performed on the data to be calculated on the basis of the data space application request, wherein the data space application request carries the dedicated application space capacity of each DDR and the data to be calculated, and the total quantity of slices of the data to be calculated is the same as the total quantity of DDR memories; and each piece of sliced data is transmitted to a corresponding DDR space, and, according to the data storage position of the sliced data in each DDR, the data is read from the DDR memory space in parallel by means of the plurality of controllers and calculated.
Type: Grant
Filed: February 19, 2021
Date of Patent: June 27, 2023
Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
Inventors: Jiaheng Fan, Yanwei Wang, Hongwei Kan, Rui Hao
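A minimal Python sketch of the slicing scheme in the abstract: the host learns the number of DDR controllers/memories from the board, splits the input into exactly that many slices, places one slice per DDR space, and the per-DDR controllers then read and compute on their slices in parallel. The thread-based "controllers" and the sum() computation are stand-ins chosen for the sketch; the patent does not specify the computation.

```python
# Host-side model: one slice per DDR memory, processed by per-DDR "controllers".
from concurrent.futures import ThreadPoolExecutor


def slice_data(data: list[int], num_ddr: int) -> list[list[int]]:
    """Split the input into num_ddr slices, one per DDR memory."""
    chunk = -(-len(data) // num_ddr)               # ceiling division
    return [data[i * chunk : (i + 1) * chunk] for i in range(num_ddr)]


def controller_compute(ddr_slice: list[int]) -> int:
    return sum(ddr_slice)                          # placeholder per-controller work


def parallel_calculate(data: list[int], num_controllers: int) -> int:
    slices = slice_data(data, num_controllers)     # one slice per DDR memory
    with ThreadPoolExecutor(max_workers=num_controllers) as pool:
        partials = list(pool.map(controller_compute, slices))
    return sum(partials)


if __name__ == "__main__":
    num_ddr_reported_by_board = 4                  # "hardware information" reply
    data = list(range(1, 101))
    print(parallel_calculate(data, num_ddr_reported_by_board))   # 5050
```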
-
Publication number: 20230195310
Abstract: The method includes: an FPGA board feeds back the quantity of controllers and the total quantity of DDR memories after receiving a hardware information acquisition request from a host; after a data space application request is received from the host, data slice processing is performed on the data to be calculated on the basis of the data space application request, wherein the data space application request carries the dedicated application space capacity of each DDR and the data to be calculated, and the total quantity of slices of the data to be calculated is the same as the total quantity of DDR memories; and each piece of sliced data is transmitted to a corresponding DDR space, and, according to the data storage position of the sliced data in each DDR, the data is read from the DDR memory space in parallel by means of the plurality of controllers and calculated.
Type: Application
Filed: February 19, 2021
Publication date: June 22, 2023
Inventors: Jiaheng FAN, Yanwei WANG, Hongwei KAN, Rui HAO
-
Publication number: 20230110854
Abstract: A display substrate includes a substrate and a display region, an isolation region, a peripheral region and a hole disposed on the substrate, wherein the display region surrounds the hole, the isolation region is disposed between the display region and the peripheral region, and the peripheral region surrounds and adjoins the hole; the isolation region is provided with an isolation column and a first inorganic structure, the isolation column is disposed at a side of the substrate, and the first inorganic structure is disposed at a side of the isolation column away from the substrate; and the peripheral region is provided with a structure layer, and the structure layer is disposed on the substrate and includes a first organic structure and a second inorganic structure disposed along a direction perpendicular to the substrate; wherein the first inorganic structure, the second inorganic structure and the first organic structure all are made of an insulation material, and the first inorganic structure and the sec…
Type: Application
Filed: May 25, 2021
Publication date: April 13, 2023
Inventors: Guanghui YANG, Jiaming LU, Rui HAO, Qun MA, Pu LIU, Liudong ZHU, Qiang HUANG, Bin HE, Dinan DUAN, Haiyong BAI, Xin LI, Ruiqi WEI
-
Publication number: 20230074436
Abstract: In one aspect, antibodies that specifically bind to a human alpha-synuclein protein are provided. In some embodiments, an anti-alpha-synuclein antibody binds to monomeric human alpha-synuclein protein, oligomeric human alpha-synuclein protein, soluble human alpha-synuclein protein, human alpha-synuclein protein fibrils, and human alpha-synuclein protein that is phosphorylated at Ser129 (pSer129) with high affinity. In some embodiments, an anti-alpha-synuclein antibody can specifically bind to or immunodeplete alpha-synuclein protein. In some embodiments, an anti-alpha-synuclein antibody can prevent or inhibit alpha-synuclein seeding.
Type: Application
Filed: September 18, 2020
Publication date: March 9, 2023
Inventors: Jing GUO, Rui HAO, Do Jin KIM, Suresh PODA, Rinkan SHUKLA, Adam P. SILVERMAN
-
Publication number: 20230045601
Abstract: A far-end data migration device and method based on a FPGA cloud platform. The device includes a server, a switch, and a plurality of FPGA acceleration cards. The server transmits data to be accelerated to the FPGA acceleration cards by means of the switch. The FPGA acceleration cards are configured to perform a primary and/or secondary acceleration on the data, and are configured to migrate the accelerated data. The method includes: transmitting data to be accelerated to a FPGA acceleration card from a server by means of a switch; performing, by the FPGA acceleration card, a primary and/or secondary acceleration on the data to be accelerated; and migrating, by the FPGA acceleration card, the accelerated data.
Type: Application
Filed: August 25, 2020
Publication date: February 9, 2023
Inventors: Jiangwei WANG, Rui HAO, Hongwei KAN