Patents by Inventor Xuechao Wei
Xuechao Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12197791
Abstract: Embodiments of the present application relate to a method, a bridging device, a system, and a medium for virtualization processing of a storage device. The method comprises: receiving an initial access request to a virtual disk sent by a virtual machine user; translating the virtual address corresponding to the virtual machine to a first physical address corresponding to a host based on a preconfigured address mapping relationship; translating the virtual access address to a second physical address corresponding to the storage device based on a preconfigured virtual partition mapping relationship; and generating a target access request based on the first physical address and the second physical address, and sending the target access request to the host, so as to cause the host to exchange information with the storage device based on the target access request.
Type: Grant
Filed: June 20, 2024
Date of Patent: January 14, 2025
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
Inventors: Haixin Yu, Haozhong Zhang, Shoujing Bo, Jiali Jiang, Xuechao Wei
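The two-stage translation described in this abstract can be illustrated with a short sketch. Everything below (the BridgingDevice and TargetAccessRequest names, the 4 KiB page size, the dict-based mapping tables) is an assumption for illustration, not the patented implementation.

```python
# Hypothetical sketch of the bridging device's two-stage address translation.
# Names, page size, and partition size are illustrative assumptions.

from dataclasses import dataclass

PAGE_SIZE = 4096          # assumed host page size
PARTITION_BLOCKS = 2048   # assumed blocks per virtual partition


@dataclass
class TargetAccessRequest:
    host_physical_addr: int    # first physical address (host side)
    device_physical_addr: int  # second physical address (storage device side)
    length: int
    is_write: bool


class BridgingDevice:
    def __init__(self, address_map, partition_map):
        self.address_map = address_map      # guest page number -> host frame number
        self.partition_map = partition_map  # virtual partition index -> device block base

    def handle_request(self, guest_addr, virtual_block, length, is_write):
        # Stage 1: translate the VM's buffer address to a host physical address
        # using the preconfigured address mapping relationship.
        page, offset = divmod(guest_addr, PAGE_SIZE)
        host_pa = self.address_map[page] * PAGE_SIZE + offset

        # Stage 2: translate the virtual-disk block to a device physical block
        # using the preconfigured virtual partition mapping relationship.
        part, rel = divmod(virtual_block, PARTITION_BLOCKS)
        device_pa = self.partition_map[part] + rel

        # The combined target request is what the bridge forwards to the host.
        return TargetAccessRequest(host_pa, device_pa, length, is_write)


if __name__ == "__main__":
    bridge = BridgingDevice(address_map={0: 7}, partition_map={0: 100000})
    print(bridge.handle_request(guest_addr=0x200, virtual_block=5,
                                length=512, is_write=False))
```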
-
Publication number: 20240419367
Abstract: Embodiments of the present application relate to a method, a bridging device, a system, and a medium for virtualization processing of a storage device. The method comprises: receiving an initial access request to a virtual disk sent by a virtual machine user; translating the virtual address corresponding to the virtual machine to a first physical address corresponding to a host based on a preconfigured address mapping relationship; translating the virtual access address to a second physical address corresponding to the storage device based on a preconfigured virtual partition mapping relationship; and generating a target access request based on the first physical address and the second physical address, and sending the target access request to the host, so as to cause the host to exchange information with the storage device based on the target access request.
Type: Application
Filed: June 20, 2024
Publication date: December 19, 2024
Inventors: Haixin YU, Haozhong ZHANG, Shoujing BO, Jiali JIANG, Xuechao WEI
-
Publication number: 20240372820
Abstract: A data caching method comprises: receiving message data returned by a second device in response to a read command; dividing the message data into two paths of data according to a preset strategy, sending one of the two paths of data to a first random access memory for storage, and distributing the other path to a double data rate synchronous dynamic random access memory for storage through a first-in, first-out (FIFO) queue, wherein the sum of the working bandwidth of the first random access memory and the working bandwidth of the double data rate synchronous dynamic random access memory is greater than or equal to the receiving bandwidth of the message data; and sending the data stored in the first random access memory and the data stored in the double data rate synchronous dynamic random access memory to a first device connected to the network equipment.
Type: Application
Filed: April 30, 2024
Publication date: November 7, 2024
Inventors: Bin WANG, Yaobao WANG, Yong YUAN, Shangyu LU, Xuechao WEI
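The bandwidth argument in this abstract lends itself to a small sketch: two buffering paths whose combined working bandwidth covers the receive bandwidth. The alternating 50/50 split policy, the class name, and the in-memory stand-ins for the fast RAM and the DDR-fed FIFO are assumptions, not the claimed design.

```python
# Hedged sketch of a dual-path cache: one path to fast on-chip RAM, the other
# to DDR SDRAM through a FIFO. The split strategy here is an assumption.

from collections import deque


class DualPathCache:
    def __init__(self, sram_bandwidth, ddr_bandwidth, receive_bandwidth):
        # The abstract's constraint: the two paths together must absorb
        # the incoming message bandwidth.
        assert sram_bandwidth + ddr_bandwidth >= receive_bandwidth
        self.sram = []              # fast first random access memory path
        self.ddr_fifo = deque()     # FIFO feeding the DDR SDRAM path

    def on_message(self, message_chunks):
        # Preset strategy (assumed here): alternate chunks between the paths.
        for i, chunk in enumerate(message_chunks):
            if i % 2 == 0:
                self.sram.append(chunk)
            else:
                self.ddr_fifo.append(chunk)

    def drain_to_first_device(self, send):
        # Forward everything buffered on both paths to the connected device.
        for chunk in self.sram:
            send(chunk)
        while self.ddr_fifo:
            send(self.ddr_fifo.popleft())


if __name__ == "__main__":
    cache = DualPathCache(sram_bandwidth=50, ddr_bandwidth=50, receive_bandwidth=100)
    cache.on_message([b"chunk0", b"chunk1", b"chunk2", b"chunk3"])
    cache.drain_to_first_device(print)
```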
-
Patent number: 11902346
Abstract: The present disclosure provides a method and apparatus for processing a streaming media service, an electronic device, and a storage medium, and relates to the technical field of computers, particularly to technical fields such as industrial vision, deep learning, streaming media, and information flow. A specific implementation solution involves: acquiring registration information of an input source, the registration information including process information of a streaming media service process of the input source and streaming media address information of the input source; enabling the streaming media service process according to the process information; and controlling, by using the streaming media address information, the streaming media service process to process streaming media data of the input source.
Type: Grant
Filed: October 28, 2022
Date of Patent: February 13, 2024
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Shuo Li, Xuechao Wei, Yonggao Fu, Jiabing Leng, Yawen Liu, Mingfa Zhu, Feng Huang
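The register, enable, and control flow in this abstract can be sketched roughly as below. The StreamingServiceManager class, its method names, and the simulated process handles are illustrative assumptions, not the actual service described in the patent.

```python
# Hedged sketch of the flow: register an input source's process information
# and stream address, start its service process, then point the process at
# the stream address so it processes that source's media data.

from dataclasses import dataclass


@dataclass
class RegistrationInfo:
    process_name: str      # process information of the streaming service process
    stream_address: str    # streaming media address of the input source


class StreamingServiceManager:
    def __init__(self):
        self.registry = {}   # source id -> RegistrationInfo
        self.running = {}    # source id -> enabled process handle (simulated)

    def register(self, source_id, info: RegistrationInfo):
        # Acquire the input source's registration information.
        self.registry[source_id] = info

    def start(self, source_id):
        info = self.registry[source_id]
        # Enable the streaming media service process per its process info
        # (a string stands in for a real process handle here).
        self.running[source_id] = f"process:{info.process_name}"
        # Control the process with the stream address so it ingests the
        # input source's streaming media data.
        return f"{self.running[source_id]} <- pull {info.stream_address}"


if __name__ == "__main__":
    mgr = StreamingServiceManager()
    mgr.register("cam0", RegistrationInfo("rtsp_worker", "rtsp://10.0.0.5/cam0"))
    print(mgr.start("cam0"))
```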
-
Publication number: 20240028392
Abstract: The present disclosure discloses a batch computing system and an associated method. The batch computing system includes a memory, a task manager, and an inference computer. The memory stores a shared model parameter set, common to a plurality of tasks, that is generated by fine-tuning a shared model, and a task-specific parameter set for each task. The inference computer receives a plurality of task requests, derives a data length and a designated task for each task request, and enables the task manager to read the task-specific parameter set and the shared model parameter set corresponding to each task request. The inference computer further assigns task requests corresponding to the shared model to a plurality of computation batches, performs, in batch, the common computation of the designated tasks in each computation batch, and performs the task-specific computation operations of the designated tasks of each computation batch.
Type: Application
Filed: May 24, 2023
Publication date: January 25, 2024
Inventors: XUECHAO WEI, ZHE ZHOU, JIEJING ZHANG, SICHENG LI, YEN-KUANG CHEN, BIZHAO SHI
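The batching idea, one shared forward pass over the whole batch plus per-request task-specific parameters, can be sketched as follows. The tensor shapes, the adapter-style matrices, and all names are assumptions for illustration, not the claimed system.

```python
# Hedged sketch: requests for different fine-tuned tasks share one backbone
# computation, and only small task-specific parameter sets differ per request.

import numpy as np

SHARED_W = np.random.rand(8, 8)    # shared model parameter set
TASK_PARAMS = {                    # task-specific parameter sets (assumed adapter-style)
    "sentiment": np.random.rand(8, 2),
    "topic": np.random.rand(8, 4),
}


def batched_inference(requests):
    # requests: list of (designated_task, input_vector) pairs.
    # Common computation: run the shared model once, in batch.
    batch = np.stack([x for _, x in requests])   # shape (B, 8)
    hidden = batch @ SHARED_W                    # shared computation for the whole batch

    # Task-specific computation: apply each request's own parameter set.
    outputs = []
    for (task, _), h in zip(requests, hidden):
        outputs.append(h @ TASK_PARAMS[task])
    return outputs


if __name__ == "__main__":
    reqs = [("sentiment", np.ones(8)), ("topic", np.ones(8))]
    for out in batched_inference(reqs):
        print(out.shape)
```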
-
Publication number: 20230319124
Abstract: The present disclosure provides a method and apparatus for processing a streaming media service, an electronic device, and a storage medium, and relates to the technical field of computers, particularly to technical fields such as industrial vision, deep learning, streaming media, and information flow. A specific implementation solution involves: acquiring registration information of an input source, the registration information including process information of a streaming media service process of the input source and streaming media address information of the input source; enabling the streaming media service process according to the process information; and controlling, by using the streaming media address information, the streaming media service process to process streaming media data of the input source.
Type: Application
Filed: October 28, 2022
Publication date: October 5, 2023
Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Shuo LI, Xuechao WEI, Yonggao FU, Jiabing LENG, Yawen LIU, Mingfa ZHU, Feng HUANG
-
Patent number: 11604758
Abstract: Systems and methods for automated systolic array design from a high-level program are disclosed. One implementation of a systolic array design supporting a convolutional neural network includes a two-dimensional array of reconfigurable processing elements arranged in rows and columns. Each processing element has an associated SIMD vector and is connected through a local connection to at least one other processing element. An input feature map buffer having a double buffer is configured to store input feature maps, and an interconnect system is configured to pass data to neighboring processing elements in accordance with a processing element scheduler. A CNN computation is mapped onto the two-dimensional array of reconfigurable processing elements using an automated system configured to determine suitable reconfigurable processing element parameters.
Type: Grant
Filed: November 12, 2020
Date of Patent: March 14, 2023
Assignee: Xilinx, Inc.
Inventors: Peng Zhang, Cody Hao Yu, Xuechao Wei, Peichen Pan
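A purely functional model of the mapping this abstract describes is sketched below: a small grid of processing elements computing a tile of a matrix multiply, which is one common way convolution layers are lowered onto systolic arrays. The array size, output-stationary dataflow, and omission of the SIMD vectors and double buffering are assumptions, not the patented design.

```python
# Hedged, functional model of a 2-D processing-element array. It reproduces
# the result a systolic array would compute, not its cycle-level behavior.

import numpy as np


def systolic_matmul(A, B, rows=4, cols=4):
    """Models an output-stationary rows x cols PE array computing C = A @ B,
    with A streamed along PE rows and B streamed along PE columns."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M <= rows and N <= cols, "tile must fit the PE array"

    C = np.zeros((M, N))
    # At each step, every PE multiplies the operands arriving from its left
    # and top neighbors and adds the product to its local accumulator.
    for k in range(K):             # data streamed through the array over time
        for i in range(M):         # PE row index
            for j in range(N):     # PE column index
                C[i, j] += A[i, k] * B[k, j]
    return C


if __name__ == "__main__":
    A = np.random.rand(4, 3)
    B = np.random.rand(3, 4)
    assert np.allclose(systolic_matmul(A, B), A @ B)
    print("tile result matches numpy matmul")
```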
-
Publication number: 20210081354
Abstract: Systems and methods for automated systolic array design from a high-level program are disclosed. One implementation of a systolic array design supporting a convolutional neural network includes a two-dimensional array of reconfigurable processing elements arranged in rows and columns. Each processing element has an associated SIMD vector and is connected through a local connection to at least one other processing element. An input feature map buffer having a double buffer is configured to store input feature maps, and an interconnect system is configured to pass data to neighboring processing elements in accordance with a processing element scheduler. A CNN computation is mapped onto the two-dimensional array of reconfigurable processing elements using an automated system configured to determine suitable reconfigurable processing element parameters.
Type: Application
Filed: November 12, 2020
Publication date: March 18, 2021
Inventors: Peng Zhang, Cody Hao Yu, Xuechao Wei, Peichen Pan
-
Patent number: 10838910
Abstract: Systems and methods for automated systolic array design from a high-level program are disclosed. One implementation of a systolic array design supporting a convolutional neural network includes a two-dimensional array of reconfigurable processing elements arranged in rows and columns. Each processing element has an associated SIMD vector and is connected through a local connection to at least one other processing element. An input feature map buffer having a double buffer is configured to store input feature maps, and an interconnect system is configured to pass data to neighboring processing elements in accordance with a processing element scheduler. A CNN computation is mapped onto the two-dimensional array of reconfigurable processing elements using an automated system configured to determine suitable reconfigurable processing element parameters.
Type: Grant
Filed: April 25, 2018
Date of Patent: November 17, 2020
Assignee: FALCON COMPUTING
Inventors: Peng Zhang, Cody Hao Yu, Xuechao Wei, Peichen Pan
-
Publication number: 20180314671
Abstract: Systems and methods for automated systolic array design from a high-level program are disclosed. One implementation of a systolic array design supporting a convolutional neural network includes a two-dimensional array of reconfigurable processing elements arranged in rows and columns. Each processing element has an associated SIMD vector and is connected through a local connection to at least one other processing element. An input feature map buffer having a double buffer is configured to store input feature maps, and an interconnect system is configured to pass data to neighboring processing elements in accordance with a processing element scheduler. A CNN computation is mapped onto the two-dimensional array of reconfigurable processing elements using an automated system configured to determine suitable reconfigurable processing element parameters.
Type: Application
Filed: April 25, 2018
Publication date: November 1, 2018
Inventors: Peng Zhang, Cody Hao Yu, Xuechao Wei, Peichen Pan