Patents by Inventor Xingzhi Wen
Xingzhi Wen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11921099
Abstract: A method for quantitatively analyzing the reservoir formation of an ultra-deep evaporite-dolomite paragenesis system is performed as follows. A typical drilling core containing the evaporite-dolomite paragenesis system and a field section are observed. The logging data is subjected to single-factor analysis to determine the planar distribution regularity of the ultra-deep evaporite and the dolomite, and the analysis of sedimentary combination pattern and development evolution regularity is performed. The diagenetic system is determined, and the reservoir formation of the evaporite-dolomite paragenesis system is analyzed. Based on the above technical solutions, the property, the evolution path and the reservoir formation of sedimentation-diagenesis fluids in the evaporite-dolomite paragenesis system can be clarified.
Type: Grant
Filed: July 31, 2023
Date of Patent: March 5, 2024
Assignee: SOUTHWEST PETROLEUM UNIVERSITY
Inventors: Fei Huo, Xingzhi Wang, Huaguo Wen, Huiwen Huang, Yunbo Ruan, Liang Li
-
Patent number: 10725927
Abstract: Aspects of the present disclosure describe a cache system that is co-managed by software and hardware, obviating the need for a cache coherence protocol. In some embodiments, a cache would have the following two hardware interfaces that are driven by software: (1) invalidate or flush its content to the lower-level memory hierarchy; (2) specify memory regions that can be cached. Software would be responsible for specifying which regions are cacheable, and may flexibly change memory between cacheable and not, depending on the stage of the software program. In some embodiments, invalidation can be done in one cycle. Multiple valid bits can be kept for each tag in the memory. A vector "valid bit vec" comprising a plurality of bits can be used. Only one of two bits may be used as the valid bit to indicate that this region of memory is holding valid information for use by the software.
Type: Grant
Filed: December 4, 2018
Date of Patent: July 28, 2020
Assignee: Beijing Panyi Technology Co., Ltd.
Inventor: Xingzhi Wen
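The "valid bit vec" idea in the abstract above can be illustrated with a small sketch: each tag keeps two valid bits, and a single global selector (called an "epoch" here, a hypothetical name not taken from the patent) decides which bit currently means "valid". Flipping the selector invalidates every line at once, since no line has yet set its bit for the new position. This is an assumption-laden reading, not the patented implementation.

```python
# Sketch of one-cycle invalidation via a per-tag valid-bit vector.
# All class and field names are illustrative.

class ValidVecCache:
    def __init__(self, num_lines):
        # Two valid bits per line; only bits[epoch] counts as "valid".
        self.bits = [[False, False] for _ in range(num_lines)]
        self.tags = [None] * num_lines
        self.epoch = 0  # selects which of the two bits is the active valid bit

    def fill(self, line, tag):
        self.tags[line] = tag
        self.bits[line][self.epoch] = True

    def is_valid(self, line):
        return self.bits[line][self.epoch]

    def invalidate_all(self):
        # The "one cycle" step: flip the epoch selector. Every line is now
        # invalid because no line has set its bit for the new epoch.
        old = self.epoch
        self.epoch ^= 1
        for b in self.bits:   # in hardware this clear of the stale bits
            b[old] = False    # could be deferred and overlapped with use
```

In hardware, the appeal is that the flip itself is a single register update; the software-visible effect is an instant whole-cache invalidation.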
-
Patent number: 10503541
Abstract: Aspects of the present disclosure are presented for a multi-threaded system configured to efficiently handle dynamic thread spawning. When a child thread is spawned during program execution, the parent thread may need the child thread ID for initialization. However, to obtain the child thread ID, the child thread may need to be generated, and once it is generated, it could be executed immediately, even before the initialization finishes, causing an error. The present disclosure introduces a memory-efficient solution through use of a control circuit configured to control when child threads can be executed while still enabling the parent threads to obtain the child thread IDs.
Type: Grant
Filed: December 4, 2018
Date of Patent: December 10, 2019
Assignee: Beijing Panyi Technology Co., Ltd.
Inventor: Xingzhi Wen
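The race described in the abstract above can be sketched in software: the child thread is created early so the parent can read its ID, but a gate (standing in for the patent's control circuit) keeps the child from running until the parent finishes initializing state keyed by that ID. This is a minimal software analogy with hypothetical names, not the circuit itself.

```python
import threading

# Software analogy for the hold-until-initialized idea: children exist
# (and have IDs) before they are allowed to execute.

class SpawnController:
    def __init__(self):
        self._gate = threading.Event()  # stands in for the control circuit
        self.shared = {}                # state the parent initializes per child

    def spawn(self, body):
        t = threading.Thread(target=self._wrap, args=(body,))
        t.start()   # child now exists and t.ident is its thread ID...
        return t    # ...but it is blocked in _wrap until release()

    def _wrap(self, body):
        self._gate.wait()  # held here until the parent finishes initialization
        body()

    def release(self):
        self._gate.set()   # initialization done; children may now run
```

Without the gate, the child could read `controller.shared` before the parent writes it; with the gate, the parent can safely use the child's ID during initialization.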
-
Publication number: 20190171574
Abstract: Aspects of the present disclosure describe a cache system that is co-managed by software and hardware, obviating the need for a cache coherence protocol. In some embodiments, a cache would have the following two hardware interfaces that are driven by software: (1) invalidate or flush its content to the lower-level memory hierarchy; (2) specify memory regions that can be cached. Software would be responsible for specifying which regions are cacheable, and may flexibly change memory between cacheable and not, depending on the stage of the software program. In some embodiments, invalidation can be done in one cycle. Multiple valid bits can be kept for each tag in the memory. A vector "valid bit vec" comprising a plurality of bits can be used. Only one of two bits may be used as the valid bit to indicate that this region of memory is holding valid information for use by the software.
Type: Application
Filed: December 4, 2018
Publication date: June 6, 2019
Applicant: Beijing Panyi Technology Co., Ltd.
Inventor: Xingzhi Wen
-
Publication number: 20190171482
Abstract: Aspects of the present disclosure are presented for a multi-threaded system configured to efficiently handle dynamic thread spawning. When a child thread is spawned during program execution, the parent thread may need the child thread ID for initialization. However, to obtain the child thread ID, the child thread may need to be generated, and once it is generated, it could be executed immediately, even before the initialization finishes, causing an error. The present disclosure introduces a memory-efficient solution through use of a control circuit configured to control when child threads can be executed while still enabling the parent threads to obtain the child thread IDs.
Type: Application
Filed: December 4, 2018
Publication date: June 6, 2019
Applicant: Beijing Panyi Technology Co., Ltd.
Inventor: Xingzhi Wen
-
Patent number: 9858205
Abstract: A system includes a cache and a cache-management component. The cache includes a plurality of cache lines that correspond to a plurality of device endpoints. The cache-management component is configured to receive a transfer request block (TRB) for data transfer involving a device endpoint. In response to a determination that the cache both (i) does not include a cache line assigned to the device endpoint and (ii) does not include an empty cache line, the cache-management component assigns, to the device endpoint, a last cache line that includes a most recently received TRB in the cache, and stores the received TRB to the last cache line.
Type: Grant
Filed: November 4, 2016
Date of Patent: January 2, 2018
Assignee: MARVELL WORLD TRADE LTD.
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Qunzhao Tian, Jeanne Q. Cai, Shaori Guo
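The assignment policy in the abstract above (use the line already assigned to the endpoint; else an empty line; else fall back to the line holding the most recently received TRB) can be sketched as a small decision flow. Class and field names here are illustrative assumptions, not taken from the patent.

```python
# Sketch of the TRB cache-line assignment policy described in the abstract.

class TrbCache:
    def __init__(self, num_lines):
        self.lines = [{"endpoint": None, "trbs": []} for _ in range(num_lines)]
        self.last_filled = None  # index of line with the most recent TRB

    def store(self, endpoint, trb):
        # (1) Is a line already assigned to this endpoint?
        line = next((l for l in self.lines if l["endpoint"] == endpoint), None)
        if line is None:
            # (2) Is there an empty line (no valid TRBs)?
            line = next((l for l in self.lines if not l["trbs"]), None)
            if line is None:
                # (3) Fallback: reuse the line holding the most recent TRB.
                line = self.lines[self.last_filled]
                line["trbs"] = []           # evict its TRBs
            line["endpoint"] = endpoint     # (re)assign the line
        line["trbs"].append(trb)
        self.last_filled = self.lines.index(line)
```

The fallback in step (3) is the case this patent singles out: rather than stalling when the cache is full, the most recently used line is repurposed for the new endpoint.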
-
Publication number: 20170052904
Abstract: A system includes a cache and a cache-management component. The cache includes a plurality of cache lines that correspond to a plurality of device endpoints. The cache-management component is configured to receive a transfer request block (TRB) for data transfer involving a device endpoint. In response to a determination that the cache both (i) does not include a cache line assigned to the device endpoint and (ii) does not include an empty cache line, the cache-management component assigns, to the device endpoint, a last cache line that includes a most recently received TRB in the cache, and stores the received TRB to the last cache line.
Type: Application
Filed: November 4, 2016
Publication date: February 23, 2017
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Qunzhao Tian, Jeanne Q. Cai, Shaori Guo
-
Patent number: 9489311
Abstract: Systems and methods are provided for cache management. An example system includes a cache and a cache-management component. The cache includes a plurality of cache lines corresponding to a plurality of device endpoints, a device endpoint including a portion of a universal-serial-bus (USB) device. The cache-management component is configured to receive first transfer request blocks (TRBs) for data transfer involving a first device endpoint and determine whether a cache line in the cache is assigned to the first device endpoint. The cache-management component is further configured to, in response to no cache line in the cache being assigned to the first device endpoint, determine whether the cache includes an empty cache line that contains no valid TRBs, and in response to the cache including an empty cache line, assign the empty cache line to the first device endpoint and store the first TRBs to the empty cache line.
Type: Grant
Filed: June 6, 2014
Date of Patent: November 8, 2016
Assignee: MARVELL WORLD TRADE LTD.
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Qunzhao Tian, Jeanne Q. Cai, Shaori Guo
-
Patent number: 9367511
Abstract: Systems and methods are provided for managing universal-serial-bus (USB) data transfers. An example system includes a non-transitory computer-readable storage medium including a first scheduling queue for sorting endpoints and a host controller. The host controller is configured to: store a plurality of endpoints for data transfers to the storage medium, an endpoint corresponding to a portion of a USB device; sort the plurality of endpoints in a first order; generate a first transmission data unit including multiple original data packets, the original data packets being allocated to the plurality of endpoints based at least in part on the first order; and transfer the first transmission data unit.
Type: Grant
Filed: July 16, 2014
Date of Patent: June 14, 2016
Assignee: MARVELL WORLD TRADE LTD.
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Jeanne Q. Cai, Yan Zhang, Shaori Guo
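The scheduling step above (sort endpoints into a first order, then allocate the packets of one transmission data unit across endpoints based on that order) can be sketched as a simple round-robin allocator. Round-robin is one plausible reading of "based at least in part on the first order", and the function name and packet budget are assumptions for illustration.

```python
from collections import deque

# Sketch: allocate packets to a transmission data unit by cycling through
# endpoints in their scheduling-queue order.

def build_transmission_unit(endpoint_packets, budget):
    """endpoint_packets: {endpoint_id: deque of pending packets},
    iterated in the endpoints' sorted ("first") order.
    budget: assumed maximum number of packets per transmission data unit."""
    queue = deque(endpoint_packets.keys())   # the first scheduling queue
    unit = []
    while queue and len(unit) < budget:
        ep = queue.popleft()
        packets = endpoint_packets[ep]
        if packets:
            unit.append((ep, packets.popleft()))
            queue.append(ep)                 # endpoint re-queued for another turn
    return unit
```

Interleaving packets this way keeps one busy endpoint from monopolizing a transmission data unit while still respecting the initial endpoint ordering.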
-
Publication number: 20150026369
Abstract: Systems and methods are provided for managing universal-serial-bus (USB) data transfers. An example system includes a non-transitory computer-readable storage medium including a first scheduling queue for sorting endpoints and a host controller. The host controller is configured to: store a plurality of endpoints for data transfers to the storage medium, an endpoint corresponding to a portion of a USB device; sort the plurality of endpoints in a first order; generate a first transmission data unit including multiple original data packets, the original data packets being allocated to the plurality of endpoints based at least in part on the first order; and transfer the first transmission data unit.
Type: Application
Filed: July 16, 2014
Publication date: January 22, 2015
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Jeanne Q. Cai, Yan Zhang, Shaori Guo
-
Publication number: 20140365731
Abstract: Systems and methods are provided for cache management. An example system includes a cache and a cache-management component. The cache includes a plurality of cache lines corresponding to a plurality of device endpoints, a device endpoint including a portion of a universal-serial-bus (USB) device. The cache-management component is configured to receive first transfer request blocks (TRBs) for data transfer involving a first device endpoint and determine whether a cache line in the cache is assigned to the first device endpoint. The cache-management component is further configured to, in response to no cache line in the cache being assigned to the first device endpoint, determine whether the cache includes an empty cache line that contains no valid TRBs, and in response to the cache including an empty cache line, assign the empty cache line to the first device endpoint and store the first TRBs to the empty cache line.
Type: Application
Filed: June 6, 2014
Publication date: December 11, 2014
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Qunzhao Tian, Jeanne Q. Cai, Shaori Guo
-
Patent number: 8209690
Abstract: An Explicit Multi-Threading (XMT) system and method is provided for processing multiple spawned threads associated with SPAWN-type commands of an XMT program. The method includes executing a plurality of child threads by a plurality of TCUs, including a first TCU executing a child thread which is allocated to it; completing execution of the child thread by the first TCU; announcing that the first TCU is available to execute another child thread; executing by a second TCU a parent child thread that includes a nested spawn-type command for spawning additional child threads of the plurality of child threads, wherein the parent child thread is related in a parent-child relationship to the child threads that are spawned in conjunction with the nested spawn-type command; assigning a thread ID (TID) to each child thread, wherein the TID is unique with respect to the other TIDs; and allocating a new child thread to the first TCU.
Type: Grant
Filed: January 19, 2007
Date of Patent: June 26, 2012
Assignee: University of Maryland
Inventors: Xingzhi Wen, Uzi Yehoshua Vishkin
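The steps above (TCUs execute child threads, announce availability when done, handle nested spawns, and hand out globally unique TIDs) can be sketched as a serial software simulation. This is not the XMT hardware: it runs the "threads" one at a time, and all names are illustrative; it only demonstrates the allocation and unique-TID bookkeeping.

```python
import itertools

# Serial simulation of XMT-style child-thread allocation with nested spawns.

class XmtScheduler:
    def __init__(self, num_tcus):
        self.free_tcus = list(range(num_tcus))  # TCUs announced as available
        self.pending = []                       # child threads awaiting a TCU
        self.tid_counter = itertools.count()    # source of unique TIDs
        self.executed = []                      # (tid, tcu) completion log

    def spawn(self, body):
        tid = next(self.tid_counter)            # TID unique w.r.t. all others
        self.pending.append((tid, body))
        return tid

    def run(self):
        while self.pending:
            tid, body = self.pending.pop(0)
            tcu = self.free_tcus.pop(0)         # allocate the child to a TCU
            body(self, tid)                     # body may itself call spawn()
            self.free_tcus.append(tcu)          # TCU announces availability
            self.executed.append((tid, tcu))
```

A parent thread whose body calls `spawn` models the nested SPAWN-type command: its children join the pending pool and are picked up by whichever TCU announces itself next.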
-
Publication number: 20090125907
Abstract: An Explicit Multi-Threading (XMT) system and method is provided for processing multiple spawned threads associated with SPAWN-type commands of an XMT program. The method includes executing a plurality of child threads by a plurality of TCUs, including a first TCU executing a child thread which is allocated to it; completing execution of the child thread by the first TCU; announcing that the first TCU is available to execute another child thread; executing by a second TCU a parent child thread that includes a nested spawn-type command for spawning additional child threads of the plurality of child threads, wherein the parent child thread is related in a parent-child relationship to the child threads that are spawned in conjunction with the nested spawn-type command; assigning a thread ID (TID) to each child thread, wherein the TID is unique with respect to the other TIDs; and allocating a new child thread to the first TCU.
Type: Application
Filed: January 19, 2007
Publication date: May 14, 2009
Inventors: Xingzhi Wen, Uzi Yehoshua Vishkin