Patents by Inventor Zhu Pang
Zhu Pang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11829291
Abstract: A key-value engine may perform garbage collection for a tree or hierarchical data structure on an append-only storage device with page mappings. The key-value engine may separate hot and cold data to reduce write amplification, track extent usage using a restricted or limited amount of memory, efficiently answer queries of valid extent usage, and adaptively or selectively defragment pages in snapshots in rounds of garbage collection.
Type: Grant
Filed: June 1, 2021
Date of Patent: November 28, 2023
Assignee: Alibaba Singapore Holding Private Limited
Inventors: Rui Wang, Qingda Lu, Zhu Pang, Shuo Chen, Jiesheng Wu
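The extent-usage tracking described in this abstract can be illustrated with a minimal sketch. The class name, extent layout, and liveness threshold below are assumptions for illustration, not the patented design: the idea shown is keeping one live-byte counter per extent (a small, bounded amount of memory) and selecting extents whose live fraction has dropped below a threshold as garbage-collection candidates.

```python
# Minimal sketch, assuming a fixed extent size and a per-extent
# live-byte counter; names and the 25% threshold are illustrative.

class ExtentUsageTracker:
    """Tracks valid (live) bytes per extent with O(#extents) memory."""

    def __init__(self, extent_size):
        self.extent_size = extent_size
        self.valid_bytes = {}          # extent id -> live bytes

    def on_page_write(self, extent_id, size):
        # A page appended to an extent adds live bytes.
        self.valid_bytes[extent_id] = self.valid_bytes.get(extent_id, 0) + size

    def on_page_invalidate(self, extent_id, size):
        # An overwritten or deleted page leaves dead bytes behind.
        self.valid_bytes[extent_id] -= size

    def gc_candidates(self, liveness_threshold=0.25):
        """Extents whose live fraction fell below the threshold."""
        return [e for e, v in self.valid_bytes.items()
                if v / self.extent_size < liveness_threshold]
```

On an append-only device, reclaiming a mostly-dead extent relocates only its few live pages, which is where the write-amplification savings from hot/cold separation would come in: cold pages grouped together rarely force relocation.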
-
Patent number: 11755427
Abstract: A key-value engine of a storage system may perform a restart recovery after a system failure. The key-value engine may read a metadata log to locate a latest system checkpoint, and load a page mapping table from the latest system checkpoint. The key-value engine may replay to apply changes to the page mapping table from a system transaction log starting from a system transaction replay starting point. The key-value engine may further form one or more read-only replicas using an underlying file stream opened in a read-only mode during the recovery after the system failure to further facilitate fast recovery and provide fast response to user transactions that conduct read only transactions after the system failure.
Type: Grant
Filed: June 1, 2021
Date of Patent: September 12, 2023
Assignee: Alibaba Singapore Holding Private Limited
Inventors: Qingda Lu, Rui Wang, Zhu Pang, Shuo Chen, Jiesheng Wu
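The recovery sequence the abstract describes (locate the latest checkpoint via the metadata log, load its page mapping table, then replay the transaction log from the recorded starting point) can be sketched as follows. The log record shapes here are invented for illustration; only the three-step sequence comes from the abstract.

```python
# Minimal sketch, assuming:
#   metadata_log: list of (checkpoint_id, page_mapping_snapshot, replay_lsn)
#   txn_log:      list of (lsn, page_id, new_physical_addr)
# Both formats are hypothetical.

def recover(metadata_log, txn_log):
    # Step 1: the latest metadata record names the newest checkpoint.
    checkpoint_id, mapping, replay_lsn = metadata_log[-1]
    # Step 2: load the page mapping table captured by that checkpoint.
    page_mapping = dict(mapping)
    # Step 3: replay only entries at or after the replay starting point.
    for lsn, page_id, new_addr in txn_log:
        if lsn >= replay_lsn:
            page_mapping[page_id] = new_addr
    return page_mapping
```

A read-only replica, as the abstract suggests, could run the same steps against the same file stream opened read-only, since nothing above writes to storage.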
-
Patent number: 11741073
Abstract: Systems and methods discussed herein, based on a key-value data store including multiple-tiered sorted data structures in memory and storage, implement granularly timestamped concurrency control. The multiple-tiering of the key-value data store enables resolving the snapshot queries by returning data record(s) according to granularly timestamped snapshot lookup instead of singularly indexed snapshot lookup. Queries return a merged collection of records including updates from data structures in memory and in storage, such that a persistent storage transaction may refer to non-committed updates up to a timeframe defined by the snapshot read timestamp. This way, inconsistency is avoided that would result from merely reading data records committed in storage, without regard as to pending, non-committed updates thereto.
Type: Grant
Filed: June 1, 2021
Date of Patent: August 29, 2023
Assignee: Alibaba Singapore Holding Private Limited
Inventors: Rui Wang, Zhu Pang, Qingda Lu, Shuo Chen, Jiesheng Wu
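The merged, timestamp-bounded lookup the abstract describes can be sketched as a read that gathers per-key versions from both the memory tier and the storage tier and returns the newest version visible at the snapshot read timestamp. The tier layout and record shape below are assumptions for illustration.

```python
# Minimal sketch, assuming each tier is a dict of
#   key -> list of (commit_ts, value)
# and that a version is visible when commit_ts <= snapshot_ts.

def snapshot_get(key, snapshot_ts, mem_tier, storage_tier):
    # Merge candidate versions from both tiers, not just storage,
    # so updates still in memory are not silently skipped.
    candidates = [(ts, v)
                  for tier in (mem_tier, storage_tier)
                  for ts, v in tier.get(key, [])
                  if ts <= snapshot_ts]
    if not candidates:
        return None
    return max(candidates)[1]   # newest visible version wins
```

Reading only the storage tier would return a stale value whenever a newer in-memory update already falls inside the snapshot window, which is exactly the inconsistency the abstract says the merged lookup avoids.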
-
Patent number: 11640195
Abstract: A power management system may provide power management recommendations to a computer system including a plurality of computing nodes (which may include processors, etc.), to cause the computing nodes to individually or collectively adjust power states or modes of respective processors to achieve power management of the computer system. The power management system may be provided with a power management framework that continuously utilizes direct and indirect service-level feedbacks to guide power management decisions. The power management system may employ a reinforcement learning algorithm to make power management decisions at a user level, and provide a fast decision overriding mechanism for platform events or service-requested performance boosts.
Type: Grant
Filed: October 12, 2020
Date of Patent: May 2, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Qingda Lu, Jun Song, Zhu Pang, Jiesheng Wu, Zhixing Ren
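The combination the abstract names (reinforcement learning guided by service-level feedback, plus a fast override path) can be illustrated with a simple epsilon-greedy policy over discrete power states. The state names, reward handling, and epsilon-greedy choice are illustrative assumptions; the patent's actual RL formulation is not reproduced here.

```python
# Minimal sketch: epsilon-greedy selection over hypothetical power
# states, with an override path that bypasses learning entirely when
# a platform event or performance boost is requested.
import random

class PowerPolicy:
    STATES = ["low", "mid", "high"]

    def __init__(self, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.value = {s: 0.0 for s in self.STATES}
        self.count = {s: 0 for s in self.STATES}

    def decide(self, boost_requested=False):
        if boost_requested:               # fast override: no learning step
            return "high"
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.STATES)   # explore
        return max(self.STATES, key=self.value.get)  # exploit

    def feedback(self, state, reward):
        # Incremental mean of service-level reward per state.
        self.count[state] += 1
        self.value[state] += (reward - self.value[state]) / self.count[state]
```

The override branch returning before any value lookup mirrors the abstract's point that platform events must not wait on the learning loop.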
-
Patent number: 11544197
Abstract: A mapping correspondence between memory addresses and request counts and a cache line flusher are provided, enabling selective cache flushing for persistent memory in a computing system to optimize write performance thereof. Random writes from cache memory to persistent memory are prevented from magnifying inherent phenomena of write amplification, enabling computing systems to implement persistent memory as random-access memory, at least in part. Conventional cache replacement policies may remain implemented in a computing system, but may be effectively overridden by operations of a cache line flusher according to example embodiments of the present disclosure preventing conventional cache replacement policies from being triggered. Implementations of the present disclosure may avoid becoming part of the critical path of a set of computer-executable instructions being executed by a client of cache memory, minimizing additional computation overhead in the critical path.
Type: Grant
Filed: September 18, 2020
Date of Patent: January 3, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Shuo Chen, Zhu Pang, Qingda Lu, Jiesheng Wu, Yuanjiang Ni
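The core mapping the abstract names, from memory addresses to request counts driving a selective flusher, can be sketched as follows. The threshold, reset behavior, and flush hook are assumptions for illustration; on real hardware the hook would wrap a cache-line flush instruction such as CLWB.

```python
# Minimal sketch: count writes per cache-line address and flush a line
# once its count crosses a threshold, so the hardware replacement
# policy is never the one to evict it. Threshold and hook are assumed.

class CacheLineFlusher:
    def __init__(self, flush_threshold, flush_fn):
        self.flush_threshold = flush_threshold
        self.flush_fn = flush_fn        # e.g. a CLWB wrapper (hypothetical)
        self.request_counts = {}        # line address -> write count

    def on_write(self, line_addr):
        n = self.request_counts.get(line_addr, 0) + 1
        if n >= self.flush_threshold:
            self.flush_fn(line_addr)    # flush before the policy triggers
            n = 0                       # counter resets after a flush
        self.request_counts[line_addr] = n
```

Running this bookkeeping off the write path (for example, from sampled events rather than inline on every store) would match the abstract's goal of staying out of the client's critical path.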
-
Publication number: 20220382760
Abstract: A key-value store is provided, implementing multiple-tiered sorted data structures in memory and storage, including concurrent write buffers in memory, and page-level consolidation of updates on storage, where pages are trivially translated in physical-to-virtual address mapping. The key-value store is built on an indexed sorted data structure on storage, occupying much less storage space and incurring much less disk activity in consolidating updates than a conventional log-structured merge tree organized into files. Concurrent write buffers operate concurrently and independently so that data is committed from memory to storage in an efficient manner, while maintaining chronological sequence of delta pages. Trivial mapping allows mappings of a number of physical pages to be omitted, enabling page mapping tables to occupy less storage space, and simplifying processing workload of read operation retrievals from storage.
Type: Application
Filed: June 1, 2021
Publication date: December 1, 2022
Inventors: Zhu Pang, Qingda Lu, Shuo Chen, Yikang Xu, Jiesheng Wu, Rui Wang
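The "trivial mapping" idea in this abstract, omitting mapping-table entries when the translation is the identity, can be shown with a small sketch. The class and method names are illustrative assumptions; the point is that lookups fall back to the identity mapping, so trivially translated pages cost no table space.

```python
# Minimal sketch: only non-identity virtual-to-physical mappings are
# stored; resolve() falls back to the identity translation.

class PageMap:
    def __init__(self):
        self.table = {}                 # only non-trivial mappings stored

    def set(self, vpage, ppage):
        if vpage == ppage:
            self.table.pop(vpage, None) # trivial: store nothing at all
        else:
            self.table[vpage] = ppage

    def resolve(self, vpage):
        return self.table.get(vpage, vpage)   # identity fallback
```

When most pages sit where their virtual number says, the table stays small and a read miss in the table resolves in one step, matching the abstract's claims of smaller mapping tables and simpler read retrievals.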
-
Publication number: 20220382651
Abstract: A key-value engine of a storage system may perform a restart recovery after a system failure. The key-value engine may read a metadata log to locate a latest system checkpoint, and load a page mapping table from the latest system checkpoint. The key-value engine may replay to apply changes to the page mapping table from a system transaction log starting from a system transaction replay starting point. The key-value engine may further form one or more read-only replicas using an underlying file stream opened in a read-only mode during the recovery after the system failure to further facilitate fast recovery and provide fast response to user transactions that conduct read only transactions after the system failure.
Type: Application
Filed: June 1, 2021
Publication date: December 1, 2022
Inventors: Qingda Lu, Rui Wang, Zhu Pang, Shuo Chen, Jiesheng Wu
-
Publication number: 20220382734
Abstract: Systems and methods discussed herein, based on a key-value data store including multiple-tiered sorted data structures in memory and storage, implement granularly timestamped concurrency control. The multiple-tiering of the key-value data store enables resolving the snapshot queries by returning data record(s) according to granularly timestamped snapshot lookup instead of singularly indexed snapshot lookup. Queries return a merged collection of records including updates from data structures in memory and in storage, such that a persistent storage transaction may refer to non-committed updates up to a timeframe defined by the snapshot read timestamp. This way, inconsistency is avoided that would result from merely reading data records committed in storage, without regard as to pending, non-committed updates thereto.
Type: Application
Filed: June 1, 2021
Publication date: December 1, 2022
Inventors: Rui Wang, Zhu Pang, Qingda Lu, Shuo Chen, Jiesheng Wu
-
Publication number: 20220382674
Abstract: A key-value engine may perform garbage collection for a tree or hierarchical data structure on an append-only storage device with page mappings. The key-value engine may separate hot and cold data to reduce write amplification, track extent usage using a restricted or limited amount of memory, efficiently answer queries of valid extent usage, and adaptively or selectively defragment pages in snapshots in rounds of garbage collection.
Type: Application
Filed: June 1, 2021
Publication date: December 1, 2022
Inventors: Rui Wang, Qingda Lu, Zhu Pang, Shuo Chen, Jiesheng Wu
-
Publication number: 20220113785
Abstract: A power management system may provide power management recommendations to a computer system including a plurality of computing nodes (which may include processors, etc.), to cause the computing nodes to individually or collectively adjust power states or modes of respective processors to achieve power management of the computer system. The power management system may be provided with a power management framework that continuously utilizes direct and indirect service-level feedbacks to guide power management decisions. The power management system may employ a reinforcement learning algorithm to make power management decisions at a user level, and provide a fast decision overriding mechanism for platform events or service-requested performance boosts.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Qingda Lu, Jun Song, Zhu Pang, Jiesheng Wu, Zhixing Ren
-
Publication number: 20220091989
Abstract: A mapping correspondence between memory addresses and request counts and a cache line flusher are provided, enabling selective cache flushing for persistent memory in a computing system to optimize write performance thereof. Random writes from cache memory to persistent memory are prevented from magnifying inherent phenomena of write amplification, enabling computing systems to implement persistent memory as random-access memory, at least in part. Conventional cache replacement policies may remain implemented in a computing system, but may be effectively overridden by operations of a cache line flusher according to example embodiments of the present disclosure preventing conventional cache replacement policies from being triggered. Implementations of the present disclosure may avoid becoming part of the critical path of a set of computer-executable instructions being executed by a client of cache memory, minimizing additional computation overhead in the critical path.
Type: Application
Filed: September 18, 2020
Publication date: March 24, 2022
Inventors: Shuo Chen, Zhu Pang, Qingda Lu, Jiesheng Wu, Yuanjiang Ni
-
Publication number: 20220027349
Abstract: Indexed data structures are provided which are optimized for read and write performance in persistent memory of computing systems. Stored data may be searched by traversing an indexed data structure while still being sequentially written to persistent memory, so that the stored data may be accessed more efficiently than on non-volatile storage, while maintaining persistence against system failures such as power cycling. Mapping correspondences between leaf nodes of an indexed data structure and sequential elements of a sequential data structure may be stored in RAM, facilitating fast random access. Data writes are recorded as appended delta encodings which may be periodically compacted, avoiding write amplification inherent in persistent memory.
Type: Application
Filed: July 24, 2020
Publication date: January 27, 2022
Inventors: Chen Shuo, Qingda Lu, Jiesheng Wu, Zhu Pang, Yuanjiang Ni
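The appended-delta-with-compaction pattern this abstract describes can be sketched with a small in-memory model. The structure, the compaction trigger, and all names below are illustrative assumptions; the point is that writes are appends to a per-key delta chain, and periodic compaction collapses each chain instead of rewriting data in place.

```python
# Minimal sketch: per-key append-only delta chains with periodic
# compaction to the latest value. Chain-length-based triggering is an
# assumption; real designs might compact on size or time instead.

class DeltaStore:
    def __init__(self, compact_after=4):
        self.chains = {}                # key -> list of appended deltas
        self.compact_after = compact_after

    def put(self, key, value):
        chain = self.chains.setdefault(key, [])
        chain.append(value)             # append-only delta write
        if len(chain) >= self.compact_after:
            self.chains[key] = [chain[-1]]   # compact to latest value

    def get(self, key):
        chain = self.chains.get(key)
        return chain[-1] if chain else None
```

Appending deltas keeps the write pattern sequential, which is what lets such a structure sidestep the write amplification that in-place random updates would incur on persistent memory.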
-
Patent number: 7316592
Abstract: The present invention is directed to an electrical contact that incorporates a movable metal connection component such as a contact pin. The metal connection component is mounted within an insulating body. An electrically conducting path, from a contact head of the metal connection component to an interior of a base chassis, is created only when a handset has been positioned within a cradle cavity of the base.
Type: Grant
Filed: May 16, 2003
Date of Patent: January 8, 2008
Assignee: VTech Telecommunications Limited
Inventors: Chauk Hung Chan, Yong Yang Cai, Chu Zhu Pang
-
Publication number: 20040022388
Abstract: The present invention is directed to an electrical contact that incorporates a movable metal connection component such as a contact pin. The metal connection component is mounted within an insulating body. An electrically conducting path, from a contact head of the metal connection component to an interior of a base chassis, is created only when a handset has been positioned within a cradle cavity of the base.
Type: Application
Filed: May 16, 2003
Publication date: February 5, 2004
Inventors: Chauk Hung Chan, Yong Yang Cai, Chu Zhu Pang