Patents Examined by Ramon A Mercado
-
Patent number: 11416397
Abstract: A cache flush request is received in a first phase of a persistent memory flush flow, where the first phase is initiated by a host processor, and the cache flush request requests that data in cache memory be flushed to persistent memory within a system. A cache flush response is sent in the first phase responsive to the cache flush request, where the cache flush response identifies whether an error is detected in the first phase. A memory buffer flush request is received in a second phase of the persistent memory flush flow, where the second phase is initiated by the host processor upon completion of the first phase, and the memory buffer flush request requests that data in buffers of persistent memory devices in the system be flushed to persistent memory. A memory buffer flush response is sent in the second phase responsive to the memory buffer flush request.
Type: Grant
Filed: February 20, 2020
Date of Patent: August 16, 2022
Assignee: Intel Corporation
Inventor: Mahesh S. Natu
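The two-phase flow the abstract describes can be sketched in a few lines. This is an illustrative model only, not the patented implementation: the class and method names, the response dictionaries, and the in-memory lists standing in for cache, device buffer, and persistent media are all invented for the example.

```python
# Toy model of a host-initiated two-phase persistent memory flush flow.
class FlushError(Exception):
    pass

class PersistentMemoryDevice:
    """A device with CPU-cached data, an internal write buffer, and durable media."""
    def __init__(self):
        self.cache = []        # data still in CPU cache (phase 1 target)
        self.buffer = []       # data in the device's internal buffer (phase 2 target)
        self.persistent = []   # durable media

    def flush_cache(self):
        """Phase 1: move cached writes into the device buffer."""
        self.buffer.extend(self.cache)
        self.cache.clear()
        return {"error": False}  # cache flush response

    def flush_buffer(self):
        """Phase 2: drain the device buffer to persistent media."""
        self.persistent.extend(self.buffer)
        self.buffer.clear()
        return {"error": False}  # memory buffer flush response

def persistent_flush_flow(devices):
    """Phase 2 starts only after phase 1 completes for all devices."""
    for dev in devices:                     # phase 1: cache flush request/response
        if dev.flush_cache()["error"]:
            raise FlushError("error detected in phase 1")
    for dev in devices:                     # phase 2: buffer flush request/response
        if dev.flush_buffer()["error"]:
            raise FlushError("error detected in phase 2")
```

The key property mirrored here is the ordering guarantee: no device buffer is drained until every device has acknowledged the cache-flush phase.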
-
Patent number: 11403043
Abstract: A plurality of data blocks stored at a plurality of solid-state storage devices are identified. A portion of data is read from each data block of the plurality of data blocks. A corresponding property is determined for each data block of the plurality of data blocks based on reading the portion of the data. A set of data blocks from the plurality of data blocks is identified, wherein each data block of the set of data blocks is associated with a first corresponding property. The set of data blocks is stored at a data segment.
Type: Grant
Filed: October 15, 2019
Date of Patent: August 2, 2022
Assignee: Pure Storage, Inc.
Inventors: Joern W. Engel, Yuhong Mao
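The sample-then-group flow above can be illustrated with a minimal sketch. The property function here (a "mostly zero bytes" heuristic on a sampled prefix) and the sample size are stand-ins invented for the example; the patent does not specify them.

```python
# Sample a portion of each block, derive a property, group matching blocks.
def block_property(block_data: bytes, sample_size: int = 16) -> str:
    """Derive a coarse property from a sampled prefix of the block.
    Stand-in heuristic: is the sample mostly zero bytes?"""
    sample = block_data[:sample_size]
    zeros = sum(1 for b in sample if b == 0)
    return "sparse" if zeros > len(sample) // 2 else "dense"

def build_segment(blocks, wanted_property: str):
    """Collect the set of blocks sharing the wanted property as one segment."""
    return [b for b in blocks if block_property(b) == wanted_property]
```

Grouping blocks with a shared property into one segment lets later operations (compression, relocation, erasure) treat the segment uniformly.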
-
Patent number: 11397532
Abstract: The technology described herein enables data storage across storage volumes having fewer features than the storage volumes otherwise would. In one example, a method includes, in a data access system, identifying first data for storage on physical storage volumes. Each of the physical storage volumes corresponds to respective ones of data channels and control channels. The method further includes segmenting the first data into data segments corresponding to respective ones of the data channels and transferring the data segments as respective bit streams over the respective ones of the data channels to the respective ones of the physical storage volumes. The method also includes providing real-time write control to the physical storage volumes over respective ones of the control channels. The real-time write control directs a process for how the physical storage volumes write the data segments.
Type: Grant
Filed: October 15, 2019
Date of Patent: July 26, 2022
Assignee: QUANTUM CORPORATION
Inventors: Suayb S. Arslan, Turguy Goker, Jaewook Lee
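The segmentation step can be sketched as below. The round-robin byte striping and the channel count are assumptions made for the example; the patent's actual segmentation scheme is not specified in the abstract.

```python
# Segment data into per-channel streams and reassemble (assumed striping).
def segment_for_channels(data: bytes, num_channels: int):
    """Round-robin the bytes of `data` across `num_channels` segments,
    one segment per data channel."""
    segments = [bytearray() for _ in range(num_channels)]
    for i, byte in enumerate(data):
        segments[i % num_channels].append(byte)
    return [bytes(s) for s in segments]

def reassemble(segments):
    """Inverse of segment_for_channels: interleave the segments back."""
    out = bytearray()
    longest = max(len(s) for s in segments)
    for i in range(longest):
        for s in segments:
            if i < len(s):
                out.append(s[i])
    return bytes(out)
```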
-
Patent number: 11397647
Abstract: Embodiments of the present disclosure provide a hot backup system, a hot backup method, and a computer device. The hot backup system includes a centralized management module, a master server, a slave server and a delay server. The master server is configured to receive a write instruction sent by the centralized management module, and write first data to a database of the master server based on the write instruction. The slave server is configured to perform data synchronization with the master server in real time, receive a read instruction sent by the centralized management module, and send second data read based on the read instruction to the centralized management module to cause the centralized management module to send the second data to the service server.
Type: Grant
Filed: August 28, 2019
Date of Patent: July 26, 2022
Assignee: Apollo Intelligent Driving Technology (Beijing) Co., Ltd.
Inventors: Bing Xiang, Xiaoliang Cong
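The read/write split above can be modeled with a few small classes. This is a minimal sketch under simplifying assumptions: replication is simulated by a synchronous copy, the delay server is omitted, and all class and method names are invented.

```python
# Toy master/slave split: writes go to the master, reads come from the slave.
class MasterServer:
    def __init__(self):
        self.database = {}
    def write(self, key, value):
        self.database[key] = value

class SlaveServer:
    def __init__(self, master):
        self.master = master
        self.database = {}
    def synchronize(self):
        # Stand-in for real-time replication from the master.
        self.database = dict(self.master.database)
    def read(self, key):
        return self.database.get(key)

class CentralizedManagement:
    def __init__(self, master, slave):
        self.master, self.slave = master, slave
    def handle_write(self, key, value):
        self.master.write(key, value)   # write instruction -> master
        self.slave.synchronize()        # slave mirrors the master
    def handle_read(self, key):
        return self.slave.read(key)     # read instruction -> slave
```

Routing reads to the replica keeps read load off the write path, which is the usual motivation for this topology.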
-
Patent number: 11386002
Abstract: Methods for enhancing the speed performance of solid-state storage devices using stream-aware garbage collection. A garbage collection method according to an embodiment includes: searching, in each of a plurality of super-block groups G, for a super-block set C that satisfies: all of the super-blocks m within the super-block set C in the super-block group G contain a lesser amount of valid data than the other super-blocks within the super-block group G; and the total amount of valid data within the super-block set C is just enough to complete an entire super-block; selecting the super-block group G that includes the super-block set C with the maximum number of super-blocks m; and performing garbage collection on the super-block set C in the selected super-block group G.
Type: Grant
Filed: October 3, 2019
Date of Patent: July 12, 2022
Assignee: SCALEFLUX, INC.
Inventors: Qi Wu, Tong Zhang
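An illustrative rendering of the selection step follows. It is a simplification, not the patented algorithm verbatim: the super-block capacity constant is invented, and the "just enough" condition is approximated by a greedy least-valid-first accumulation.

```python
# Pick the super-block group whose least-valid set just fills one super-block.
SUPER_BLOCK_CAPACITY = 100  # assumed units of valid data per super-block

def candidate_set(group):
    """Greedy set C: take super-blocks with the least valid data first,
    stopping once their valid data covers one whole super-block."""
    chosen, total = [], 0
    for valid in sorted(group):
        if total >= SUPER_BLOCK_CAPACITY:
            break
        chosen.append(valid)
        total += valid
    return chosen if total >= SUPER_BLOCK_CAPACITY else []

def select_gc_target(groups):
    """Choose the group whose candidate set C has the most super-blocks."""
    sets = [candidate_set(g) for g in groups]
    best = max(range(len(sets)), key=lambda i: len(sets[i]))
    return best, sets[best]
```

Preferring the group with the largest set C reclaims the most super-blocks per unit of valid data that must be rewritten.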
-
Patent number: 11385828
Abstract: A method for obtaining a storage system capacity is provided. An available capacity that is of a storage system and that is associated with each stripe length is obtained based on an obtained stripe length that can be effectively configured. Therefore, an available capacity of a system is optimally selected.
Type: Grant
Filed: October 16, 2019
Date of Patent: July 12, 2022
Inventors: Ruliang Dong, Haixiao Jiang, Jinyi Zhang, Qiang Xue, Jianqiang Shen, Gongyi Wang
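One way to make the capacity-per-stripe-length idea concrete is a simple RAID-style model, sketched below. The parity overhead model and the feasibility rule (a stripe must fit on the available disks) are assumptions for illustration; the patent's actual capacity formula is not given in the abstract.

```python
# Capacity per candidate stripe length under an assumed single-parity model.
def available_capacity(num_disks, disk_size, stripe_length, parity=1):
    """Usable capacity for one stripe length: the data fraction of raw
    space, restricted to stripe lengths the disk count can host."""
    if stripe_length + parity > num_disks:
        return 0
    data_fraction = stripe_length / (stripe_length + parity)
    return num_disks * disk_size * data_fraction

def best_stripe_length(num_disks, disk_size, candidate_lengths, parity=1):
    """Select the configurable stripe length that maximizes capacity."""
    return max(candidate_lengths,
               key=lambda n: available_capacity(num_disks, disk_size, n, parity))
```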
-
Patent number: 11386042
Abstract: An apparatus in an illustrative embodiment comprises at least one processing device comprising a processor coupled to a memory. The apparatus is configured to maintain a snapshot tree data structure having a plurality of volume nodes corresponding to respective ones of (i) a root volume and (ii) multiple snapshots related directly or indirectly to the root volume. The apparatus is further configured to receive a request to read a data item from a given volume offset of a particular one of the volume nodes, to determine a set of data descriptors for the given volume offset, to determine a set of volume nodes of interest for the particular volume node, to determine a contribution set based at least in part on the set of data descriptors and the set of volume nodes of interest, to determine a read address for the data item as a function of the contribution set, and to read the data item from the read address.
Type: Grant
Filed: March 29, 2019
Date of Patent: July 12, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Asaf Porath, Itay Keller, Yonatan Shtarkman, Michal Yarimi
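A heavily simplified sketch of the read resolution follows. The assumptions: "volume nodes of interest" are modeled as the node plus its ancestor chain, data descriptors as an offset-to-writes map, and the contribution set collapses to "nearest writer wins". The patent's actual definitions are richer than this toy.

```python
# Resolve a snapshot-tree read: nearest ancestor with data at the offset wins.
def nodes_of_interest(parents, node):
    """The node itself plus its chain of ancestors up to the root.
    parents: {node: parent_node or None for the root}."""
    chain = [node]
    while parents.get(node) is not None:
        node = parents[node]
        chain.append(node)
    return chain

def read_address(parents, descriptors, node, offset):
    """descriptors: {offset: {node: address}}. Return the address written
    by the closest node of interest that has data at this offset."""
    per_offset = descriptors.get(offset, {})
    for candidate in nodes_of_interest(parents, node):  # nearest first
        if candidate in per_offset:
            return per_offset[candidate]
    return None
```

The walk captures why snapshots are cheap: a snapshot that never overwrote an offset transparently reads the data its ancestor wrote.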
-
Patent number: 11372769
Abstract: The embodiments herein describe a multi-tenant cache that implements fine-grained allocation of the entries within the cache. Each entry in the cache can be allocated to a particular tenant—i.e., fine-grained allocation—rather than having to assign all the entries in a way to a particular tenant. If the tenant does not currently need those entries (which can be tracked using counters), the entries can be invalidated (i.e., deallocated) and assigned to another tenant. Thus, fine-grained allocation provides a flexible allocation of entries in a hardware cache that permits an administrator to reserve any number of entries for a particular tenant, but also permits other tenants to use this bandwidth when the reserved entries are not currently needed by the tenant.
Type: Grant
Filed: August 29, 2019
Date of Patent: June 28, 2022
Assignee: XILINX, INC.
Inventors: Millind Mittal, Jaideep Dastidar
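The counter-tracked, per-entry allocation can be modeled in software as below. This is a toy interpretation of the hardware scheme: the reservation semantics (a tenant's entries above its reservation are stealable) and all names are assumptions.

```python
# Fine-grained multi-tenant cache: per-entry ownership with usage counters.
class MultiTenantCache:
    def __init__(self, num_entries):
        self.entries = [None] * num_entries   # owner tenant per entry
        self.reserved = {}                    # tenant -> reserved entry count
        self.in_use = {}                      # tenant -> counter of live entries

    def reserve(self, tenant, count):
        self.reserved[tenant] = count
        self.in_use.setdefault(tenant, 0)

    def allocate(self, tenant):
        """Return the index of an entry now owned by `tenant`, or None."""
        for i, owner in enumerate(self.entries):      # prefer a free entry
            if owner is None:
                self.entries[i] = tenant
                self.in_use[tenant] = self.in_use.get(tenant, 0) + 1
                return i
        for i, owner in enumerate(self.entries):      # steal surplus entries
            if owner != tenant and self.in_use.get(owner, 0) > self.reserved.get(owner, 0):
                self.in_use[owner] -= 1               # invalidate (deallocate)
                self.entries[i] = tenant
                self.in_use[tenant] = self.in_use.get(tenant, 0) + 1
                return i
        return None
```

The counters are what make the flexibility safe: an entry is only reassigned when its owner holds more entries than it reserved.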
-
Patent number: 11366613
Abstract: Embodiments of the present disclosure relate to a method and a device for writing data. The method may include: determining a target block for storing to-be-written data from a plurality of blocks divided in advance for a solid state disk in response to receiving a data write instruction, where the data write instruction includes information of the to-be-written data, and the solid state disk is divided into a plurality of blocks according to a band parameter; determining the to-be-written data based on the information of the to-be-written data; and writing the to-be-written data into the target block using a single thread.
Type: Grant
Filed: June 8, 2020
Date of Patent: June 21, 2022
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventor: Xinxing Wang
-
Patent number: 11360684
Abstract: A data storage method includes: acquiring target data to be stored, and classifying refresh rates of the target data to be stored according to a front-end system; applying a hash calculation to the target data classified as having high refresh rates and the target data classified as having low refresh rates, to obtain a first-type hash value and a second-type hash value; determining storage data segments corresponding to the first-type hash value and the second-type hash value according to a preset storage data segment determination relationship; and storing the target data with high refresh rates and the target data with low refresh rates into the storage data segments corresponding to the first-type hash value and the second-type hash value, respectively.
Type: Grant
Filed: October 21, 2018
Date of Patent: June 14, 2022
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Cheng Sun, Junfeng Ye, Yunhui Lai, Xianxian Luo, Juegang Long
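A sketch of the classify-hash-place pipeline follows. The refresh-rate threshold, the hash function (SHA-256), the segment count, and the mapping of the two classes to disjoint segment ranges are all assumptions made for the example.

```python
# Classify by refresh rate, hash, and map to a segment for that class.
import hashlib

NUM_SEGMENTS_PER_CLASS = 4  # assumed

def classify(refresh_rate: float, threshold: float = 10.0) -> str:
    return "high" if refresh_rate >= threshold else "low"

def segment_for(data: bytes, refresh_rate: float) -> int:
    """Map (class, hash) to a segment id; high- and low-refresh data
    land in disjoint segment ranges, keeping hot and cold data apart."""
    digest = hashlib.sha256(data).digest()
    bucket = digest[0] % NUM_SEGMENTS_PER_CLASS
    if classify(refresh_rate) == "high":
        return bucket                        # segments 0..3 for hot data
    return NUM_SEGMENTS_PER_CLASS + bucket   # segments 4..7 for cold data
```

Separating the two classes by segment range is what keeps frequently refreshed data from churning the segments holding stable data.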
-
Patent number: 11334254
Abstract: A storage system includes solid-state storage devices and a storage controller operatively coupled to the solid-state storage devices, the storage controller including a processing device configured to receive data to be programmed to a solid-state storage device of the plurality of solid-state storage devices. The processing device is further configured to determine a mode for programming a flash page storing the data at the solid-state storage device, based on a required reliability for the data and an anticipated number of program/erase cycles associated with the data, and to transmit the data and the mode for programming the flash page to the solid-state storage device, wherein the mode causes the solid-state storage device to program the data to a first portion of the flash page and program parity data to a remaining portion of the flash page.
Type: Grant
Filed: March 29, 2019
Date of Patent: May 17, 2022
Assignee: Pure Storage, Inc.
Inventor: Hari Kannan
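The mode decision can be sketched as a small function. The thresholds, mode names, and data/parity splits below are invented for illustration; the abstract only says the mode is chosen from required reliability and anticipated program/erase cycles.

```python
# Hypothetical mode selection: how much of the flash page is data vs parity.
def select_program_mode(required_reliability: float, expected_pe_cycles: int):
    """Return (data_fraction, parity_fraction) of the flash page.
    Higher reliability requirements or heavier expected wear -> more parity."""
    if required_reliability > 0.999 or expected_pe_cycles > 3000:
        return (0.75, 0.25)    # strong-parity mode
    if required_reliability > 0.99 or expected_pe_cycles > 1000:
        return (0.875, 0.125)  # moderate-parity mode
    return (1.0, 0.0)          # full-page data mode
```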
-
Patent number: 11334498
Abstract: A system and method for transferring data between a user space buffer in the address space of a user space process running on a virtual machine and a storage system are described. The user space buffer is represented as a file with a file descriptor. In the method, a file system proxy receives a request for I/O read or write from the user space process without copying data to be transferred. The file system proxy then sends the request to a file system server without copying data to be transferred. The file system server then requests that the storage system perform the requested I/O directly between the storage system and the user space buffer, the only transfer of data being between the storage system and the user space buffer.
Type: Grant
Filed: August 27, 2019
Date of Patent: May 17, 2022
Assignee: VMware, Inc.
Inventors: Kamal Jeet Charan, Adrian Drzewiecki, Mounesh Badiger, Pushpesh Sharma, Wenguang Wang, Maxime Austruy, Richard P Spillane
-
Patent number: 11320992
Abstract: A peripheral digital storage device has an interface allowing a connection to a self-service machine for performing maintenance operations on the self-service machine. The device provides a storage area divided into a set of partitions which are interpretable by the self-service machine as independent storage areas for file operations when connected to the self-service machine. A control unit is configured to control access to the partitions by granting or refusing the self-service machine access to a partition depending on identity information received from the self-service machine, thereby providing access to individual partitions for each assigned self-service machine connectable to the interface.
Type: Grant
Filed: May 22, 2019
Date of Patent: May 3, 2022
Assignee: Wincor Nixdorf International GmbH
Inventor: Carsten von der Lippe
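The gating logic reduces to a small check, sketched here with invented names: a table maps each partition to the machine identity it is assigned to, and access is granted only on a match.

```python
# Per-machine partition gating based on received identity information.
class PartitionController:
    def __init__(self, assignments):
        # assignments: {partition_id: machine identity allowed to use it}
        self.assignments = assignments

    def access(self, partition_id, machine_identity) -> bool:
        """Grant access only to the machine the partition is assigned to."""
        return self.assignments.get(partition_id) == machine_identity
```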
-
Patent number: 11320998
Abstract: The present disclosure discloses a method for assuring quality of service in a storage system, where a control node calculates, based on a quantity of remaining I/O requests of a target storage node in a unit time, a quantity of I/O requests required by a storage resource to reach a lower assurance limit in the unit time, and a quantity of I/O requests that need to be processed by the target storage node for the storage resource in the unit time, a lower limit quantity of I/O requests that can be processed by the target storage node for the storage resource in the unit time; allocates, based on the lower limit quantity of I/O requests, a lower limit quantity of tokens of the storage resource on the target storage node in the unit time to the storage resource; and sends the lower limit quantity of tokens to the target storage node.
Type: Grant
Filed: February 12, 2021
Date of Patent: May 3, 2022
Assignee: Huawei Technologies Co., Ltd.
Inventors: Si Yu, Junhui Gong, Peter Varman, Yuhan Peng
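A hedged sketch of the token computation: the grant for each resource is bounded by the node's remaining per-unit-time I/O budget, the resource's assurance floor, and its actual demand on the node. The exact combining formula is an assumption; the abstract names the three inputs but not the arithmetic.

```python
# Lower-limit token allocation from three per-unit-time quantities (assumed formula).
def lower_limit_tokens(node_remaining_io, assurance_floor_io, resource_demand_io):
    """Tokens covering the assurance floor without exceeding demand or capacity."""
    return min(node_remaining_io, assurance_floor_io, resource_demand_io)

def allocate_tokens(node_remaining_io, resources):
    """resources: {name: (assurance_floor_io, demand_io)}. Grant each
    resource its lower-limit tokens, consuming the node's remaining budget."""
    tokens = {}
    for name, (floor, demand) in resources.items():
        grant = lower_limit_tokens(node_remaining_io, floor, demand)
        tokens[name] = grant
        node_remaining_io -= grant
    return tokens
```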
-
Patent number: 11314428
Abstract: A file system in a storage system can store files received from a host in clusters of memory in the storage system. An end portion of a file may not use the entire cluster. As a result, the end clusters of the stored files can contain unused space. A system and method detects the unused space in such clusters and creates a virtual cluster from the unused space.
Type: Grant
Filed: February 23, 2021
Date of Patent: April 26, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Narendhiran Chinnaanangur Ravimohan, Kavya Bathula
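The arithmetic behind the idea is simple to illustrate. The 4 KiB cluster size below is an assumption; the point is only that end-cluster tails pool into whole virtual clusters.

```python
# Unused tail space in a file's end cluster, pooled into virtual clusters.
CLUSTER_SIZE = 4096  # bytes, an assumed cluster size

def unused_in_end_cluster(file_size: int) -> int:
    """Bytes left over in the file's last cluster (0 if the file ends
    exactly on a cluster boundary or is empty)."""
    remainder = file_size % CLUSTER_SIZE
    return 0 if remainder == 0 else CLUSTER_SIZE - remainder

def virtual_clusters_available(file_sizes) -> int:
    """How many whole virtual clusters the pooled tail space could form."""
    return sum(unused_in_end_cluster(s) for s in file_sizes) // CLUSTER_SIZE
```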
-
Patent number: 11301396
Abstract: Technologies for accelerated edge data access and physical data security include an edge device that executes services associated with endpoint devices. An address decoder translates a virtual address generated by a service into an edge location using an edge translation table. If the edge location is not local, the edge device may localize the data and update the edge translation table. The edge device may migrate the service to another edge location, including migrating the edge translation table. The edge device may monitor telemetry and determine on a per-tenant basis whether a physical attack condition is present. If present, the edge device instructs a data resource to wipe an associated tenant data range. The determinations to localize remote data, to migrate the service, and/or whether the physical attack condition is present may be performed by an accelerator of the edge device. Other embodiments are described and claimed.
Type: Grant
Filed: March 29, 2019
Date of Patent: April 12, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Ned M. Smith
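The translate-then-localize step can be sketched as below. The table shape, the local-edge sentinel, and the injected `localize` callback are all assumptions standing in for the hardware address decoder and data movement.

```python
# Translate a virtual address via an edge translation table; localize on miss.
LOCAL_EDGE = "edge-local"

def decode_and_localize(translation_table, virtual_address, localize):
    """translation_table: {virtual_address: edge_location}.
    `localize(va, remote_location)` copies remote data locally and
    returns the new (local) location."""
    location = translation_table[virtual_address]
    if location != LOCAL_EDGE:
        location = localize(virtual_address, location)  # fetch the data
        translation_table[virtual_address] = location   # update the table
    return location
```

Because the table itself is updated, migrating a service just means carrying its translation table along, as the abstract describes.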
-
Patent number: 11294588
Abstract: Placing data within a storage device, including: receiving, by a storage device, information describing an expected longevity of data stored on the storage device; determining, by the storage device, a location for storing the data in dependence upon the expected longevity of the data; adjusting a garbage collection schedule in dependence upon data placement; and providing, to a storage array controller, garbage collection statistics.
Type: Grant
Filed: February 4, 2019
Date of Patent: April 5, 2022
Assignee: Pure Storage, Inc.
Inventors: Ethan Miller, John Colgrove
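A toy placement function makes the longevity-driven step concrete. The zone names and day thresholds are invented; the abstract only states that location depends on expected longevity.

```python
# Pick a target zone from a host-provided longevity hint (assumed zones).
def place_by_longevity(expected_longevity_days: float) -> str:
    """Short-lived data goes where churn is expected; long-lived data goes
    to low-churn blocks, which also lets GC schedules differ per zone."""
    if expected_longevity_days < 1:
        return "hot-zone"     # near-term overwrite expected
    if expected_longevity_days < 30:
        return "warm-zone"
    return "cold-zone"        # long-lived; rarely garbage-collected
```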
-
Patent number: 11287986
Abstract: Apparatus and methods are disclosed, including a controller circuit, a volatile memory, a non-volatile memory, and a reset circuit, where the reset circuit is configured to receive a reset signal from a host device and actuate a timer circuit, and the timer circuit is configured to cause a storage device to reset after a threshold time period. The reset circuit is further configured to actuate the controller circuit to write data stored in the volatile memory to the non-volatile memory before the storage device is reset.
Type: Grant
Filed: December 31, 2018
Date of Patent: March 29, 2022
Assignee: Micron Technology, Inc.
Inventor: David Aaron Palmer
-
Patent number: 11288206
Abstract: Embodiments of this disclosure provide techniques to support memory paging between trust domains (TDs) in computer systems. In one embodiment, a processing device including a memory controller and a memory paging circuit is provided. The memory paging circuit is to insert a transportable page into a memory location associated with a trust domain (TD), where the transportable page comprises encrypted contents of a first memory page of the TD. The memory paging circuit is further to create a third memory page associated with the TD by binding the transportable page to the TD, where binding the transportable page to the TD comprises re-encrypting contents of the transportable page based on a key associated with the TD and a physical address of the memory location. The memory paging circuit is further to access contents of the third memory page by decrypting the contents of the third memory page using the key associated with the TD.
Type: Grant
Filed: March 26, 2020
Date of Patent: March 29, 2022
Assignee: Intel Corporation
Inventors: Hormuzd M. Khosravi, Baiju Patel, Ravi Sahita, Barry Huntley
-
Patent number: 11281377
Abstract: Embodiments of the present disclosure provide methods, apparatuses and computer program products for managing a storage system. The storage system comprises a plurality of cache devices and a bottom storage device, and the plurality of cache devices comprise a first cache device group and a second cache device group. The method according to an aspect of the present disclosure comprises: receiving an input/output (I/O) request for the storage device; in response to determining that the I/O request triggers caching of target data, storing the target data from the storage device into the first cache device group if the I/O request is a read request; and storing the target data into the second cache device group if the I/O request is a write request. Embodiments of the present disclosure introduce a new architecture for cache devices so that the processing delay is shortened and/or the storage capacity can be used more effectively.
Type: Grant
Filed: April 24, 2020
Date of Patent: March 22, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Bob Biao Yan, Bernie Bo Hu, Jia Huang, Jessica Jing Ye, Vicent Qian Wu
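The split caching policy above can be sketched compactly. This is one interpretation for illustration: dictionaries stand in for the two cache device groups and the bottom storage device, and every I/O is assumed to trigger caching.

```python
# Route read-triggered caching to the first group, write-triggered to the second.
class SplitCache:
    def __init__(self):
        self.read_group = {}   # first cache device group
        self.write_group = {}  # second cache device group

    def handle_io(self, op, key, storage, value=None):
        """`storage` models the bottom storage device as a dict."""
        if op == "read":
            data = storage[key]
            self.read_group[key] = data   # cache on behalf of a read
            return data
        storage[key] = value
        self.write_group[key] = value     # cache on behalf of a write
        return value
```

Separating the groups lets each be provisioned for its access pattern, which is the architecture's stated aim of shorter delays and better capacity use.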