Patents Examined by Tracy C Chan
-
Patent number: 11321229
Abstract: A flash array provided in embodiments includes a controller and a solid state disk group. The controller counts the volume of invalid data included in each of a plurality of stripes and selects at least one target stripe from the plurality of stripes. The target stripe is the stripe that includes the maximum volume of invalid data among the plurality of stripes. The controller then instructs the solid state disk group to move valid data in the target stripe, and instructs the solid state disk group to delete the correspondence between the logical address of the target stripe and the actual address of the target stripe. This can reduce write amplification, thereby prolonging the life span of the solid state disk.
Type: Grant
Filed: December 11, 2020
Date of Patent: May 3, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Guiyou Pu
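The selection step described in the abstract can be illustrated with a minimal Python sketch; the stripe representation and the `pick_target_stripes` helper are hypothetical, not taken from the patent:

```python
def pick_target_stripes(stripes, count=1):
    """Select the stripes holding the largest volume of invalid data.

    `stripes` maps a stripe id to its invalid-data volume (e.g. in bytes).
    Reclaiming these stripes first moves the least valid data, which is
    what reduces write amplification.
    """
    ranked = sorted(stripes, key=stripes.get, reverse=True)
    return ranked[:count]

# Stripe 7 carries the most invalid data, so it is reclaimed first.
targets = pick_target_stripes({3: 100, 7: 900, 9: 400})
```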
-
Patent number: 11314419
Abstract: Techniques for managing disks involve: determining a current usage parameter associated with each of a plurality of disk sets, the current usage parameter indicating usage associated with a capability of each of the plurality of disk sets, and the capability comprising at least one of the following: a number of permitted accesses per time unit and a number of permitted writes per time unit; determining a first imbalance degree associated with the plurality of disk sets, the first imbalance degree indicating a difference in the current usage parameters of the plurality of disk sets; and causing data in at least one disk slice of a first disk set to be moved to a second disk set of the plurality of disk sets, so as to reduce the first imbalance degree. In this way, a better balance can be achieved among the performances of the respective disks after adjustment.
Type: Grant
Filed: May 21, 2020
Date of Patent: April 26, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Liang Huang, Xianlong Liu, Ruipeng Yang, Xiaoliang Zhao, Changyong Yu
-
Patent number: 11307981
Abstract: A disclosed method may include (1) mapping a block of shared memory to a plurality of processes running on a computing device, (2) determining, for a process within the plurality of processes, a local pointer that references a specific portion of the block of shared memory from a shared memory pointer that is shared across the plurality of processes by (A) identifying, within the shared memory pointer, a block number assigned to the block of shared memory and (B) identifying, within the shared memory pointer, an offset that corresponds to the specific portion of the block of shared memory relative to the process, and then (3) performing an operation on the specific portion of the block of shared memory based at least in part on the local pointer. Various other systems, methods, and computer-readable media are also disclosed.
Type: Grant
Filed: May 10, 2020
Date of Patent: April 19, 2022
Assignee: Juniper Networks, Inc.
Inventors: Erin C. MacNeil, Amit Kumar Rao, Finlay Michael Graham Pelley
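The (block number, offset) decomposition of a shared pointer can be sketched as follows; the bit layout, field widths, and helper names are assumptions for illustration, since the abstract does not specify an encoding:

```python
BLOCK_BITS = 16                       # hypothetical: low 16 bits hold the block number
BLOCK_MASK = (1 << BLOCK_BITS) - 1

def make_shared_ptr(block_number, offset):
    """Pack a block number and an intra-block offset into one shared pointer."""
    return (offset << BLOCK_BITS) | block_number

def to_local_ptr(shared_ptr, local_base_of_block):
    """Resolve a shared pointer to a process-local address.

    Each process may map the same shared block at a different base address,
    so only the (block, offset) pair is meaningful across processes;
    `local_base_of_block` maps block numbers to this process's base addresses.
    """
    block = shared_ptr & BLOCK_MASK
    offset = shared_ptr >> BLOCK_BITS
    return local_base_of_block[block] + offset
```

Two processes holding the same shared pointer would resolve it against their own base tables and land on the same shared byte.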
-
Patent number: 11301132
Abstract: One or more usage parameter values are received from a host system. The one or more parameter values correspond to one or more operations performed at the memory sub-system. Based on the one or more usage parameter values, a first expected time period is determined during which a first set of subsequent host data will be received from the host system and a second expected time period is determined during which a second set of subsequent host data will be received from the host system. A media management operation is scheduled to be performed between the first expected time period and the second expected time period.
Type: Grant
Filed: August 30, 2019
Date of Patent: April 12, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Poorna Kale, Ashok Sahoo
-
Patent number: 11301375
Abstract: Memory reclamation is tailored to avoid certain synchronization instructions, speeding concurrent garbage collection while preserving data integrity and availability. Garbage collection reclaims objects no longer in use, or other unused areas of memory. Pointers are partitioned into address portions holding address values and non-address portions having a special bit. Marking code writes only the non-address portions, setting the special bit as a mark reference, relocation candidate, etc. Mutator threads may concurrently mutate the entire pointer to update the address, but mutation does not cause incorrect reclamations or failure to perform other operations such as relocation. Meanwhile, execution speed is increased by avoiding CAS (compare-and-swap or compare-and-set) synchronization instructions; non-CAS yet nonetheless atomic writes are used instead. Mutators run in user or kernel address spaces.
Type: Grant
Filed: September 12, 2020
Date of Patent: April 12, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Maoni Zhang Stephens, Patrick Henri Dussud
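The pointer partitioning can be modeled on integers; the choice of bit 63 as the special bit and the helper names are illustrative assumptions, and real implementations would use atomic machine-word stores rather than Python operations:

```python
MARK_BIT = 1 << 63            # hypothetical non-address bit reserved for the collector
ADDR_MASK = MARK_BIT - 1      # low 63 bits form the address portion

def mark(ptr):
    """Collector write: set the special bit, touching only the non-address portion."""
    return ptr | MARK_BIT

def mutate(ptr, new_addr):
    """Mutator write: replace the address portion while preserving the mark bit.

    Because the collector never writes the address portion and the mutator's
    whole-pointer store is atomic, neither side needs a CAS loop.
    """
    return (ptr & ~ADDR_MASK) | (new_addr & ADDR_MASK)

def address_of(ptr):
    """Strip the non-address portion before dereferencing."""
    return ptr & ADDR_MASK
```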
-
Patent number: 11295262
Abstract: A system for fully integrated predictive decision-making and simulation having a high-volume deep web scraper system, a data retrieval engine, a directed computational graph module, and a decision and action path simulation engine.
Type: Grant
Filed: October 30, 2020
Date of Patent: April 5, 2022
Assignee: QOMPLX, INC.
Inventors: Jason Crabtree, Andrew Sellers
-
Patent number: 11295862
Abstract: An application server predicts respiratory disease risk, rescue medication usage, exacerbation, and healthcare utilization using trained predictive models. The application server includes model modules and submodel modules, which communicate with a database server, data sources, and client devices. The submodel modules train submodels by determining submodel coefficients based on training data from the database server. The submodel modules further determine statistical analysis data and estimates for medication usage events, healthcare utilization, and other related events. The model modules combine submodels to predict respiratory disease risk, exacerbation, rescue medication usage, healthcare utilization, and other related information. Model outputs are provided to users, including patients, providers, healthcare companies, electronic health record systems, real estate companies, and other interested parties.
Type: Grant
Filed: June 17, 2020
Date of Patent: April 5, 2022
Assignee: Reciprocal Labs Corporation
Inventors: Guangquan Su, Meredith Ann Barrett, Olivier Humblet, Chris Hogg, John David Van Sickle, Kelly Anne Henderson, Gregory F. Tracy
-
Patent number: 11281379
Abstract: An operating method of a storage device which includes one or more nonvolatile memories includes: storing reference data in a first memory area of the one or more nonvolatile memories; when an access frequency of the reference data exceeds a first reference value, storing first replicated data identical to the reference data in a second memory area of the one or more nonvolatile memories; after the first replicated data are stored, when an access frequency of the reference data or the first replicated data exceeds the first reference value, storing second replicated data identical to the reference data in a third memory area of the one or more nonvolatile memories; and managing second and third physical addresses of the second and third memory areas such that a first physical address of the first memory area corresponds to the second and third physical addresses.
Type: Grant
Filed: May 15, 2020
Date of Patent: March 22, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Shiva Pahwa
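The threshold-driven replication policy can be sketched in a few lines; the class, the counter-reset behavior after each replication round, and the threshold value are illustrative assumptions rather than details from the patent:

```python
class ReplicatingStore:
    """Add an identical copy of hot reference data each time accesses
    exceed a reference value (a toy model of the patented policy)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.replicas = 1      # the reference data itself
        self.accesses = 0

    def access(self):
        self.accesses += 1
        if self.accesses > self.threshold:
            self.replicas += 1   # place an identical copy in another memory area
            self.accesses = 0    # assumption: count restarts per replication round
        return self.replicas
```

Spreading identical copies across memory areas lets the device serve hot reads from several physical locations that all answer for the same logical data.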
-
Patent number: 11281375
Abstract: Intelligently compressing data in a storage array that includes a plurality of storage devices, including: prioritizing, in dependence upon an expected benefit to be gained from compressing each data element, one or more data elements; receiving an amount of processing resources available for compressing the one or more of the data elements; and selecting, in dependence upon the prioritization of the one or more data elements and the amount of processing resources available for compressing one or more of the data elements, a data compression algorithm to utilize on one or more of the data elements.
Type: Grant
Filed: June 28, 2019
Date of Patent: March 22, 2022
Assignee: Pure Storage, Inc.
Inventors: Christopher Golden, Richard Hankins, Aswin Karumbunathan, Naveen Neelakantam, Neil Vachharajani
-
Patent number: 11275685
Abstract: A computer-implemented method of optimizing data rollback is disclosed. The method receives a request to perform a task on a disk storage. The method initiates the task by reading a plurality of data pages from the disk storage to a database buffer. Each of the plurality of data pages on the database buffer is modified to form a plurality of dirty pages. In response to reaching and/or exceeding a database buffer threshold, a portion of the plurality of dirty pages on the database buffer is externalized to a rollback buffer. In response to reaching and/or exceeding a rollback buffer threshold, a subset of the portion of the plurality of dirty pages on the rollback buffer is externalized to the disk storage. The method detects a task cancelling activity prior to completion of the task and performs a rollback of the plurality of dirty pages to a pre-task state.
Type: Grant
Filed: September 11, 2020
Date of Patent: March 15, 2022
Assignee: Kyndryl, Inc.
Inventors: Sriram Lakshminarasimhan, Prasanna Veeraraghavan, Chandan Kumar Vishwakarma, Sundar Sarangarajan
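The two-tier externalization can be simulated with lists standing in for the buffers; the thresholds, the half-buffer spill fraction, and the `flush_on_threshold` helper are hypothetical choices for the sketch:

```python
def flush_on_threshold(buffer, threshold, fraction=0.5):
    """Externalize a portion of dirty pages downstream once the buffer is full."""
    if len(buffer) >= threshold:
        n = max(1, int(len(buffer) * fraction))
        moved, buffer[:] = buffer[:n], buffer[n:]
        return moved
    return []

db_buffer, rollback_buffer, disk = [], [], []
DB_LIMIT, RB_LIMIT = 4, 4        # hypothetical thresholds

for page in range(10):           # dirty ten pages during the task
    db_buffer.append(page)
    rollback_buffer += flush_on_threshold(db_buffer, DB_LIMIT)
    disk += flush_on_threshold(rollback_buffer, RB_LIMIT)
```

On cancellation, a rollback would restore the pre-task state from all three tiers, since every dirty page still lives in exactly one of them.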
-
Patent number: 11269784
Abstract: Systems and methods are provided for efficiently managing a cache in a distributed environment. When entries are written into a cache, the entries include dependency information. The distributed system keeps invalidation entries that track which dependent values have changed, and enforces invalidation of entries on cache reads. An asynchronous process actively invalidates entries and garbage collects the invalidation entries. The distributed system advantageously allows writing and reading cached entries across service boundaries.
Type: Grant
Filed: June 27, 2019
Date of Patent: March 8, 2022
Assignee: Amazon Technologies, Inc.
Inventor: Bradford William Siemssen
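Read-time invalidation against dependency versions can be sketched as below; the version-counter scheme and all class and method names are assumptions for illustration, not the patent's actual design:

```python
class DependencyCache:
    """Cache whose entries record the versions of their dependencies at
    write time; a read discards any entry whose dependency has since changed."""

    def __init__(self):
        self.entries = {}    # key -> (value, {dependency: version at write})
        self.versions = {}   # dependency -> current version (invalidation entries)

    def bump(self, dep):
        """Record that a dependent value changed."""
        self.versions[dep] = self.versions.get(dep, 0) + 1

    def put(self, key, value, deps):
        snapshot = {d: self.versions.get(d, 0) for d in deps}
        self.entries[key] = (value, snapshot)

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        value, snapshot = entry
        # Enforce invalidation on read: the entry is stale if any
        # dependency has moved past its recorded version.
        if any(self.versions.get(d, 0) != v for d, v in snapshot.items()):
            del self.entries[key]
            return None
        return value
```

An asynchronous sweeper could walk `entries` with the same staleness test to invalidate eagerly and garbage collect old version records.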
-
Patent number: 11269527
Abstract: Concepts for remote storage of data are presented. One such concept is a system comprising: a primary storage controller; and a secondary storage controller of a remote data storage system. The primary storage controller is configured to determine a service characteristic of data storage to or data retrieval from the remote data storage system and to communicate service performance signals to the secondary storage controller based on the determined service characteristic. The secondary storage controller is configured to receive service performance signals from the primary storage controller, to compare the received service performance signals with a service requirement so as to determine a service comparison result, and to control data storage to or data retrieval from the remote data storage system based on the service comparison result.
Type: Grant
Filed: August 8, 2019
Date of Patent: March 8, 2022
Assignee: International Business Machines Corporation
Inventors: Miles Mulholland, Alex Dicks, Dominic Tomkins, Eric John Bartlett
-
Patent number: 11263131
Abstract: Embodiments of the disclosure provide systems and methods for allocating memory space in a memory device. The system can include: a memory device for providing the memory space; and a compiler component configured for: receiving a request for allocating a data array having a plurality of data elements in the memory device, wherein each of the plurality of data elements has a logical address; generating an instruction for allocating memory space for the data array in the memory device based on the request; generating device addresses for the plurality of data elements in the memory device based on logical addresses of the plurality of data elements; and allocating the memory space for the data array in the memory device based on the device addresses and the instruction.
Type: Grant
Filed: April 8, 2020
Date of Patent: March 1, 2022
Assignee: ALIBABA GROUP HOLDING LIMITED
Inventors: Shuangchen Li, Dimin Niu, Fei Sun, Jingjun Chu, Hongzhong Zheng, Guoyang Chen, Yingmin Li, Weifeng Zhang, Xipeng Shen
-
Patent number: 11256435
Abstract: A method for performing data-accessing management in a storage server and associated apparatus such as a host device, a storage device, etc. are provided. The method includes: in response to a client request of writing a first set of data into the storage server, utilizing the host device within the storage server to trigger broadcasting an internal request corresponding to the client request toward each storage device of a plurality of storage devices within the storage server; and in response to the internal request corresponding to the client request, utilizing said each storage device of the plurality of storage devices to search for the first set of data in said each storage device to determine whether the first set of data has been stored in any storage device, so as to control the storage server to complete the client request without duplicating the first set of data within the storage server.
Type: Grant
Filed: November 30, 2020
Date of Patent: February 22, 2022
Assignee: Silicon Motion, Inc.
Inventors: Tsung-Chieh Yang, Wen-Long Wang
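The broadcast-and-deduplicate write path can be modeled with sets standing in for the storage devices; the synchronous loop, the first-device placement policy, and the function name are assumptions for illustration:

```python
def handle_write(storage_devices, data):
    """Broadcast an internal lookup to every device; skip the write if any
    device already holds the data, so the client request completes without
    duplication.

    `storage_devices` is a list of sets of stored blocks, a stand-in for
    real devices each answering the broadcast for itself.
    """
    if any(data in dev for dev in storage_devices):
        return "deduplicated"
    storage_devices[0].add(data)   # hypothetical placement: first device
    return "written"
```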
-
Patent number: 11256626
Abstract: Apparatus, method, and system for enhancing data prefetching based on non-uniform memory access (NUMA) characteristics are described herein. An apparatus embodiment includes a system memory, a cache, and a prefetcher. The system memory includes multiple memory regions, at least some of which are associated with different NUMA characteristics (access latency, bandwidth, etc.) than others. Each region is associated with its own set of prefetch parameters that are set in accordance with its respective NUMA characteristics. The prefetcher monitors data accesses to the cache and generates one or more prefetch requests to fetch data from the system memory to the cache based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. The set of prefetcher parameters may include prefetch distance, training-to-stable threshold, and throttle threshold.
Type: Grant
Filed: April 1, 2020
Date of Patent: February 22, 2022
Assignee: Intel Corporation
Inventors: Wim Heirman, Ibrahim Hur, Ugonna Echeruo, Stijn Eyerman, Kristof Du Bois
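The idea of per-region prefetch parameters can be sketched as a lookup keyed by memory region; the region names, parameter values, and 64-byte stride are hypothetical, chosen only to show why a higher-latency region might get a larger prefetch distance:

```python
# Hypothetical tuning: far (higher-latency) memory gets a larger prefetch
# distance so more requests are in flight to hide that latency.
PREFETCH_PARAMS = {
    "near": {"distance": 4,  "throttle": 0.9},
    "far":  {"distance": 16, "throttle": 0.7},
}

def prefetch_addresses(miss_addr, region, stride=64):
    """Generate prefetch requests ahead of a demand miss, using the
    prefetch distance of the region the data will be fetched from."""
    distance = PREFETCH_PARAMS[region]["distance"]
    return [miss_addr + i * stride for i in range(1, distance + 1)]
```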
-
Patent number: 11249664
Abstract: Methods, apparatus, and systems for data storage devices that include non-volatile memory (NVM) are described. One such apparatus includes a non-volatile memory and a data storage device controller configured to receive a command from a host device, wherein the data storage device controller comprises a file system analyzer comprising a determination circuit configured to determine, based on the command from the host device, whether a logical block address (LBA) referenced in the command is part of a known file extent, and a selection circuit configured to select a flash translation layer (FTL) workflow for the file extent in response to the determination that the LBA referenced in the command is part of the known file extent.
Type: Grant
Filed: June 25, 2019
Date of Patent: February 15, 2022
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Judah Gamliel Hahn, Vinay Vijendra Kumar Lakshmi
-
Patent number: 11244247
Abstract: Embodiments of the present invention are directed to facilitating concurrent forecasting associated with multiple time series data sets. In accordance with aspects of the present disclosure, a request to perform a predictive analysis in association with multiple time series data sets is received. Thereafter, the request is parsed to identify each of the time series data sets to use in the predictive analysis. For each time series data set, an object is initiated to perform the predictive analysis for the corresponding time series data set. Generally, the predictive analysis predicts expected outcomes based on the corresponding time series data set. Each object is concurrently executed to generate expected outcomes associated with the corresponding time series data set, and the expected outcomes associated with each of the corresponding time series data sets are provided for display.
Type: Grant
Filed: June 18, 2020
Date of Patent: February 8, 2022
Assignee: Splunk Inc.
Inventors: Manish Sainani, Nghi Huu Nguyen, Zidong Yang
-
Patent number: 11243883
Abstract: A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
Type: Grant
Filed: May 22, 2020
Date of Patent: February 8, 2022
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Timothy David Anderson, Kai Chirca
-
Patent number: 11226897
Abstract: Disclosed herein are techniques for implementing hybrid memory modules with improved inter-memory data transmission paths. The claimed embodiments address the problem of implementing a hybrid memory module that exhibits improved transmission latencies and power consumption when transmitting data between DRAM devices and NVM devices (e.g., flash devices) during data backup and data restore operations. Some embodiments are directed to approaches for providing a direct data transmission path coupling a non-volatile memory controller and the DRAM devices to transmit data between the DRAM devices and the flash devices. In one or more embodiments, the DRAM devices can be port switched devices, with a first port coupled to the data buffers and a second port coupled to the direct data transmission path. Further, in one or more embodiments, such data buffers can be disabled when transmitting data between the DRAM devices and the flash devices.
Type: Grant
Filed: April 23, 2020
Date of Patent: January 18, 2022
Assignee: Rambus Inc.
Inventor: Aws Shallal
-
Patent number: 11216379
Abstract: A processor system includes a processor core, a cache, a cache controller, and a cache assist controller. The processor core issues a read/write command for reading data from or writing data to a memory. The processor core also outputs an address range specifying addresses for which the cache assist controller can return zero fill, e.g., an address range for the read/write command. The cache controller transmits a cache request to the cache assist controller based on the read/write command. The cache assist controller receives the address range output by the processor core and compares the address range to the cache request. If a memory address in the cache request falls within the address range, the cache assist controller returns a string of zeroes, rather than fetching and returning data stored at the memory address.
Type: Grant
Filed: July 29, 2020
Date of Patent: January 4, 2022
Assignee: Analog Devices International Unlimited Company
Inventors: Thirukumaran Natrayan, Saurbh Srivastava
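The zero-fill range check can be expressed in a few lines; the half-open range convention, the `fetch` callback, and the function name are assumptions for the sketch, not details from the patent:

```python
def serve_read(addr, size, zero_range, fetch):
    """Return zeroes for reads falling inside the published zero-fill range,
    otherwise fall through to the real memory access.

    `zero_range` is a (start, end) pair standing in for the address range the
    processor core outputs; `fetch(addr, size)` stands in for fetching from
    the memory at that address.
    """
    start, end = zero_range
    if start <= addr and addr + size <= end:
        return bytes(size)        # a string of zeroes, with no memory traffic
    return fetch(addr, size)
```

Skipping the fetch for a known-zero range saves both the memory access latency and the bus bandwidth of moving data that is guaranteed to be zero.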