Least Recently Used (LRU) Patents (Class 711/160)
  • Patent number: 11876673
    Abstract: An example application programming interface (API) server device that distributes configuration data to managed network devices includes one or more processing units implemented in circuitry and configured to receive configuration data to be deployed to at least one of the managed network devices; store the configuration data to a configuration database; and send the configuration data to the at least one of the managed network devices. In this manner, the configuration data can be archived for later retrieval and analysis, e.g., to perform root cause analysis in the event of an error.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 16, 2024
    Assignee: Juniper Networks, Inc.
    Inventors: Prasad Miriyala, Michael Henkel, Iqlas M. Ottamalika
  • Patent number: 11860773
    Abstract: Systems, apparatuses, and methods related to memory access statistics monitoring are described. A host is configured to map pages of memory for applications to a number of memory devices coupled thereto. A first memory device comprises a monitoring component configured to monitor access statistics of pages of memory mapped to the first memory device. A second memory device does not include a monitoring component capable of monitoring access statistics of pages of memory mapped thereto. The host is configured to map a portion of pages of memory for an application to the first memory device in order to obtain access statistics corresponding to the portion of pages of memory upon execution of the application despite there being space available on the second memory device and adjust mappings of the pages of memory for the application based on the obtained access statistics corresponding to the portion of pages.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: January 2, 2024
    Assignee: Micron Technology, Inc.
    Inventor: David A. Roberts
  • Patent number: 11741114
    Abstract: Systems and methods are provided for handling sequence-dependent data as part of processing and/or analyzing large data sets in a distributed data processing environment. The distributed data processing environment can be suitable for handling data generated at a plurality of sites within a network of manufacturing sites. The systems and methods can allow for pre-processing of some values for sequence-dependent data. This can allow secondary aggregated values and/or secondary aggregated data sets to be generated from sequence-dependent data that can span multiple blocks or partitions. Pre-calculation of secondary aggregated values and/or secondary aggregated data sets for sequence-dependent data can allow the efficiencies of parallel or distributed computation to be at least partially retained while also allowing for desired processing of the sequence-dependent data.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: August 29, 2023
    Assignee: ExxonMobil Technology and Engineering Company
    Inventors: Michael A. Hayes, Jeffrey Ludwig, Christopher S. Gurciullo, Terry J. Hayman, Krit H. Petty, Steven J. Seastream
  • Patent number: 11636056
    Abstract: An apparatus including a plurality of set arbitration circuits and a die arbitration circuit. The set arbitration circuits may each be configured to receive first commands and second commands and comprise a bank circuit configured to queue bank data in response to client requests and a set arbitration logic configured to queue the second commands in response to the bank data. The die arbitration circuit may be configured to receive the commands from the set arbitration circuits and comprise a die-bank circuit configured to queue die data in response to the client requests and a die arbitration logic configured to queue the second commands in response to the die data. Queuing the bank data and the die data for the second commands may maintain an order of the client requests and prioritize the first commands corresponding to a current controller over the first commands corresponding to a non-current controller.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: April 25, 2023
    Assignee: Ambarella International LP
    Inventors: Manish Singh, Dingxin Jin
  • Patent number: 11625175
    Abstract: Techniques for a device with a NUMA memory architecture to migrate virtual resources between NUMA nodes to reduce resource contention between virtual resources running on the NUMA nodes. In some examples, the device monitors various metrics and/or operations of the NUMA nodes and/or virtual resources, and detects events that indicate that virtual resources running on a same NUMA node are contending, or are likely to contend, for computing resources of the NUMA node. Upon detecting such an event, the device may migrate a virtual resource from the NUMA node to another NUMA node on the device that has an availability of computing resources. The device may then migrate the virtual resource from the overcommitted NUMA node onto the NUMA node that has availability to run the virtual resource. In this way, devices may reduce resource contention among virtual resources running on a same NUMA node.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: April 11, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Nikolay Krasilnikov, Oleksii Tsai, Alexey Gadalin, Guy Parton, Anton Valter
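As an illustration of the contention-driven migration described in patent 11625175, here is a minimal Python sketch. It is not Amazon's implementation: the NumaNode structure, the 90% utilization threshold, and the choice to move the smallest virtual machine are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class NumaNode:
    node_id: int
    cpu_capacity: float                        # total vCPU capacity on the node
    vms: dict = field(default_factory=dict)    # vm_id -> vCPU demand

    @property
    def utilization(self) -> float:
        return sum(self.vms.values()) / self.cpu_capacity

def rebalance(nodes, contention_threshold=0.9):
    """Move one VM off any overcommitted node to the least-loaded node
    that can absorb it, mirroring the contention-driven migration above."""
    for src in nodes:
        if src.utilization <= contention_threshold:
            continue
        # Pick the smallest VM on the contended node to minimise copy cost.
        vm_id, demand = min(src.vms.items(), key=lambda kv: kv[1])
        # Candidate nodes with enough headroom after the move.
        candidates = [n for n in nodes
                      if n is not src
                      and (sum(n.vms.values()) + demand) / n.cpu_capacity < contention_threshold]
        if not candidates:
            continue
        dst = min(candidates, key=lambda n: n.utilization)
        dst.vms[vm_id] = src.vms.pop(vm_id)
        print(f"migrated {vm_id}: node {src.node_id} -> node {dst.node_id}")

nodes = [NumaNode(0, 16.0, {"vm-a": 10.0, "vm-b": 6.0}), NumaNode(1, 16.0, {"vm-c": 2.0})]
rebalance(nodes)
```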
  • Patent number: 11593017
    Abstract: An illustrative method includes an object retention management system establishing a retention policy for a bucket of an object-based storage system, detecting an operation that causes an object to be stored within the bucket, and applying, based on the detecting of the operation, the retention policy to the object, the retention policy preventing the object from being deleted or overwritten for a predefined time duration.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: February 28, 2023
    Assignee: Pure Storage, Inc.
    Inventors: Shao-Ting Chang, Nicholas Yang, Ronald Karr
  • Patent number: 11500847
    Abstract: Real-time forensic instrumentation comprising: a monitoring hook into the notification interface of an operating system; a forensic artifact filter to evaluate events received via the real-time monitoring hook to determine if an event represents a change to a forensic artifact; and a forensic interpreter subsystem to: based on the forensic artifact filter output, collect forensic metadata associated with the forensic artifact and apply a forensic analysis to the forensic artifact to generate a result; generate a forensically interpreted activity for the event, the forensically interpreted activity comprising the forensic metadata, the result of the forensic analysis and a description of a first activity by a user with respect to the forensic artifact; and store the forensically interpreted activity in a digital forensics store.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: November 15, 2022
    Assignee: OPEN TEXT HOLDINGS, INC.
    Inventors: Paul M. Shomo, Robert Batzloff
  • Patent number: 11461254
    Abstract: An apparatus including a plurality of set arbitration circuits and a die arbitration circuit. The set arbitration circuits may each be configured to receive first commands and second commands and comprise a bank circuit configured to queue bank data in response to client requests and a set arbitration logic configured to queue the second commands in response to the bank data. The die arbitration circuit may be configured to receive the commands from the set arbitration circuits and comprise a die-bank circuit configured to queue die data in response to the client requests and a die arbitration logic configured to queue the second commands in response to the die data. Queuing the bank data and the die data for the second commands may maintain an order of the client requests and prioritize the first commands corresponding to a current controller over the first commands corresponding to a non-current controller.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: October 4, 2022
    Assignee: Ambarella International LP
    Inventors: Manish Singh, Dingxin Jin
  • Patent number: 11188474
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed for balanced caching. An input circuit receives a request for data of non-volatile storage. A balancing circuit determines whether to execute a request by directly communicating with one or more of a cache and a non-volatile storage based on a first rate corresponding to the cache and a second rate corresponding to the non-volatile storage. A data access circuit executes a request based on a determination made by a balancing circuit.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: November 30, 2021
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Arup De
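A rough Python sketch of the balancing decision in patent 11188474, assuming the "first rate" and "second rate" are the shares of recent traffic served by the cache and by non-volatile storage; the class name, the window size, and the 0.5 slack factor are invented for illustration.

```python
import collections

class BalancedReader:
    """Decide per request whether to read from the cache or go straight to
    non-volatile storage, based on the recent load seen by each tier."""

    def __init__(self, cache: dict, storage: dict, window: int = 100):
        self.cache, self.storage = cache, storage
        self.recent = collections.deque(maxlen=window)  # 'cache' / 'storage' decisions

    def _rate(self, tier: str) -> float:
        # Fraction of the recent window served by this tier.
        return self.recent.count(tier) / max(len(self.recent), 1)

    def read(self, key):
        # Only use the cache when it holds the data and its recent share of
        # traffic (first rate) is not far above the storage share (second
        # rate), so that neither tier is overdriven.
        if key in self.cache and self._rate("cache") <= self._rate("storage") + 0.5:
            self.recent.append("cache")
            return self.cache[key]
        self.recent.append("storage")
        return self.storage[key]
```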
  • Patent number: 11188504
    Abstract: An information management system can manage the removal of data block entries in a deduplicated data store using working copies of the data block entries residing in a local data store of a secondary storage computing device. The system can use the working copies to identify data blocks for removal. Once the deduplication database is updated with the changes to the working copies (e.g., using a transaction based update scheme), the system can query the deduplication database for the database entries identified for removal. Once identified, the system can remove the database entries identified for pruning and/or the corresponding deduplication data blocks from secondary storage.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: November 30, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Deepak Raghunath Attarde, Manoj Kumar Vijayan
  • Patent number: 11126355
    Abstract: A method, system, and computer program product manages a storage system. Writes to sections in solid state storage devices in endurance tiers in the storage system are monitored by a computer system over a period of time. Responsive to a write rate for the writes to a section in the sections in a current endurance tier in the endurance tiers exceeding a maximum recommended write rate for the current endurance tier during the period of time, data is moved from the section in the current endurance tier to a higher endurance tier in the endurance tiers having a higher maximum recommended write rate than the maximum recommended write rate for the current endurance tier.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: September 21, 2021
    Assignee: International Business Machines Corporation
    Inventors: Christopher C. Bode, Nathan B. Best, Abhishek Dhingra
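The tier-promotion logic of patent 11126355 can be sketched as follows; the Tier structure, the tier names, and the writes-per-day figures are assumptions, not values from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    max_write_rate: float                      # max recommended writes/day per section
    sections: set = field(default_factory=set)

def rebalance_sections(tiers, writes, period_days):
    """tiers is ordered low -> high endurance; writes maps section -> write
    count observed over the monitoring period."""
    for i, tier in enumerate(tiers[:-1]):      # the top tier has nowhere to go
        for section in list(tier.sections):
            rate = writes.get(section, 0) / period_days
            if rate > tier.max_write_rate:
                tiers[i + 1].sections.add(section)   # promote to higher endurance
                tier.sections.remove(section)

tiers = [Tier("QLC", max_write_rate=50, sections={"s1", "s2"}),
         Tier("TLC", max_write_rate=500)]
rebalance_sections(tiers, writes={"s1": 900, "s2": 40}, period_days=7)
print([(t.name, sorted(t.sections)) for t in tiers])
```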
  • Patent number: 11119984
    Abstract: An information management system can manage the removal of data block entries in a deduplicated data store using working copies of the data block entries residing in a local data store of a secondary storage computing device. The system can use the working copies to identify data blocks for removal. Once the deduplication database is updated with the changes to the working copies (e.g., using a transaction based update scheme), the system can query the deduplication database for the database entries identified for removal. Once identified, the system can remove the database entries identified for pruning and/or the corresponding deduplication data blocks from secondary storage.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: September 14, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Deepak Raghunath Attarde, Manoj Kumar Vijayan
  • Patent number: 10963381
    Abstract: Dynamic caching policies and/or dynamic purging policies are provided for modifying the entry and eviction of content to the cache (e.g., storage and/or memory) of a caching server based on the current and past cache performance and/or demand. The caching server may modify or replace a configured policy when cache performance is below one or more thresholds. Modifying the caching policy may change caching behavior of the caching server by changing the conditions that control the content that is entered into cache or the content that is deferred and not entered into cache after a request. This may include assigning different probabilities for entering the same content into cache based on different caching policies. Modifying the purging policy may change eviction behavior of the caching server by changing the conditions that control the cached content that is selected and removed from cache.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: March 30, 2021
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Marcel Eric Schechner Flores, Derrick Sawyer
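A simplified Python sketch of the policy-switching idea in patent 10963381, assuming the caching policy is expressed as an admission probability and the trigger is the observed hit ratio; the probabilities and the 0.5 threshold are illustrative, and eviction is omitted.

```python
import random

class AdaptiveCache:
    """Admit requested objects into the cache with a policy-dependent
    probability, and switch policies when the observed hit ratio falls
    below a threshold."""

    POLICIES = {"default": 0.9, "conservative": 0.3}   # admission probabilities

    def __init__(self, capacity=1000, hit_threshold=0.5):
        self.store, self.capacity = {}, capacity
        self.hits = self.requests = 0
        self.policy, self.hit_threshold = "default", hit_threshold

    def get(self, key, fetch_origin):
        self.requests += 1
        if key in self.store:
            self.hits += 1
            return self.store[key]
        value = fetch_origin(key)
        # Probabilistic admission: the same object may or may not be cached,
        # depending on which policy is currently active.
        if len(self.store) < self.capacity and random.random() < self.POLICIES[self.policy]:
            self.store[key] = value
        self._maybe_switch_policy()
        return value

    def _maybe_switch_policy(self):
        if self.requests >= 100 and self.hits / self.requests < self.hit_threshold:
            self.policy = "conservative"   # defer more fills when the cache underperforms
            self.hits = self.requests = 0
```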
  • Patent number: 10768829
    Abstract: The use of streaming functionality on a storage device may be optimized by performing a combination of stream and non-stream writes based on a size of the data being written to a given stream. For example, a method may comprise writing data associated with a plurality of files to a first set of one or more erase blocks, determining that an amount of data associated with a given one of the plurality of files in the first set of one or more erase blocks has reached a threshold, and moving the data associated with the given file from the first set of one or more erase blocks to a stream, the stream comprising a second set of one or more erase blocks on the storage device different from the first set of one or more erase blocks.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: September 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rajsekhar Das, Scott Chao-Chueh Lee, Chesong Lee, Tristan C. Griffith, William R. Tipton, Erik Schmidt
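A minimal sketch of the threshold-based stream promotion described in patent 10768829; the 1 MiB threshold and the in-memory stand-ins for erase blocks and streams are assumptions for illustration.

```python
from collections import defaultdict

STREAM_THRESHOLD = 1 << 20          # 1 MiB of data for one file (assumed)

class HybridStreamWriter:
    """Write small amounts of per-file data to a shared (non-stream) set of
    erase blocks, and move a file's data into its own stream once the amount
    written for that file reaches the threshold."""

    def __init__(self):
        self.shared_blocks = defaultdict(bytearray)   # file_id -> data in shared blocks
        self.streams = {}                             # file_id -> data in a dedicated stream

    def write(self, file_id: str, data: bytes):
        if file_id in self.streams:                   # already promoted: stream write
            self.streams[file_id] += data
            return
        self.shared_blocks[file_id] += data           # non-stream write
        if len(self.shared_blocks[file_id]) >= STREAM_THRESHOLD:
            # Relocate the accumulated data to a new stream (second set of erase blocks).
            self.streams[file_id] = self.shared_blocks.pop(file_id)
```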
  • Patent number: 10754549
    Abstract: An append-only streams capability may be implemented that allows the host (e.g., the file system) to determine an optimal stream size based on the data to be stored in that stream. The storage device may expose to the host one or more characteristics of the available streams on the device, including but not limited to the maximum number of inactive and active streams on the device, the erase block size, the maximum number of erase blocks that can be written in parallel, and an optimal write size of the data. Using this information, the host may determine which particular stream offered by the device is best suited for the data to be stored.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: August 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bryan S. Matthew, Aaron W. Ogus, Vadim Makhervaks, Laura M. Caulfield, Rajsekhar Das, Scott Chao-Chueh Lee, Omar Carey, Madhav Pandya, Ioan Oltean, Garret Buban, Lee Prewitt
  • Patent number: 10613763
    Abstract: A memory device can include: a memory array arranged to store data lines; an interface that receives a first read command requesting bytes of data in a consecutively addressed order from a starting byte; a cache memory having a first buffer storing a first data line including the starting byte, and a second buffer storing a second data line, from the cache memory or the memory array; output circuitry that accesses data from the first buffer, and sequentially outputs each byte from the starting byte through a highest addressed byte of the first data line; and from the second buffer and sequentially outputs each byte from a lowest addressed byte of the second data line until the requested bytes of data have been output in order to execute the first read command, the contents of the first and second buffers being maintained in the cache memory.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: April 7, 2020
    Assignee: Adesto Technologies Corporation
    Inventor: Gideon Intrater
  • Patent number: 10552179
    Abstract: A method and apparatus of a device for resource management by using a hierarchy of resource management techniques with dynamic resource policies is described. The device terminates several misbehaving application programs when available memory on the device is running low. Each of those misbehaving application programs consumes more memory space than a memory consumption limit assigned to the application program. If available memory on the device is still low after terminating those misbehaving application programs, the device further sends memory pressure notifications to all application programs. If available memory on the device is still running low after sending the memory pressure notifications, the device further terminates background, idle, and suspended application programs. The device further terminates foreground application programs when available memory on the device is still low after terminating the background, idle, and suspended application programs.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: February 4, 2020
    Assignee: Apple Inc.
    Inventors: Andrew D. Myrick, Dmitriy B. Solomonov, Lionel D. Desai
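The escalating low-memory policy of patent 10552179 might look roughly like the following; the app record fields, the 200 MB floor, and the state names are assumptions for the sketch.

```python
def relieve_memory_pressure(apps, available_mb, low_mb=200):
    """Apply the escalating policy above: kill over-limit apps first, then
    notify everyone, then kill background/idle/suspended apps, and only then
    foreground apps. `apps` is a list of dicts; the field names are assumed."""
    def kill(app):
        nonlocal available_mb
        available_mb += app["rss_mb"]
        app["state"] = "terminated"

    # 1. Terminate apps exceeding their assigned memory-consumption limit.
    for app in apps:
        if available_mb >= low_mb:
            return available_mb
        if app["state"] != "terminated" and app["rss_mb"] > app["limit_mb"]:
            kill(app)

    # 2. Broadcast memory-pressure notifications so apps can shed caches.
    for app in apps:
        if app["state"] != "terminated":
            app["pressure_notified"] = True

    # 3. Terminate background, idle and suspended apps, then foreground apps.
    for states in (("background", "idle", "suspended"), ("foreground",)):
        for app in apps:
            if available_mb >= low_mb:
                return available_mb
            if app["state"] in states:
                kill(app)
    return available_mb
```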
  • Patent number: 10445293
    Abstract: An information management system can manage the removal of data block entries in a deduplicated data store using working copies of the data block entries residing in a local data store of a secondary storage computing device. The system can use the working copies to identify data blocks for removal. Once the deduplication database is updated with the changes to the working copies (e.g., using a transaction based update scheme), the system can query the deduplication database for the database entries identified for removal. Once identified, the system can remove the database entries identified for pruning and/or the corresponding deduplication data blocks from secondary storage.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: October 15, 2019
    Assignee: Commvault Systems, Inc.
    Inventors: Deepak Raghunath Attarde, Manoj Kumar Vijayan
  • Patent number: 10387398
    Abstract: Execution of a page flusher is initiated in an in-memory database system in which pages are loaded into memory and which has associated physical disk storage. Thereafter, the page flusher identifies pages that were last modified outside a pre-defined time window. The page flusher then flushes the identified modified pages to the physical disk storage.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: August 20, 2019
    Assignee: SAP SE
    Inventors: Dirk Thomsen, Werner Thesing
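A small sketch of the page-flusher rule in patent 10387398: flush dirty pages whose last modification lies outside a time window. The page-table layout, the 300-second window, and the stubbed-out disk write are assumptions.

```python
import time

def flush_cold_pages(pages, window_seconds=300, now=None):
    """Flush every modified page whose last modification falls outside the
    time window. `pages` maps page_id -> (dirty, last_modified); writing to
    disk is stubbed out."""
    now = now if now is not None else time.time()
    flushed = []
    for page_id, (dirty, last_modified) in pages.items():
        if dirty and (now - last_modified) > window_seconds:
            # write_page_to_disk(page_id)   # actual I/O would happen here
            pages[page_id] = (False, last_modified)
            flushed.append(page_id)
    return flushed
```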
  • Patent number: 10380072
    Abstract: An information management system can manage the removal of data block entries in a deduplicated data store using working copies of the data block entries residing in a local data store of a secondary storage computing device. The system can use the working copies to identify data blocks for removal. Once the deduplication database is updated with the changes to the working copies (e.g., using a transaction based update scheme), the system can query the deduplication database for the database entries identified for removal. Once identified, the system can remove the database entries identified for pruning and/or the corresponding deduplication data blocks from secondary storage.
    Type: Grant
    Filed: March 17, 2014
    Date of Patent: August 13, 2019
    Assignee: Commvault Systems, Inc.
    Inventors: Deepak Raghunath Attarde, Manoj Kumar Vijayan
  • Patent number: 10360111
    Abstract: Execution of a page flusher is initiated in an in-memory database system in which pages are loaded into memory and which has associated physical disk storage, by a resource flush thread using a queue. Thereafter, pages are identified that have been loaded into the memory of the database system and which have been modified. These identified pages are to be flushed to the physical disk storage. Each page is assigned with a different ordered physical page number. These identified pages are added to the queue. Subsequently, asynchronous write I/O is triggered causing the identified pages to be flushed to the physical disk storage and stored in the physical disk storage according to their assigned physical page numbers such that, if at least one predetermined performance condition is met, a subset of the identified pages in the queue are flushed to physical disk storage.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: July 23, 2019
    Assignee: SAP SE
    Inventor: Dirk Thomsen
  • Patent number: 10324798
    Abstract: In one aspect, a method includes reading metadata for a logical unit (LU) to restore, restoring active read areas to the LU identified in the metadata and exposing the LU to a host after restoring the active read areas of the LU. In another aspect, an apparatus includes electronic hardware circuitry configured to reading metadata for a LU to restore, restoring active read areas to the LU identified in the metadata and exposing the LU to a host after restoring the active read areas of the LU. In a further aspect, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. The instructions cause a machine to read metadata for a LU to restore, restore active read areas to the LU identified in the metadata and expose the LU to a host after restoring the active read areas of the LU.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: June 18, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Assaf Natanzon, Anestis Panidis
  • Patent number: 10310946
    Abstract: Execution of a page flusher is initiated in an in-memory database system in which pages are loaded into memory and which has associated physical disk storage. Thereafter, pages are identified that have been loaded into the memory of the database system and which have been modified. These identified pages are to be flushed to the physical disk storage. Each page is assigned with a different ordered physical page number. Asynchronous write I/O is later triggered causing the identified pages to be flushed to the physical disk storage and stored in the physical disk storage according to their assigned physical page numbers.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: June 4, 2019
    Assignee: SAP SE
    Inventor: Dirk Thomsen
  • Patent number: 10235405
    Abstract: A distributed storage system may store data object instances in persistent storage and may store keymap information for those data object instances in a distributed hash table on multiple computing nodes. Each data object instance may include a composite key containing a user key. The keymap information for each data object instance may map the user key to a locator and the locator to the data object instance. A request to store or retrieve keymap information for a data object instance may be routed to a particular computing node based on a consistent hashing scheme in which a hash function is applied to a portion of the composite key of the data object instance. Thus, related entries may be clustered on the same computing nodes. The portion of the key to which the hash function is applied may include a pre-determined number of bits or be identified using a delimiter.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: March 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
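The prefix-based consistent hashing of patent 10235405 can be sketched as below; the delimiter, the virtual-node count, and the SHA-256 ring construction are illustrative choices rather than details from the patent.

```python
import hashlib
from bisect import bisect_right

class PrefixConsistentHashRing:
    """Route keymap requests by hashing only the part of the composite key
    up to a delimiter, so entries that share a user-key prefix land on the
    same node."""

    def __init__(self, nodes, vnodes=64, delimiter="/"):
        self.delimiter = delimiter
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

    def node_for(self, composite_key: str) -> str:
        prefix = composite_key.split(self.delimiter, 1)[0]   # hash the prefix only
        h = self._hash(prefix)
        idx = bisect_right(self.points, h) % len(self.ring)
        return self.ring[idx][1]

ring = PrefixConsistentHashRing(["node-a", "node-b", "node-c"])
# Keys sharing the same user-key prefix are clustered on the same node.
print(ring.node_for("photos/2024/v1"), ring.node_for("photos/2024/v2"))
```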
  • Patent number: 10228394
    Abstract: A measurement system is provided that performs a qualified store algorithm. When performing the algorithm, the measurement system stores in memory digital data samples acquired during a time window while a qualification signal is valid, a preselected number of digital data samples acquired prior to and adjacent in time to the time window, and a preselected number of digital data samples acquired subsequent to and adjacent in time to the time window.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: March 12, 2019
    Assignee: Keysight Technologies, Inc.
    Inventor: Allen Montijo
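A Python sketch of the qualified-store behavior in patent 10228394, keeping a fixed number of samples before and after the qualification window; the pre/post counts of four samples are arbitrary.

```python
from collections import deque

def qualified_store(samples, qualifier, pre=4, post=4):
    """Return the samples captured while `qualifier` is true, plus `pre`
    samples immediately before the window and `post` samples immediately
    after it. `samples` and `qualifier` are parallel sequences."""
    history = deque(maxlen=pre)
    stored, post_remaining, in_window = [], 0, False
    for s, q in zip(samples, qualifier):
        if q:
            if not in_window:                 # window opens: flush pre-history
                stored.extend(history)
                history.clear()
                in_window = True
            stored.append(s)
        else:
            if in_window:                     # window just closed
                in_window = False
                post_remaining = post
            if post_remaining:
                stored.append(s)
                post_remaining -= 1
            else:
                history.append(s)
    return stored

sig = list(range(20))
qual = [i in range(8, 12) for i in range(20)]
print(qualified_store(sig, qual))   # samples 4..15: 4 pre, window 8-11, 4 post
```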
  • Patent number: 10223256
    Abstract: A distributed parallel processing database that processes data in a Java environment allocates memory both on a Java heap and off a Java heap. The distributed parallel processing database includes multiple servers. Each server executes a Java virtual machine (JVM) in which data allocated to the server is processed. When a JVM of a server starts, the JVM can specify an off-heap memory size, based on a JVM start parameter. The server can designate memory of the specified size that is off JVM memory heap as off-heap memory. The off-heap memory is different from heap memory in the Java environment, and is managed by a garbage collector that is outside of the Java environment. The server can process data designated as off-heap memory eligible in the off-heap memory. The off-heap memory can improve database operations that create a large number of similar-sized objects in memory by reducing Java memory management overhead.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: March 5, 2019
    Assignee: Pivotal Software, Inc.
    Inventors: Darrel Scott Schneider, Hitesh Khamesra, Asif Hussain Shahid, Jagannathan Ramnarayanan, Sudhir Menon, Kirk Van Lund, Lynn Gallinat
  • Patent number: 10068168
    Abstract: An IC card has a data storage, a table storage and a processing unit. The data storage stores data. The table storage stores a data element table including profile information including, in association with each other: a profile identifier for identifying a profile that is a group (set) of data elements to be stored in the data storage, at the time of issuance; the data elements included in the profile; and data region identifiers indicating data regions that are reserved in the data storage to store the data elements. The processing unit stores, in the data region indicated by the data region identifier corresponding to the data elements, the data elements corresponding to the profile identifier, from the data element table stored in the table storage, in response to a processing request that includes the profile identifier and requests issuance processing.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: September 4, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Yusuke Tsuda
  • Patent number: 10051057
    Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and processing circuitry. The processing circuitry is configured to execute the operational instructions to perform various operations and functions. The computing device identifies slice error(s) associated with first storage unit(s) (SU(s)) of a first storage set that distributedly store a set of encoded data slices (EDSs) and second SU(s) of a second storage set. The computing device determines usage priority level(s) of the first SU(s) or the second SU(s) based on the slice error(s) and produces a selected storage set from the first SU(s) and the second SU(s) based on a more favorable usage priority level of the usage priority level(s) and facilitates execution of data access to at least the decode threshold number of EDSs based on the selected storage set.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: August 14, 2018
    Assignee: International Business Machines Corporation
    Inventors: Thomas D. Cocagne, Jason K. Resch, Greg R. Dhuse
  • Patent number: 9942324
    Abstract: A method implemented by a network element (NE) in a network, comprising composing a first network storage entity by mapping a plurality of logical storage units to a plurality of physical storage units in a physical storage system according to a first storage metric associated with the plurality of physical storage units, arranging the plurality of logical storage units sequentially to form a logical circular buffer, and designating a current logical storage unit for writing data and an upcoming logical storage unit for writing data after the current storage unit is fully written, and rebalancing the physical storage system while the physical storage system is actively performing network storage operations by relocating at least one of the logical storage units to a different physical storage unit according to a second storage metric associated with the plurality of physical storage units.
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: April 10, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Masood Mortazavi, Chi Young Ku, Guangyu Shi, Stephen Morgan
  • Patent number: 9928173
    Abstract: Determining, by a processor having a cache, if data in the cache is to be monitored for cache coherency conflicts in a transactional memory (TM) environment. A processor executes a TM transaction that includes the following. Executing a memory data access instruction that accesses an operand at an operand memory address. Based on either a prefix instruction associated with the memory data access instruction, or an operand tag associated with the operand of the memory data access instruction, determining whether a cache entry having the operand is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction. Based on determining that the cache entry is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction, marking the cache entry for monitoring for conflicts.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9817874
    Abstract: Disclosed herein are system, method, and computer program product embodiments for providing a spatio-temporal index for high-update workloads and query processing. An embodiment operates by a first thread retrieving an update record from a first queue, the update record comprising a location component and a temporal component indicating a location of one of a plurality of mobile devices at a specified time, and updating a columnar-store database with the update record. The embodiment further operates by a second thread identifying a spatial grid of a spatial temporal index within a memory corresponding to the location component of the update record, and updating a temporal index of the spatial grid based on the temporal component of the update record.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: November 14, 2017
    Assignee: SAP SE
    Inventors: Suprio Ray, Rolando Blanco, Anil Kumar Goel
  • Patent number: 9792227
    Abstract: Inventive aspects include a heterogeneous unified memory section, which includes an extended unified memory space across a plurality of physical heterogeneous memory modules. A cold page reclamation logic section can receive and prioritize cold pages from a system memory. The cold pages can include a first subset of memory pages having a first type of memory data and a second subset of memory pages having a second type of memory data. For example, the cold pages can include anon-type memory pages and file-type memory pages. A dynamic tuning logic section can manage space allocation within the extended unified memory space. An intelligent page sort logic section can distribute the cold pages among different pools of physical heterogeneous memory modules based on varying characteristics of the pools, and based on the assigned priorities.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: October 17, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yang Seok Ki, Sheng Qiu
  • Patent number: 9720847
    Abstract: A method and apparatus for calculating a victim way that is always the least recently used way. More specifically, in an m-set, n-way set associative cache, each way in a cache set comprises a valid bit that indicates that the way contains valid data. The valid bit is set when a way is written and cleared upon being invalidated, e.g., via a snoop address. The cache system comprises a cache LRU circuit which comprises an LRU logic unit associated with each cache set. The LRU logic unit comprises a FIFO of n-depth (in certain embodiments, the depth corresponds to the number of ways in the cache) and m-width. The FIFO performs push, pop and collapse functions. Each entry in the FIFO contains the encoded way number that was last accessed.
    Type: Grant
    Filed: July 17, 2013
    Date of Patent: August 1, 2017
    Assignee: NXP USA, INC.
    Inventors: Thang Q. Nguyen, John D. Coddington, Sanjay R. Deshpande
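The FIFO-based LRU victim selection of patent 9720847 can be modelled in software as below; the deque stands in for the hardware FIFO, and treating invalid ways as preferred victims is an assumption consistent with the valid-bit description.

```python
from collections import deque

class SetLruFifo:
    """Per-set FIFO of way numbers, most recently used at the back. Accessing
    a way 'collapses' its old position out of the FIFO and pushes it to the
    back; the victim is the way at the front (always the least recently used)."""

    def __init__(self, num_ways: int):
        self.fifo = deque(range(num_ways))    # initial order is arbitrary
        self.valid = [False] * num_ways

    def touch(self, way: int):
        self.fifo.remove(way)                 # collapse out the old position
        self.fifo.append(way)                 # push as most recently used
        self.valid[way] = True

    def invalidate(self, way: int):           # e.g. cleared via a snoop address
        self.valid[way] = False

    def victim(self) -> int:
        for way in self.fifo:                 # prefer an invalid way if any
            if not self.valid[way]:
                return way
        return self.fifo[0]                   # otherwise the true LRU way
```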
  • Patent number: 9696932
    Abstract: Guaranteeing space availability for thin devices includes reserving space without committing, or fully pre-allocating, the space to specific thin device ranges. Space may be held in reserve for a particular set of thin devices and consumed as needed by those thin devices. The system guards user-critical devices from running out of space, for example due to a "rogue device" scenario in which one device allocates an excessive amount of space. The system uses a reservation entity, to which a thin device may subscribe, which reserves space for the thin device without allocating that space before it is needed to service an I/O request.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: July 4, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Igor Fradkin, Alexandr Veprinsky, John Fitzgerald, Magnus E. Bjornsson
  • Patent number: 9471364
    Abstract: Embodiments of the present invention provide a virtual machine specification adjustment method and apparatus, where the virtual machine specification adjustment method includes: acquiring running status information of a virtual machine; determining, according to the running status information of the virtual machine, whether the virtual machine is a to-be-adjusted virtual machine; and if the virtual machine is a to-be-adjusted virtual machine, adjusting a specification of the to-be-adjusted virtual machine by using a resource in a reserved resource pool. By using the technical solutions of the present invention, efficiency of virtual machine specification adjustment is improved, thereby increasing a resource utilization rate of a data center.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: October 18, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Lijun Yan
  • Patent number: 9323318
    Abstract: One or more techniques and/or systems are provided for dynamically applying power policies to a computing environment. For example, a computing environment may comprise one or more activity components (e.g., a display driver, an audio driver, an application, etc.) that may provide status information used to identify a scenario (e.g., a video game scenario, a full screen video playback scenario, etc.) that is activated for the computing environment. A power policy assigned to a currently identified scenario may be applied to the computing environment to dynamically improve performance and/or power conservation, for example. Activity components, scenarios, and/or power policies may be maintained in an extensible manner such that activity components, scenarios, and/or power policies may be added, removed, and/or modified by merely updating corresponding data structures, such as tables or registry keys, as opposed to updating power management software code.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: April 26, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Abhishek Sagar, Tristan Anthony Brown
  • Patent number: 9262336
    Abstract: Embodiments of the invention describe an apparatus, system and method for utilizing a page miss handler having wear leveling logic/modules for memory devices. Embodiments of the invention may track an amount of writes directed towards cells of a memory device, and determine whether a linear address specified by a system write transaction is included in a translation-lookaside buffer (TLB). In response to determining the linear address is not included in the TLB, resulting in a TLB miss, embodiments of the invention may perform a page table walk to obtain a corresponding physical address, and convert the physical address to a device address for accessing the memory device based on the tracked amount of writes. Thus, embodiments of the invention are more efficient compared to prior art solutions, as instead of all memory operations, only those that miss in the TLB incur additional wear leveling address translation overhead.
    Type: Grant
    Filed: December 23, 2011
    Date of Patent: February 16, 2016
    Assignee: Intel Corporation
    Inventors: Nevin Hyuseinova, Qiong Cai
  • Patent number: 9235506
    Abstract: A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache, and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: January 12, 2016
    Assignee: Nimble Storage, Inc.
    Inventors: Umesh Maheshwari, Varun Mehta
  • Patent number: 9223722
    Abstract: Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: December 29, 2015
    Assignee: VMware, Inc.
    Inventors: Carl A. Waldspurger, Rajesh Venkatasubramanian, Alexander Thomas Garthwaite, Yury Baskakov, Puneet Zaroo
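A compact sketch of the sampled miss-rate-curve construction in patent 9223722: trace only a random subset of pages, keep an LRU stack for them, and histogram reuse distances. The 5% sampling rate and the Python data structures are illustrative.

```python
import random
from collections import OrderedDict

class SampledMrc:
    """Track an LRU stack for a sampled subset of pages and build a
    reuse-distance histogram, from which a miss-rate curve can be read off."""

    def __init__(self, sample_rate=0.05):
        self.sample_rate = sample_rate
        self.tracked = set()              # pages selected for tracing
        self.stack = OrderedDict()        # LRU data structure for tracked pages
        self.hist = {}                    # reuse distance -> count
        self.cold = 0                     # first-touch misses

    def access(self, page):
        if page not in self.tracked:
            if random.random() < self.sample_rate:
                self.tracked.add(page)
            else:
                return                    # untracked pages are ignored
        if page in self.stack:
            # Stack distance = number of distinct tracked pages touched since
            # this page was last accessed (0 = most recently used).
            depth = len(self.stack) - 1 - list(self.stack).index(page)
            self.hist[depth] = self.hist.get(depth, 0) + 1
            self.stack.move_to_end(page)
        else:
            self.cold += 1
            self.stack[page] = True

    def miss_ratio(self, cache_pages: int) -> float:
        """Estimated miss ratio for a cache holding `cache_pages` sampled pages."""
        total = sum(self.hist.values()) + self.cold
        hits = sum(c for d, c in self.hist.items() if d < cache_pages)
        return 1.0 - hits / total if total else 0.0
```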
  • Patent number: 9223693
    Abstract: A flash memory system having unequal number of memory die and method for operation are disclosed. The memory system includes a plurality of flash memory die distributed unevenly among different control lines, such that there are an unequal number of die between control lines. A total physical capacity of the plurality of flash memory die is greater than a total logical capacity such that the memory system is over provisioned with physical capacity. A logical address splitter directs data received from a host system and associated with host logical block addresses such that each control line only receives data associated with predetermined host logical block address ranges and directs the data such that a ratio of physical capacity to logical capacity is equal among each of the control lines, regardless of the different number of die and associated different physical capacity per control line.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: December 29, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Alan Welsh Sinclair, Nicholas James Thomas, Barry Wright
  • Patent number: 9195582
    Abstract: A data storage method applied to a flash memory storage device is provided. The method includes: identifying a first tag pointing to a storage unit storing a first data, the first data being a newly updated data; locating the storage unit storing the first data according to the first tag; storing a second data to another storage unit; pointing the first tag to the another storage unit storing the second data. A relationship between the first tag and the storage unit storing the first data is first built. The second data is stored to another storage unit different from the storage unit pointed to by the first tag, and a relationship between the first tag and the another storage unit storing the second data is rebuilt. Therefore, data is efficiently stored by using a plurality of storage units to prolong a lifespan of the flash memory.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: November 24, 2015
    Assignee: MStar Semiconductor, Inc.
    Inventors: Rui-qing Wang, Da-teng Li, Wei Wu
  • Patent number: 9164950
    Abstract: A method, system, and computer program product for displaying components assigned to events produced by resources, includes: registering a new resource by generating a label to which is associated a system specific device identifier used by the new resource within a computing environment; storing in a mapping table the generated label identifying the registered new resource together with an associated system specific device identifier; and updating the mapping table by associating to the label any other system specific device identifier used by the new resource within the computing environment; receiving events produced by resources when being executed within the computing environment, each event being associated with a list of labels for the resources relevant for the generation of the event; and maintaining a tag cloud including different tags, the different tags including labels for the resources associated with the received events to be displayed as components.
    Type: Grant
    Filed: May 10, 2013
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Oliver Augenstein, Joerg Erdmenger, Hans-Ulrich Oldengott, Thomas Prause, Martin Raitza
  • Patent number: 9152572
    Abstract: Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: October 6, 2015
    Assignee: Intel Corporation
    Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann, Karthikeyan Vaithianathan
  • Patent number: 9081623
    Abstract: Disclosed are various embodiments for a resource allocation application. Usage data for application program interfaces is aggregated over time. Limits for an allocation of resources for each of the application program interfaces are calculated as a function of the usage data. Limits are recalculated as new application program interfaces are added.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: July 14, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Joseph Magerramov, Ganesh Subramaniam
  • Patent number: 9032168
    Abstract: Memory management methods and systems for mobile devices are provided. A memory usage of a memory is monitored by a built-in memory management component of an OS of the device and a user-oriented memory management component. It is determined whether the memory usage of the memory is greater than a first threshold or a second threshold, wherein the second threshold is greater than the first threshold. When the memory usage of the memory is greater than the first threshold, a multi-level memory management is performed by the user-oriented memory management component. When the memory usage of the memory is greater than the second threshold, a primitive memory management is performed by the built-in memory management component.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: May 12, 2015
    Assignee: HTC Corporation
    Inventors: Wen-Yen Chang, Chih-Tsung Wu, Kao-Pin Chen, Ting-Lun Chen
  • Patent number: 9021185
    Abstract: A memory controller and methods for managing efficient writing to a flash memory are presented. Fresh data is written to at least one block of the flash memory. During a space reclamation process, other data, previously written to the flash memory, is relocated to at least one other block of the flash memory, such that the fresh data and the relocated data always are maintained in separate blocks of the flash memory. During writing, an update frequency level is selected for the fresh data from among multiple update frequency levels and the fresh data is written to a block that is associated with the selected update frequency level. During space reclamation, a plurality of blocks, space of which is to be reclaimed, is selected and the valid pages thereof are copied to at least one destination block.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: April 28, 2015
    Inventor: Amir Ban
  • Patent number: 9015419
    Abstract: Embodiments relate to a transactional read footprint after a cache line eviction. An aspect includes executing one or more read instructions in an active transaction. A cross invalidate (XI) request for a target cache line is received, and it is determined if the target cache line is part of a congruence class in a local cache. It is further determined whether an extension flag associated with the congruence class is set. The extension flag is used to indicate that cache lines of the congruence class associated with the active transaction have been replaced based only on being least recently used and that the target cache line is not in the cache. Execution of the active transaction continues based on determining that the extension flag is not set. Execution of the active transaction is aborted based on determining that the extension flag is set.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: April 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Jonathan T. Hsieh, Christian Jacobi
  • Patent number: 9009403
    Abstract: A control unit of a least recently used (LRU) mechanism for a ternary content addressable memory (TCAM) stores counts indicating a time sequence with resources in entries of the TCAM. The control unit receives an access request with a mask defining related resources. The TCAM is searched to find partial matches based on the mask. The control unit increases the counts for entries corresponding to partial matches, preserving an order of the counts. If the control unit also finds an exact match, its count is updated to be greater than the other increased counts. After each access request, the control unit searches the TCAM to find the entry having the lowest count, and writes the resource of that entry to an LRU register. In this manner, the system software can instantly identify the LRU entry by reading the value in the LRU register.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventor: Noriaki Asamoto
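A software model of the count-based TCAM LRU bookkeeping in patents 9009403 and 9009401; entries are plain integers, the mask comparison stands in for the TCAM search, and the count scheme is a simplification of the ordering described in the abstracts.

```python
class TcamLru:
    """Each entry keeps a count encoding recency; partially matching entries
    are bumped while preserving their relative order, an exact match is
    bumped above all of them, and the entry with the lowest count is
    mirrored into an LRU register after every access."""

    def __init__(self, resources):
        # resource -> count; a lower count means less recently used
        self.counts = {r: i for i, r in enumerate(resources)}
        self.lru_register = min(self.counts, key=self.counts.get)

    def access(self, resource, mask):
        # Partial matches: entries agreeing with the request on the masked bits.
        partial = [r for r in self.counts if (r & mask) == (resource & mask)]
        for r in partial:                       # bump all partial matches equally,
            self.counts[r] += 1                 # preserving their relative order
        if resource in self.counts:             # an exact match goes above the rest
            self.counts[resource] = max(self.counts.values()) + 1
        self.lru_register = min(self.counts, key=self.counts.get)

tcam = TcamLru([0b0001, 0b0010, 0b0011, 0b0100])
tcam.access(0b0011, mask=0b0010)
print(bin(tcam.lru_register))   # least recently used resource after the access
```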
  • Patent number: 9009401
    Abstract: A control unit of a least recently used (LRU) mechanism for a ternary content addressable memory (TCAM) stores counts indicating a time sequence with resources in entries of the TCAM. The control unit receives an access request with a mask defining related resources. The TCAM is searched to find partial matches based on the mask. The control unit increases the counts for entries corresponding to partial matches, preserving an order of the counts. If the control unit also finds an exact match, its count is updated to be greater than the other increased counts. After each access request, the control unit searches the TCAM to find the entry having the lowest count, and writes the resource of that entry to an LRU register. In this manner, the system software can instantly identify the LRU entry by reading the value in the LRU register.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventor: Noriaki Asamoto
  • Patent number: 8996759
    Abstract: A multi-chip memory device and a method of controlling the same are provided. The multi-chip memory device includes a first memory chip; and a second memory chip sharing an input/output signal line with the first memory chip, wherein each of the first memory chip and the second memory chip determines whether to execute a command unaccompanied by an address, by referring to a history of commands.
    Type: Grant
    Filed: November 14, 2011
    Date of Patent: March 31, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hoiju Chung