Addressing of Memory Level in Which Access to Desired Data or Data Block Requires Associative Addressing Means, e.g., Cache, etc. (EPO) Patents (Class 711/E12.017)

  • Patent number: 11977765
    Abstract: The functions of a mainframe environment are expanded by leveraging the functions of an open environment.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: May 7, 2024
    Assignee: Hitachi, Ltd.
    Inventors: Naoyuki Masuda, Ryusuke Ito, Kenichi Oyamada, Yuri Hiraiwa, Goro Kazama, Yunde Sun, Ryosuke Kodaira
  • Patent number: 11941250
    Abstract: A process includes determining a memory bandwidth of a processor subsystem corresponding to an execution of an application by the processor subsystem. The process includes determining an average memory latency corresponding to the execution of the application and determining an average occupancy of a miss status handling register queue associated with the execution of the application based on the memory bandwidth and the average memory latency. The process includes, based on the average occupancy of the miss status handling register queue and a capacity of the miss status handling register queue, generating data that represents a recommendation of an optimization to be applied to the application.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: March 26, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Sanyam Mehta
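The occupancy calculation this abstract describes is, in effect, Little's law (average occupancy = arrival rate × latency). A minimal sketch of that relationship, with all function names, the line size, and the saturation threshold being my own illustrative choices rather than anything disclosed in the patent:

```python
def mshr_occupancy(bandwidth_bytes_per_s, avg_latency_s, line_bytes, mshr_capacity):
    """Estimate average MSHR (miss status handling register) queue occupancy
    from measured memory bandwidth and average memory latency."""
    miss_rate = bandwidth_bytes_per_s / line_bytes   # outstanding misses issued per second
    occupancy = miss_rate * avg_latency_s            # Little's law
    saturated = occupancy >= 0.9 * mshr_capacity     # 0.9 is an arbitrary illustrative threshold
    return occupancy, saturated
```

When `saturated` is true, a tool in this spirit might recommend optimizations that reduce concurrent misses, such as cache blocking or prefetch tuning.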
  • Patent number: 11914525
    Abstract: In an example, an apparatus comprises a plurality of compute engines; and logic, at least partially including hardware logic, to detect a cache line conflict in a last-level cache (LLC) communicatively coupled to the plurality of compute engines; and implement a context-based eviction policy to determine a cache way in the cache to evict in order to resolve the cache line conflict. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: February 27, 2024
    Assignee: INTEL CORPORATION
    Inventors: Neta Zmora, Eran Ben-Avi
  • Patent number: 11860787
    Abstract: Methods, devices, and systems for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction includes comparing a first set of bits stored in the history table with a second set of bits stored in the history table. In some implementations, the prediction includes comparing at least a portion of an address of the request for the information with a set of bits in the history table.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ciji Isen, Paul J. Moyer
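A common way to realize a history-table predictor like the one described above is a table of saturating counters indexed by address bits. This toy version is my own construction, not AMD's disclosed design; a "miss" prediction is what would let the main-memory fetch begin before the cache lookup completes:

```python
class MissPredictor:
    """History table of 2-bit saturating counters, indexed by address bits."""
    def __init__(self, entries=256):
        self.table = [0] * entries

    def _index(self, addr):
        return (addr >> 6) % len(self.table)       # drop cache-line offset bits

    def predict_miss(self, addr):
        return self.table[self._index(addr)] >= 2  # high counter: start DRAM fetch early

    def update(self, addr, was_miss):
        i = self._index(addr)
        self.table[i] = min(3, self.table[i] + 1) if was_miss else max(0, self.table[i] - 1)
```

The 2-bit counters give hysteresis: a single hit after a run of misses does not immediately flip the prediction back.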
  • Patent number: 11842762
    Abstract: Disclosed is a memory system that has a memory controller and may have a memory component. The memory component may be a dynamic random access memory (DRAM). The memory controller is connectable to the memory component. The memory component has at least one data row and at least one tag row different from and associated with the at least one data row. The memory system is to implement a cache having multiple ways to hold a data group. The memory controller is operable in each of a plurality of operating modes. The operating modes include a first operating mode and a second operating mode. The first operating mode and the second operating mode have differing addressing and timing for accessing the data group. The memory controller has cache read logic that sends a cache read command, cache results logic that receives a response from the memory component, and cache fetch logic.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: December 12, 2023
    Assignee: Rambus Inc.
    Inventors: Frederick Ware, Thomas Vogelsang, Michael Raymond Miller, Collins Williams
  • Patent number: 11842059
    Abstract: A method includes accessing a first memory component of a memory sub-system via a first interface, accessing a second memory component of the memory sub-system via a second interface, and transferring data between the first memory component and the second memory component via the first interface. The method further includes initially writing data in the first memory component via a first address window and accessing data in the second memory component via a second address window in response to caching the data in the first memory component to the second memory component, wherein caching the data in the first memory component to the second memory component includes changing an address for the data from the first address window to the second address window.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: December 12, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
  • Patent number: 11775434
    Abstract: The disclosed computer-implemented method may include receiving, from a host via a cache-coherent interconnect, a request to access an address of a coherent memory space of the host. When the request is to write data, the computer-implemented method may include (1) performing, after receiving the data, a post-processing operation on the data to generate post-processed data and (2) writing the post-processed data to a physical address of a device-attached physical memory mapped to the address. When the request is to read data, the computer-implemented method may include (1) reading the data from the physical address of a device-attached physical memory mapped to the address, (2) performing, before responding to the request, a pre-processing operation on the data to generate pre-processed data, and (3) returning the pre-processed data to the external host via the cache-coherent interconnect. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: October 3, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Narsing Krishna Vijayrao, Christian Markus Petersen
  • Patent number: 11748266
    Abstract: Embodiments are for special tracking pool enhancement for core L1 address invalidates. An invalidate request is designated to fill an entry in a queue in a local cache of a processor core, the queue including a first allocation associated with processing any type of invalidate request and a second allocation associated with processing an invalidate request not requiring a response in order for a controller to be made available, the entry being in the second allocation. Responsive to designating the invalidate request to fill the entry in the queue in the local cache, a state of the controller that made the invalidate request is changed to available based at least in part on the entry being in the second allocation.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: September 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Gregory William Alexander, Richard Joseph Branciforte, Aaron Tsai, Markus Kaltenbach
  • Patent number: 11709619
    Abstract: A data processing method includes receiving a message related to performance of a storage device, the message including an indicator value regarding the performance in a first time period, and a timestamp associated with the first time period. A status record of the storage device, including the number of received indicator values in a second time period including the first time period, is determined based on the timestamp, wherein the number of the received indicator values is less than a threshold number and can be updated based on the indicator value. The performance in the second time period can be determined based on the indicator value and the received indicator values in response to determining that the updated number of the received indicator values reaches the threshold number. Thus, the performance of the storage device can be quickly and accurately determined, and the consumption of computing resources is reduced.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: July 25, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Shijie Zhao, Colin Yuanfei Cai, Qirong Wang, Bei Gao
  • Patent number: 11681623
    Abstract: A pre-read data caching method and apparatus, a device, and a storage medium, the method including: receiving a read command for a target file; if determining that there is target pre-read data of the target file in a pre-read queue, then moving the pre-read data from the pre-read queue into a secondary cache queue; reading the target pre-read data in the secondary cache queue; and, after reading is complete, moving the target pre-read data from the secondary cache queue into a reset queue, the invalidation priority level of the pre-read queue being the lowest.
    Type: Grant
    Filed: January 23, 2021
    Date of Patent: June 20, 2023
    Assignee: GUANGDONG INSPUR SMART COMPUTING TECHNOLOGY CO., LTD.
    Inventors: Shuaiyang Wang, Wenpeng Li, Duan Zhang
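The queue movement described above can be modeled with three queues. The sketch below is a simplification with hypothetical names; the abstract specifies only the movement order and that the pre-read queue has the lowest invalidation priority, not any data layout:

```python
from collections import deque

pre_read, secondary, reset_q = deque(), deque(), deque()

def read(target):
    """Promote target pre-read data to the secondary cache queue, serve the
    read, then retire the data to the reset queue once reading completes."""
    if target in pre_read:
        pre_read.remove(target)
        secondary.append(target)   # out of the lowest-invalidation-priority pre-read queue
    if target in secondary:
        secondary.remove(target)
        reset_q.append(target)     # read complete: eligible for reset/reuse
        return target
    return None                    # not pre-read and not cached: caller falls back to storage
```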
  • Patent number: 11657002
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instruct the communication device to access the borrowed memory.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
  • Patent number: 11615033
    Abstract: Systems, apparatuses, and methods for performing efficient translation lookaside buffer (TLB) invalidation operations for splintered pages are described. When a TLB receives an invalidation request for a specified translation context, and the invalidation request maps to an entry with a relatively large page size, the TLB does not know if there are multiple translation entries stored in the TLB for smaller splintered pages of the relatively large page. The TLB tracks whether or not splintered pages for each translation context have been installed. If a TLB invalidate (TLBI) request is received, and splintered pages have not been installed, no searches are needed for splintered pages. To refresh the sticky bits, whenever a full TLB search is performed, the TLB rescans for splintered pages for other translation contexts. If no splintered pages are found, the sticky bit can be cleared and the number of full TLBI searches is reduced.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: March 28, 2023
    Assignee: Apple Inc.
    Inventors: John D. Pape, Brian R. Mestan, Peter G. Soderquist
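The "sticky bit" bookkeeping in this abstract can be sketched as a per-context flag: set when a splintered page is installed, consulted on a TLBI to decide whether a full search is needed, and refreshed opportunistically during any full scan. Names and structure here are my own simplification:

```python
class SplinterTracker:
    """Track, per translation context, whether splintered (smaller) pages
    may have been installed under a large-page mapping."""
    def __init__(self):
        self.sticky = {}                     # context id -> splintered pages possibly present

    def on_install(self, ctx, splintered):
        if splintered:
            self.sticky[ctx] = True          # a TLBI for ctx now needs a full search

    def needs_full_search(self, ctx):
        return self.sticky.get(ctx, False)   # False: skip the expensive splinter scan

    def on_full_scan(self, ctx, found_splintered):
        self.sticky[ctx] = found_splintered  # refresh during any full scan of the TLB
```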
  • Patent number: 11604733
    Abstract: An apparatus has processing circuitry to perform data processing, at least one architectural register to store at least one partition identifier selection value which is programmable by software processed by the processing circuitry; a set-associative cache comprising a plurality of sets each comprising a plurality of ways; and partition identifier selecting circuitry to select, based on the at least one partition identifier selection value stored in the at least one architectural register, a selected partition identifier to be specified by a cache access request for accessing the set-associative cache.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: March 14, 2023
    Assignee: Arm Limited
    Inventor: Steven Douglas Krueger
  • Patent number: 11550620
    Abstract: Apparatuses and methods are disclosed for performing data processing operations in main processing circuitry and delegating certain tasks to auxiliary processing circuitry. User-specified instructions executed by the main processing circuitry comprise a task dispatch specification specifying an indication of the auxiliary processing circuitry and multiple data words defining a delegated task comprising at least one virtual address indicator. In response to the task dispatch specification, the main processing circuitry performs virtual-to-physical address translation with respect to the at least one virtual address indicator to derive at least one physical address indicator, and issues a task dispatch memory write transaction to the auxiliary processing circuitry that comprises the indication of the auxiliary processing circuitry and the multiple data words, wherein the at least one virtual address indicator in the multiple data words is substituted by the at least one physical address indicator.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: January 10, 2023
    Assignee: Arm Limited
    Inventors: Håkan Lars-Göran Persson, Frederic Claude Marie Piry, Matthew Lucien Evans, Albin Pierrick Tonnerre
  • Patent number: 11533580
    Abstract: A method includes determining a device location of an electronic device, and obtaining a content item to be output for display by the electronic device based on the device location, wherein the content item comprises coarse content location information and fine content location information. The method also includes determining an anchor in a physical environment based on the content item, determining a content position and a content orientation for the content item relative to the anchor based on the fine content location information, and displaying a representation of the content item using the electronic device using the content position and the content orientation.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: December 20, 2022
    Assignee: APPLE INC.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
  • Patent number: 11513965
    Abstract: A high bandwidth memory system. In some embodiments, the system includes: a memory stack having a plurality of memory dies and eight 128-bit channels; and a logic die, the memory dies being stacked on, and connected to, the logic die; wherein the logic die may be configured to operate a first channel of the 128-bit channels in: a first mode, in which a first 64 bits operate in pseudo-channel mode, and a second 64 bits operate as two 32-bit fine-grain channels, or a second mode, in which the first 64 bits operate as two 32-bit fine-grain channels, and the second 64 bits operate as two 32-bit fine-grain channels.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 29, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Mu-Tien Chang, Dimin Niu, Hongzhong Zheng
  • Patent number: 11509740
    Abstract: There is disclosed herein computer-implemented methods of cache key generation, including receiving from a user a request for content, wherein the request comprises one or more of opening a browser software tab or window, launching a software application, or activating a hyperlink; wherein the request causes an electronic communications network connection to be established and/or an HTTP request to be made; and wherein the surrogate passes the request to an origin.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: November 22, 2022
    Assignee: Cloudinary Ltd.
    Inventors: Colin Bendell, Itai Benari
  • Patent number: 11477188
    Abstract: Methods and systems for injection of tokens or certificates for managed application communication are described. A computing device may intercept a request from an application executable on the computing device, the request being to access a remote resource. The computing device may modify future network communications between the computing device and the remote resource to include a token or a client certificate, where the token or the client certificate is an identifier that enables the future network communications to be routed to the remote resource for a given computing session without use of data from the remote resource or data indicative of a connection of the remote resource in which to receive the future network communications. The computing device may send the future network communications to the remote resource to enable action to be taken on behalf of the computing device in response to receipt of the future network communications.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: October 18, 2022
    Assignee: Citrix Systems, Inc.
    Inventor: Thierry Duchastel
  • Patent number: 11461252
    Abstract: Disclosed herein is a redundancy resource comparator for a bus architecture of a memory device for comparing an address signal being received from an address signal bus and a redundancy address being stored in a latch of the memory device. Disclosed is also a corresponding bus architecture and comparison method.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: October 4, 2022
    Assignee: SK hynix Inc.
    Inventor: Simone Mazzucchelli
  • Patent number: 11436107
    Abstract: Examples described herein relate to a method, a system, and a non-transitory machine-readable medium for restoring a computing resource. The method may include determining whether the computing resource is required to be restored on a recovery node using a backup of the computing resource stored in a backup storage node. A resource restore operation may be triggered on the recovery node in response to determining that the computing resource is required to be restored. The resource restore operation includes copying a subset of the objects from the backup to the recovery node to form, from the subset of objects, a partial filesystem instance of the computing resource on the recovery node that is operable as a restored computing resource on the recovery node.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: September 6, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Syama Sundararao Nadiminti
  • Patent number: 11431765
    Abstract: A session migration-based scheduling method, where the method includes: receiving, by a first media server, a service request from a terminal, where the service request is used to obtain target content required by the terminal; querying a target Internet Protocol (IP) address in a database based on the service request, where the target IP address is an IP address of a server in which the target content is located; determining the target IP address based on a candidate IP address fed back by the database; if the target IP address is different from an IP address of the first media server, determining, by the first media server, that the first media server is missing the target content; and sending the service request to a second media server, where an IP address of the second media server is the target IP address.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: August 30, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jinwen Yang, Like Jiang, Xiaojun Gu, Zhihang Lu, Yang Cao
  • Patent number: 11397690
    Abstract: A virtualized cache implementation solution, where a memory of a virtual machine stores cache metadata. The cache metadata includes a one-to-one mapping relationship between virtual addresses and first physical addresses. After an operation request that is delivered by the virtual machine and that includes a first virtual address is obtained, when the cache metadata includes a target first physical address corresponding to the first virtual address, a target second physical address corresponding to the target first physical address is searched for based on preconfigured correspondences between the first physical addresses and second physical addresses, and data is read or written from or to a location indicated by the target second physical address.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: July 26, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lina Lu, Xian Chen
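The two-step translation this abstract walks through — virtual address to first physical address via cache metadata, then first to second physical address via a preconfigured correspondence table — can be sketched as follows; names are hypothetical:

```python
def cache_lookup(virt_addr, cache_metadata, addr_correspondence):
    """Return the second physical address to read/write, or None on a
    metadata miss (data not present in the virtualized cache)."""
    first_pa = cache_metadata.get(virt_addr)  # one-to-one VA -> first PA mapping
    if first_pa is None:
        return None
    return addr_correspondence[first_pa]      # preconfigured first PA -> second PA
```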
  • Patent number: 11392298
    Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan C. Tai
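The abstract compares miss ratios across two time windows to decide whether to adjust the cache insertion ratio. One plausible control loop is sketched below; the adjustment direction and step size are my assumptions, not the disclosed policy:

```python
def adjust_insertion_ratio(ratio, prev_miss_ratio, cur_miss_ratio, step=0.05):
    """Back off insertion when the miss ratio worsened across windows;
    insert more when it improved; leave it alone when unchanged."""
    if cur_miss_ratio > prev_miss_ratio:
        ratio = max(0.0, ratio - step)  # inserting too aggressively may be thrashing the cache
    elif cur_miss_ratio < prev_miss_ratio:
        ratio = min(1.0, ratio + step)
    return ratio
```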
  • Patent number: 11379367
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in a HPB operation.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: July 5, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
  • Patent number: 11340822
    Abstract: A method includes obtaining data from a plurality of data sources associated with an n-gram indexing data structure and storing at least a portion of the obtained data in a first storage, the stored data comprising one or more n-gram strings. The method also includes estimating frequencies of occurrence of respective ones of the n-gram strings in the stored data, the estimated frequency of occurrence of a given n-gram string being based at least in part on a size of a given n-gram index in the n-gram indexing data structure corresponding to the given n-gram string. The method further includes, in response to detecting one or more designated conditions, selecting a portion of the stored data based at least in part on the estimated frequencies and moving the selected portion of the stored data from the first storage to a second storage having different read and write access times.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 24, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Sashka T. Davis, Kevin J. Arunski
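The key idea above is using the size of an n-gram's index as a proxy for its frequency of occurrence, then selecting stored data to move to a second tier based on those estimates. The sketch below evicts the lowest-estimated-frequency strings first; which end of the distribution gets moved is my assumption, since the abstract only says the selection is frequency-based:

```python
def select_for_second_tier(index_sizes, bytes_needed):
    """index_sizes: {ngram: size of its index in the indexing structure},
    used as a frequency estimate. Returns the n-grams to move to the slower
    tier, smallest index (rarest) first, until roughly bytes_needed is freed."""
    selected, freed = [], 0
    for ngram, size in sorted(index_sizes.items(), key=lambda kv: kv[1]):
        if freed >= bytes_needed:
            break
        selected.append(ngram)
        freed += size
    return selected
```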
  • Patent number: 10949117
    Abstract: The present disclosure includes apparatuses and methods related to direct data transfer in memory. An example apparatus can include a first number of memory devices coupled to a host via a respective first number of ports and a second number of memory devices coupled to the first number of memory devices via a respective second number of ports, wherein the first number of memory devices and the second number of memory devices are configured to transfer data based on a first portion of a command including instructions to read the data from the first number of memory devices and send the data directly to the second number of memory devices and a second portion of the command that includes instructions to write the data to the second number of memory devices.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: March 16, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Frank F. Ross
  • Patent number: 10713167
    Abstract: An information processing apparatus includes a first memory and a processor coupled to the first memory. The processor is configured to acquire a first address in the first memory, at which an instruction included in a target program is stored. The processor is configured to simulate access to a second memory, such as a cache memory, corresponding to an access request for access to the first address on a basis of configuration information of the second memory. The processor is configured to generate first information, such as cache profile information, indicating whether the access to the second memory regarding the instruction is a hit or miss. The processor may be configured to acquire a number of cache misses for each of a plurality of pieces of arrangement information, and select a piece of arrangement information where the number of cache misses is smallest.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: July 14, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Masaki Arai
  • Patent number: 10671387
    Abstract: Embodiments relate to vector memory access instructions for big-endian (BE) element ordered computer code and little-endian (LE) element ordered computer code. An aspect includes determining a mode of a computer system comprising one of a BE mode and an LE mode. Another aspect includes determining a code type comprising one of BE code and LE code. Another aspect includes determining a data type of data in a main memory that is associated with the object code comprising one of BE data and LE data. Another aspect includes based on the mode, code type, and data type, inserting a memory access instruction into the object code to perform a memory access associated with the vector in the object code, such that the memory access instruction performs element ordering of elements of the vector, and data ordering within the elements of the vector, in accordance with the determined mode, code type, and data type.
    Type: Grant
    Filed: June 10, 2014
    Date of Patent: June 2, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Brett Olsson
  • Patent number: 10635614
    Abstract: An embedded system includes a program to be executed. The program is divided into overlays. The embedded system includes a processor configured to request one of the overlays. The requested overlay includes a segment of the program to be executed by the processor. The embedded system also includes a first level memory device coupled to the processor. The first level memory device stores less than all of the overlays of the program. The embedded system further includes a memory management unit coupled to the processor and the first level memory device. The memory management unit is configured to determine, based on a logical address provided by the processor, whether the requested overlay is stored in the first level memory device. The memory management unit is additionally configured to convert the logical address to a physical address when the requested overlay is stored in the first level memory device. The physical address points to the requested overlay.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: April 28, 2020
    Assignee: Macronix International Co., Ltd.
    Inventor: Yi Chun Liu
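A toy logical-to-physical overlay translation in the spirit of this abstract; the fixed overlay size, the trivial slot allocator, and the omission of overlay replacement are my simplifications:

```python
class OverlayMMU:
    """Map logical addresses to physical slots in a small first-level memory
    that holds fewer overlays than the whole program needs."""
    OVERLAY_BYTES = 0x1000                    # assumed fixed overlay size

    def __init__(self, slots):
        self.slots = slots                    # physical overlay slots available
        self.resident = {}                    # overlay id -> physical slot

    def translate(self, logical_addr):
        overlay = logical_addr // self.OVERLAY_BYTES
        offset = logical_addr % self.OVERLAY_BYTES
        if overlay not in self.resident:
            return None                       # fault: overlay must be loaded first
        return self.resident[overlay] * self.OVERLAY_BYTES + offset

    def load(self, overlay):
        """Bring an overlay into a free slot (no eviction in this sketch)."""
        if overlay not in self.resident and len(self.resident) < self.slots:
            self.resident[overlay] = len(self.resident)
```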
  • Patent number: 10387157
    Abstract: An instruction set conversion system and method is provided, which can convert guest instructions to host instructions for processor core execution. Through configuration, instruction sets supported by the processor core are easily expanded. A method for real-time conversion between host instruction addresses and guest instruction addresses is also provided, such that the processor core can directly read out the host instructions from a higher level cache, reducing the depth of a pipeline.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: August 20, 2019
    Assignee: SHANGHAI XINHAO MICROELECTRONICS CO. LTD.
    Inventor: Kenneth Chenghao Lin
  • Patent number: 10089196
    Abstract: A method for processing return entities associated with multiple requests in a single ISR (Interrupt Service Routine) thread, performed by one core of a processing unit of a host device, is introduced. Entities are removed from a queue, which are associated with commands issued to a storage device, and the removed entities are processed until a condition is satisfied.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: October 2, 2018
    Assignee: SHANNON SYSTEMS LTD.
    Inventors: Zhen Zhou, Xueshi Yang
  • Patent number: 9990207
    Abstract: A semiconductor device with improved operating speed is provided. A semiconductor device including a memory circuit has a function of storing a start-up routine in the memory circuit and executing the start-up routine, a function of operating the memory circuit as a buffer memory device after executing the start-up routine, and a function of loading the start-up routine into the memory circuit from outside before the semiconductor device is powered off.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: June 5, 2018
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Yoshiyuki Kurokawa
  • Patent number: 9792989
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory, a command managing unit, a command issuing unit, a data control unit and a command monitoring unit. The command issuing unit issues a command received by the command managing unit to the nonvolatile memory. The data control unit controls a reading or writing of data to the nonvolatile memory. The command monitoring unit monitors the command managing unit and outputs a receipt signal to the data control unit when the command managing unit receives the command. The data control unit interrupts the reading or writing when receiving the receipt signal, issues the command from the command issuing unit to the nonvolatile memory, and resumes the reading or writing after issuing the command.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: October 17, 2017
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Tatsuhiro Suzumura
  • Patent number: 9600360
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: March 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Patent number: 9600361
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: March 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
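    The per-way ECC-bypass decision described in the two abstracts above can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names and function signatures are assumed for the example.

    ```python
    # Sketch of a per-way ECC-bypass decision (names are illustrative, not
    # from the patent): each cache data way carries a bypass indicator, and
    # when it is set the fetched block is returned before ECC checking runs.

    def fetch_block(ways, way_index, ecc_check, deliver):
        """Fetch a data block, ordering the ECC process relative to
        delivery according to the way's bypass indicator."""
        way = ways[way_index]
        block = way["data"]
        if way["bypass_ecc"]:
            deliver(block)       # return the block to the requestor first...
            ecc_check(block)     # ...then check/correct afterwards
        else:
            ecc_check(block)     # check/correct in-line...
            deliver(block)       # ...then return the block
        return block
    ```

    The bypass indicator only changes the ordering: the ECC process always runs, but a way marked for bypass no longer keeps it on the critical path of the fetch.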
  • Patent number: 9384091
    Abstract: A memory 10 stores a data block comprising a plurality of data values DV. An error code, such as an error correction code ECC, is associated with the memory and has a value dependent upon the plurality of data values which form the data block stored within the memory. If a partial write is performed on a data block, then the ECC information becomes invalid and is marked with an ECC_invalid flag. The intent is to avoid the need to read all data values to compute the ECC, thus saving time and energy. The memory may be a cache line 28 within a level 1 cache memory 10. Memory scrub control circuitry 38 performs periodic memory scrub operations which trigger flushing of partially written cache lines back to main memory.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: July 5, 2016
    Assignee: ARM Limited
    Inventor: Luc Orion
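    The ECC_invalid scheme above can be sketched in a few lines. The class layout, the parity stand-in for the error code, and the scrub interface are assumptions made for illustration.

    ```python
    # Illustrative sketch of the ECC_invalid flag: a partial write skips
    # reading every data value to recompute the ECC and simply marks the
    # line's ECC stale; a periodic scrub later flushes such lines.

    def ecc_of(values):
        # stand-in "error code": parity over all data values in the block
        code = 0
        for v in values:
            code ^= v
        return code

    class CacheLine:
        def __init__(self, values):
            self.values = list(values)
            self.ecc = ecc_of(self.values)
            self.ecc_valid = True

        def partial_write(self, index, value):
            self.values[index] = value
            self.ecc_valid = False   # avoid reading all values to recompute ECC

    def scrub(lines, main_memory):
        # periodic memory scrub: flush partially written lines, revalidate ECC
        for addr, line in lines.items():
            if not line.ecc_valid:
                main_memory[addr] = list(line.values)
                line.ecc = ecc_of(line.values)
                line.ecc_valid = True
    ```

    The saving comes from the partial-write path, which touches only the written value; the full read of the block is deferred to the scrub.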
  • Patent number: 9323669
    Abstract: A computer-executable method, system, and computer program product for managing a data storage system, wherein the data storage system includes a cache and a data storage array, the computer-executable method, system, and computer program product comprising receiving initialization information, analyzing the initialization information to determine which portions of the data storage array are related to the initialization information, and managing the data storage system based on the determined portions of the data storage array.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: April 26, 2016
    Assignee: EMC Corporation
    Inventors: Guido A. DiPietro, Michael J. Cooney, Gerald E. Cotter, Philip Derbeko
  • Patent number: 9043570
    Abstract: Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned to each group ID that is assigned to use the system cache. A quota does not reserve space in the system cache; rather, it may be used in any way within the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of the system cache, such that the system cache can be oversubscribed. A sticky state can be used to prioritize data retention within the system cache when oversubscription is in use.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: May 26, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
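    The quota check described above reduces to a small amount of bookkeeping. This sketch assumes a line-granular cache and invented interfaces; it shows only the allocation gate, not the sticky-state retention logic.

    ```python
    # Minimal sketch of quota-based allocation: a quota caps how many lines
    # a group ID may hold anywhere in the cache, without reserving space,
    # and the quotas of all groups may sum to more than the cache size.

    class QuotaCache:
        def __init__(self, capacity, quotas):
            self.capacity = capacity           # total lines in the system cache
            self.quotas = quotas               # group_id -> max lines allowed
            self.usage = {g: 0 for g in quotas}
            self.used = 0

        def allocate(self, group_id):
            if self.usage[group_id] >= self.quotas[group_id]:
                return False                   # group has reached its quota
            if self.used >= self.capacity:
                return False                   # the cache itself is full
            self.usage[group_id] += 1
            self.used += 1
            return True
    ```

    Note that nothing here reserves space: a group under quota can still fail to allocate when other groups have filled the (oversubscribed) cache.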
  • Patent number: 8990500
    Abstract: In an embodiment, an indicator is set to indicate that all of a plurality of most significant bytes of characters in a character array are zero. A first index and an input character are received. The input character comprises a first most significant byte and a first least significant byte. The first most significant byte is stored at a first storage location and the first least significant byte is stored at a second storage location, wherein the first storage location and the second storage location have non-contiguous addresses. If the first most significant byte does not equal zero, the indicator is set to indicate that at least one of a plurality of most significant bytes of the characters in the character array is non-zero. The character array comprises the first most significant byte and the first least significant byte.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Jeremy A. Arnold, Scott A. Moore, Gregory A. Olson, Eric J. Stec
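    The split-byte character array above can be sketched directly. The class and method names are illustrative; the two `bytearray`s stand in for the non-contiguous storage locations.

    ```python
    # Hedged sketch of a split character array: most- and least-significant
    # bytes live in separate (non-contiguous) arrays, and an indicator
    # tracks whether every MSB stored so far is zero.

    class SplitCharArray:
        def __init__(self, size):
            self.msb = bytearray(size)   # most significant bytes
            self.lsb = bytearray(size)   # least significant bytes, stored apart
            self.all_msb_zero = True     # the indicator from the abstract

        def put(self, index, char):
            code = ord(char)
            hi, lo = code >> 8, code & 0xFF
            self.msb[index] = hi
            self.lsb[index] = lo
            if hi != 0:
                self.all_msb_zero = False   # at least one MSB is non-zero

        def get(self, index):
            return chr((self.msb[index] << 8) | self.lsb[index])
    ```

    While `all_msb_zero` holds, a consumer can process the LSB array alone as a compact single-byte string, which is the payoff of keeping the indicator.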
  • Patent number: 8966204
    Abstract: Migrating data may include determining to copy a first data block in a first memory location to a second memory location and determining to copy a second data block in the first memory location to the second memory location based on a migration policy.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: February 24, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jichuan Chang, Justin James Meza, Parthasarathy Ranganathan
  • Patent number: 8954672
    Abstract: The present disclosure relates to a method and system for mapping cache lines to a row-based cache. In particular, a method includes, in response to a plurality of memory access requests each including an address associated with a cache line of a main memory, mapping sequentially addressed cache lines of the main memory to a row of the row-based cache. A disclosed system includes row index computation logic operative to map sequentially addressed cache lines of a main memory to a row of a row-based cache in response to a plurality of memory access requests each including an address associated with a cache line of the main memory.
    Type: Grant
    Filed: March 12, 2012
    Date of Patent: February 10, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Mark D. Hill
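    The row index computation described above can be sketched as integer arithmetic. The line size, lines-per-row, and row count below are assumed example parameters, not values from the patent.

    ```python
    # Illustrative row-index computation: with LINES_PER_ROW sequentially
    # addressed cache lines packed into each row, consecutive main-memory
    # line addresses map to the same row of the row-based cache.

    LINE_SIZE = 64        # bytes per cache line (typical value, assumed)
    LINES_PER_ROW = 32    # cache lines stored per row (assumed)
    NUM_ROWS = 1024       # rows in the row-based cache (assumed)

    def row_index(address):
        line = address // LINE_SIZE             # which main-memory cache line
        return (line // LINES_PER_ROW) % NUM_ROWS
    ```

    Grouping sequential lines into one row means a streaming access pattern keeps hitting the same open row instead of spreading across many rows.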
  • Patent number: 8954677
    Abstract: In order to optimize efficiency of deserialization, a serialization cache is maintained at an object server. The serialization cache is maintained in conjunction with an object cache and stores serialized forms of objects cached within the object cache. When an inbound request is received, a serialized object received in the request is compared to the serialization cache. If the serialized byte stream is present in the serialization cache, then the equivalent object is retrieved from the object cache, thereby avoiding deserialization of the received serialized object. If the serialized byte stream is not present in the serialization cache, then the serialized byte stream is deserialized, the deserialized object is cached in the object cache, and the serialized object is cached in the serialization cache.
    Type: Grant
    Filed: June 15, 2014
    Date of Patent: February 10, 2015
    Assignee: Open Invention Network, LLC
    Inventors: Deren George Ebdon, Robert W. Peterson
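    The deserialization-avoidance flow above can be sketched with two dictionaries. The class, the JSON stand-in deserializer, and the counter are assumptions for illustration.

    ```python
    # A minimal sketch of a serialization cache: the serialized byte stream
    # keys the serialization cache, and a hit yields the already-
    # deserialized object from the object cache, skipping deserialization.

    import json

    class ObjectServer:
        def __init__(self):
            self.object_cache = {}         # object_id -> deserialized object
            self.serialization_cache = {}  # serialized form -> object_id
            self.deserializations = 0      # counts actual deserialization work

        def receive(self, serialized):
            obj_id = self.serialization_cache.get(serialized)
            if obj_id is not None:
                return self.object_cache[obj_id]   # hit: no deserialization
            self.deserializations += 1
            obj = json.loads(serialized)           # stand-in deserializer
            obj_id = len(self.object_cache)
            self.object_cache[obj_id] = obj
            self.serialization_cache[serialized] = obj_id
            return obj
    ```

    Both caches are filled on a miss, so a byte-identical request seen again is answered without touching the deserializer at all.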
  • Patent number: 8949530
    Abstract: Systems and methods are disclosed for improving the performance of cache memory in a computer system by dynamically selecting an index for caching main memory while an application is running. A disclosed example of a memory system includes a cache including a data array, a primary tag array, and at least one secondary tag array. A currently selected index is used to index data bits to the data array and tag bits to the primary tag array. The performance of at least one candidate index is evaluated by indexing tag bits to the secondary tag array, without caching any data using the candidate index while the candidate index is under evaluation. If the candidate index has a better hit rate than the currently selected index, the memory system switches to using the candidate index to cache data.
    Type: Grant
    Filed: August 2, 2011
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Mvv A. Krishna, Shaul Yifrach
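    The dual tag-array evaluation above can be sketched with a direct-mapped model. The function names and the direct-mapped simplification are assumptions; the point is that the candidate index is scored on tags alone, without caching data under it.

    ```python
    # Hedged sketch of evaluating a candidate cache index: tags are tracked
    # under both index functions, but data would be cached only under the
    # current index; the candidate wins if it hits more over the window.

    def evaluate_indices(addresses, num_sets, current_index, candidate_index):
        primary_tags = {}     # set -> tag, under the currently selected index
        secondary_tags = {}   # set -> tag, under the candidate index (tags only)
        hits = {"current": 0, "candidate": 0}
        for addr in addresses:
            for name, index_fn, tags in (
                ("current", current_index, primary_tags),
                ("candidate", candidate_index, secondary_tags),
            ):
                s = index_fn(addr) % num_sets
                if tags.get(s) == addr:
                    hits[name] += 1
                tags[s] = addr        # direct-mapped: replace the resident tag
        return hits
    ```

    A conflict-heavy trace under the current index can score cleanly under a candidate index, which is exactly the signal that would trigger a switch.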
  • Patent number: 8943271
    Abstract: Systems and methods that aggregate memory capacity of multiple computers into a single unified cache, via a layering arrangement. Such a layering arrangement is scalable to a plurality of machines and includes a data manager component, an object manager component and a distributed object manager component, which can be implemented in a modular fashion. Moreover, the layering arrangement can provide for an explicit cache tier (e.g., cache-aside architecture) that applications are aware of, wherein decisions about which objects to put in or remove from the cache are made explicitly by such applications (as opposed to an implicit cache, whose existence applications do not know about).
    Type: Grant
    Filed: January 30, 2009
    Date of Patent: January 27, 2015
    Assignee: Microsoft Corporation
    Inventors: Muralidhar Krishnaprasad, Anil K. Nori, Subramanian Muralidhar
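    The explicit cache-aside tier described above can be sketched as follows. The API names and the hash-based node assignment are assumptions; the point is that the application calls `put` and `remove` itself rather than relying on implicit caching.

    ```python
    # A small cache-aside sketch: the unified cache aggregates memory across
    # several machines (modeled here as dicts), and the application
    # explicitly decides which objects to put into or remove from it.

    class UnifiedCache:
        def __init__(self, num_nodes):
            # each "node" stands in for one machine's contributed memory
            self.nodes = [{} for _ in range(num_nodes)]

        def _node_for(self, key):
            return self.nodes[hash(key) % len(self.nodes)]

        def put(self, key, value):       # explicit: the app decides to cache
            self._node_for(key)[key] = value

        def get(self, key):
            return self._node_for(key).get(key)

        def remove(self, key):           # explicit eviction by the app
            self._node_for(key).pop(key, None)
    ```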
  • Patent number: 8935476
    Abstract: Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Paul H. Muench, Cheng-Chung Song
  • Patent number: 8935477
    Abstract: Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.
    Type: Grant
    Filed: February 26, 2013
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Paul H. Muench, Cheng-Chung Song
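    The extent-gated demotion in the two abstracts above can be sketched compactly. The extent size and function signature are assumed for the example.

    ```python
    # Sketch of extent-gated demotion: a track eligible to leave the first
    # cache is demoted to the second cache only if second-cache caching is
    # enabled for the extent that contains it; otherwise it is just dropped.

    TRACKS_PER_EXTENT = 16   # assumed extent size, in tracks

    def demote(track, first_cache, second_cache, extent_cache_enabled):
        extent = track // TRACKS_PER_EXTENT        # extent containing the track
        first_cache.discard(track)
        if extent_cache_enabled.get(extent, False):
            second_cache.add(track)                # demote into the second cache
            return True
        return False                               # dropped, not demoted
    ```

    Keeping the enable/disable decision at extent granularity lets one flag govern a whole run of tracks instead of per-track metadata.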
  • Patent number: 8930630
    Abstract: The present disclosure relates to a cache memory controller for controlling a set-associative cache memory, in which two or more blocks are arranged in the same set, the cache memory controller including a content modification status monitoring unit for monitoring whether some of the blocks arranged in the same set of the cache memory have been modified in contents, and a cache block replacing unit for replacing a block that has not been modified in contents if some of the blocks arranged in the same set have been modified in contents.
    Type: Grant
    Filed: September 2, 2009
    Date of Patent: January 6, 2015
    Assignee: Sejong University Industry Academy Cooperation Foundation
    Inventor: Gi Ho Park
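    The clean-block-first replacement above reduces to a short victim-selection loop. The dict layout is assumed for illustration.

    ```python
    # Illustrative victim selection for a set-associative cache: prefer
    # evicting a block whose contents have not been modified, since a clean
    # block needs no write-back; fall back to any block if all are dirty.

    def choose_victim(cache_set):
        """cache_set: list of dicts with 'tag' and 'dirty' (modified) flags."""
        for i, block in enumerate(cache_set):
            if not block["dirty"]:
                return i          # clean block: evict without a write-back
        return 0                  # all blocks modified: fall back to the first
    ```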
  • Patent number: 8930626
    Abstract: A method and computer program product for dividing a cache memory system into a plurality of cache memory portions. Data to be written to a specific address within an electromechanical storage system is received. The data is assigned to one of the plurality of cache memory portions, thus defining an assigned cache memory portion. Association information for the data is generated, wherein the association information defines the specific address within the electromechanical storage system. The data and the association information is written to the assigned cache memory portion.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: January 6, 2015
    Assignee: EMC Corporation
    Inventors: Roy E. Clark, Kiran Madnani, David W. DesRoches
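    The portion assignment plus association information above can be sketched as follows. The modulo assignment rule and record layout are assumptions, not the patented scheme.

    ```python
    # Minimal sketch of a partitioned cache: incoming data is assigned to
    # one of several cache memory portions, and association information
    # recording its backing-store address is written alongside the data.

    class PartitionedCache:
        def __init__(self, num_portions):
            self.portions = [{} for _ in range(num_portions)]

        def write(self, address, data):
            index = address % len(self.portions)   # illustrative assignment rule
            # association information: the specific electromechanical-store
            # address this cached data belongs to
            self.portions[index][address] = {"data": data, "association": address}
            return index
    ```

    Storing the association information next to the data lets any portion be scanned independently to find where its contents must eventually land.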
  • Patent number: 8924645
    Abstract: Data storage apparatus and methods are disclosed. A disclosed example data storage apparatus comprises a cache layer and a processor in communication with the cache layer. The processor is to dynamically enable or disable the cache layer via a cache layer enable line based on a data store access type.
    Type: Grant
    Filed: March 7, 2011
    Date of Patent: December 30, 2014
    Assignee: Hewlett-Packard Development Company, L. P.
    Inventors: Jichuan Chang, Parthasarathy Ranganathan, David Andrew Roberts, Mehul A. Shah, John Sontag
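    The access-type-driven enable line above can be sketched as a simple policy function. The access-type names and the bypass policy are invented for the example.

    ```python
    # Hedged sketch of enabling/disabling a cache layer by access type:
    # point accesses go through the cache layer, while (in this assumed
    # policy) streaming scans bypass it to avoid polluting it.

    def cache_layer_enabled(access_type):
        # illustrative policy, not the patented one
        return access_type in ("random_read", "random_write")

    def access(store, cache, address, access_type):
        if cache_layer_enabled(access_type):
            if address in cache:
                return cache[address]           # served from the cache layer
            cache[address] = store[address]     # fill the cache layer on miss
            return cache[address]
        return store[address]                   # layer disabled: go direct
    ```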
  • Patent number: 8909869
    Abstract: An apparatus for controlling a cache memory includes: a data receiving unit to receive a sensor ID and data detected by the sensor; an attribute information acquiring unit to acquire attribute information corresponding to the sensor ID from an attribute information memory, the attribute information memory storing the attribute information of the sensor mapped to the sensor ID; a sensor information memory to store information of a storage period, the sensor information memory including a cache memory storing the attribute information; and a cache memory control unit to acquire the attribute information from the attribute information acquiring unit when the attribute information is not stored in the cache memory, and store the acquired attribute information corresponding to the sensor ID in the cache memory during the storage period.
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: December 9, 2014
    Assignee: Fujitsu Limited
    Inventor: Masahiko Murakami
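    The storage-period behavior above amounts to a time-bounded cache in front of the attribute information memory. The class shape and the explicit `now` parameter are assumptions made so the sketch stays deterministic.

    ```python
    # Sketch of the sensor-attribute cache: attribute information acquired
    # for a sensor ID is kept in the cache memory only for the configured
    # storage period, after which it must be re-acquired.

    class AttributeCache:
        def __init__(self, storage_period, attribute_store):
            self.storage_period = storage_period
            self.attribute_store = attribute_store   # sensor_id -> attributes
            self.cache = {}                          # sensor_id -> (expiry, attrs)
            self.fetches = 0                         # counts re-acquisitions

        def get(self, sensor_id, now):
            entry = self.cache.get(sensor_id)
            if entry is not None and now < entry[0]:
                return entry[1]                      # still within storage period
            self.fetches += 1                        # miss or expired: re-acquire
            attrs = self.attribute_store[sensor_id]
            self.cache[sensor_id] = (now + self.storage_period, attrs)
            return attrs
        ```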