Multiple Caches Patents (Class 711/119)
  • Patent number: 10007523
    Abstract: In a decode stage of a hardware processor pipeline, one particular instruction of a plurality of instructions is decoded. It is determined that the particular instruction requires a memory access. Responsive to that determination, it is predicted whether the memory access will result in a cache miss. The prediction includes accessing one of a plurality of entries in a pattern history table stored as a hardware table in the decode stage. The accessing is based, at least in part, upon at least a most recent entry in a global history buffer. The pattern history table stores a plurality of predictions. The global history buffer stores the actual results of previous memory accesses as cache hits or cache misses.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: June 26, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vijayalakshmi Srinivasan, Brian R. Prasky
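The predictor structures described in the abstract above can be modeled in a few lines. This is a behavioral sketch only: the table sizes, the 2-bit counters, and the class/method names are illustrative assumptions, not details taken from the patent.

```python
class MissPredictor:
    """Toy model: a global history buffer (GHB) of recent hit/miss
    outcomes indexes a pattern history table (PHT) of 2-bit counters."""

    def __init__(self, history_bits=4):
        self.history_bits = history_bits
        self.ghb = 0                          # packed recent outcomes, 1 = miss
        self.pht = [1] * (1 << history_bits)  # 2-bit counters, weakly "hit"

    def predict(self):
        """Predict True if the next memory access is expected to miss."""
        return self.pht[self.ghb] >= 2

    def update(self, missed):
        """Record the actual outcome of the access just performed."""
        ctr = self.pht[self.ghb]
        self.pht[self.ghb] = min(3, ctr + 1) if missed else max(0, ctr - 1)
        mask = (1 << self.history_bits) - 1
        self.ghb = ((self.ghb << 1) | int(missed)) & mask
```

After a run of misses the counter for the current history pattern saturates and the predictor starts forecasting misses for that pattern.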
  • Patent number: 10007620
    Abstract: A processor includes a set associative cache and a cache controller. The cache controller makes an initial association between first and second groups of sampled sets in the cache and first and second cache replacement policies. Follower sets in the cache are initially associated with the more conservative of the two policies. Following cache line insertions in a first epoch, the associations between the groups of sampled sets and cache replacement policies are swapped for the next epoch. If the less conservative policy outperforms the more conservative policy during two consecutive epochs, the follower sets are associated with the less conservative policy for the next epoch. Subsequently, if the more conservative policy outperforms the less conservative policy during any epoch, the follower sets are again associated with the more conservative policy. Performance may be measured based on the number of cache misses associated with each policy.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 26, 2018
    Assignee: Intel Corporation
    Inventors: Seth H. Pugsley, Christopher B. Wilkerson, Roger Gramunt, Jonathan C. Hall, Prabhat Jain
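The epoch logic in the abstract above reduces to a small state machine. A hedged sketch, not the patented hardware: "conservative" and "aggressive" stand in for the two replacement policies, and per-epoch miss counts from the sampled sets drive the follower-set choice.

```python
class DuelController:
    """Set-dueling follower-policy selector, epoch by epoch."""

    def __init__(self):
        self.follower_policy = "conservative"  # start with the safer policy
        self.aggressive_wins = 0               # consecutive epochs won

    def end_epoch(self, conservative_misses, aggressive_misses):
        """Feed in each policy's miss count for the epoch that just ended."""
        if aggressive_misses < conservative_misses:
            self.aggressive_wins += 1
            if self.aggressive_wins >= 2:      # two straight wins: switch
                self.follower_policy = "aggressive"
        else:
            # the conservative policy won (or tied): revert immediately
            self.aggressive_wins = 0
            self.follower_policy = "conservative"
        return self.follower_policy
```

Note the asymmetry, mirroring the abstract: switching to the less conservative policy requires two consecutive winning epochs, while switching back requires only one.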
  • Patent number: 9996358
    Abstract: A system and method for coupling Branch Target Buffer (BTB) content with the content of an instruction cache. The method includes: tagging a plurality of target buffer entries that belong to branches within the same instruction block with the corresponding instruction block address and a branch bitmap to indicate individual branches in the block; coupling an overflow buffer with the BTB to accommodate further target buffer entries, distinct from the plurality of target buffer entries, for instruction blocks that have more branches than the corresponding bundle in the BTB is configured to accommodate; and predicting the instructions or instruction blocks that are likely to be fetched by the core in the future and proactively fetching those instructions from the lower levels of the memory hierarchy by means of a prefetcher.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 12, 2018
    Assignee: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
    Inventors: Babak Falsafi, Ilknur Cansu Kaynak, Boris Robert Grot
  • Patent number: 9996298
    Abstract: A processor core of a data processing system, in response to a first instruction, generates a copy-type request specifying a source real address and transmits it to a lower level cache. In response to a second instruction, the processor core generates a paste-type request specifying a destination real address associated with a memory-mapped device and transmits it to the lower level cache. In response to receipt of the copy-type request, the lower level cache copies a data granule from a storage location specified by the source real address into a non-architected buffer. In response to receipt of the paste-type request, the lower level cache issues a command to write the data granule from the non-architected buffer to the memory-mapped device. In response to receipt from the memory-mapped device of a busy response, the processor core abandons the memory move instruction sequence and performs alternative processing.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: June 12, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams
  • Patent number: 9996467
    Abstract: Some embodiments provide a physical forwarding element that dynamically adjusts the number of flows allowed in a flow table cache. To adjust, the physical forwarding element initially sets the maximum number of flows allowed in the cache. From the flow table cache, the physical forwarding element then iterates through the set maximum number of flows and records the length of time it took to iterate through them. Based on that duration, the physical forwarding element automatically adjusts the size of the flow table cache by increasing or decreasing the number of flows allowed in the cache. Alternatively, the physical forwarding element may keep the cache size the same based on the duration.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: June 12, 2018
    Assignee: NICIRA, INC.
    Inventor: Ethan J. Jackson
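The duration-based sizing loop above can be sketched as follows. The thresholds, the doubling/halving step, and the function name are invented for illustration; the real element would time its actual per-flow revalidation work.

```python
import time

def adjust_flow_limit(flows, current_limit, low_ms=1.0, high_ms=10.0):
    """Time one sweep over the cached flows, then grow, shrink, or keep
    the allowed number of flows based on how long the sweep took."""
    start = time.monotonic()
    for _flow in flows:                    # stand-in for per-flow work
        pass
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if elapsed_ms < low_ms:
        return current_limit * 2           # sweep was cheap: allow more flows
    if elapsed_ms > high_ms:
        return max(1, current_limit // 2)  # sweep too slow: allow fewer
    return current_limit
```

The feedback loop keeps the cache as large as it can be while the sweep still completes within the target duration.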
  • Patent number: 9983766
    Abstract: Embodiments relate to systems and methods providing a flip-through format for viewing notification of messages and related items on devices, for example personal mobile devices such as smart phones. According to an embodiment, an unread item most recently received is shown in full screen on the mobile device. While the user is viewing this item, the device will automatically retrieve and load into a cache memory the next most recently received item. When the user is done viewing the item most recently received, the user can swipe a finger across the touch screen to trigger a page flipping animation and display of the next most recently received item. Embodiments avoid the user having to click back and forth between a list of notifications/links and corresponding notification items.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: May 29, 2018
    Assignee: SAP SE
    Inventor: Jian Xu
  • Patent number: 9959916
    Abstract: A dual rail memory operable at a first voltage and a second voltage, the dual rail memory includes: a memory array operates at the first voltage; a word line driver circuit configured to drive a word line of the memory array to the first voltage; a data path configured to transmit an input data signal or an output data signal; and a control circuit configured to generate control signals to the memory array, the word line driver circuit and the data path; wherein the data path and the control circuit are configured to operate at both the first and second voltages. Associated memory macro and method are also disclosed.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: May 1, 2018
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD.
    Inventors: Jonathan Tsung-Yung Chang, Chiting Cheng, Cheng Hung Lee, Hung-Jen Liao, Michael Clinton
  • Patent number: 9946466
    Abstract: An electronic device may include first and second semiconductor chips. The first semiconductor chip may include a processor and a first memory. The second semiconductor chip may include a second memory. The first memory and second memory may be configured to exchange first data and second data with the processor, respectively. The processor may be configured to exchange target data processed or to be processed with the first and second memories. The processor may be configured to determine the target data as the first data if the number of accesses of the target data is equal to or greater than a first reference value. The processor may be configured to determine the target data as the second data if the number of accesses of the target data is less than the first reference value.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: April 17, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Sang-Kil Lee
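The placement rule in the abstract above is a simple threshold test. A minimal sketch, with the reference value and the memory names as assumptions:

```python
def place_target_data(access_count, reference=8):
    """Route hot data (accessed at least `reference` times) to the first
    memory on the processor's own chip; route colder data to the second
    memory on the other semiconductor chip."""
    return "first_memory" if access_count >= reference else "second_memory"
```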
  • Patent number: 9940069
    Abstract: A method, article of manufacture, apparatus, and system for a paging cache are disclosed. The backup cache may be broken into pages, and a subset of these pages may be memory resident. The pages may be sequentially loaded into memory to improve cache performance.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: April 10, 2018
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Scott C. Auchmoody, Orit Levin-Michael, Scott H. Ogata
  • Patent number: 9934150
    Abstract: A circuit has an address generation circuit to produce a virtual address (VA) and an index signal and a multi-way cache circuit. The cache circuit has a plurality of Random Access Memory (RAM) groups and a hash function circuit to generate a hash output from the VA. Each RAM group includes RAMs respectively corresponding to the ways. The cache circuit selects, using the hash output, a selected RAM group of the RAM groups, and performs, using the index signal as an address, an operation using one or more RAMs of the selected RAM group. Controlling a multi-way cache circuit comprises determining a hash value using a VA, selecting, using the hash value, a RAM group from a plurality of RAM groups, and performing an operation by using one or more RAMs of the selected RAM group. The RAMs of each RAM group respectively correspond to the ways.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: April 3, 2018
    Assignee: Marvell International Ltd.
    Inventors: Viney Gautam, Yicheng Guo, Hunglin Hsu
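The hash-selected lookup path above can be modeled in software. Everything here is illustrative: the index width, the toy hash, and the use of dicts as stand-ins for the per-way RAMs are assumptions, not details from the patent.

```python
def hash_va(va):
    # toy hash over the upper bits of the virtual address
    return (va >> 6) ^ (va >> 12)

def lookup(va, ram_groups, index_bits=6):
    """Select one RAM group via the hash of the VA, then probe each
    way's RAM in that group at the index derived from the VA."""
    index = va & ((1 << index_bits) - 1)               # index signal
    tag = va >> index_bits
    group = ram_groups[hash_va(va) % len(ram_groups)]  # hash selects group
    for way_ram in group:                              # one RAM per way
        entry = way_ram.get(index)
        if entry is not None and entry[0] == tag:
            return entry[1]                            # hit: cached data
    return None                                        # miss
```

The point of the scheme is that only the selected group's RAMs are activated for a given access, rather than every RAM in every group.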
  • Patent number: 9904470
    Abstract: Ownership of a memory unit in a data processing system is tracked by assigning an identifier to each software component in the data processing system that can acquire ownership of the memory unit. An ownership variable is updated with the identifier of the software component that acquires ownership of the memory unit whenever the memory unit is acquired.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: February 27, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Jerry W. Stevens
  • Patent number: 9886734
    Abstract: Various embodiments are generally directed to techniques to prefetch pixel data of one or more pixels adjacent to a pixel for which pixel data is retrieved, where the prefetched pixel data may be stored in noncontiguous storage locations. A device comprises a processor component and a hint generation component executed by the processor component to embed a prefetch hint in an executable read instruction, the executable read instruction to retrieve pixel data of a specified pixel and the prefetch hint to retrieve pixel data of a pixel that is geometrically adjacent to the specified pixel. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 25, 2013
    Date of Patent: February 6, 2018
    Assignee: INTEL CORPORATION
    Inventors: Wei-Yu Chen, Guei-Yuan Lueh, Subramaniam Maiyuran
  • Patent number: 9875186
    Abstract: The present technology relates to managing data caching in processing nodes of a massively parallel processing (MPP) database system. A directory is maintained that includes a list and a storage location of the data pages in the MPP database system. Memory usage is monitored in processing nodes by exchanging memory usage information with each other. Each of the processing nodes manages a list and a corresponding amount of available memory in each of the processing nodes based on the memory usage information. Data pages are read from a memory of the processing nodes in response to receiving a request to fetch the data pages, and a remote memory manager is queried for available memory in each of the processing nodes in response to receiving the request. The data pages are distributed to the memory of the processing nodes having sufficient space available for storage during data processing.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: January 23, 2018
    Assignee: FutureWei Technologies, Inc.
    Inventors: Huaizhi Li, Qingqing Zhou, Guogen Zhang
  • Patent number: 9830092
    Abstract: A storage manager can reduce the overhead of parity based fault tolerance by leveraging the access performance of SSDs for the parities. Since reading a parity value can be considered a small read operation, the reading of parity from an SSD is an effectively "free" operation due to the substantially greater SSD read performance. With reading parity being an effectively free operation, placing parity on SSDs eliminates the parity read operations (in terms of time) from the parity based fault tolerance overhead. A storage manager can selectively move parity from HDDs to SSDs based on one or more criteria, which can relate to the frequency of access to the data corresponding to the parity. The caching criterion can be defined to ensure the reduced overhead gained by reading parity values from an SSD outweighs any costs (e.g., SSD write endurance).
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: November 28, 2017
    Assignee: NetApp, Inc.
    Inventors: Brian D. McKean, Sandeep Kumar R. Ummadi
  • Patent number: 9811461
    Abstract: In an embodiment of the invention, a method comprises: requesting an update or modification on a control data in at least one flash block in a storage memory; requesting a cache memory; replicating, from the storage memory to the cache memory, the control data to be updated or to be modified; moving a clean cache link list to a dirty cache link list so that the dirty cache link list is changed to reflect the update or modification on the control data; and moving the dirty cache link list to a for flush link list and writing an updated control data from the for flush link list to a free flash page in the storage memory.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: November 7, 2017
    Assignee: BiTMICRO Networks, Inc.
    Inventors: Marvin Dela Cruz Fenol, Jik-Jik Oyong Abad, Precious Nezaiah Umali Pestano
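The three-list flow in the abstract above (clean, dirty, for-flush) can be sketched with ordinary Python lists. The list names follow the abstract; the class, its methods, and the dict-based flash-page stand-in are invented for illustration.

```python
class ControlDataCache:
    """Toy model of moving control data clean -> dirty -> for-flush,
    then writing it back to a free flash page."""

    def __init__(self):
        self.clean, self.dirty, self.for_flush = [], [], []
        self.flash_pages = []                  # stand-in for free flash pages

    def load(self, control_data):
        self.clean.append(dict(control_data))  # replicate storage -> cache

    def modify(self, index, **updates):
        entry = self.clean.pop(index)
        entry.update(updates)                  # apply the update in cache
        self.dirty.append(entry)               # clean list -> dirty list

    def flush(self):
        self.for_flush.extend(self.dirty)      # dirty list -> for-flush list
        self.dirty.clear()
        while self.for_flush:                  # write back to flash pages
            self.flash_pages.append(self.for_flush.pop(0))
```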
  • Patent number: 9792448
    Abstract: A processor employs a hardware encryption module in the processor's memory access path to cryptographically isolate secure information. In some embodiments, the encryption module is located at a memory controller (e.g. northbridge) of the processor, and each memory access provided to the memory controller indicates whether the access is a secure memory access, indicating the data associated with the memory access is designated for cryptographic protection, or a non-secure memory access. For secure memory accesses, the encryption module performs encryption (for write accesses) or decryption (for read accesses) of the data associated with the memory access.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: October 17, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Kaplan, Thomas Roy Woller, Ronald Perez
  • Patent number: 9785581
    Abstract: A system, methods, and apparatus for determining memory distribution across multiple non-uniform memory access processing nodes are disclosed. An apparatus includes processing nodes, each including processing units and main memory serving as local memory. A bus connects the processing units of each processing node to different main memory of a different processing node as shared memory. Access to local memory has lower memory access latency than access to shared memory. The processing nodes execute threads distributed across the processing nodes, and detect memory accesses made from each processing node for each thread. The processing nodes determine locality values for the thread that represent the fraction of memory accesses made from the processing nodes, and determine processing time values for the threads for a sampling period. The processing nodes determine weighted locality values for the threads, and determine a memory distribution across the processing nodes based on the weighted locality values.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: October 10, 2017
    Assignee: Red Hat, Inc.
    Inventor: Henri van Riel
  • Patent number: 9772938
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed. A method includes tracking which portions of data stored in a volatile memory buffer are not yet stored in a non-volatile memory medium. A volatile memory buffer may be accessible using memory semantics. A volatile memory buffer may be associated with logic configured to ensure that the data stored in the volatile memory buffer is non-volatile. A method includes maintaining consistency of data between a volatile memory buffer and a non-volatile memory medium based on tracked portions of the data. A method includes copying at least portions of data not yet stored in a non-volatile memory medium to the non-volatile memory medium in response to a trigger.
    Type: Grant
    Filed: September 30, 2013
    Date of Patent: September 26, 2017
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Nisha Talagala, David Flynn
  • Patent number: 9729659
    Abstract: The subject disclosure is directed towards using primary data deduplication concepts for more efficient access of data via content addressable caches. Chunks of data, such as deduplicated data chunks, are maintained in a fast access client-side cache, such as containing chunks based upon access patterns. The chunked content is content addressable via a hash or other unique identifier of that content in the system. When a chunk is needed, the client-side cache (or caches) is checked for the chunk before going to a file server for the chunk. The file server may likewise maintain content addressable (chunk) caches. Also described are cache maintenance, management and organization, including pre-populating caches with chunks, as well as using RAM and/or solid-state storage device caches.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: August 8, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Sengupta, Chenguang Zhu, Chun Ho Cheung, Jin Li, Abhishek Gupta
  • Patent number: 9727475
    Abstract: An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (“MLC”) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: August 8, 2017
    Assignee: Intel Corporation
    Inventors: Rahul Pal, Ishwar Agarwal, Yen-Cheng Liu, Joseph Nuzman, Ashok Jagannathan, Bahaa Fahim, Nithiyanandan Bashyam
  • Patent number: 9721660
    Abstract: A volatile memory data save subsystem may include a coupling to a shared power source such as a chassis or rack battery, or generator. A data save trigger controller sends a data save command toward coupled volatile memory device(s) such as NVDIMMs and PCIe devices under specified conditions: a programmable amount of time passes without AC power, a voltage level drops below normal but is still sufficient to power the volatile memory device during a data save operation, the trigger controller is notified of an operating system shutdown command, or the trigger controller is notified of an explicit data save command without a system shutdown command. NVDIMMs can avoid reliance on dedicated supercapacitors and dedicated batteries. An NVDIMM may perform an asynchronous DRAM reset in response to the data save command. Voltage step downs may be coordinated among power supplies. After data is saved, power cycles and the system reboots.
    Type: Grant
    Filed: December 2, 2014
    Date of Patent: August 1, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bryan Kelly, Sriram Govindan, John J. Siegler, Badriddine Khessib, Mark A. Shaw, J. Michael Andrewartha
  • Patent number: 9710514
    Abstract: Systems and methods are provided for using metadata to efficiently access object data from two or more storage components. Control circuitry receives a request from a host device to perform an operation on a uniquely identified object in a storage system comprising at least a first storage component and a second storage component. Control circuitry retrieves metadata information about the location of the object in storage, wherein the metadata information comprises a first indication of a location of the object in the first storage component and a second indication of a location of the object in the second storage component. The objects in one or both of the first and second storage components are located based on the retrieved metadata information, and the requested operation is performed on the requested object.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: July 18, 2017
    Assignee: Marvell International Ltd.
    Inventors: Abhijeet P. Gole, Ram Kishore Johri
  • Patent number: 9690360
    Abstract: Technologies for discontinuous execution include a compiler computing device and one or more target computing devices. The compiler computing device converts a computer program into a sequence of atomic transactions and coalesces the transactions to generate additional sequences of transactions. The compiler computing device generates an executable program including two or more sequences of transactions having different granularity. A target computing device selects an active sequence of transactions from the executable program based on the granularity of the sequence and a confidence level. The confidence level is indicative of available energy produced by an energy harvesting unit of the target computing device. The target computing device increases the confidence level in response to successfully committing transactions from the active sequence of transactions into non-volatile memory.
    Type: Grant
    Filed: December 26, 2015
    Date of Patent: June 27, 2017
    Assignee: Intel Corporation
    Inventor: Sara S. Baghsorkhi
  • Patent number: 9667389
    Abstract: A device and method for selectively using an internal memory and an external memory when processing Hybrid Automatic Repeat reQuest (HARQ) data are provided. The device includes a combiner configured to receive a first HARQ burst; an internal memory positioned within the device; and a memory selector configured to compare a size of the first HARQ burst with a predetermined threshold, to select one of the internal memory and an external memory positioned outside the device according to a comparison result, and to store the first HARQ burst in a selected memory. At least one among a size of the internal memory and the threshold is determined based on a characteristic of a first service type that has been predetermined.
    Type: Grant
    Filed: October 20, 2014
    Date of Patent: May 30, 2017
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Hae Chul Lee, Chae Hag Yi, Hyeong Seok Jeong, Jun Ho Huh
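The size-versus-threshold selection above reduces to a single comparison. A hedged sketch: the threshold value, the direction of the comparison (small bursts kept internal), and the list-based memories are all assumptions for illustration.

```python
def store_harq_burst(burst, internal, external, threshold=4096):
    """Store the HARQ burst in the internal memory if it fits under the
    threshold; otherwise fall back to the external memory."""
    target = internal if len(burst) <= threshold else external
    target.append(burst)
    return "internal" if target is internal else "external"
```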
  • Patent number: 9645901
    Abstract: An embodiment of the invention provides a method comprising: performing an application write; storing the application write in a local cache; performing a system call to request an underlying storage system to persist any application writes that are not yet persisted; and in response to the system call, copying the application write in the cache to a shared permanent storage. In another embodiment of the invention, an apparatus comprises: an application configured to perform an application write; a cache software configured to store the application write in a local cache; wherein the application is configured to perform a system call to request an underlying storage system to persist any application writes that are not yet persisted; and in response to the system call, the cache software is configured to copy the application write in the cache to a shared permanent storage.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: May 9, 2017
    Assignee: PrimaryIO, Inc.
    Inventor: Murali Nagaraj
  • Patent number: 9646185
    Abstract: A system for managing a population of RFID tags where the system may include: an interrogator configured to transmit a select command to the population of RFID tags, and at least one modified tag in the population of RFID tags. The select command may include information specifying a memory location. The modified tag may include a memory configured with a memory address corresponding to the memory location specified by the select command, and a controller configured to perform at least one action upon the at least one modified tag receiving the select command. The at least one action may be based on the memory location specified by the select command.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: May 9, 2017
    Assignee: NXP B.V.
    Inventor: Roland Brandl
  • Patent number: 9612960
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: April 4, 2017
    Assignee: Florida State University Research Foundation, Inc.
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
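The lookup order in the abstract above — probe the small filter cache first, touch the L1 only on a filter miss — can be sketched in a few lines. Plain dicts stand in for the hardware arrays; the stats counters are added here just to make the behavior visible.

```python
def load(addr, filter_cache, l1_cache, stats):
    """Read `addr`, preferring the data filter cache over the L1."""
    if addr in filter_cache:            # filter-cache hit: L1 access avoided
        stats["filter_hits"] += 1
        return filter_cache[addr]
    stats["l1_accesses"] += 1
    value = l1_cache[addr]              # fall back to the larger L1
    filter_cache[addr] = value          # install for future accesses
    return value
```

Every filter-cache hit is an L1 access that never happens, which is where the energy and pipeline-timing benefits described above come from.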
  • Patent number: 9584620
    Abstract: Embodiments include methods, systems and computer program products for caching in storage clients. In some embodiments, a storage client for accessing a storage service from a computer program may be provided. A cache may be integrated within the storage client for reducing the number of accesses to the storage service. An application may use the cache to reduce accesses to the storage service, wherein the application is implemented by a computer program. In response to the storage service being unresponsive or responding too slowly, the application may use the cache to allow the application to continue without communicating with the storage service.
    Type: Grant
    Filed: December 31, 2015
    Date of Patent: February 28, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Arun K. Iyengar
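The fallback behavior above can be sketched as a try/except around the service call. The service interface (`service_read`) and the choice of which exceptions signal an unresponsive service are assumptions for illustration.

```python
def read_with_fallback(key, service_read, cache):
    """Try the storage service; on timeout or connection failure, serve
    the integrated cache so the application can continue."""
    try:
        value = service_read(key)       # normal path: ask the service
        cache[key] = value              # keep the cache warm on success
        return value
    except (TimeoutError, ConnectionError):
        return cache.get(key)           # service unresponsive: use cache
```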
  • Patent number: 9576663
    Abstract: Multi-port memory circuitry includes single-port memory circuitry, and arbitration logic circuitry that accepts multiple memory queries for the single-port memory circuitry and prevents the multiple memory queries from addressing conflicting portions of the single-port memory circuitry within a single clock cycle. The arbitration logic circuitry may include conflict-resolution logic circuitry that determines whether multiple memory queries address conflicting portions of the single-port memory circuitry. The single-port memory circuitry may be divided into a plurality of sub-arrays, and the conflict-resolution logic circuitry determines whether the multiple memory queries address overlapping groups of sub-arrays. The single-port memory circuitry may be a content-addressable memory or a random-access memory. The multi-port memory circuitry may be part of a shared-memory, multi-processor apparatus.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: February 21, 2017
    Assignee: Marvell International Ltd.
    Inventors: Hillel Gazit, Sohail Syed, Gevorg Torjyan
  • Patent number: 9563560
    Abstract: A coherency controller, such as one used within a system-on-chip, is capable of issuing different types of snoops to coherent caches. The coherency controller chooses the type of snoop based on the type of request that caused the snoops, the state of the system, or both. By so doing, coherent caches provide data when they have sufficient throughput, and are not required to provide data when they do not have sufficient throughput.
    Type: Grant
    Filed: July 10, 2013
    Date of Patent: February 7, 2017
    Assignee: Qualcomm Technologies, Inc.
    Inventors: Laurent Moll, Jean-Jacques Lecler
  • Patent number: 9542317
    Abstract: A system for data processing with management of a cache consistency in a network of processors including cache memories, the network including plural nodes for access to a main memory interconnected with one another, a set of directories being distributed between nodes of the network, each directory including a table of correspondence between cache lines and information fields on the cache lines. The system includes a first sub-network for interconnection of the nodes with one another, implementing a first message transmission protocol providing read/write access to the directories during any passage in the corresponding nodes of a message passing through the first sub-network, and a second sub-network for interconnection of the nodes with one another, implementing a second message transmission protocol, the second protocol excluding any read/write access to the directories during any passage in the corresponding nodes of a message passing through the second sub-network.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: January 10, 2017
    Assignees: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES, BULL SAS
    Inventors: Christian Bernard, Eric Guthmuller, Huy Nam Nguyen
  • Patent number: 9515946
    Abstract: Incoming frame data is stored in a plurality of dual linked lists of buffers in a pipelined memory. The dual linked lists of buffers are maintained by a link manager. The link manager maintains, for each dual linked list of buffers, a first head pointer, a second head pointer, a first tail pointer, a second tail pointer, a head pointer active bit, and a tail pointer active bit. The first head and tail pointers are used to maintain the first linked list of the dual linked list. The second head and tail pointers are used to maintain the second linked list of the dual linked list. Due to the pipelined nature of the memory, the dual linked list system can be popped to supply dequeued values at a sustained rate of more than one value per the read access latency time of the pipelined memory.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: December 6, 2016
    Assignee: Netronome Systems, Inc.
    Inventor: Joseph M. Lamb
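The dual-linked-list FIFO above can be modeled in software. A simplified sketch: one logical queue is split across two lists, and "active" bits alternate enqueues and dequeues between them, which is what lets the hardware overlap consecutive pops with the pipelined memory's read latency. Plain Python lists stand in for the pointer-linked buffers, and the class name is invented.

```python
class DualList:
    """One logical FIFO striped across two internal lists."""

    def __init__(self):
        self.lists = ([], [])
        self.tail_active = 0    # which list receives the next push
        self.head_active = 0    # which list supplies the next pop

    def push(self, value):
        self.lists[self.tail_active].append(value)
        self.tail_active ^= 1   # alternate lists on every enqueue

    def pop(self):
        value = self.lists[self.head_active].pop(0)
        self.head_active ^= 1   # alternate lists on every dequeue
        return value
```

Because successive pops always land on different lists, a second dequeue can be issued before the first read has completed, sustaining more than one value per read-access latency as the abstract describes.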
  • Patent number: 9501410
    Abstract: Multiple nodes of a cluster have associated non-shared, local caches, used to cache shared storage content. Each local cache is accessible only to the node with which it is associated, whereas the cluster-level shared storage is accessible by any of the nodes. Attempts to access the shared storage by the nodes of the cluster are monitored. Information is tracked concerning the current statuses of the local caches of the nodes of the cluster. Current tracked local cache status information is maintained, and stored such that it is accessible by the multiple nodes of the cluster. The current tracked local cache status information is used in conjunction with the caching functionality to determine whether specific nodes of the cluster are to access their local caches or the shared storage to obtain data corresponding to specific regions of the shared storage.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 22, 2016
    Assignee: Veritas Technologies LLC
    Inventors: Santosh Kalekar, Niranjan Pendharkar, Shailesh Marathe
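The tracked-status lookup described above can be sketched as follows. This is an illustrative model only; the class, method names, and invalidation rule are assumptions, not taken from the patent. A shared table records which regions each node's local cache holds valid data for, and a node consults it to decide whether to serve a read from its local cache or from shared storage.

```python
class ClusterCacheTracker:
    """Illustrative sketch of cluster-wide local-cache status tracking:
    the table is stored so all nodes can consult it before reading."""

    def __init__(self):
        self.valid = {}  # (node, region) -> local copy valid?

    def on_local_cache_fill(self, node, region):
        self.valid[(node, region)] = True

    def on_shared_write(self, writer, region):
        # Assumed rule: a write to shared storage invalidates other
        # nodes' locally cached copies of that region.
        for (node, r) in list(self.valid):
            if r == region and node != writer:
                self.valid[(node, r)] = False

    def read_source(self, node, region):
        return "local-cache" if self.valid.get((node, region)) else "shared-storage"
```

For example, after node 1 writes a region that node 2 had cached, node 2's next read is directed to shared storage while node 1 can still use its local cache.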
  • Patent number: 9501248
    Abstract: An information processing apparatus includes a first controller, a second controller, a non-volatile storage medium, and a volatile storage medium. The non-volatile storage medium is able to store data under control by the first controller, and unable to store data under control by the second controller. The volatile storage medium is able to store data under control by the second controller such that the data are readable therefrom under control by the first controller. The second controller includes a first storage unit that stores history data of operation performed under control by the second controller in the volatile storage medium. The first controller includes a reading unit and a second storage unit. The reading unit reads the history data stored in the volatile storage medium by the first storage unit. The second storage unit stores the history data read by the reading unit in the non-volatile storage medium.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: November 22, 2016
    Assignee: FUJI XEROX CO., LTD
    Inventor: Kentaro Ikeda
  • Patent number: 9495395
    Abstract: Data can be categorized into facts, information, hypotheses, and directives. Activities generate certain categories of data from other categories of data through the application of knowledge, which can itself be categorized into classifications, assessments, resolutions, and enactments. Activities can be driven by a Classification-Assessment-Resolution-Enactment (CARE) control engine. The CARE control engine and these categorizations can be used to enhance a multitude of systems, for example a diagnostic system, through historical record keeping, machine learning, and automation. Such a diagnostic system can forecast computing system failures based on the application of knowledge to system vital signs such as thread or stack-segment intensity and memory heap usage. These vital signs are facts that can be classified to produce information such as memory leaks, convoy effects, or other problems.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: November 15, 2016
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Eric S. Chan, Rafiul Ahad, Adel Ghoneimy, Adriano Covello Santos
  • Patent number: 9495292
    Abstract: A computer-executable method, system, and computer program product for managing a hierarchical data storage system, wherein the data storage system includes a first level of one or more hosts, a second level of one or more storage appliances, and a data storage array. The method comprises receiving an I/O request from a first host of the one or more hosts, wherein the I/O request relates to a portion of data on the data storage array; analyzing the I/O request to determine a status of the portion of data on the data storage system; based on the determination, providing an update to a second host of the one or more hosts based on the I/O request, wherein the portion of data is cached on the second host; and processing the I/O request by sending it to the data storage array.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: November 15, 2016
    Assignee: EMC IP Holding Company, LLC
    Inventors: Randall H. Shain, Roy E. Clark, Alexandr Veprinsky, Arieh Don, Philip Derbeko, Yaron Dar
  • Patent number: 9483276
    Abstract: Embodiments relate to management of shared transactional resources. A system includes a transactional facility configured to support transactions that effectively delay committing stores to memory or results to an architectural state until transaction completion. The system includes a processor configured to perform an allocation or arbitration of processing resources to instructions of a transaction within a thread. The processor detects that the transaction has exceeded a manageable capacity of a resource or a potential collision of a transactional instruction storage access has occurred, resulting in a transaction abort. A transaction abort reason and a current configuration are examined to determine whether the transaction abort was based on an initiating program exceeding a restricted limit on the manageable capacity of the resource or an allocation. A processor state is updated to increase a likelihood of success upon retrying the transaction.
    Type: Grant
    Filed: April 23, 2013
    Date of Patent: November 1, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi Y. Busaba, Brian W. Thompto
  • Patent number: 9471511
    Abstract: The present disclosure relates to systems and methods for software-based management of the insertion of protected data blocks into the memory cache mechanism of a computerized device. In particular, the disclosure relates to preventing protected data blocks from being altered or evicted from the CPU cache, coupled with buffered software execution. The technique is based upon identifying at least one conflicting data block having a memory mapping indication to a designated memory cache line and preventing the conflicting data block from being cached. Functional characteristics of a vendor's software product, such as gaming or video, may be partially encrypted to allow protected and functional operability while avoiding hacking and malicious usage by non-licensed users.
    Type: Grant
    Filed: November 24, 2013
    Date of Patent: October 18, 2016
    Assignee: Truly Protect OY
    Inventors: Michael Kiperberg, Amit Resh, Nezer Zaidenberg
  • Patent number: 9465594
    Abstract: A distributed code including a plurality of programs is created based on a sequential code that includes at least one call of a first function associated with a future, where at least a first of the plurality of programs is to execute the first function associated with the future, and at least a second of the plurality of programs is to execute a second function in a present section of the sequential code. A normalization function is included in each of the plurality of programs to normalize virtual addresses accessed by the first and second functions.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: October 11, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Pramod G. Joisha
  • Patent number: 9454329
    Abstract: In one embodiment, a system on a chip (SoC) includes a plurality of processor cores and a memory controller to control communication between the SoC and a memory coupled to the memory controller. The memory controller may be configured to send mirrored command and address signals to a first type of memory device and to send non-mirrored control and address signals to a second type of memory device. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: September 27, 2016
    Assignee: Intel Corporation
    Inventors: Christopher E. Cox, Rebecca Z. Loop, Christopher P. Mozak
  • Patent number: 9448741
    Abstract: Piggy-back snoops are used for non-coherent memory transactions in distributed processing systems. Coherent and non-coherent memory transactions are received from a plurality of processing cores within a distributed processing system. Non-coherent snoop information for the non-coherent memory transactions is combined with coherent snoop information for the coherent memory transactions to form expanded snoop messages. The expanded snoop messages are then output to a snoop bus interconnect during snoop cycles for the distributed processing system. As such, when the processing cores monitor the snoop bus interconnect, the processing cores receive the non-coherent snoop information along with coherent snoop information within the same snoop cycle. While this piggy-backing of non-coherent snoop information with coherent snoop information uses an expanded snoop bus interconnect, usage of the coherent snoop bandwidth is significantly reduced thereby improving overall performance of the distributed processing system.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: September 20, 2016
    Assignee: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Sanjay R. Deshpande, John E. Larson, Fernando A. Morales, Thang Q. Nguyen, Mark A. Banse
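The piggy-backing described above can be illustrated with a toy message format. The field names and the pairing rule below are assumptions for illustration, not taken from the patent: a pending non-coherent transaction's snoop information is attached to the next coherent snoop so both reach the cores in the same snoop cycle.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ExpandedSnoop:
    """Illustrative expanded snoop message: non-coherent snoop info
    rides along with a coherent snoop on a widened snoop interconnect."""
    coherent_addr: int
    coherent_op: str                        # e.g. "invalidate", "read-shared"
    noncoherent_addr: Optional[int] = None  # empty when nothing is pending
    noncoherent_op: Optional[str] = None

def combine(coherent: Tuple[int, str],
            pending_noncoherent: List[Tuple[int, str]]) -> ExpandedSnoop:
    """Piggy-back the next pending non-coherent transaction, if any,
    onto a coherent snoop, consuming one entry from the pending queue."""
    addr, op = coherent
    if pending_noncoherent:
        nc_addr, nc_op = pending_noncoherent.pop(0)
        return ExpandedSnoop(addr, op, nc_addr, nc_op)
    return ExpandedSnoop(addr, op)
```

The non-coherent fields consume extra snoop-bus width, which mirrors the abstract's trade-off: a wider interconnect in exchange for not spending separate coherent-snoop cycles on non-coherent traffic.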
  • Patent number: 9448940
    Abstract: A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: September 20, 2016
    Assignee: The Regents of the University of California
    Inventors: John Shalf, David Donofrio, Leonid Oliker
  • Patent number: 9424194
    Abstract: A computer cache memory organization called Probabilistic Set Associative Cache (PAC) has the hardware complexity and latency of a direct-mapped cache but functions as a set-associative cache for a fraction of the time, thus yielding better than direct-mapped cache hit rates. The organization is considered a (1+P)-way set-associative cache, where the chosen parameter, called Override Probability P, determines the average associativity; for example, with P=0.1 it effectively operates as a 1.1-way set-associative cache.
    Type: Grant
    Filed: May 1, 2012
    Date of Patent: August 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, John Dodson, Moinuddin K. Qureshi, Balaram Sinharoy
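The (1+P)-way behavior can be illustrated with a toy model. This sketch is illustrative only: the patented organization is a hardware design whose probe and fill policy may differ, and the fill rule below (with probability P, override the direct mapping and fill a secondary way) is an assumption.

```python
import random

class ProbabilisticCache:
    """Illustrative (1+P)-way cache model: each set has a primary and a
    secondary way, but the secondary way is filled only with probability
    P, so the average associativity is 1 + P."""

    def __init__(self, num_sets, p, seed=0):
        self.sets = [[None, None] for _ in range(num_sets)]
        self.num_sets = num_sets
        self.p = p
        self.rng = random.Random(seed)  # seeded for reproducibility

    def access(self, addr):
        s = self.sets[addr % self.num_sets]
        tag = addr // self.num_sets
        if tag in s:
            return True   # hit in either way
        # Miss: with probability P, fill the secondary way instead of
        # evicting the direct-mapped (primary) way.
        if self.rng.random() < self.p:
            s[1] = tag
        else:
            s[0] = tag
        return False
```

With P=0.1, roughly one fill in ten preserves the primary way's resident line, which is where the better-than-direct-mapped hit rate comes from in this model.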
  • Patent number: 9418091
    Abstract: A computer system includes at least one processor and at least one memory operably coupled to the at least one processor. The memory includes a memory pool and a database partitioned into multiple fragments. Each of the fragments is allocated a block of memory from the memory pool and the fragments store compressed data in a columnar table format. A database operation is applied in a compressed format to the compressed data in at least one of the fragments.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: August 16, 2016
    Assignee: SAP SE
    Inventors: Wen-Syan Li, Bin Dong, Zheng Long Wei, Yingyu Chen, Yongyuan Shen
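Applying a database operation in compressed format, as the abstract describes, can be illustrated with dictionary encoding, a common columnar compression scheme. This sketch is illustrative and not specific to the patented system: the predicate is translated into the code domain once, then evaluated against the compressed codes without decompressing the column.

```python
class DictColumn:
    """Illustrative dictionary-compressed column: values are stored as
    integer codes into a sorted dictionary, and an equality predicate is
    evaluated directly on the codes."""

    def __init__(self, values):
        self.dictionary = sorted(set(values))
        code = {v: i for i, v in enumerate(self.dictionary)}
        self.codes = [code[v] for v in values]  # compressed representation

    def filter_eq(self, value):
        """Return row positions matching `value`, scanning only codes."""
        if value not in self.dictionary:
            return []
        target = self.dictionary.index(value)   # translate predicate once
        return [i for i, c in enumerate(self.codes) if c == target]
```

Scanning small integer codes instead of full values is what makes operating on the compressed fragment cheaper than decompress-then-filter.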
  • Patent number: 9411733
    Abstract: A method and directory system that recognizes and represents the subset of sharing patterns present in an application is provided. As used herein, the term sharing pattern refers to a group of processors accessing a single memory location in an application. The sharing pattern is decoupled from each cache line and held in a separate directory table. The sharing pattern of a cache block is the bit vector representing the processors that share the block. Multiple cache lines that have the same sharing pattern point to a common entry in the directory table. In addition, when the table capacity is exceeded, patterns that are similar to each other are dynamically collated into a single entry.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: August 9, 2016
    Assignee: University of Rochester
    Inventors: Hongzhou Zhao, Arrvindh Shriraman, Sandhya Dwarkadas
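The decoupled directory table described above can be sketched as follows (class and method names are illustrative): each cache block stores only an index into a shared table of sharer bit vectors, so any number of blocks with the same sharing pattern consume a single table entry. The dynamic collation of similar patterns on table overflow is omitted from this sketch.

```python
class PatternDirectory:
    """Illustrative sharing-pattern directory: sharer bit vectors live in
    a separate table, and blocks with identical patterns share an entry."""

    def __init__(self):
        self.patterns = []     # table of sharer bit vectors
        self.index_of = {}     # bit vector -> table index
        self.block_entry = {}  # block address -> table index

    def record_sharer(self, block, cpu):
        old = self.patterns[self.block_entry[block]] if block in self.block_entry else 0
        new = old | (1 << cpu)            # add this CPU to the pattern
        if new not in self.index_of:      # allocate entry for a new pattern
            self.index_of[new] = len(self.patterns)
            self.patterns.append(new)
        self.block_entry[block] = self.index_of[new]

    def sharers(self, block):
        return self.patterns[self.block_entry[block]]
```

Two blocks shared by the same set of processors end up pointing at the same table entry, which is the space saving the abstract relies on: applications typically exhibit far fewer distinct sharing patterns than cache blocks.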
  • Patent number: 9411693
    Abstract: Technologies are generally described that relate to processing cache coherence information and processing a request for a data block. In some examples, methods for processing cache coherence information are described that may include storing in a directory a tag identifier effective to identify a data block. The methods may further include storing a state identifier in association with the tag identifier. The state identifier may be effective to identify a coherence state of the data block. The methods may further include storing sharer information in association with the tag identifier. The sharer information may be effective to indicate one or more caches storing the data block. The methods may include storing, by the controller in the directory, replication information in association with the sharer information. The replication information may be effective to indicate a type of replication of the sharer information in the directory, and effective to indicate replicated segments.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: August 9, 2016
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
  • Patent number: 9389864
    Abstract: A processor unit (200) includes: cache memory (210); an instruction execution unit (220); a processing unit (230) that detects fact that a thread enters an exclusive control section which is specified in advance to become a bottleneck; a processing unit (240) that detects a fact that the thread exits the exclusive control section; and an execution flag (250) that indicates whether there is the thread that is executing a process in the exclusive control section based on detection results. The cache memory (210) temporarily stores a priority flag in each cache entry, and the priority flag indicates whether data is to be used during execution in the exclusive control section. When the execution flag (250) is set, the processor unit (200) sets the priority flag that belongs to an access target of cache entries. The processor unit (200) leaves data used in the exclusive control section in the cache memory by determining a replacement target of cache entries using the priority flag when a cache miss occurs.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: July 12, 2016
    Assignee: NEC CORPORATION
    Inventor: Takashi Horikawa
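The priority-flag replacement policy described above can be sketched as follows (a minimal model; the class, capacity handling, and eviction order are assumptions, not from the patent): while the execution flag indicates a thread is inside the exclusive control section, accessed entries are marked, and eviction prefers unmarked entries so critical-section data stays cached.

```python
class PriorityCache:
    """Illustrative cache with per-entry priority flags steering
    victim selection away from exclusive-control-section data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}       # key -> priority flag
        self.exec_flag = False  # a thread is in the exclusive section

    def access(self, key):
        if key not in self.entries:
            if len(self.entries) >= self.capacity:
                self._evict()
            self.entries[key] = False
        if self.exec_flag:
            self.entries[key] = True  # mark as critical-section data

    def _evict(self):
        # Prefer a victim whose priority flag is clear.
        for k, prio in list(self.entries.items()):
            if not prio:
                del self.entries[k]
                return
        # All entries are marked: fall back to evicting any entry.
        self.entries.pop(next(iter(self.entries)))
```

For example, data touched while holding the bottleneck lock survives a later cache-filling scan, so the next thread entering the exclusive section avoids misses on the shared data, shortening the serialized region.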
  • Patent number: 9378151
    Abstract: The disclosure is directed to a system and method of cache management for a data storage system. According to various embodiments, the cache management system includes a hinting driver, a priority controller, and a data scrubber. The hinting driver generates pointers based upon data packets intercepted from data transfer requests being processed by a host controller of the data storage system. The priority controller determines whether the data transfer request includes a request to discard a portion of data based upon the pointers generated by the hinting driver. If the priority controller determines that the data transfer request includes a request to discard a portion of data, the data scrubber locates and removes that portion of data from the cache memory so that the cache memory is freed from invalid data (e.g., data associated with a deleted file).
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: June 28, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Vineet Agarwal, Durga Prasad Bhattarai, Sourav Saha
  • Patent number: 9372795
    Abstract: Provided are an apparatus and method for maintaining cache coherency, and a multiprocessor apparatus using the method. The multiprocessor apparatus includes a main memory, a plurality of processors, a plurality of cache memories each connected to one of the plurality of processors, a memory bus that connects the plurality of cache memories and the main memory, and a coherency bus that connects the plurality of cache memories to transmit coherency-related information between caches. Accordingly, the bandwidth shortage that occurs in an on-chip communication structure when a single communication structure is shared between memory and caches may be reduced, and communication for coherency between caches may be simplified.
    Type: Grant
    Filed: September 18, 2013
    Date of Patent: June 21, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Jin Ho Han
  • Patent number: 9361032
    Abstract: An application server can be configured to access data stored on a networked storage server that is accessible over a network and that includes a cache device configured to store data received from the networked storage server. The application server can include a cache management module designed to monitor data access requests transmitted over the network, the data access requests specifying a first page of data. In response to an indication that the requested data includes data stored in the cache device as an existing page of data, the first page of data can be mapped to a location corresponding to the existing page.
    Type: Grant
    Filed: May 14, 2014
    Date of Patent: June 7, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lawrence Y. Chiu, Hyojun Kim, Maohua Lu, Paul H. Muench, Sangeetha Seshadri