Stack Cache Patents (Class 711/132)
  • Patent number: 11907200
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed for persistent memory management. Persistent memory management may include replicating a persistent data structure in volatile memory buffers of at least two non-volatile storage devices. Persistent memory management may include preserving a snapshot copy of data in association with completion of a barrier operation for the data. Persistent memory management may include determining which interface of a plurality of supported interfaces is to be used to flush data from a processor complex.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: February 20, 2024
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Nisha Talagala, Swaminathan Sundararaman, David Flynn
  • Patent number: 11681630
    Abstract: A device for processing commands to manage non-volatile memory includes a controller configured to obtain address information from a command, read, based on the address information, an entry of a metadata table, and determine, based on the entry of the metadata table, whether a metadata page corresponding to the address information is being processed by the controller. In response to determining that the metadata page corresponding to the address information is being processed, the controller determines a processing status of the metadata page, among a plurality of processing statuses, based on the entry of the metadata table and processes the command according to the processing status of the metadata page. In response to determining that the metadata page corresponding to the address information is not being processed, the controller reads the metadata page from the non-volatile memory based on the entry of the metadata table.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: June 20, 2023
    Assignee: KIOXIA CORPORATION
    Inventors: Andrew John Tomlin, Michael Anthony Moser
  • Patent number: 11537402
    Abstract: A method for operation of a processor core is provided. First instruction data is consulted to determine whether a second instruction has execution data that matches the first instruction data. The first instruction data is from a first instruction. In response to determining that the second instruction has execution data that matches the first instruction data, prior data is copied into the second instruction. The first instruction depends on the prior data. After receiving an availability indication of the prior data, both the first instruction and the second instruction are woken for execution, without requiring execution of the first instruction before waking of the second instruction. The second instruction is executed by using the prior data as a skip of the first instruction. A computer system and a processor core configured to operate according to the method are also disclosed herein.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: December 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian D. Barrick, Bryan Lloyd, Dung Q. Nguyen, Brian W. Thompto, Edmund Joseph Gieske, John B. Griswell, Jr.
  • Patent number: 11468948
    Abstract: Methods, systems, and devices for cleaning memory blocks using multiple types of write operations are described. A counter may be incremented each time a write command is received. In response to the counter reaching a threshold, the counter may be reset and a flag may be set. Each time a cleaning of a memory block is to take place, the flag may be checked. If the flag is set, the memory block may be cleaned using a second type of cleaning operation, such as one using a force write approach. Otherwise, the memory block may be cleaned using a first type of cleaning operation, such as one using a normal write approach. Once set, the flag may be reset after one or more memory blocks are cleaned using the second type of cleaning operation.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: October 11, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Nicola Del Gatto
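The counter-and-flag mechanism in the abstract above is simple enough to sketch directly. The following toy model assumes an illustrative threshold of 8 writes and invented names for the two cleaning paths; the patent leaves all of these implementation-defined:

```python
CLEAN_THRESHOLD = 8  # illustrative; the abstract does not fix a value

class BlockCleaner:
    """Toy model of the counter-and-flag scheme: once the write counter
    reaches the threshold, the next block cleaning uses the force-write
    path instead of the normal one."""

    def __init__(self):
        self.write_counter = 0
        self.force_flag = False

    def on_write_command(self):
        self.write_counter += 1
        if self.write_counter >= CLEAN_THRESHOLD:
            self.write_counter = 0   # reset the counter
            self.force_flag = True   # arm the second cleaning type

    def clean_block(self):
        if self.force_flag:
            self.force_flag = False  # flag resets after a force-write cleaning
            return "force_write"
        return "normal_write"
```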
  • Patent number: 11455125
    Abstract: Detecting and remediating memory leaks associated with an application environment can include monitoring allocations of memory from a managed memory space to respective operations to produce memory allocation data and monitoring deallocations of memory to at least some of the respective operations to produce memory deallocation data. A trend in memory leakage can be determined based on samples of the memory allocation or deallocation data. A projection of future memory usage by operations associated with the trend can be determined using binned sets of the memory allocation data and the memory deallocation data. A predicted time at which memory usage by the operations associated with the trend is expected to exceed a threshold can be determined using the projection of future memory usage. A remediation action can be performed before the predicted time to prevent a memory constraint from occurring with respect to the application environment.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: September 27, 2022
    Assignee: ServiceNow, Inc.
    Inventor: Carmine Mangione-Tran
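The trend-and-projection step can be illustrated with a least-squares fit over net memory-usage samples. The function name, the sample format, and the use of a simple linear fit (rather than the patent's binned allocation/deallocation data) are assumptions made for the sketch:

```python
def predict_breach_time(samples, threshold):
    """samples: list of (time, bytes_in_use) pairs. Fits a least-squares
    line and returns the time at which usage is projected to cross
    `threshold`, or None if the trend is flat or decreasing (no leak)."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den
    if slope <= 0:
        return None  # no upward trend: nothing to remediate
    intercept = mean_u - slope * mean_t
    return (threshold - intercept) / slope
```

A remediation action (e.g., restarting the offending operations) would then be scheduled before the returned time.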
  • Patent number: 11411868
    Abstract: A method and device for packet processing implemented by a packet processing device is described. The packet processing device is connected to a communication network from which the packet processing device receives and/or transmits packets in a context of network service chaining. The method includes obtaining a set of packets, each packet of the set of packets comprising at least one specific characteristic; grouping the packets of the set of packets according to the at least one specific characteristic, and delivering at least two subsets of packets; and adding, to at least one of the subsets of packets, metadata common to the packets of the at least one subset of packets.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: August 9, 2022
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Stephane Gouache, Charles Salmon-Legagneur, Jean Le Roux
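A minimal sketch of the grouping step, with packets modeled as dicts and the shared characteristic selected by key name; the particular metadata attached to each subset (the shared value and the subset size) is illustrative, not specified by the abstract:

```python
from collections import defaultdict

def group_packets(packets, characteristic):
    """Groups packets by the named characteristic and attaches metadata
    common to every packet of each resulting subset."""
    subsets = defaultdict(list)
    for p in packets:
        subsets[p[characteristic]].append(p)
    return [
        {"metadata": {"shared": value, "count": len(group)}, "packets": group}
        for value, group in subsets.items()
    ]
```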
  • Patent number: 11372777
    Abstract: A memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a memory management unit configured to maintain a mapping from the virtual address space to the physical address space; and a coherency manager comprising a reverse translation module configured to maintain a mapping from the physical address space to the virtual address space; wherein the memory interface is configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus; receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: June 28, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Martin John Robinson, Mark Landers
  • Patent number: 11322474
    Abstract: A semiconductor package includes a first chip and a second chip arranged side by side on a carrier substrate. The first chip is provided with high-speed signal pads along a first side in proximity to the second chip. The second chip includes a redistribution layer, and the redistribution layer is provided with data (DQ) pads along the second side in proximity to the first chip. A plurality of first bonding wires is provided to directly connect the high-speed signal pads to the DQ pads. The redistribution layer of the second chip is provided with first command/address (CA) pads along the third side opposite to the second side, and a plurality of dummy pads corresponding to the first CA pads. The plurality of dummy pads are connected to second CA pads disposed along a fourth side of the second chip via interconnects of the redistribution layer.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: May 3, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Chin-Yuan Lo, Chih-Hao Chang, Tze-Min Shen
  • Patent number: 11288199
    Abstract: A determination can be made of a type of memory access workload for an application. A determination can be made whether the memory access workload for the application is associated with sequential read operations. The data associated with the application can be stored at one of a cache of a first type or another cache of a second type based on the determination of whether the memory access workload for the application is associated with sequential read operations.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: March 29, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Dhawal Bavishi
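The classification-then-placement decision might look like the following sketch; the 75% sequential-stride cutoff and the cache labels are invented for illustration, since the abstract does not say how sequentiality is detected:

```python
def is_sequential_workload(accesses):
    """accesses: list of logical block addresses read by the application.
    Treats the workload as sequential if most strides are +1."""
    if len(accesses) < 2:
        return False
    seq = sum(1 for a, b in zip(accesses, accesses[1:]) if b == a + 1)
    return seq / (len(accesses) - 1) >= 0.75  # hypothetical cutoff

def pick_cache(accesses):
    # sequential readers go to the first cache type, others to the second
    return "cache_type_1" if is_sequential_workload(accesses) else "cache_type_2"
```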
  • Patent number: 11182321
    Abstract: Techniques are provided for characterizing and quantifying a sequentiality of workloads using sequentiality profiles and signatures. One exemplary method comprises obtaining telemetry data for an input/output workload; evaluating a distribution over time of sequence lengths for input/output requests in the telemetry data by the input/output workload; and generating a sequentiality profile for the input/output workload to characterize the input/output workload based at least in part on the distribution over time of the sequence lengths. Multiple sequentiality profiles for one or more input/output workloads may be clustered into a plurality of clusters. A sequentiality signature may be generated to represent one or more sequentiality profiles within a given cluster. A performance of data movement policies may be evaluated with respect to the sequentiality signature of the given cluster.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: November 23, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Rômulo Teixeira de Abreu Pinho, Hugo de Oliveira Barbalho, Vinícius Michel Gottin, Roberto Nery Stelling Neto, Alex Laier Bordignon, Daniel Sadoc Menasché
  • Patent number: 11165751
    Abstract: System and method for establishing simultaneous zones of control for various communications to and from computing devices based on specific criteria corresponding to more than one zone of control encompassing similarly-situated network locations. A browser (or any other common term for a networked computing session GUI) executing on a computing device may establish several zones of restricted data interaction based on specific user-defined criteria, thereby establishing simultaneous zones that respectively correspond to one or more specific parameters with regard to networked computer interactions. For example, a first zone may be associated with only other computers located in the United States (as determined by DNS records and the like) and a second zone may be associated with only other computers located within a specific domain (e.g., www.mybusiness.com).
    Type: Grant
    Filed: February 16, 2018
    Date of Patent: November 2, 2021
    Assignee: Emerald Cactus Ventures, Inc.
    Inventors: Jesse Aaron Adams, Christopher Joseph O'Connell, Jennifer Marie Catanduanes McEwen
  • Patent number: 11122013
    Abstract: A system and method for establishing zones of control for communications among computing devices. Zones of control refer to the concept of unique user-controlled silos separating the interactions between computer devices over the network. When the user of a device connects to a networked computing environment of any kind, at least some data may be sent from the user's device onto the network, as well as downloaded to the user's device. These “data interactions” are usually frequent and numerous. With a private encrypted browsing session established, communications within an established zone of control may be isolated from all other communications and vice versa.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: September 14, 2021
    Assignee: Emerald Cactus Ventures, Inc.
    Inventors: Jesse Aaron Adams, Christopher Joseph O'Connell, Jennifer Marie Catanduanes McEwen
  • Patent number: 10977177
    Abstract: A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical storage unit basis. For a given logical storage unit, what, if any, data to prefetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes most recently requested pages for one or more logical storage units. This data structure may be a table.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Tiago Calmon, Romulo D. Pinho, Jonas F. Dias, Eduardo Sousa, Roberto Nery Stelling Neto, Hugo de Oliveira Barbalho
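One way to realize the proximity metric is the mean gap between the most recently requested page numbers of the logical storage unit. Both the specific metric and the threshold value below are assumptions; the abstract deliberately leaves them open:

```python
def proximity_metric(recent_pages):
    """One plausible metric: the mean gap between consecutive requested
    page numbers in sorted order. Small values mean the recent requests
    cluster sequentially."""
    pages = sorted(recent_pages)
    gaps = [b - a for a, b in zip(pages, pages[1:])]
    return sum(gaps) / len(gaps)

def should_prefetch(recent_pages, threshold=2.0):
    # prefetch only when the recent requests are collectively "close"
    return proximity_metric(recent_pages) <= threshold
```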
  • Patent number: 10963387
    Abstract: A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: March 30, 2021
    Assignee: International Business Machines Corporation
    Inventors: Harold W. Cain, III, Vijayalakshmi Srinivasan, Jason Zebchuk
  • Patent number: 10901887
    Abstract: A system and method of buffered freepointer management to handle burst traffic to fixed size structures in an external memory system. A circular queue stores implicitly linked free memory locations, along with an explicitly linked list in memory. The queue is updated at the head with newly released locations, and new locations from memory are added at the tail. When a freed location in the queue is reused, external memory need not be updated. When the queue is full, the system attempts to release some of the freepointers such as by dropping them if they are already linked, updating the linked list in memory only if those dropped are not already linked. Latency can be further reduced by loading new locations from memory when the queue is nearly empty, rather than waiting for empty condition, and by writing unlinked locations to memory when the queue is nearly full.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Philip Jacob, Philip Strenski
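A rough model of the queue behavior, using a Python deque for the on-chip circular queue and a list standing in for the explicitly linked free list in external memory. The refill and spill trigger points ("nearly empty" / "full") are simplified, and the linked/unlinked bookkeeping is omitted:

```python
from collections import deque

class FreePointerQueue:
    """Toy model: a bounded on-chip queue of free locations, backed by a
    free list in simulated external memory."""

    def __init__(self, capacity, backing_free_list):
        self.capacity = capacity
        self.queue = deque()
        self.backing = backing_free_list  # simulated external memory

    def allocate(self):
        if len(self.queue) <= 1 and self.backing:
            # nearly empty: load a new location from memory at the tail
            self.queue.append(self.backing.pop(0))
        return self.queue.popleft()

    def release(self, loc):
        if len(self.queue) >= self.capacity:
            # queue full: spill one pointer back to the external list
            self.backing.append(self.queue.pop())
        self.queue.appendleft(loc)  # freed locations enter at the head
```

Note how a location released and then reallocated never touches the backing store, which is the latency win the abstract describes.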
  • Patent number: 10853310
    Abstract: Apparatuses and methods of their operation are disclosed. A call stack is maintained which comprises subroutine information relating to subroutines which have been called during data processing operations and have not yet returned. A stack pointer is indicative of an extremity of the call stack associated with a most recently called subroutine which has been called during the data processing operations and has not yet returned. Call stack sampling can be carried out with reference to the stack pointer. A tide mark pointer is maintained, which indicates the value which the stack pointer had when the call stack sampling procedure was last completed. The call stack sampling procedure comprises retrieving subroutine information from the portion of the call stack between the value of the tide mark pointer and the current value of the stack pointer. More efficient call stack sampling is thereby supported, in that only modifications to the call stack need be sampled.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: December 1, 2020
    Assignee: Arm Limited
    Inventor: Alasdair Grant
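The tide-mark idea can be modeled with a list standing in for the call stack (growing upward in this toy, with the "stack pointer" being the list length). The detail that the tide mark must also track pops below the last-sampled depth is made explicit here; the class and method names are invented:

```python
class StackSampler:
    """Toy model: only frames above the tide mark can have changed since
    the last sample, so only they need to be read."""

    def __init__(self):
        self.frames = []    # the call stack (grows upward in this toy)
        self.tide_mark = 0  # lowest depth reached since the last sample

    def push(self, frame):
        self.frames.append(frame)

    def pop(self):
        self.frames.pop()
        self.tide_mark = min(self.tide_mark, len(self.frames))

    def sample(self):
        changed = self.frames[self.tide_mark:]  # only the modified part
        self.tide_mark = len(self.frames)       # sampling completed here
        return changed
```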
  • Patent number: 10789169
    Abstract: An apparatus and method are provided for controlling use of a register cache. The apparatus has execution circuitry for executing instructions to process data values, and a register file comprising a plurality of registers in which to store the data values for access by the execution circuitry. A register cache is also provided that has a plurality of entries and is arranged to cache a subset of the data values for access by the execution circuitry. Each entry is arranged to cache a data value and an indication of the register associated with that cached data value. Prefetch circuitry then performs prefetch operations to prefetch data values from the register file into the register cache. Timing indication storage is used to store, for each data value to be generated as a result of instructions being executed within the execution circuitry, a register identifier for that data value, and timing information indicating when that data value will be generated by the execution circuitry.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: September 29, 2020
    Assignee: ARM Limited
    Inventors: Luca Scalabrino, Frederic Jean Denis Arsanto, Claire Aupetit
  • Patent number: 10700980
    Abstract: A user traffic generation method includes receiving a user traffic generation instruction, performing, in response to the user traffic generation instruction and index information pre-stored in an on-chip static random access memory (SRAM) of a field programmable gate array (FPGA), a prefetch operation and a cache operation on a user packet that is stored in a dynamic random access memory (DRAM) and indicated by the index information, and generating user traffic at a line rate of the user packet that is cached during the cache operation. The on-chip SRAM is configured to store index information of all user packets that need to be used. The DRAM is configured to store all the user packets.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 30, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Kaiyi Zhou, Meilong Deng
  • Patent number: 10579398
    Abstract: A computer-implemented method is provided for deleting a given object from among a plurality of objects in an object-oriented programming language computing system which uses a Reference Count (RC) of each of the plurality of objects to check a liveness of the plurality of objects. The method includes decrementing, in a Reference Counts (RCs) decrement operation, RCs of objects referenced from the given object using one or more non-atomic operations in a transaction that utilizes a hardware transactional memory mechanism to accelerate the reference counts decrement operation.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: March 3, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kiyokuni Kawachiya, Mikio Takeuchi
  • Patent number: 10552343
    Abstract: Various systems and methods for queue management in computer memory are described herein. A system for implementing a zero thrash cache queue manager includes a processor subsystem to: receive a memory access request for a queue; write data to a queue tail cache line in a cache when the memory access request is to add data to the queue, the queue tail cache line protected from being evicted from the cache; and read data from a current queue head cache line in the cache when the memory access request is to remove data from the queue, the current queue head cache line protected from being evicted from the cache.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Barak Hermesh, Ziv Kfir, Amos Klimker, Doron Nakar, Lior Nevo
  • Patent number: 10430343
    Abstract: A communication bypass mechanism accelerates cache-to-cache data transfers for communication traffic between caching agents that have separate last-level caches. A method includes bypassing a last-level cache of a first caching agent in response to a cache line having a modified state being evicted from a penultimate-level cache of the first caching agent and a communication attribute of a shadow tag entry associated with the cache line being set. The communication attribute indicates prior communication of the cache line with a second caching agent having a second last-level cache.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: October 1, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Patrick N. Conway
  • Patent number: 10402259
    Abstract: The embodiments described herein provide systems and methods for recovering resources in processing devices. Specifically, the embodiments described herein provide techniques for recovering leaked resources allocated to hardware engines in a hardware processing core. As one example, the recovery of resources allocated to hardware engines can be facilitated by making a specified register available to monitoring software. When leaked or otherwise stuck resources are identified, the monitoring software can set the register to trigger the recovery of those resources. This recovery of resources can be then performed by stopping the execution of processes in the hardware engines, invalidating the resources previously allocated to the hardware engines, initializing the resources, and starting the handling of new processes in the hardware engines. This process effectively recovers those resources, and allows those hardware engines to quickly resume operations.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: September 3, 2019
    Assignee: NXP USA, Inc.
    Inventors: Uri Malka, Noam Efrati, Eyal Elimelech
  • Patent number: 10303523
    Abstract: A method and an apparatus that generate a request from a first thread of a process using a first stack for a second thread of the process to execute a code are described. Based on the request, the second thread executes the code using the first stack. Subsequent to the execution of the code, the first thread receives a return of the request using the first stack.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: May 28, 2019
    Assignee: Apple Inc.
    Inventors: Ronnie Misra, Joshua Shaffer
  • Patent number: 10229266
    Abstract: Corruption of call stacks is detected by using guard words placed in the call stacks. A store guard word instruction is used to store a guard word on a stack frame of a caller routine, and a verify guard word instruction issued by one or more callee routines is used to verify the guard word is an expected value. If the guard word is an unexpected value, corruption is indicated.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: March 12, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
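The store/verify pair reduces to a classic stack-canary check. In this sketch, frames are dicts and the guard value is arbitrary; the patent's actual instructions operate on machine stack frames:

```python
GUARD_WORD = 0xDEADBEEF  # hypothetical expected value

def store_guard(frame):
    """Caller places the guard word in its stack frame (a dict here)."""
    frame["guard"] = GUARD_WORD

def verify_guard(frame):
    """Callee checks the caller's guard word; a mismatch indicates
    stack corruption."""
    return frame.get("guard") == GUARD_WORD
```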
  • Patent number: 10083040
    Abstract: Processing circuitry can operate in a secure domain and a less secure domain. In response to an initial exception from background processing performed by the processing circuitry, state saving of data from a first subset of registers is performed by exception control circuitry before triggering an exception handling routine, while the exception handling routine has responsibility for performing state saving of data from a second subset of registers. In response to a first exception causing a transition to the secure domain from a less secure domain, where the background processing was in the less secure domain, the exception control circuitry performs additional state saving of data from the second subset of registers before triggering the exception handling routine. In response to a tail-chained exception causing a transition from the secure domain to the less secure domain, the exception handling routine is triggered without performing an additional state saving.
    Type: Grant
    Filed: July 10, 2015
    Date of Patent: September 25, 2018
    Assignee: ARM Limited
    Inventor: Thomas Christopher Grocutt
  • Patent number: 10061718
    Abstract: Described is a technology by which classes of memory attacks are prevented, including cold boot attacks, DMA attacks, and bus monitoring attacks. In general, secret state such as an AES key and an AES round block are maintained in on-SoC secure storage, such as a cache. Corresponding cache locations are locked to prevent eviction to unsecure storage. AES tables are accessed only in the on-SoC secure storage, to prevent access patterns from being observed. Also described is securely preparing for an interrupt-based context switch during AES round computations and securely resuming from a context switch without needing to repeat any already completed round or round of computations.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: August 28, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Patrick J. Colp, Himanshu Raj, Stefan Saroiu, Alastair Wolman
  • Patent number: 10063603
    Abstract: Multi-user real-time collaborative software applications may synchronize data between multiple users or multiple devices. Current aspects describe a method and system for enabling undo operations in collaborative software applications where not all possible actions adhere to the operational transformation properties. Certain aspects herein operate in the absence of the so-called Inverse Property 2 (IP2).
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: August 28, 2018
    Assignee: LIVELOOP, INC.
    Inventors: David Lee Nelson, Erin Rebecca Rhode, Adam Davis Kraft, Amal Kumar Dorai
  • Patent number: 9983887
    Abstract: Techniques for memory management of a data processing system are described herein. According to one embodiment, a memory usage monitor executed by a processor of a data processing system monitors memory usages of groups of programs running within a memory of the data processing system. In response to determining that a first memory usage of a first group of the programs exceeds a first predetermined threshold, a user level reboot is performed in which one or more applications running within a user space of an operating system of the data processing system are terminated and relaunched. In response to determining that a second memory usage of a second group of the programs exceeds a second predetermined threshold, a system level reboot is performed in which one or more system components running within a kernel space of the operating system are terminated and relaunched.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: May 29, 2018
    Assignee: Apple Inc.
    Inventors: Andrew D. Myrick, David M. Chan, Jonathan R. Reeves, Jeffrey D. Curless, Lionel D. Desai, James C. McIlree, Karen A. Crippes, Rasha Eqbal
  • Patent number: 9906590
    Abstract: Some embodiments provide intelligent predictive stream caching for live, linear, or video-on-demand streaming content using prefetching, segmented caching, and request clustering. Prefetching involves retrieving streaming content segments from an origin server prior to the segments being requested by users. Prefetching live or linear streaming content segments involves continually reissuing requests to the origin until the segments are obtained or a preset retry duration is completed. Prefetching is initiated in response to a first request for a segment falling within a particular interval. Request clustering commences thereafter. Subsequent requests are queued until the segments are retrieved. Segmented caching involves caching segments for one particular interval. Segments falling within a next interval are not prefetched until a first request for one such segment in the next interval is received.
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: February 27, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Jonathan DiVincenzo, Seungyeob Choi, Karthik Sathyanarayana, Robert J. Peters, Eric Dyoniziak
  • Patent number: 9904539
    Abstract: A method and data processing system are disclosed for concurrently loading a plurality of new modules while code of a plurality of modules of an original (i.e., currently running) computer program is loaded and executed on a computer system. The method may include allocating a module thread local storage (TLS) block for each thread within an initial computer program, wherein the allocated module TLS blocks are large enough to hold all module thread variables that are loaded or to be loaded. The method further may include providing constant offsets between module TLS block pointers corresponding to the module TLS blocks and the module thread variables for all of the threads. The disclosed method may be used to add modules to the original computer program and/or to apply a concurrent patch by replacing one or more of the plurality of original computer program modules.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: February 27, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Angel Nunez Mencias, Albert Schirmer, Christine Axnix, Stefan Usenbinz
  • Patent number: 9894002
    Abstract: Techniques are described for applying double experimental (EXP) quality of service (QoS) markings to Multiprotocol Label Switching (MPLS) packets. According to the techniques, an edge router of an MPLS network is configured to map a Differentiated Services Code Point (DSCP) marking for customer traffic to at least two EXP fields of at least two different labels included in a MPLS packet encapsulating the customer traffic. In this way, the edge router may map the full DSCP marking across the first and second EXP fields to provide full resolution QoS for the customer traffic over the MPLS network. The techniques also include a core router of an MPLS network configured to identify a QoS profile for a received MPLS packet based on a combination of a first EXP field of a first label and a second EXP field of a second label included in the MPLS packet.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: February 13, 2018
    Assignee: Juniper Networks, Inc.
    Inventors: Mahesh Narayanan, Nayan S. Patel, Vidur Gupta
  • Patent number: 9804800
    Abstract: A computer is protected from heap spray attacks by identifying blocks in a heap memory, associating the blocks in buckets according to the block sizes, selecting one of the buckets, and choosing a first block and a second block from the selected bucket. The method is further carried out by making a content comparison of the first block with the second block, accumulating a positive result when the comparison meets a predetermined criterion of similarity, and reporting a heap spray detection when accumulated positive results exceed a predetermined threshold.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: October 31, 2017
    Assignee: PALO ALTO NETWORKS, INC.
    Inventors: Alon Livne, Shlomi Levin, Gal Diskin
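A simplified version of the bucket-and-compare loop follows. The similarity criterion (fraction of matching bytes), both thresholds, and the choice of inspecting only the largest bucket are stand-ins for the patent's unspecified values:

```python
from collections import defaultdict

def detect_heap_spray(blocks, similarity=0.9, detections_needed=3):
    """blocks: list of bytes objects from the heap. Buckets blocks by
    size, then compares adjacent pairs within the largest bucket; many
    near-identical blocks is the spray signature."""
    buckets = defaultdict(list)
    for b in blocks:
        buckets[len(b)].append(b)
    bucket = max(buckets.values(), key=len)  # pick one bucket to inspect
    hits = 0
    for first, second in zip(bucket, bucket[1:]):
        same = sum(x == y for x, y in zip(first, second))
        if same / len(first) >= similarity:
            hits += 1  # pair meets the similarity criterion
    return hits >= detections_needed
```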
  • Patent number: 9734059
    Abstract: A method of way prediction for a data cache having a plurality of ways is provided. Responsive to an instruction to access a stack data block, the method accesses identifying information associated with a plurality of most recently accessed ways of a data cache to determine whether the stack data block resides in one of the plurality of most recently accessed ways of the data cache, wherein the identifying information is accessed from a subset of an array of identifying information corresponding to the plurality of most recently accessed ways; and when the stack data block resides in one of the plurality of most recently accessed ways of the data cache, the method accesses the stack data block from the data cache.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: August 15, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Vilas K. Sridharan, James M. O'Connor, Mark D. Hill, Srilatha Manne
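An illustrative model of the MRU-based way prediction this abstract describes: for a stack access, only the tags of the most recently used ways are probed first, with a full tag search as fallback. The cache geometry and probe accounting here are assumptions.

```python
class WayPredictedSet:
    def __init__(self, num_ways=8, predicted_ways=2):
        self.tags = [None] * num_ways        # full tag array, one tag per way
        self.mru = []                        # way indices, most recent first
        self.predicted_ways = predicted_ways

    def fill(self, way, tag):
        self.tags[way] = tag
        self._touch(way)

    def _touch(self, way):
        if way in self.mru:
            self.mru.remove(way)
        self.mru.insert(0, way)

    def access(self, tag):
        """Return (hit, probed_ways): probe the MRU subset before the rest."""
        subset = self.mru[:self.predicted_ways]
        for probed, way in enumerate(subset, 1):
            if self.tags[way] == tag:        # found in a recently used way
                self._touch(way)
                return True, probed
        for way in range(len(self.tags)):    # fall back: full tag search
            if way not in subset and self.tags[way] == tag:
                self._touch(way)
                return True, self.predicted_ways + 1
        return False, len(self.tags)

s = WayPredictedSet()
s.fill(0, "sp_frame_a")
s.fill(5, "sp_frame_b")
hit, probed = s.access("sp_frame_b")   # MRU way: predicted on the first probe
assert (hit, probed) == (True, 1)
```

Because stack accesses exhibit strong temporal locality, probing only a small MRU subset first saves tag-array energy on most stack hits.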
  • Patent number: 9690589
    Abstract: An instruction set architecture (ISA) includes instructions for selectively indicating last-use architected operands whose values will not be accessed again. An architected operand is made inactive after an instruction specifies its last use, and is made active again by a write operation to the inactive operand; the activation/deactivation may be performed by the instruction having the last use of the operand or by another (prefix) instruction.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: June 27, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael K Gschwind, Valentina Salapura
  • Patent number: 9658823
    Abstract: Systems and methods are provided for source-to-source transformation to optimize stacks and/or queues in an application, including identifying usage of stacks and queues in the application and collecting the resource usage and thread block configurations for the application. If usage of stacks is identified, optimized code is generated by determining appropriate storage, partitioning stacks based on the determined storage, and caching the tops of the stacks in a register. If usage of queues is identified, optimized code is generated by combining the queue operations of all threads in a warp/thread block into one batch queue operation, converting control divergence of the application to data divergence to enable warp-level queue operations, determining whether at least one of the threads includes a queue operation, and combining queue operations into threads in a warp.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: May 23, 2017
    Assignee: NEC Corporation
    Inventors: Yi Yang, Min Feng, Srimat Chakradhar
  • Patent number: 9582274
    Abstract: Corruption of call stacks is detected by using guard words placed in the call stacks. A store guard word instruction is used to store a guard word on a stack frame of a caller routine, and a verify guard word instruction issued by one or more callee routines is used to verify the guard word is an expected value. If the guard word is an unexpected value, corruption is indicated.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: February 28, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
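A toy model of the store/verify guard word pair from this abstract, using an explicit list as the call stack; the guard value is an arbitrary assumption.

```python
GUARD = 0xDEADBEEF

def store_guard_word(stack):
    stack.append(GUARD)            # caller plants the guard on its stack frame

def verify_guard_word(stack, offset):
    """Callee checks the caller's guard word; raise if it was overwritten."""
    if stack[offset] != GUARD:
        raise RuntimeError("call stack corruption detected")

stack = []
store_guard_word(stack)
stack.extend([1, 2, 3])            # callee frame data above the guard
verify_guard_word(stack, 0)        # guard intact: no exception

stack[0] = 0x41414141              # simulate an overflow smashing the guard
try:
    verify_guard_word(stack, 0)
    corrupted = False
except RuntimeError:
    corrupted = True
assert corrupted
```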
  • Patent number: 9569367
    Abstract: Exemplary methods for improving cache utilization include in response to receiving a request to store data, storing the data in one of a plurality of cache slots of a cache. In one embodiment, the methods further include after storing the data, setting a status of the cache slot as write pending to indicate that the cache slot contains data which needs to be written to a corresponding destination storage device. The methods include determining an eviction type of the cached data based on whether the destination storage device is a local storage device or a remote storage device. In one embodiment, after copying data from the cache slot to the corresponding destination storage device, marking the cache slot with the determined eviction type. In response to receiving another request to store data, evicting at least one of the cache slots based on the eviction type.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: February 14, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Ian Wigmore, Marik Marshak, Arieh Don, Alexandr Veprinsky
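A sketch of the eviction-type idea: a slot's destination (local vs. remote device) determines how the slot is marked after its flush, and eviction consults that mark. Preferring to evict locally backed slots first (they are cheap to re-read) is an assumed policy; the abstract says only that eviction is based on the type.

```python
class Cache:
    def __init__(self, slots=2):
        self.slots = [None] * slots        # each slot: dict or None

    def store(self, data, dest_is_local):
        for i, s in enumerate(self.slots):
            if s is None:
                break
        else:
            i = self._evict()              # cache full: free a slot first
        self.slots[i] = {"data": data, "status": "write_pending",
                         "dest_local": dest_is_local}
        return i

    def flush(self, i):
        slot = self.slots[i]               # ...data written to its device here
        slot["status"] = "evict_local" if slot["dest_local"] else "evict_remote"

    def _evict(self):
        # Evict a flushed local-backed slot first, then a flushed remote one.
        for wanted in ("evict_local", "evict_remote"):
            for i, s in enumerate(self.slots):
                if s and s["status"] == wanted:
                    self.slots[i] = None
                    return i
        raise RuntimeError("no evictable slot")

c = Cache(slots=2)
a = c.store(b"aa", dest_is_local=True)
b = c.store(b"bb", dest_is_local=False)
c.flush(a); c.flush(b)
new = c.store(b"cc", dest_is_local=True)   # evicts the local-backed slot
assert new == a
```

Note that a `write_pending` slot is never evicted: it still holds the only copy of its data.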
  • Patent number: 9513911
    Abstract: A method of detecting stack overflows includes the following steps: storing in at least one dedicated register at least one data item chosen from: a data item (SPHaut) indicating a maximum permitted value for a stack pointer, and a data item (SPBas) indicating a minimum permitted value for said stack pointer; effecting a comparison between a current value (SP) or past value (SPMin, SPMax) of said stack pointer and said data item or each of said data items; and generating a stack overflow exception if said comparison indicates that said current or past value of said stack pointer is greater than said maximum permitted value or less than said minimum permitted value. A processor for implementing such a method is also provided.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: December 6, 2016
    Assignee: Thales
    Inventors: Philippe Grossi, Dominique David, Francois Brun
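The abstract's check transcribes almost directly into code: dedicated bounds (SPHaut = maximum, SPBas = minimum) are compared against the stack pointer, and an exception is generated on violation. The window values below are illustrative.

```python
class StackOverflowException(Exception):
    pass

def check_stack_pointer(sp, sp_haut, sp_bas):
    """Raise if sp lies outside the permitted [sp_bas, sp_haut] window."""
    if sp > sp_haut or sp < sp_bas:
        raise StackOverflowException(hex(sp))

SP_HAUT, SP_BAS = 0x8000_0000, 0x7FFF_0000          # permitted stack window
check_stack_pointer(0x7FFF_8000, SP_HAUT, SP_BAS)   # in range: no exception

try:
    check_stack_pointer(0x7FFE_FFF0, SP_HAUT, SP_BAS)  # below SPBas: overflow
    overflowed = False
except StackOverflowException:
    overflowed = True
assert overflowed
```

Comparing past values (SPMin/SPMax), as the abstract also allows, catches transient excursions of the stack pointer that a check of only the current value would miss.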
  • Patent number: 9483318
    Abstract: Technologies are generally described for methods and systems effective to execute a program in a multi-core processor. In an example, methods to execute a program in a multi-core processor may include executing a first procedure on a first core of the multi-core processor. The methods may further include, while executing the first procedure, sending first and second instructions from the first core to a second and a third core, respectively. The instructions may command those cores to execute second and third procedures. The methods may further include executing the first procedure on the first core while executing the second procedure on the second core and the third procedure on the third core.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: November 1, 2016
    Assignee: Empire Technology Development LLC
    Inventor: Sriram Vajapeyam
  • Patent number: 9460048
    Abstract: A method and apparatus for creating and executing a packet of chained instructions in a processor. A first instruction specifies a first operand is to be accessed from a memory and delivered through a first path in a first network to a first output. A second instruction specifies the first operand is to be received from the first output, to operate on the first operand, and to generate a result delivered to a second output. The second instruction does not identify a source device for the first operand and a destination device for the result. A third instruction specifies the first result is to be received from the second output and delivered through a first path in a second network for storage in the memory. The first, second, and third instructions are paired together as a packet of chained instructions for execution by a processor.
    Type: Grant
    Filed: March 9, 2013
    Date of Patent: October 4, 2016
    Inventor: Gerald George Pechanek
  • Patent number: 9448844
    Abstract: Provided is a computing system having a hierarchical memory structure. When a data structure is allocated for a task processed in the computing system, the data structure is divided: one portion is allocated to a high speed memory of the hierarchical memory structure and the remaining portion is allocated to a low speed memory of the hierarchical memory structure.
    Type: Grant
    Filed: September 14, 2010
    Date of Patent: September 20, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Don Lee, Shi Hwa Lee, Seung Won Lee, Chae Seok Lim, Min Kyu Jeong
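A minimal sketch of splitting one logical stack across a small fast memory and a larger slow memory, as the abstract describes. The fixed split point and the two plain lists standing in for the memories are assumptions.

```python
class SplitStack:
    def __init__(self, fast_capacity=4):
        self.fast = []                 # e.g. on-chip scratchpad: hot top of stack
        self.slow = []                 # e.g. external DRAM: overflow portion
        self.fast_capacity = fast_capacity

    def push(self, v):
        if len(self.fast) == self.fast_capacity:
            self.slow.append(self.fast.pop(0))   # demote the oldest fast entry
        self.fast.append(v)

    def pop(self):
        v = self.fast.pop()
        if not self.fast and self.slow:
            self.fast.append(self.slow.pop())    # refill from slow memory
        return v

s = SplitStack(fast_capacity=2)
for v in [1, 2, 3, 4]:
    s.push(v)
assert s.fast == [3, 4] and s.slow == [1, 2]
assert [s.pop(), s.pop(), s.pop(), s.pop()] == [4, 3, 2, 1]
```

The payoff is that the accesses with the highest temporal locality (the top of the stack) always land in the high speed memory.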
  • Patent number: 9390264
    Abstract: Techniques for protecting contents of a stack associated with a processor are provided. The techniques include a method including receiving a store instruction from a software program being executed by the processor, the store instruction including control information associated with a subroutine, altering the control information to generate secured control information responsive to receiving the store instruction from the software program, storing the secured control information on the stack, receiving a load instruction from the software program; and responsive to receiving the load instruction from the software program, loading the secured control information from the stack, altering the secured control information to recover the control information, and returning the control information to the software program.
    Type: Grant
    Filed: April 18, 2014
    Date of Patent: July 12, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Can Erkin Acar, Erich James Plondke, Robert J. Turner, Billy B. Brumley
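A sketch of the store/load alteration in this abstract, applied to a return address. XOR with a per-process secret is one plausible reversible alteration (it is what glibc's pointer mangling uses, for example); the patent does not specify this exact transform.

```python
import secrets

SECRET = secrets.randbits(64) | 1              # fixed at start; | 1 keeps it nonzero

def store_return_address(stack, return_addr):
    stack.append(return_addr ^ SECRET)          # altered before it reaches memory

def load_return_address(stack):
    return stack.pop() ^ SECRET                 # recovered on the way back out

stack = []
store_return_address(stack, 0x0040_1000)
assert stack[-1] != 0x0040_1000                 # the raw address is never stored
assert load_return_address(stack) == 0x0040_1000
```

An attacker who overwrites the secured value on the stack cannot steer control flow to a chosen address without knowing the secret.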
  • Patent number: 9189399
    Abstract: A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: November 17, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Bradford M. Beckmann
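A sketch of the shared stack cache manager: a table keyed by virtual address records which core's stack cache holds the data, so a requesting core can retrieve stack data that lives in a sibling core's stack cache. The field names and dictionary-based structures are assumptions.

```python
class StackCacheManager:
    def __init__(self, num_cores):
        self.stack_caches = [dict() for _ in range(num_cores)]  # per-core caches
        self.entries = {}               # virtual address -> owning core

    def fill(self, core, vaddr, data):
        self.stack_caches[core][vaddr] = data
        self.entries[vaddr] = core      # register the owner with the manager

    def lookup(self, requesting_core, vaddr):
        """Search the manager for an entry; use it to retrieve the stack data."""
        owner = self.entries.get(vaddr)
        if owner is None:
            return None                 # miss: fall back to the memory hierarchy
        return self.stack_caches[owner][vaddr]

mgr = StackCacheManager(num_cores=4)
mgr.fill(core=2, vaddr=0x7FFF_F000, data=b"frame")
# Core 0 requests stack data that migrated with a thread from core 2:
assert mgr.lookup(requesting_core=0, vaddr=0x7FFF_F000) == b"frame"
assert mgr.lookup(requesting_core=0, vaddr=0x7FFF_E000) is None
```

This matters when a thread migrates between cores: its stack data need not be written back to shared memory if the manager can point the new core at the old core's stack cache.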
  • Patent number: 9164872
    Abstract: Coding issues that create runtime memory leaks, for example in programs coded in a platform-independent programming language such as Java™, can be isolated at the program code line level. An allocation trace that retains, in active memory, a unique object identifier for each of a plurality of objects instantiated during program execution and an address in the active memory where each object is stored can be created. Memory leak candidates can be identified by directly examining contents of the active memory to identify one or more data structures that are increasing in size over time. The allocation trace can be combined with the identified memory leak candidates to generate information about at least one identified leaking object.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: October 20, 2015
    Assignee: SAP SE
    Inventor: Martin Moser
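The two ingredients in this abstract can be sketched together: an allocation trace (object id to allocation site) plus heap snapshots over time; structures that keep growing across snapshots become leak candidates, attributed to a code line via the trace. The growth criterion (strictly increasing over all snapshots) is an assumption.

```python
allocation_trace = {}      # object id -> source line that allocated it

def trace_alloc(obj_id, source_line):
    allocation_trace[obj_id] = source_line

def leak_candidates(snapshots):
    """snapshots: list of {obj_id: size_in_elements}, oldest first."""
    candidates = []
    for obj_id in snapshots[-1]:
        sizes = [snap.get(obj_id, 0) for snap in snapshots]
        if all(a < b for a, b in zip(sizes, sizes[1:])):   # strictly growing
            candidates.append((obj_id, allocation_trace.get(obj_id, "?")))
    return candidates

trace_alloc(0xA1, "Cache.java:42")
trace_alloc(0xB2, "Pool.java:17")
snaps = [{0xA1: 10, 0xB2: 5}, {0xA1: 20, 0xB2: 5}, {0xA1: 40, 0xB2: 5}]
assert leak_candidates(snaps) == [(0xA1, "Cache.java:42")]
```

Neither half suffices alone: the snapshots say *which* structure is leaking, and the allocation trace says *which code line* created it.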
  • Patent number: 9128630
    Abstract: A system and method for monitoring a memory stack size are provided, in which the size of a memory stack used by the operating system of a vehicle controller is monitored so that overflow of the memory stack can be prevented. That is, the usage of the memory stack across the entire control system of a hybrid vehicle is accurately and efficiently monitored so that, when the control system reaches a risk level of stack overflow, fail-safe logic is executed and overflow of the memory stack is thereby prevented.
    Type: Grant
    Filed: November 25, 2013
    Date of Patent: September 8, 2015
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Ji Yong Park, Ui Jung Jung
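A sketch of threshold-based stack monitoring: the unused stack region is pre-filled with a known pattern, the high-water mark is measured by scanning for intact pattern bytes, and fail-safe logic runs once a risk level is crossed. The pattern-fill ("stack painting") technique and the 90% threshold are common embedded practice, assumed here rather than taken from the patent.

```python
PATTERN = 0xAA
RISK_LEVEL = 0.9

def paint_stack(size):
    return bytearray([PATTERN]) * size

def usage_ratio(stack):
    """Fraction of bytes overwritten since painting, scanning from the far end."""
    untouched = 0
    for byte in reversed(stack):
        if byte != PATTERN:
            break
        untouched += 1
    return (len(stack) - untouched) / len(stack)

def monitor(stack):
    if usage_ratio(stack) >= RISK_LEVEL:
        return "fail_safe"       # e.g. shed tasks, log, reset the controller
    return "ok"

stack = paint_stack(100)
stack[:50] = bytes([0x11]) * 50          # simulate 50% peak usage
assert monitor(stack) == "ok"
stack[:95] = bytes([0x22]) * 95          # simulate 95% usage: risk level hit
assert monitor(stack) == "fail_safe"
```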
  • Patent number: 9130820
    Abstract: An application framework including different application programming interfaces (APIs) is described which performs a variety of mobile device functions in response to API calls from applications. For example, in response to relatively simple API calls made by applications the application framework manages the complex tasks associated with invitations and matchmaking. By way of example, the details of complex transactions such as establishing peer-to-peer connections between mobile devices may be transparent to the application developer, thereby simplifying the application design process. The application framework may include an application daemon for communicating with a first set of services and an applications services module for communicating with a separate set of services. The application framework may also include a cache for caching data for each of the services based on different cache management policies driven by each of the services.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: September 8, 2015
    Assignee: Apple Inc.
    Inventors: Mike Lampell, Nathan Taylor, Christina Elizabeth Warren, Francois-Yves Bertrand, Gabriel Belinsky, Alan Dale Berfield
  • Publication number: 20150113356
    Abstract: A system-in-package module with memory includes a non-memory chip, a substrate, and a memory chip. The non-memory chip has a first portion and a second portion. The substrate has a window and the substrate is electrically connected to the second portion of the non-memory chip. The memory chip is placed into the window of the substrate to electrically connect the first portion of the non-memory chip, and there is no direct metal connection between the memory chip and the substrate.
    Type: Application
    Filed: October 23, 2014
    Publication date: April 23, 2015
    Inventors: Weng-Dah Ken, Chao-Chun Lu
  • Publication number: 20150106569
    Abstract: By arranging dies in a stack such that failed cores are aligned with adjacent good cores, fast connections between good cores and cache of failed cores can be implemented. Cache can be allocated according to a priority assigned to each good core, by latency between a requesting core and available cache, and/or by load on a core.
    Type: Application
    Filed: October 10, 2013
    Publication date: April 16, 2015
    Applicant: International Business Machines Corporation
    Inventors: Edgar R. Cordero, Anand Haridass, Subrat K. Panda, Saravanan Sethuraman, Diyanesh Babu Chinnakkonda Vidyapoornachary
  • Patent number: 8966180
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: February 24, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
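In scalar Python form, the gather/scatter primitives this abstract refers to look as follows: gather reads only the useful elements through an index vector, and scatter writes results back through the same indices. The function names follow common usage, not the patent; the hardware's value lies in doing these fine-grained accesses with address calculation, shuffling, and format conversion off-loaded from the core.

```python
def gather(memory, indices):
    return [memory[i] for i in indices]          # fine-grained, indexed reads

def scatter(memory, indices, values):
    for i, v in zip(indices, values):            # fine-grained, indexed writes
        memory[i] = v

mem = list(range(10, 20))          # [10, 11, ..., 19]
idx = [7, 0, 3]                    # unstructured access pattern
vals = gather(mem, idx)
assert vals == [17, 10, 13]
scatter(mem, idx, [v * 2 for v in vals])
assert mem[7] == 34 and mem[0] == 20 and mem[3] == 26
```

Fetching whole cache lines for such a pattern would waste most of the off-chip bandwidth; touching only the indexed elements is the efficiency the abstract claims.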
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu