User Data Cache And Instruction Data Cache Patents (Class 711/123)
-
Publication number: 20150089140
Abstract: In a write by-peer-reference, a storage device client writes a data block to a target storage device in the storage system by sending a write request to the target storage device, the write request specifying information used to obtain the data block from a source storage device in the storage system. The target storage device sends a read request to the source storage device for the data block. The source storage device sends the data block to the target storage device, which then writes the data block to the target storage device. The data block is thus written to the target storage device without the storage device client transmitting the data block itself to the target storage device.
Type: Application
Filed: September 18, 2014
Publication date: March 26, 2015
Inventors: Vijay Sridharan, Alexander Tsukerman, Jia Shi, Kothanda Umamageswaran
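The write-by-peer-reference flow in this abstract can be sketched as three cooperating roles. This is a hypothetical illustration of the protocol's data movement only; the class and method names are assumptions, not part of the patent.

```python
class StorageDevice:
    """Hypothetical storage device holding data blocks by block id."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}  # block_id -> data

    def read(self, block_id):
        return self.blocks[block_id]

    def write_by_peer_reference(self, block_id, source):
        # The target pulls the block from the source device itself,
        # so the client never transmits the data over its own link.
        self.blocks[block_id] = source.read(block_id)


class Client:
    """Client that writes by reference instead of sending the payload."""

    def write_by_reference(self, target, source, block_id):
        # The write request carries only (source, block_id), not the data.
        target.write_by_peer_reference(block_id, source)
```

In this sketch the client's request is metadata-only; the bulk transfer happens directly between the two storage devices, which is the bandwidth saving the abstract describes.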
-
Publication number: 20150081975
Abstract: Method, process, and apparatus to efficiently store, read, and/or process syllables of word data. A portion of a data word, which includes multiple syllables, may be read by a computer processor. The processor may read a first syllable of the data word from a first memory. The processor may read a second syllable of the data word from a second portion of memory. The second syllable may include bits which are less critical than the bits of the first syllable. The second memory may be distinct from the first memory based on one or more physical attributes.
Type: Application
Filed: November 26, 2014
Publication date: March 19, 2015
Inventors: Lutz Naethke, Axel Borkowski, Bert Bretschneider, Kyriakos A. Stavrou, Rainer Theuer
-
Patent number: 8977816
Abstract: A cache and disk management method is provided. In the cache and disk management method, a command to delete all valid data stored in a cache, or specific data corresponding to a part of the valid data, may be transmitted to a plurality of member disks. That is, all of the valid data or the specific data may exist in the cache only, and may be deleted from the plurality of member disks. Accordingly, the plurality of member disks may secure more space, an internal copy overhead may be reduced, and more particularly, solid state disks may achieve better performance.
Type: Grant
Filed: December 23, 2009
Date of Patent: March 10, 2015
Assignee: OCZ Storage Solutions Inc.
Inventor: Soo Gil Jeong
-
Patent number: 8977815
Abstract: A processing pipeline 6, 8, 10, 12 is provided with a main query stage 20 and a fetch stage 22. A buffer 24 stores program instructions which have missed within a cache memory 14. Query generation circuitry within the main query stage 20 and within a buffer query stage 26 serves to concurrently generate a main query request and a buffer query request sent to the cache memory 14. The cache memory returns a main query response and a buffer query response. Arbitration circuitry 28 controls multiplexers 30, 32 and 34 to direct the program instruction at the main query stage 20, and the program instruction stored within the buffer 24 and the buffer query stage 26, to pass either to the fetch stage 22 or to the buffer 24. The multiplexer 30 can also select a new instruction to be passed to the main query stage 20.
Type: Grant
Filed: November 29, 2010
Date of Patent: March 10, 2015
Assignee: ARM Limited
Inventors: Frode Heggelund, Rune Holm, Andreas Due Engh-Halstvedt, Edvard Feilding
-
Patent number: 8949367
Abstract: Techniques for cooperative storage management are described. According to embodiments described herein, a storage server stores backup data for a plurality of client systems, including a first client system and one or more other client systems. The storage server receives a request from the first client system to store new backup data. In response to the request from the first client system, the storage server determines which backup data to delete to make space for the new backup data based, at least in part, on retention duration goals associated with the one or more other client systems. The retention duration goals indicate that the client desires to be able to recover data at least as old as a specified age. The storage server may also determine which backup data to delete based, at least in part, on respective minimum space parameter values for the other client systems.
Type: Grant
Filed: October 31, 2011
Date of Patent: February 3, 2015
Assignee: Oracle International Corporation
Inventors: Steven Wertheimer, Muthu Olagappan, Raymond Guzman, William Fisher, Anand Beldalker, Venky Nagaraja Rao, Chris Plakyda, Debjyoti Roy, Senad Dizdar
-
Patent number: 8909867
Abstract: The present invention provides a method and apparatus for allocating space in a unified cache. The method may include partitioning the unified cache into a first portion of lines that only store copies of instructions retrieved from a memory and a second portion of lines that only store copies of data retrieved from the memory.
Type: Grant
Filed: August 24, 2010
Date of Patent: December 9, 2014
Assignee: Advanced Micro Devices, Inc.
Inventor: William L. Walker
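The partitioning this abstract describes — one region of a unified cache reserved for instruction lines, another for data lines — can be modeled in a few lines. This is a simplified sketch under assumed names and a fixed 50/50 split, not the patented allocation method.

```python
class PartitionedUnifiedCache:
    """Unified cache statically split into instruction-only and data-only lines."""

    def __init__(self, num_lines, instr_fraction=0.5):
        split = int(num_lines * instr_fraction)
        # Line indices [0, split) hold only instructions; [split, num_lines) only data.
        self.instr_lines = dict.fromkeys(range(0, split))
        self.data_lines = dict.fromkeys(range(split, num_lines))

    def allocate(self, line_addr, payload, is_instruction):
        # Direct-mapped placement within the chosen partition; an access can
        # never evict a line belonging to the other partition.
        if is_instruction:
            slot = line_addr % len(self.instr_lines)
            self.instr_lines[slot] = payload
        else:
            slot = len(self.instr_lines) + line_addr % len(self.data_lines)
            self.data_lines[slot] = payload
        return slot
```

The point of the split is isolation: instruction fetches and data accesses each get a guaranteed share of the cache, so one stream cannot thrash the other.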
-
Patent number: 8898181
Abstract: Technologies are described herein for integrating external data from an external system into a client system. A subscription file is selected. The subscription file may include a read method and a query method. The read method may define fields of a client cache operating on the client system. The query method may be executed to retrieve, from the external system, field values corresponding to at least a subset of the fields. Upon executing the query method, the read method may also be executed to retrieve, from the external system, additional field values corresponding to a remaining subset of the fields that were not retrieved by executing the query method. The client cache is populated with the field values and the additional field values according to the fields.
Type: Grant
Filed: June 22, 2010
Date of Patent: November 25, 2014
Assignee: Microsoft Corporation
Inventors: David Koronthaly, Rolando Jimenez-Salgado, Sundaravadivelan Paranthaman, Arshish Cyrus Kapadia, Wei-Lun Lo
-
Publication number: 20140337581
Abstract: A system and method for efficient scheduling of dependent load instructions. A processor includes both an execution core and a scheduler that issues instructions to the execution core. The execution core includes a load-store unit (LSU). The scheduler determines a first condition is satisfied, wherein the first condition comprises result data for a first load instruction is predicted eligible for LSU-internal forwarding. The scheduler determines a second condition is satisfied, wherein the second condition comprises a second load instruction younger in program order than the first load instruction is dependent on the first load instruction. In response to each of the first condition and the second condition being satisfied, the scheduler can issue the second load instruction earlier than it otherwise would. The LSU internally forwards the received result data from the first load instruction to address generation logic for the second load instruction.
Type: Application
Filed: May 9, 2013
Publication date: November 13, 2014
Applicant: Apple Inc.
Inventor: Stephan G. Meier
-
Patent number: 8886756
Abstract: A user equipment (UE) and an application server exchange data in a wireless communications network. The UE is configured to connect to both a wireless local area network (WLAN) and a wireless wide area network (WWAN). The application server is positioned within the WWAN behind a WWAN firewall. The WLAN includes a WLAN network address translation (NAT) component and firewall such that the UE and the application server do not have a persistent data connection over the WLAN. In an embodiment, the application server can open a WWAN firewall to permit uploads from the UE over the WLAN. In another embodiment, the UE can open the WLAN firewall and/or NAT to permit downloads from the application server over the WLAN. In another embodiment, the application server or UE can upload files to a server outside the WLAN and WWAN firewalls and send a link to the uploaded files for retrieval.
Type: Grant
Filed: May 13, 2011
Date of Patent: November 11, 2014
Assignee: QUALCOMM Incorporated
Inventors: Kirankumar Anchan, Bongyong Song
-
Publication number: 20140310469
Abstract: An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.
Type: Application
Filed: April 11, 2013
Publication date: October 16, 2014
Applicant: Apple Inc.
Inventors: Erik P. Machnicki, Harshavardhan Kaushikkar, Shinye Shiu
-
Publication number: 20140304474
Abstract: The described embodiments comprise a computing device with a first processor core and a second processor core. In some embodiments, during operations, the first processor core receives, from the second processor core, an indication of a memory location and a flag. The first processor core then stores the flag in a first cache line in a cache in the first processor core and stores the indication of the memory location separately in a second cache line in the cache. Upon encountering a predetermined result when evaluating a condition for the indicated memory location, the first processor core updates the flag in the first cache line. Based on the update of the flag, the first processor core causes the second processor core to perform an operation.
Type: Application
Filed: April 4, 2013
Publication date: October 9, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Steven K. Reinhardt, Marc S. Orr, Bradford M. Beckmann
-
Publication number: 20140289471
Abstract: A coherence decoupling buffer. In accordance with a first embodiment, a coherence decoupling buffer is for storing tag information of cache lines evicted from a plurality of cache memories. A coherence decoupling buffer may be free of value information of the plurality of cache memories. A coherence decoupling buffer may also be combined with a coherence memory.
Type: Application
Filed: June 3, 2014
Publication date: September 25, 2014
Applicant: Intellectual Venture Funding LLC
Inventor: Guillermo J. Rozas
-
Publication number: 20140281245
Abstract: Execution of a store instruction to modify an instruction at a memory location identified by a memory address is requested. A cache controller stores the memory address and the modified data in an associative memory coupled to a data cache and an instruction cache. In addition, the modified data is stored in a second level cache without invalidating the memory location associated with the instruction cache.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Wen-Tzer Thomas Chen, JR., Robert H. Bell, JR., Bradly G. Frey
-
Publication number: 20140281244
Abstract: An exemplary storage apparatus of the invention includes storage devices for storing data of block I/O commands and file I/O commands and a controller including a block cache area and a file cache area. The controller creates block I/O commands from file I/O commands and accesses the storage devices in accordance with the created block I/O commands. In a case where the file cache area is lacking an area to cache first data of a received first file I/O command, the controller chooses one of a first cache method that newly creates a free area in the file cache area to cache the first data in the file cache area and a second cache method that caches the first data in the block cache area without caching the first data in the file cache area.
Type: Application
Filed: November 14, 2012
Publication date: September 18, 2014
Applicant: HITACHI, LTD.
Inventors: Shintaro Kudo, Yusuke Nonaka, Masanori Takada
-
Patent number: 8839025
Abstract: The systems and methods described herein may provide a flush-retire instruction for retiring “bad” cache locations (e.g., locations associated with persistent errors) to prevent their allocation for any further accesses, and a flush-unretire instruction for unretiring cache locations previously retired. These instructions may be implemented as hardware instructions of a processor. They may be executable by processes executing in a hyper-privileged state, without the need to quiesce any other processes. The flush-retire instruction may atomically flush a cache line implicated by a detected cache error and set a lock bit to disable subsequent allocation of the corresponding cache location. The flush-unretire instruction may atomically flush an identified cache line (if valid) and clear the lock bit to re-enable subsequent allocation of the cache location. Various bits in the encodings of these instructions may identify the cache location to be retired or unretired in terms of the physical cache structure.
Type: Grant
Filed: September 30, 2011
Date of Patent: September 16, 2014
Assignee: Oracle International Corporation
Inventors: Ramaswamy Sivaramakrishnan, Ali Vahidsafa, Aaron S. Wynn, Connie W. Cheung
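The flush-retire / flush-unretire semantics in this abstract reduce to a lock bit per cache location that gates allocation. The following is an illustrative software model of that behavior only; the function names and fields are assumptions, not the actual hardware instruction encodings.

```python
class CacheLocation:
    """Model of one physical cache location with a retire lock bit."""

    def __init__(self):
        self.valid = False
        self.data = None
        self.locked = False  # True = retired: allocation is disabled


def flush_retire(loc):
    # Flush any cached line and atomically set the lock bit, so a
    # location with a persistent error is never allocated again.
    loc.valid, loc.data = False, None
    loc.locked = True


def flush_unretire(loc):
    # Flush the line if valid and clear the lock bit, re-enabling allocation.
    loc.valid, loc.data = False, None
    loc.locked = False


def allocate(loc, data):
    if loc.locked:
        return False  # retired location: allocation refused
    loc.valid, loc.data = True, data
    return True
```

In hardware both steps of each instruction happen atomically; here they are sequential, which is sufficient to show why retired locations drop out of the allocation pool.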
-
Patent number: 8812808
Abstract: A counter architecture and a corresponding method are provided for estimating a profitability value of DVFS for a unit of work running on a computing device. The counter architecture and the corresponding method are arranged for dividing total execution time for executing a unit of work on the computing device into a pipelined fraction subject to clock frequency and a non-pipelined fraction due to off-chip memory accesses, and for estimating the DVFS profitability value from the pipelined and the non-pipelined fraction.
Type: Grant
Filed: December 10, 2010
Date of Patent: August 19, 2014
Assignee: Universiteit Gent
Inventors: Stijn Eyerman, Lieven Eeckhout
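The split this abstract describes has a simple analytical consequence: the pipelined fraction of execution time scales with clock frequency, while the off-chip-memory fraction does not, so the runtime at a new frequency can be predicted from the two measured fractions. The formula below is an assumption following from the abstract's decomposition, not the patented counter architecture itself.

```python
def predicted_runtime(t_pipelined, t_nonpipelined, f_current, f_new):
    """Estimate runtime after a DVFS change from a two-way time split.

    t_pipelined    -- measured time subject to the clock (scales with frequency)
    t_nonpipelined -- measured time stalled on off-chip memory (frequency-independent)
    """
    # Pipelined work stretches or shrinks inversely with the clock;
    # memory-bound time is unaffected by the core frequency.
    return t_pipelined * (f_current / f_new) + t_nonpipelined
```

For a memory-bound workload (large `t_nonpipelined`), lowering the frequency costs little runtime while saving power, which is exactly the profitability signal such counters are meant to expose.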
-
Patent number: 8806136
Abstract: A Not-AND flash (Nandflash) controller and a data transmission method with the Nandflash controller are provided. The Nandflash controller includes a parameter configuration device, configured to receive an operation command from outside, wherein the operation command indicates a current transmission type, the number of times needed for transmitting data whose size is the same as that of a buffer in the Nandflash, and command parameters used by each execution; and a transmission controlling device, configured to transmit data of a precoded size to/from the Nandflash during each data transmission, according to the current transmission type and the command parameters used by each execution, for the number of times indicated by the operation command. The controller and method advantageously avoid configuring a command for the next operation each time the data of the precoded size is transmitted, save time and clock resources, and greatly improve transmission efficiency.
Type: Grant
Filed: July 7, 2011
Date of Patent: August 12, 2014
Assignee: Shanghai Actions Semiconductor Co., Ltd.
Inventors: Yong Zhang, Jiangxun Tang
-
Patent number: 8775555
Abstract: Various embodiments of systems and methods for REST interface interaction with expectation management are described herein. A message request is received for accessing content of a resource. Further, a check is made to determine whether the message request includes a structure-expected in a header of the message request. Also, a check is made to determine whether the structure-expected matches with a structure of the resource if the message request includes the structure-expected. Furthermore, the message request is executed if the structure-expected matches with the resource structure. Then, a message response is returned with a structure-resulted in a header of the message response based on the execution of the message request. If the message request does not include the structure-expected, the message request is executed and the message response is returned with the structure-resulted based on the execution of the message request.
Type: Grant
Filed: May 13, 2011
Date of Patent: July 8, 2014
Assignee: SAP AG
Inventor: Sebastian Steinhauer
-
Publication number: 20140156933
Abstract: Systems, apparatuses, and methods for improving TM throughput using a TM region indicator (or color) are described. Through the use of TM region indicators, younger TM regions can have their instructions retired while waiting for older TM regions to commit.
Type: Application
Filed: November 30, 2012
Publication date: June 5, 2014
Inventors: Omar M. Shaikh, Ravi Rajwar, Paul Caprioli, Muawya M. Al-Otoom
-
Publication number: 20140156934
Abstract: A storage apparatus includes controller modules, each configured to have a cache memory and to control a storage device, and communication channels that connect the controller modules in a mesh topology, where one controller module provides an instruction to perform data transfer in which the controller module is specified as a transfer source and another controller module is specified as a transfer destination. The instruction is provided to a controller module directly connected to the other controller modules using a corresponding one of the communication channels, and configured to perform data transfer from the cache memory of the one controller module to the cache memory of the other controller module, in accordance with the instruction.
Type: Application
Filed: September 19, 2013
Publication date: June 5, 2014
Applicant: FUJITSU LIMITED
Inventor: Sadayuki Ohyama
-
Patent number: 8700856
Abstract: According to a prior art storage subsystem, shared memories are mirrored in main memories of two processors providing redundancy. When the consistency of writing order of data is not ensured among mirrored shared memories, the processors must read only one of the mirrored shared memories to have the write order of the read data correspond among the two processors. As a result, upon reading data from the shared memories, it is necessary for a processor to read data from the main memory of the other processor, so that the overhead is increased compared to the case where the respective processors read their respective main memories. According to the storage subsystem of the present invention, a packet redirector employing a non-transparent bridge makes it possible to apply PCI Express multicast to the writing of data from the processor to the main memory, so that the order of writing data into the shared memories can be made consistent among the mirrored memories.
Type: Grant
Filed: March 23, 2012
Date of Patent: April 15, 2014
Assignee: Hitachi, Ltd.
Inventors: Katsuya Tanaka, Masanori Takada, Shintaro Kudo
-
Publication number: 20140095793
Abstract: An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volume, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively.
Type: Application
Filed: December 6, 2013
Publication date: April 3, 2014
Applicant: HITACHI, LTD.
Inventors: Atushi ISHIKAWA, Yuko Matsui
-
Patent number: 8687008
Abstract: A latency tolerant system for executing video processing operations. The system includes a host interface for implementing communication between the video processor and a host CPU, a scalar execution unit coupled to the host interface and configured to execute scalar video processing operations, and a vector execution unit coupled to the host interface and configured to execute vector video processing operations. A command FIFO is included for enabling the vector execution unit to operate on a demand driven basis by accessing the memory command FIFO. A memory interface is included for implementing communication between the video processor and a frame buffer memory. A DMA engine is built into the memory interface for implementing DMA transfers between a plurality of different memory locations and for loading the command FIFO with data and instructions for the vector execution unit.
Type: Grant
Filed: November 4, 2005
Date of Patent: April 1, 2014
Assignee: NVIDIA Corporation
Inventors: Ashish Karandikar, Shirish Gadre, Stephen D. Lew
-
Patent number: 8681169
Abstract: Systems and methods for texture processing are presented. In one embodiment a texture method includes creating a sparse texture residency translation map; performing a probe process utilizing the sparse texture residency translation map information to return a finest LOD that contains the texels for a texture lookup operation; and performing the texture lookup operation utilizing the finest LOD. In one exemplary implementation, the finest LOD is utilized as a minimum LOD clamp during the texture lookup operation. A finest LOD number indicates a minimum resident LOD, and a sparse texture residency translation map includes one finest LOD number per tile of a sparse texture. The sparse texture residency translation map can indicate a minimum resident LOD.
Type: Grant
Filed: December 31, 2009
Date of Patent: March 25, 2014
Assignee: Nvidia Corporation
Inventors: Jesse D. Hall, Jerome F. Duluk, Jr., Andrew Tao, Henry Moreton
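The probe-then-clamp sequence in this abstract can be sketched directly: the residency map stores one finest-resident LOD number per tile, and a lookup clamps its requested LOD to that minimum so it never samples a non-resident mip level. This is a hypothetical illustration; the names and the map representation are assumptions.

```python
class ResidencyMap:
    """One finest-resident LOD number per tile of a sparse texture
    (LOD 0 = full resolution; larger numbers = coarser mip levels)."""

    def __init__(self, tiles_finest_lod):
        self.finest = tiles_finest_lod  # tile_id -> finest resident LOD

    def probe(self, tile_id):
        return self.finest[tile_id]


def texture_lookup_lod(res_map, tile_id, requested_lod):
    # Use the probe result as a minimum-LOD clamp: if the requested level
    # is not resident for this tile, fall back to the finest level that is.
    return max(requested_lod, res_map.probe(tile_id))
```

A tile with only coarse levels resident thus silently degrades to those levels instead of faulting, which is what makes sparse (partially resident) textures usable during streaming.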
-
Publication number: 20140082288
Abstract: A network attached storage (NAS) caching appliance, system, and associated method of operation for caching a networked file system. Still further, some embodiments provide for a cache system that implements a multi-tiered, policy-influenced block replacement algorithm.
Type: Application
Filed: September 18, 2013
Publication date: March 20, 2014
Applicant: NetApp, Inc.
Inventors: Derek Beard, Ghassan Yammine, Gregory Dahl
-
Publication number: 20140047185
Abstract: In one embodiment, a computing system includes a cache and a cache manager. The cache manager is able to receive data, write the data to a first portion of the cache, write the data to a second portion of the cache, and delete the data from the second portion of the cache when the data in the first portion of the cache is flushed.
Type: Application
Filed: August 7, 2012
Publication date: February 13, 2014
Applicant: DELL PRODUCTS L.P.
Inventors: Scott David Peterson, Phillip E. Krueger
-
Publication number: 20130339610
Abstract: Embodiments relate to tracking cache lines. An aspect of embodiments includes performing an operation by a processor. Another aspect of embodiments includes fetching a cache line based on the operation. Yet another aspect of embodiments includes storing in an instruction address register file at least one of (i) an operation identifier identifying the operation and (ii) a memory location identifier identifying a level of memory from which the cache line is populated.
Type: Application
Filed: June 14, 2012
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Adam B. Collura, Brian R. Prasky
-
Patent number: 8595439
Abstract: A device may execute application code in a first cache environment to obtain a first result. The first cache environment may be based on a first cache configuration that is associated with the application code. The device may determine a second cache configuration based on the first result and execute the application code in a second cache environment to obtain a second result. The second cache environment may be based on the second cache configuration. The device may select one of the first cache configuration or the second cache configuration as a selected cache configuration for the application code based on comparing the first result with the second result, and may configure the one or more caches based on the selected one of the first cache configuration or the second cache configuration.
Type: Grant
Filed: April 17, 2012
Date of Patent: November 26, 2013
Assignee: The MathWorks, Inc.
Inventors: David Koh, Murat Belge, James K. Weixel
-
Patent number: 8549208
Abstract: A cache memory having enhanced performance and security features is provided. The cache memory includes a data array storing a plurality of data elements, a tag array storing a plurality of tags corresponding to the plurality of data elements, and an address decoder which permits dynamic memory-to-cache mapping to provide enhanced security of the data elements, as well as enhanced performance. The address decoder receives a context identifier and a plurality of index bits of an address passed to the cache memory, and determines whether a matching value in a line number register exists. The line number registers allow for dynamic memory-to-cache mapping, and their contents can be modified as desired. Methods for accessing and replacing data in a cache memory are also provided, wherein a plurality of index bits and a plurality of tag bits at the cache memory are received.
Type: Grant
Filed: December 8, 2009
Date of Patent: October 1, 2013
Assignee: Teleputers, LLC
Inventors: Ruby B. Lee, Zhenghong Wang
-
Publication number: 20130254487
Abstract: According to a prior art storage subsystem, shared memories are mirrored in main memories of two processors providing redundancy. When the consistency of writing order of data is not ensured among mirrored shared memories, the processors must read only one of the mirrored shared memories to have the write order of the read data correspond among the two processors. As a result, upon reading data from the shared memories, it is necessary for a processor to read data from the main memory of the other processor, so that the overhead is increased compared to the case where the respective processors read their respective main memories. According to the storage subsystem of the present invention, a packet redirector employing a non-transparent bridge makes it possible to apply PCI Express multicast to the writing of data from the processor to the main memory, so that the order of writing data into the shared memories can be made consistent among the mirrored memories.
Type: Application
Filed: March 23, 2012
Publication date: September 26, 2013
Inventors: Katsuya Tanaka, Masanori Takada, Shintaro Kudo
-
Patent number: 8533422
Abstract: An apparatus of an aspect includes a prefetch cache line address predictor to receive a cache line address and to predict a next cache line address to be prefetched. The next cache line address may indicate a cache line having at least 64-bytes of instructions. The prefetch cache line address predictor may have a cache line target history storage to store a cache line target history for each of multiple most recent corresponding cache lines. Each cache line target history may indicate whether the corresponding cache line had a sequential cache line target or a non-sequential cache line target. The cache line address predictor may also have a cache line target history predictor. The cache line target history predictor may predict whether the next cache line address is a sequential cache line address or a non-sequential cache line address, based on the cache line target history for the most recent cache lines.
Type: Grant
Filed: September 30, 2010
Date of Patent: September 10, 2013
Assignee: Intel Corporation
Inventors: Samantika Subramaniam, Aamer Jaleel, Simon C. Steely, Jr.
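The sequential-versus-non-sequential target history this abstract describes can be modeled with a small predictor: per line, remember whether the last observed successor was the next sequential line or a jump target, and prefetch accordingly. This is a simplified single-entry-per-line sketch under assumed names, not the patented predictor.

```python
class LinePrefetchPredictor:
    """Next-cache-line predictor keyed on per-line target history."""

    LINE_BYTES = 64  # the abstract's example line size

    def __init__(self):
        # line_addr -> last observed non-sequential target,
        # or None when the last observed target was sequential.
        self.history = {}

    def record(self, line_addr, next_line_addr):
        sequential = (next_line_addr == line_addr + self.LINE_BYTES)
        self.history[line_addr] = None if sequential else next_line_addr

    def predict(self, line_addr):
        # With no history (or sequential history), prefetch the next
        # sequential line; otherwise replay the remembered jump target.
        target = self.history.get(line_addr)
        return target if target is not None else line_addr + self.LINE_BYTES
```

The real design keeps a history per recent line and a separate history predictor; the sketch collapses that to one remembered outcome per line, which is enough to show why sequential runs and stable branches both prefetch well.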
-
Publication number: 20130219123
Abstract: A multi-core processor comprises a level 1 (L1) cache and two independent processor cores each sharing the L1 cache.
Type: Application
Filed: December 13, 2012
Publication date: August 22, 2013
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: SAMSUNG ELECTRONICS CO., LTD.
-
Publication number: 20130198458
Abstract: In one embodiment, a processor can operate in multiple modes, including a direct execution mode and an emulation execution mode. More specifically, the processor may operate in a partial emulation model in which source instruction set architecture (ISA) instructions are directly handled in the direct execution mode and translated code generated by an emulation engine is handled in the emulation execution mode. Embodiments may also provide for efficient transitions between the modes using information that can be stored in one or more storages of the processor and elsewhere in a system. Other embodiments are described and claimed.
Type: Application
Filed: March 5, 2013
Publication date: August 1, 2013
Inventors: SEBASTIAN WINKEL, KOICHI YAMADA, SURESH SRINIVAS, JAMES E. SMITH
-
Patent number: 8499302
Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
Type: Grant
Filed: September 6, 2011
Date of Patent: July 30, 2013
Assignee: NetLogic Microsystems, Inc.
Inventor: David T. Hass
-
Publication number: 20130191597
Abstract: A cache device may include a first storage unit configured to store first cache data; a second storage unit configured to store a cache file that stores a copy of the first cache data as second cache data; a reading unit configured to select and read out one of the first cache data, which has been stored in the first storage unit, and the second cache data, which has been stored in the cache file stored in the second storage unit, in response to a reference request from outside; and an instructing unit configured to determine the probability of a future referencing request based on the frequency of past referencing requests, the instructing unit being configured to instruct that either the first cache data or the second cache data is to be selected and read out based on the probability.
Type: Application
Filed: January 16, 2013
Publication date: July 25, 2013
Applicant: YOKOGAWA ELECTRIC CORPORATION
Inventor: YOKOGAWA ELECTRIC CORPORATION
-
Patent number: 8458403
Abstract: A cache system to compare memory transactions while facilitating checkpointing and rollback is provided. The system includes at least one processor core including at least one cache operating in write-through mode, at least two checkpoint caches operating in write-back mode, a comparison/checkpoint logic, and a main memory. The at least two checkpoint caches are communicatively coupled to the at least one cache operating in write-through mode. The comparison/checkpoint logic is communicatively coupled to the at least two checkpoint caches. The comparison/checkpoint logic compares memory transactions stored in the at least two checkpoint caches responsive to an initiation of a checkpointing. The main memory is communicatively coupled to at least one of the at least two checkpoint caches.
Type: Grant
Filed: November 24, 2009
Date of Patent: June 4, 2013
Assignee: Honeywell International Inc.
Inventors: David J. Kessler, David R. Bueno, David Paul Campagna
-
Patent number: 8433850
Abstract: Methods and apparatus for instruction restarts and inclusion in processor micro-op caches are disclosed. Embodiments of micro-op caches have way storage fields to record the instruction-cache ways storing corresponding macroinstructions. Instruction-cache in-use indications associated with the instruction-cache lines storing the instructions are updated upon micro-op cache hits. In-use indications can be located using the recorded instruction-cache ways in micro-op cache lines. Victim-cache deallocation micro-ops are enqueued in a micro-op queue after micro-op cache miss synchronizations, responsive to evictions from the instruction-cache into a victim-cache. Inclusion logic also locates and evicts micro-op cache lines corresponding to the recorded instruction-cache ways, responsive to evictions from the instruction-cache.
Type: Grant
Filed: December 2, 2008
Date of Patent: April 30, 2013
Assignee: Intel Corporation
Inventors: Lihu Rappoport, Chen Koren, Franck Sala, Ilhyun Kim, Lior Libis, Ron Gabor, Oded Lempel
-
Patent number: 8417890
Abstract: A method, system, and computer program product for managing cache coherency for self-modifying code in an out-of-order execution system are disclosed. A program-store-compare (PSC) tracking manager identifies a set of addresses of pending instructions in an address table that match an address requested to be invalidated by a cache invalidation request. The PSC tracking manager receives a fetch address register identifier associated with a fetch address register for the cache invalidation request. The fetch address register is associated with the set of addresses and is a PSC tracking resource reserved by a load store unit (LSU) to monitor an exclusive fetch for a cache line in a high level cache. The PSC tracking manager determines that the set of entries in an instruction line address table associated with the set of addresses is invalid and instructs the LSU to free the fetch address register.
Type: Grant
Filed: June 9, 2010
Date of Patent: April 9, 2013
Assignee: International Business Machines Corporation
Inventors: Christian Jacobi, Brian R. Prasky, Aaron Tsai
-
Publication number: 20130046937
Abstract: A computer implemented method for use by a transaction program for managing memory access to a shared memory location for transaction data of a first thread, the shared memory location being accessible by the first thread and a second thread. A string of instructions to complete a transaction of the first thread is executed, beginning with one instruction of the string of instructions. It is determined whether the one instruction is part of an active atomic instruction group (AIG) of instructions associated with the transaction of the first thread. A cache structure and a transaction table which together provide for entries in an active mode for the AIG are located if the one instruction is part of an active AIG. The next instruction is executed under a normal execution mode in response to determining that the one instruction is not part of an active AIG.
Type: Application
Filed: October 22, 2012
Publication date: February 21, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: International Business Machines Corporation
-
Patent number: 8370850
Abstract: A number of applications can be run by the computing system. Such applications can execute independently from each other and can also each independently manage a corresponding set of content stored on a local storage device (LSD). One of the advantages presented by the invention is the ability of the LSD to inform one application of the content made available on the LSD by another one of the applications even though the applications have no relationship to each other. In this way, a synergy between the independent applications can be achieved providing a co-operative environment that can result in, for example, improved operation of the computing system, improved resource (i.e., memory, bandwidth, processing) allocation and use, and other factors.
Type: Grant
Filed: February 25, 2008
Date of Patent: February 5, 2013
Assignee: SanDisk IL Ltd.
Inventors: Alain Nochimowski, Amir Mosek
-
Patent number: 8364896
Abstract: A method of configuring a unified cache includes identifying unified cache way assignment combinations for an application unit. Each combination has an associated error rate. A combination is selected based at least in part on the associated error rate. The unified cache is configured in accordance with the selected combination for execution of the application unit.
Type: Grant
Filed: September 20, 2008
Date of Patent: January 29, 2013
Assignee: Freescale Semiconductor, Inc.
Inventor: William C. Moyer
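The selection step — picking one way-assignment combination from several, using its associated error rate — reduces to a minimization. The (assignment, error_rate) pairing and the field names below are illustrative assumptions; the patent does not prescribe a data layout.

```python
def select_way_assignment(combinations):
    """Hedged sketch: pick the unified-cache way-assignment combination
    with the lowest associated error rate. `combinations` is a list of
    (assignment, error_rate) pairs; ties go to the earlier entry."""
    return min(combinations, key=lambda pair: pair[1])[0]
```

For example, given three candidate splits of an 8-way unified cache between instruction and data ways, the function returns whichever split was measured (or modeled) to have the lowest error rate for the application unit.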
-
Publication number: 20130024619
Abstract: A method for translating instructions for a processor. The method includes accessing a guest instruction and performing a first level translation of the guest instruction using a first level conversion table. The method further includes outputting a resulting native instruction when the first level translation proceeds to completion. A second level translation of the guest instruction is performed using a second level conversion table when the first level translation does not proceed to completion, wherein the second level translation further processes the guest instruction based upon a partial translation from the first level conversion table. The resulting native instruction is output when the second level translation proceeds to completion.
Type: Application
Filed: January 27, 2012
Publication date: January 24, 2013
Applicant: SOFT MACHINES, INC.
Inventor: Mohammad Abdallah
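The two-level lookup the abstract walks through can be modeled with two tables: the first either completes the translation or yields a partial result, and the second finishes translations keyed by that partial result. The table shapes and the `("partial", tag)` marker are assumptions made for the sketch, not the patent's encoding.

```python
def translate(guest_instr, first_level, second_level):
    """Hedged sketch of two-level guest-to-native translation.
    first_level maps a guest instruction either to a complete native
    instruction or to a ("partial", tag) marker; second_level finishes
    partial translations keyed by (tag, guest instruction)."""
    result = first_level.get(guest_instr)
    if result is None:
        raise KeyError(f"no translation for {guest_instr!r}")
    if isinstance(result, tuple) and result[0] == "partial":
        # First level did not proceed to completion; the second level
        # further processes the instruction based on the partial result.
        return second_level[(result[1], guest_instr)]
    # First level proceeded to completion: output the native instruction.
    return result
```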
-
Patent number: 8352679
Abstract: Techniques are generally described for methods, systems, data processing devices and computer readable media configured to decrypt data to be stored in a data cache when a particular condition indicative of user authentication or data security has occurred. The described techniques may also be arranged to terminate the storage of decrypted data in the cache when a particular condition that may compromise the security of the data is detected. The described techniques may further be arranged to erase the decrypted data stored in the cache when a particular condition that may compromise the security of the data is detected.
Type: Grant
Filed: April 29, 2009
Date of Patent: January 8, 2013
Assignee: Empire Technology Development LLC
Inventors: Andrew Wolfe, Thomas Martin Conte
-
Patent number: 8353026
Abstract: A credential caching system includes receiving a set of authentication credentials, storing the set of authentication credentials in a credential cache memory, wherein the credential cache memory is coupled with a management controller, and supplying the set of authentication credentials for automatic authentication during a reset or reboot. In the event of a security breach, the credential caching system clears the set of authentication credentials from the credential cache memory so that the set of authentication credentials may no longer be used for a reset or reboot.
Type: Grant
Filed: October 23, 2008
Date of Patent: January 8, 2013
Assignee: Dell Products L.P.
Inventors: Muhammed K. Jaber, Mukund P. Khatri, Kevin T. Marks, Don Charles McCall
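The store / supply-on-reboot / clear-on-breach lifecycle described here is simple enough to capture directly. The class shape and the convention that a cleared cache returns `None` are assumptions for this sketch; the patent's cache sits in hardware coupled to a management controller.

```python
class CredentialCache:
    """Hedged sketch of the credential-cache lifecycle: store a set of
    credentials, supply it automatically at reset/reboot, clear it on a
    security breach so it can no longer be used."""

    def __init__(self):
        self._credentials = None

    def store(self, credentials):
        self._credentials = dict(credentials)

    def supply_on_reboot(self):
        # Supply cached credentials for automatic authentication;
        # returns None if they were cleared by a breach.
        return self._credentials

    def on_security_breach(self):
        # Clear so the credentials cannot be used for a reset or reboot.
        self._credentials = None
```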
-
Patent number: 8326824
Abstract: A method for estimating contents of a cache determines table descriptors referenced by a query, and scans each page header stored in the cache for the table descriptor. If the table descriptor matches any of the referenced table descriptors, a page count value corresponding to the matching referenced table descriptor is increased. Alternatively, a housekeeper thread periodically performs the scan and stores the page count values in a central lookup table accessible by threads during a query run. Alternatively, each thread independently maintains a hash table with page count entries corresponding to table descriptors for each table in the database system. A thread increases or decreases the page count value when copying or removing pages from the cache. A page count value for each referenced table descriptor is determined from a sum of the values in the hash tables. A master thread performs bookkeeping and prevents hash table overflows.
Type: Grant
Filed: May 28, 2010
Date of Patent: December 4, 2012
Assignee: International Business Machines Corporation
Inventors: Vatsalya Agrawal, Vivek Bhaskar, Saibaba Konduru, Ahmed Shareef
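The first variant the abstract describes — scan every cached page header and count pages per referenced table descriptor — is a straightforward tally. The page-header representation (`{"table_desc": ...}`) is an assumption; in the patent the descriptor lives in an on-page header, not a Python dict.

```python
def estimate_cached_pages(cache_pages, referenced_descriptors):
    """Hedged sketch of the scan variant: count how many cached pages
    belong to each table descriptor referenced by a query."""
    counts = {desc: 0 for desc in referenced_descriptors}
    for page in cache_pages:
        desc = page["table_desc"]
        if desc in counts:
            # Page header matches a referenced descriptor: bump its count.
            counts[desc] += 1
    return counts
```

The hash-table variant in the abstract trades this full scan for per-thread counters updated incrementally as pages enter and leave the cache, summed on demand.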
-
Patent number: 8316186
Abstract: A method of configuring a cache includes identifying a plurality of cache configurations of a configurable cache for a processor-executable application unit. Each configuration has an associated error rate. A selected configuration is selected based at least in part on the associated error rate. The configurable cache is configured in accordance with the selected configuration for execution of the application unit.
Type: Grant
Filed: September 20, 2008
Date of Patent: November 20, 2012
Assignee: Freescale Semiconductor, Inc.
Inventor: William C. Moyer
-
Patent number: 8316187
Abstract: Disclosed is a cache memory, design structure, and corresponding method for improving cache performance comprising one or more cache lines of equal size, each cache line adapted to store a cache block of data from a main memory in response to an access request from a processor; and a predict buffer, of size equal to the size of the cache lines, configured to store a next block of data from said main memory in response to a predict-fetch signal generated using at least one previous access request.
Type: Grant
Filed: July 8, 2008
Date of Patent: November 20, 2012
Assignee: International Business Machines Corporation
Inventor: Anil Pothireddy
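The predict-buffer idea — stage the block the prediction logic expects next, so a correctly predicted access avoids a main-memory fetch — can be sketched with a purely sequential predictor. The sequential next-block policy, class names, and hit counter are assumptions; the patent only requires that the predict-fetch signal be generated from at least one previous access.

```python
class PredictBufferCache:
    """Hedged sketch: cache lines plus a one-line predict buffer that
    prefetches the next sequential block from main memory."""

    def __init__(self, main_memory):
        self.main_memory = main_memory  # list of data blocks
        self.lines = {}                 # block index -> cached block
        self.predict = None             # (block index, block) or None
        self.hits = 0

    def access(self, idx):
        if idx in self.lines:
            self.hits += 1
        elif self.predict is not None and self.predict[0] == idx:
            # The predicted block is already staged: promote it into the
            # cache without going back to main memory.
            self.lines[idx] = self.predict[1]
            self.hits += 1
        else:
            # Miss: fetch from main memory.
            self.lines[idx] = self.main_memory[idx]
        # Predict-fetch signal: stage the next sequential block.
        nxt = idx + 1
        if nxt < len(self.main_memory):
            self.predict = (nxt, self.main_memory[nxt])
        return self.lines[idx]
```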
-
Publication number: 20120272005
Abstract: Various embodiments save a plurality of log data in a hierarchical storage management system using a disk system as a primary cache with a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. The plurality of blank tapes in the secondary cache is prepared for storing the user data and the plurality of log data based on priorities. At least one of the plurality of blank tapes is selected for copying the plurality of log data and the user data from the primary cache to the secondary cache based on priorities. The plurality of log data is stored in the primary cache. The selection of at least one of the plurality of blank tapes completely filled with the plurality of log data is delayed for writing additional amounts of user data.
Type: Application
Filed: June 11, 2012
Publication date: October 25, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Norie IWASAKI, Koichi MASUDA, Tadaaki MINOURA, Tomokazu NAKAMURA, Takeshi SOHDA, Takahiro TSUDA
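The tape-selection rule — pick a tape by priority for copying data from the disk (primary) cache to the tape (secondary) cache, but defer tapes already filled entirely with log data so user data can still be written — can be sketched as a filtered maximization. The tape record fields and the higher-number-wins priority ordering are illustrative assumptions.

```python
def select_tape(tapes):
    """Hedged sketch: choose a tape for the next primary-to-secondary
    copy. Tapes completely filled with log data are deferred; among the
    rest, the highest-priority tape wins. Returns None if all tapes are
    deferred."""
    candidates = [t for t in tapes if not t["full_of_logs"]]
    if not candidates:
        return None
    return max(candidates, key=lambda t: t["priority"])
```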
-
Patent number: 8261085
Abstract: According to some implementations, methods, apparatus and systems are provided involving the use of processors having at least one core with a security component, the security component adapted to read and verify data within data blocks stored in a L1 instruction cache memory and to allow the execution of data block instructions in the core only upon the instructions being verified by the use of a cryptographic algorithm.
Type: Grant
Filed: September 26, 2011
Date of Patent: September 4, 2012
Assignee: Media Patents, S.L.
Inventor: Álvaro Fernández Gutiérrez
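The gate this abstract describes — execute an instruction block only after a cryptographic check passes — can be sketched with a digest allowlist. SHA-256, the allowlist structure, and the function names are assumptions; the patent specifies only "a cryptographic algorithm", and a hardware security component would verify in the fetch path, not in software.

```python
import hashlib

def trust(block):
    """Register a block by computing its digest (build-time step)."""
    return hashlib.sha256(block).hexdigest()

def verify_and_execute(block, trusted_digests, execute):
    """Hedged sketch of the security component: allow execution of an
    instruction block from the L1 I-cache only if its digest matches a
    trusted digest; otherwise refuse."""
    digest = hashlib.sha256(block).hexdigest()
    if digest not in trusted_digests:
        raise PermissionError("instruction block failed verification")
    return execute(block)
```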
-
Patent number: 8190951
Abstract: A data processing apparatus includes processing circuitry, a cache storage, and a replicated address storage having a plurality of entries. On detecting a cache record error, a record of a cache location avoid storage is allocated to store a cache record identifier for the accessed cache record. On detection of an entry error, use of the address indication currently stored in that accessed entry of the replicated address storage is prevented, and a command is issued to the cache location avoid storage. In response, a record of the cache location avoid storage is allocated to store the cache record identifier for the cache record of the cache storage associated with the accessed entry of the replicated address storage. Any cache record whose cache record identifier is stored in the cache location avoid storage is logically excluded from the plurality of cache records.
Type: Grant
Filed: August 20, 2009
Date of Patent: May 29, 2012
Assignee: ARM Limited
Inventor: Damien Rene Gilbert Gille
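The core mechanism — once an error is detected in a cache record, its identifier is placed in an avoid storage and that record is logically excluded from further use — can be modeled with a set of excluded record ids. The class layout is an assumption; in the patent the avoid storage is a hardware structure fed by both cache-record errors and replicated-address-storage entry errors.

```python
class CacheWithAvoidStorage:
    """Hedged sketch of a cache-location avoid storage: records whose
    identifiers are stored in it are excluded from fills and lookups."""

    def __init__(self, n_records):
        self.records = [None] * n_records   # (tag, data) or None
        self.avoid = set()                  # excluded cache record ids

    def report_error(self, record_id):
        # Allocate an avoid-storage record for the faulty cache record
        # and drop whatever it held.
        self.avoid.add(record_id)
        self.records[record_id] = None

    def usable_records(self):
        # Every record not named in the avoid storage remains usable.
        return [i for i in range(len(self.records)) if i not in self.avoid]

    def fill(self, record_id, tag, data):
        if record_id in self.avoid:
            raise ValueError("record is logically excluded")
        self.records[record_id] = (tag, data)
```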