With Look-ahead Addressing Means (EPO) Patents (Class 711/E12.004)
  • Patent number: 11921561
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: March 5, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
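The idle-tracking scheme in patent 11921561 above reduces to a small state machine: count cycles since a bank's last access and assert a sleep signal once a threshold passes. A minimal C sketch follows; the bank count, 64-cycle threshold, and all names are illustrative assumptions, not taken from the patent.

```c
/* Per-bank idle tracking: an access resets a bank's counter; each clock
 * tick increments all counters, and a bank whose counter crosses the
 * threshold is put to sleep. Constants are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BANKS       16
#define SLEEP_THRESHOLD 64   /* idle cycles before the sleep signal */

typedef struct {
    unsigned idle_cycles;
    bool     asleep;
} BankState;

static BankState banks[NUM_BANKS];

/* Called whenever a bank is read or written: wake it, reset its counter. */
void on_bank_access(int bank) {
    banks[bank].idle_cycles = 0;
    banks[bank].asleep      = false;
}

/* Called once per clock cycle by the sleep controller. */
void on_clock_tick(void) {
    for (int b = 0; b < NUM_BANKS; b++) {
        if (banks[b].asleep)
            continue;
        if (++banks[b].idle_cycles >= SLEEP_THRESHOLD) {
            banks[b].asleep = true;          /* assert the sleep signal */
            printf("bank %d -> sleep mode\n", b);
        }
    }
}

int main(void) {
    for (int cycle = 0; cycle < 100; cycle++) {
        if (cycle % 10 == 0)
            on_bank_access(0);   /* bank 0 stays busy; the rest go idle */
        on_clock_tick();
    }
    return 0;
}
```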
  • Patent number: 11914520
    Abstract: A determination can be made of a type of memory access workload for an application, including whether the memory access workload is associated with sequential read operations. The data associated with the application can then be stored in either a cache of a first type or a cache of a second type, based on the determination of whether the memory access workload for the application is associated with sequential read operations.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: February 27, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Dhawal Bavishi
  • Patent number: 11593001
    Abstract: A VPU and associated components include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators are used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer is included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA system and VPU execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller when performing dynamic region-based data movement operations.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 28, 2023
    Assignee: NVIDIA Corporation
    Inventors: Ching-Yu Hung, Ravi P Singh, Jagadeesh Sankaran, Yen-Te Shih, Ahmad Itani
  • Patent number: 11373727
    Abstract: Logic (apparatus and/or software) is provided that separates read and restore operations. When a read is completed, the read data is stored in a restore buffer allowing other latency critical operations such as reads to be serviced before the restore. Deferring restore operations minimizes latency and burst bandwidth for reads and minimizes the performance impact of the non-critical restore operations.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: June 28, 2022
    Assignee: Kepler Computing Inc.
    Inventors: Christopher B. Wilkerson, Rajeev Kumar Dokania, Sasikanth Manipatruni, Amrita Mathuriya
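The read/restore separation in patent 11373727 above (and the two related patents that follow) can be modeled as a deferred-write queue: a read returns data immediately and parks the write-back in a buffer that drains during idle time. A minimal C sketch, assuming a destructive-read memory and an 8-entry buffer; the structure and names are illustrative.

```c
/* Completed reads queue their data in a restore buffer; restores drain
 * only when no latency-critical read is waiting. */
#include <stdio.h>

#define RESTORE_SLOTS 8

typedef struct {
    int addr;
    int data;
} RestoreEntry;

static RestoreEntry restore_buf[RESTORE_SLOTS];
static int restore_count = 0;

/* A read destroys the row contents (as in a destructive-read memory), so
 * the value is queued for a later restore instead of written back now. */
int destructive_read(int addr, int stored_value) {
    if (restore_count < RESTORE_SLOTS) {
        restore_buf[restore_count].addr = addr;
        restore_buf[restore_count].data = stored_value;
        restore_count++;
    }
    return stored_value;   /* data returned immediately; restore deferred */
}

/* Called when the memory has no pending reads: drain one deferred restore. */
void drain_one_restore(void) {
    if (restore_count > 0) {
        restore_count--;
        printf("restoring addr %d = %d\n",
               restore_buf[restore_count].addr,
               restore_buf[restore_count].data);
    }
}

int main(void) {
    destructive_read(10, 42);
    destructive_read(11, 43);   /* serviced back-to-back, no restore stall */
    drain_one_restore();        /* restores happen in idle cycles */
    drain_one_restore();
    return 0;
}
```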
  • Patent number: 11373728
    Abstract: Logic (apparatus and/or software) is provided that separates read and restore operations. When a read is completed, the read data is stored in a restore buffer allowing other latency critical operations such as reads to be serviced before the restore. Deferring restore operations minimizes latency and burst bandwidth for reads and minimizes the performance impact of the non-critical restore operations.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: June 28, 2022
    Assignee: Kepler Computing Inc.
    Inventors: Christopher B. Wilkerson, Rajeev Kumar Dokania, Sasikanth Manipatruni, Amrita Mathuriya
  • Patent number: 11366589
    Abstract: Logic (apparatus and/or software) is provided that separates read and restore operations. When a read is completed, the read data is stored in a restore buffer allowing other latency critical operations such as reads to be serviced before the restore. Deferring restore operations minimizes latency and burst bandwidth for reads and minimizes the performance impact of the non-critical restore operations.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: June 21, 2022
    Assignee: Kepler Computing Inc.
    Inventors: Christopher B. Wilkerson, Rajeev Kumar Dokania, Sasikanth Manipatruni, Amrita Mathuriya
  • Patent number: 11157287
    Abstract: A microprocessor system comprises a computational array and a hardware arbiter. The computational array includes a plurality of computation units. Each of the plurality of computation units operates on a corresponding value addressed from memory. The hardware arbiter is configured to control issuing of at least one memory request for one or more of the corresponding values addressed from the memory for the computation units. The hardware arbiter is also configured to schedule a control signal to be issued based on the issuing of the memory requests.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: October 26, 2021
    Assignee: Tesla, Inc.
    Inventors: Emil Talpes, Peter Joseph Bannon, Kevin Altair Hurd
  • Patent number: 8856452
    Abstract: A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses.
    Type: Grant
    Filed: May 31, 2011
    Date of Patent: October 7, 2014
    Assignee: Illinois Institute of Technology
    Inventors: Xian-He Sun, Yong Chen, Huaiyu Zhu
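Patent 8856452 above times prefetches from the measured gap between consecutive cache misses. One common way to act on that measurement is to scale the prefetch distance so a fetch issued now returns before the demand access; the sketch below uses that formulation, with an assumed 200-cycle memory latency. The distance formula is an illustrative choice, not the patent's claimed one.

```c
/* Altering prefetch timing from the gap between two consecutive misses:
 * if misses arrive every `miss_interval` cycles and memory takes
 * MEM_LATENCY cycles, run far enough ahead that data arrives in time. */
#include <stdio.h>

#define MEM_LATENCY 200   /* assumed cycles for a memory fetch */

static long last_miss_time = -1;
static long miss_interval  = 1;

/* Record a cache miss at time `now` (in cycles). */
void on_cache_miss(long now) {
    if (last_miss_time >= 0 && now > last_miss_time)
        miss_interval = now - last_miss_time;
    last_miss_time = now;
}

/* How many blocks ahead of the demand stream to prefetch so a block
 * fetched now is resident before it is needed (ceiling division). */
long prefetch_distance(void) {
    return (MEM_LATENCY + miss_interval - 1) / miss_interval;
}

int main(void) {
    on_cache_miss(100);
    on_cache_miss(160);   /* misses 60 cycles apart */
    printf("prefetch %ld blocks ahead\n", prefetch_distance());  /* -> 4 */
    return 0;
}
```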
  • Patent number: 8775716
    Abstract: A computer-implemented method for defragmenting virtual machine prefetch data. The method may include obtaining prefetch information associated with prefetch data of a virtual machine. The method may also include defragmenting, based on the prefetch information, the prefetch data on physical storage. The prefetch information may include a starting location and length of the prefetch data on a virtual disk. The prefetch information may include a geometry specification of the virtual disk. Defragmenting on physical storage may include placing the prefetch data contiguously on physical storage, placing the prefetch data in a fast-access segment of physical storage, and/or ordering the prefetch data according to the order in which it is accessed at system or application startup.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: July 8, 2014
    Assignee: Symantec Corporation
    Inventors: Randall R. Cook, Brian Hernacki, Sourabh Satish, William E. Sobel
  • Patent number: 8762532
    Abstract: Incoming data frames are parsed by a hardware component. Headers are extracted and stored in a first location along with a pointer to the associated payload. Payloads are stored in a single, contiguous memory location.
    Type: Grant
    Filed: August 13, 2009
    Date of Patent: June 24, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Mathias Kohlenz, Idreas Mir, Irfan Anwar Khan, Madhusudan Sathyanarayan, Shailesh Maheshwari, Srividhya Krishnamoorthy, Sandeep Urgaonkar, Thomas Klingenbrunn, Tim Tynghuei Liou
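The split storage layout in patent 8762532 above is a simple data structure: a descriptor array for headers, one contiguous pool for payloads, and an offset linking each header to its payload. A minimal C sketch, with an assumed fixed 14-byte header and arbitrary pool size; all names are illustrative.

```c
/* Headers go to a descriptor array; payloads are appended to a single
 * contiguous pool; each descriptor records its payload's offset/length. */
#include <stdio.h>
#include <string.h>

#define MAX_FRAMES   64
#define HDR_LEN      14        /* assumed fixed header length */
#define PAYLOAD_POOL 4096

typedef struct {
    unsigned char hdr[HDR_LEN];
    size_t        payload_off;   /* offset into the contiguous pool */
    size_t        payload_len;
} FrameDesc;

static FrameDesc     descs[MAX_FRAMES];
static unsigned char pool[PAYLOAD_POOL];
static size_t        pool_used = 0;
static int           nframes   = 0;

/* Parse one incoming frame: header to its own store, payload to the pool. */
int parse_frame(const unsigned char *frame, size_t len) {
    if (len < HDR_LEN || nframes >= MAX_FRAMES ||
        pool_used + (len - HDR_LEN) > PAYLOAD_POOL)
        return -1;
    FrameDesc *d = &descs[nframes++];
    memcpy(d->hdr, frame, HDR_LEN);
    d->payload_off = pool_used;
    d->payload_len = len - HDR_LEN;
    memcpy(pool + pool_used, frame + HDR_LEN, d->payload_len);
    pool_used += d->payload_len;
    return 0;
}

int main(void) {
    unsigned char frame[32];
    memset(frame, 0xAB, sizeof frame);   /* fake header + payload bytes */
    parse_frame(frame, sizeof frame);
    printf("%d frame(s), pool used %zu bytes\n", nframes, pool_used);
    return 0;
}
```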
  • Patent number: 8713260
    Abstract: A method and system may include fetching a first pre-fetched data block having a first length greater than the length of a first requested data block, storing the first pre-fetched data block in a cache, and then fetching a second pre-fetched data block having a second length, greater than the length of a second requested data block, if data in the second requested data block is not entirely stored in a valid part of the cache. The first and second pre-fetched data blocks may be associated with a storage device over a channel. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 2, 2010
    Date of Patent: April 29, 2014
    Assignee: Intel Corporation
    Inventors: Nadim Taha, Hormuzd Khosravi
  • Publication number: 20140108766
    Abstract: A processing unit includes a translation look-aside buffer operable to store a plurality of virtual address translation entries, a prefetch buffer, and logic operable to receive a first virtual address translation associated with a first virtual memory block and a second virtual address translation associated with a second virtual memory block immediately adjacent the first virtual memory block, store the first virtual address translation in the transaction look-aside buffer, and store the second virtual address translation in the prefetch buffer.
    Type: Application
    Filed: October 17, 2012
    Publication date: April 17, 2014
    Inventor: Nischal Desai
  • Patent number: 8683135
    Abstract: Techniques are disclosed relating to prefetching data from memory. In one embodiment, an integrated circuit may include a processor containing an execution core and a data cache. The execution core may be configured to receive an instance of a prefetch instruction that specifies a memory address from which to retrieve data. In response to the instance of the instruction, the execution core retrieves data from the memory address and stores it in the data cache, regardless of whether the data corresponding to that particular memory address is already stored in the data cache. In this manner, the data cache may be used as a prefetch buffer for data in memory buffers where coherence has not been maintained.
    Type: Grant
    Filed: October 31, 2010
    Date of Patent: March 25, 2014
    Assignee: Apple Inc.
    Inventor: Michael Frank
  • Patent number: 8683133
    Abstract: A real request from a CPU to the same memory bank as a prior prefetch request is transmitted to the per-memory bank logic along with a kill signal to terminate the prefetch request. This avoids waiting for a prefetch request to complete before sending the real request to the same memory bank. The kill signal gates off any acknowledgement of completion of the prefetch request. This invention reduces the latency for completion of a high priority real request when a low priority speculative request to a different address in the same memory bank has already been dispatched.
    Type: Grant
    Filed: January 20, 2009
    Date of Patent: March 25, 2014
    Assignee: Texas Instruments Incorporated
    Inventors: Sajish Sajayan, Alok Anand, Ashish Rai Shrivastava, Joseph R. Zbiciak
  • Publication number: 20140075147
    Abstract: A transactional memory (TM) receives an Atomic Look-up, Add and Lock (ALAL) command across a bus from a client. The command includes a first value. The TM pulls a second value. The TM uses the first value to read a set of memory locations, and determines if any of the locations contains the second value. If no location contains the second value, then the TM locks a vacant location, adds the second value to the vacant location, and sends a result to the client. If a location contains the second value and it is not locked, then the TM locks the location and returns a result to the client. If a location contains the second value and it is locked, then the TM returns a result to the client. Each location has an associated data structure. Setting the lock field of a location locks access to its associated data structure.
    Type: Application
    Filed: September 10, 2012
    Publication date: March 13, 2014
    Applicant: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Johann Heinrich Tönsing
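The Atomic Look-up, Add and Lock flow in publication 20140075147 above has three outcomes: lock an existing match, claim and lock a vacant slot, or report the slot busy. The C sketch below models that decision over a small set of locations; in the patent this runs atomically in the transactional memory hardware, whereas the sketch is a single-threaded illustration with invented result codes.

```c
/* ALAL over a fixed set of locations: find the value and lock its slot,
 * or add-and-lock in a vacant slot. Layout and codes are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define SET_SIZE 4
#define VACANT   0            /* assume value 0 marks an empty location */

typedef struct {
    unsigned value;
    bool     locked;          /* locks the location's data structure */
} Slot;

typedef enum { ALAL_ADDED, ALAL_LOCKED, ALAL_BUSY, ALAL_FULL } AlalResult;

AlalResult alal(Slot set[SET_SIZE], unsigned value) {
    int vacant = -1;
    for (int i = 0; i < SET_SIZE; i++) {
        if (set[i].value == value) {         /* location holds the value */
            if (set[i].locked)
                return ALAL_BUSY;            /* already locked: report it */
            set[i].locked = true;            /* lock and report success  */
            return ALAL_LOCKED;
        }
        if (set[i].value == VACANT && vacant < 0)
            vacant = i;
    }
    if (vacant < 0)
        return ALAL_FULL;
    set[vacant].value  = value;              /* add and lock vacant slot */
    set[vacant].locked = true;
    return ALAL_ADDED;
}

int main(void) {
    Slot set[SET_SIZE] = {0};
    printf("%d\n", alal(set, 7));   /* ALAL_ADDED               */
    printf("%d\n", alal(set, 7));   /* ALAL_BUSY: still locked  */
    return 0;
}
```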
  • Patent number: 8667225
    Abstract: A system and method for efficient data prefetching. A data stream stored in lower-level memory comprises a contiguous block of data used in a computer program. A prefetch unit in a processor detects a data stream by identifying a sequence of storage accesses referencing contiguous blocks of data in a monotonically increasing or decreasing manner. After a predetermined training period for a given data stream, the prefetch unit prefetches a portion of the given data stream from memory without write permission, in response to an access that does not request write permission. Also, after the training period, the prefetch unit prefetches a portion of the given data stream from lower-level memory with write permission, in response to determining there has been a prior access to the given data stream that requests write permission subsequent to a number of cache misses reaching a predetermined threshold.
    Type: Grant
    Filed: September 11, 2009
    Date of Patent: March 4, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin T. Sander, Bharath Narasimha Swamy, Swamy Punyamurtula
  • Patent number: 8667224
    Abstract: Described are techniques for processing a data operation in a data storage system. A front-end component of the data storage system receives the data operation. In response to receiving the data operation, the front-end component performs first processing. The first processing includes determining whether the data operation is a read operation requesting to read a data portion which results in a cache miss; and if said determining determines that the data operation is a read operation resulting in a cache miss, performing read miss processing. Read miss processing includes sequential stream recognition processing performed by the front-end component to determine whether the data portion is included in a sequential stream.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: March 4, 2014
    Assignee: EMC Corporation
    Inventors: Rong Yu, Orit Levin-Michael, John W. Lefferts, Pei-Ching Hwang, Peng Yin, Yechiel Yochai, Dan Aharoni, Qun Fan, Stephen R. Ives
  • Publication number: 20130332699
    Abstract: Embodiments relate to target buffer address region tracking. An aspect includes receiving a restart address, and comparing, by a processing circuit, the restart address to a first stored address and to a second stored address. The processing circuit determines which of the first and second stored addresses is identified as a same range and a different range to form a predicted target address range defining an address region associated with an entry in the target buffer. Based on determining that the restart address matches the first stored address, the first stored address is identified as the same range and the second stored address is identified as the different range. Based on determining that the restart address matches the second stored address, the first stored address is identified as the different range and the second stored address is identified as the same range.
    Type: Application
    Filed: June 11, 2012
    Publication date: December 12, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James J. Bonanno, Brian R. Prasky, Aaron Tsai
  • Publication number: 20130318306
    Abstract: A method and system for implementing vector prefetch with streaming access detection is contemplated in which an execution unit such as a vector execution unit, for example, executes a vector memory access instruction that references an associated vector of effective addresses. The vector of effective addresses includes a number of elements, each of which includes a memory pointer. The vector memory access instruction is executable to perform multiple independent memory access operations using at least some of the memory pointers of the vector of effective addresses. A prefetch unit, for example, may detect a memory access streaming pattern based upon the vector of effective addresses, and in response to detecting the memory access streaming pattern, the prefetch unit may calculate one or more prefetch memory addresses based upon the memory access streaming pattern. Lastly, the prefetch unit may prefetch the one or more prefetch memory addresses into a memory.
    Type: Application
    Filed: May 22, 2012
    Publication date: November 28, 2013
    Inventor: Jeffry E. Gonion
  • Patent number: 8583894
    Abstract: A hybrid prefetch method and apparatus is disclosed. A processor includes a hybrid prefetch unit configured to generate addresses for accessing data from a system memory. The hybrid prefetch unit includes a first prediction unit configured to generate a first memory address according to a first prefetch algorithm and a second prediction unit configured to generate a second memory address according to a second prefetch algorithm. The hybrid prefetcher further includes an arbitration unit configured to select one of the first and second memory addresses and further configured to provide the selected one of the first and second memory addresses during a prefetch operation.
    Type: Grant
    Filed: September 9, 2010
    Date of Patent: November 12, 2013
    Assignee: Advanced Micro Devices
    Inventors: Swamy Punyamurtula, Bharath Narasimha Swamy
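Patent 8583894 above arbitrates between two prediction units. The C sketch below pairs a next-line predictor with a stride predictor and lets a score-based arbiter pick the recent winner; the two algorithms and the scoring rule are illustrative reconstructions, not the patent's claimed predictors.

```c
/* Two predictors each propose a prefetch address; the arbiter selects
 * whichever has matched actual accesses more often. */
#include <stdio.h>

#define LINE 64UL

static unsigned long last_addr    = 0;   /* 0 = no history yet (assumed) */
static long          last_stride  = 0;
static int           score_nextline = 0, score_stride = 0;

unsigned long predict_nextline(unsigned long addr) { return addr + LINE; }
unsigned long predict_stride(unsigned long addr)   { return addr + last_stride; }

/* On each demand access: score both predictors against what actually
 * happened, update the stride, and emit one arbitrated prefetch address. */
unsigned long hybrid_prefetch(unsigned long addr) {
    if (last_addr) {
        if (predict_nextline(last_addr) == addr) score_nextline++;
        if (predict_stride(last_addr)   == addr) score_stride++;
        last_stride = (long)(addr - last_addr);
    }
    last_addr = addr;
    return (score_stride > score_nextline) ? predict_stride(addr)
                                           : predict_nextline(addr);
}

int main(void) {
    unsigned long a = 0x1000;
    for (int i = 0; i < 4; i++, a += 128)   /* stride-128 access stream */
        printf("access %#lx -> prefetch %#lx\n", a, hybrid_prefetch(a));
    return 0;
}
```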
  • Patent number: 8560778
    Abstract: A method for enabling cache read optimization for mobile memory devices is described. The method includes receiving one or more access commands, at a memory device from a host, the one or more access commands instructing the memory device to access at least two data blocks. The at least two data blocks are accessed. The method includes generating, by the memory device, pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks. Apparatus and computer readable media are also described.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: October 15, 2013
    Assignee: Memory Technologies LLC
    Inventors: Matti Floman, Kimmo Mylly
  • Publication number: 20130185516
    Abstract: Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines.
    Type: Application
    Filed: January 16, 2012
    Publication date: July 18, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Peter G. Sassone, Suman Mamidi, Elizabeth Abraham, Suresh K. Venkumahanti, Lucian Codrescu
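Publication 20130185516 above combines two ideas: infer the stride from the auto-increment field, and truncate the prefetch burst to the loop iterations that remain. A minimal C sketch of the truncation arithmetic; the burst size and variable names are illustrative.

```c
/* Truncate the number of prefetched lines to the remaining hardware-loop
 * count, so the prefetcher never fetches past the end of the loop. */
#include <stdio.h>

#define DEFAULT_BURST 4   /* lines the prefetcher would like to issue */

int lines_to_prefetch(int max_loop_count, int iterations_done) {
    int remaining = max_loop_count - iterations_done;
    return remaining < DEFAULT_BURST ? remaining : DEFAULT_BURST;
}

int main(void) {
    int stride = 8;                 /* inferred from the AIA increment field */
    unsigned long addr = 0x2000;
    int n = lines_to_prefetch(10, 8);   /* 2 iterations left -> 2 lines */
    for (int i = 1; i <= n; i++)
        printf("prefetch %#lx\n", addr + (unsigned long)(i * stride));
    return 0;
}
```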
  • Patent number: 8484421
    Abstract: Embodiments of the present disclosure provide a system on a chip (SOC) comprising a processing core, and a cache including a cache instruction port, a cache data port, and a port utilization circuitry configured to selectively fetch instructions through the cache instruction port and selectively pre-fetch instructions through the cache data port. Other embodiments are also described and claimed.
    Type: Grant
    Filed: November 23, 2009
    Date of Patent: July 9, 2013
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tarek Rohana, Adi Habusha, Gil Stoler
  • Patent number: 8473689
    Abstract: A system for prefetching memory in caching systems includes a processor that generates requests for data. A cache of a first level stores memory lines retrieved from a lower level memory in response to references to addresses generated by the processor's requests for data. A prefetch buffer is used to prefetch an adjacent memory line from the lower level memory in response to a request for data. The adjacent memory line is a memory line that is adjacent to a first memory line that is associated with an address of the request for data. An indication that a memory line associated with an address associated with the requested data has been prefetched is stored. A prefetched memory line is transferred to the cache of the first level in response to the stored indication that a memory line associated with an address associated with the requested data has been prefetched.
    Type: Grant
    Filed: July 27, 2010
    Date of Patent: June 25, 2013
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy D. Anderson, Kai Chirca
  • Publication number: 20130151787
    Abstract: Provided is a method and system for preloading a cache on a graphical processing unit. The method includes receiving a command message, the command message including data related to a portion of memory. The method also includes interpreting the command message, identifying policy information of the cache, identifying a location and size of the portion of memory, and creating a fetch message including data related to contents of the portion, wherein the fetch message causes the cache to preload data of the portion of memory.
    Type: Application
    Filed: December 13, 2011
    Publication date: June 13, 2013
    Applicant: ATI Technologies, ULC
    Inventors: Guennadi RIGUER, Yury Lichmanov
  • Publication number: 20130145102
    Abstract: One embodiment of the present invention sets forth an improved way to prefetch instructions in a multi-level cache. The fetch unit initiates a prefetch operation to transfer one of a set of multiple cache lines, based on a function of a pseudorandom number generator and the sector corresponding to the current instruction L1 cache line. The fetch unit selects a prefetch target from the set of multiple cache lines according to some probability function. If the current instruction L1 cache line is located within the first sector of the corresponding L1.5 cache line, then the selected prefetch target is located at a sector within the next L1.5 cache line. The result is that the instruction L1 cache hit rate is improved and instruction fetch latency is reduced, even where the processor consumes instructions in the instruction L1 cache at a fast rate.
    Type: Application
    Filed: December 6, 2011
    Publication date: June 6, 2013
    Inventors: Nicholas Wang, Jack Hilaire Choquette
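The sector-based pseudorandom selection in publication 20130145102 above can be sketched as: locate the current L1 line's sector within its L1.5 line, then randomly bias the prefetch target toward the next L1.5 line when in the first sector. The line sizes, PRNG, and probabilities below are all invented stand-ins for the hardware's actual function.

```c
/* Pick a prefetch target pseudorandomly, biased by which sector of the
 * L1.5 line the current instruction L1 line occupies. */
#include <stdio.h>
#include <stdlib.h>

#define L1_LINE  128UL
#define L15_LINE 512UL               /* assumed: 4 sectors per L1.5 line */
#define SECTORS  (L15_LINE / L1_LINE)

unsigned long prefetch_target(unsigned long pc_line) {
    unsigned long sector = (pc_line / L1_LINE) % SECTORS;
    unsigned long l15    = pc_line / L15_LINE;
    /* In the first sector: usually jump into the next L1.5 line;
     * otherwise usually stay within the current one. */
    int next_l15 = (sector == 0) ? (rand() % 4 != 0)    /* ~75% */
                                 : (rand() % 4 == 0);   /* ~25% */
    unsigned long base = (l15 + (next_l15 ? 1 : 0)) * L15_LINE;
    return base + (rand() % SECTORS) * L1_LINE;
}

int main(void) {
    srand(1);
    for (int i = 0; i < 3; i++)
        printf("prefetch %#lx\n", prefetch_target(0x4000));
    return 0;
}
```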
  • Publication number: 20130111147
    Abstract: Example methods, apparatus, and articles of manufacture to access memory are disclosed. A disclosed example method involves receiving at least one runtime characteristic associated with accesses to contents of a memory page and dynamically adjusting a memory fetch width for accessing the memory page based on the at least one runtime characteristic.
    Type: Application
    Filed: October 31, 2011
    Publication date: May 2, 2013
    Inventors: Jeffrey Clifford Mogul, Naveen Muralimanohar, Mehul A. Shah, Eric A. Anderson
  • Patent number: 8429351
    Abstract: Described are techniques for processing a data operation in a data storage system. A front-end component receives the data operation to read a data portion. In response to receiving the data operation, the front-end component performs first processing. The first processing includes determining whether the data operation is a read operation resulting in a cache hit to a prefetched data portion of a sequential stream, and if said determining determines that said data operation results in a cache hit to a prefetched data portion, performing processing in connection with prefetching additional data for said sequential stream. The processing includes determining whether to prefetch additional data for said sequential stream and, if so, an amount of additional data to prefetch. The processing uses one or more criteria to determine one or more of an amount of data to prefetch in a single prefetch request and a track ahead parameter.
    Type: Grant
    Filed: March 28, 2008
    Date of Patent: April 23, 2013
    Assignee: EMC Corporation
    Inventors: Rong Yu, Orit Levin-Michael, Roderick M. Klinger, Yechiel Yochai, John W. Lefferts
  • Patent number: 8407423
    Abstract: Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies according to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized in order to reevaluate them. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: March 26, 2013
    Assignee: International Business Machines Corporation
    Inventors: Dan Pelleg, Eran Raichstein, Amir Ronen
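The reward-weighted policy selection in patent 8407423 above (also patent 8307164 below) is a bandit-style explore/exploit balance. The C sketch below draws a policy with probability proportional to its accumulated reward, keeping a probability floor so weak policies are still re-evaluated; the floor and reward update are illustrative choices, not the patent's.

```c
/* Stochastic read-ahead policy selection: probability follows reward
 * share, with a minimum exploration floor per policy. */
#include <stdio.h>
#include <stdlib.h>

#define NPOLICIES 3
#define FLOOR     0.05               /* minimum exploration probability */

static double reward[NPOLICIES] = {1.0, 1.0, 1.0};

int pick_policy(void) {
    double total = 0, probs[NPOLICIES];
    for (int i = 0; i < NPOLICIES; i++) total += reward[i];
    for (int i = 0; i < NPOLICIES; i++)      /* probabilities sum to 1 */
        probs[i] = FLOOR + (1.0 - NPOLICIES * FLOOR) * reward[i] / total;
    double r = (double)rand() / RAND_MAX, acc = 0;
    for (int i = 0; i < NPOLICIES; i++) {
        acc += probs[i];
        if (r <= acc) return i;
    }
    return NPOLICIES - 1;
}

/* After running a policy, credit it with its observed performance. */
void report_reward(int policy, double r) { reward[policy] += r; }

int main(void) {
    srand(42);
    report_reward(1, 5.0);          /* policy 1 has been performing well */
    int counts[NPOLICIES] = {0};
    for (int i = 0; i < 1000; i++) counts[pick_policy()]++;
    for (int i = 0; i < NPOLICIES; i++)
        printf("policy %d chosen %d times\n", i, counts[i]);
    return 0;
}
```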
  • Publication number: 20130019065
    Abstract: A method for enabling cache read optimization for mobile memory devices is described. The method includes receiving one or more access commands, at a memory device from a host, the one or more access commands instructing the memory device to access at least two data blocks. The at least two data blocks are accessed. The method includes generating, by the memory device, pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks. Apparatus and computer readable media are also described.
    Type: Application
    Filed: July 11, 2011
    Publication date: January 17, 2013
    Inventors: Matti Floman, Kimmo Mylly
  • Patent number: 8332570
    Abstract: A computer-implemented method for defragmenting virtual machine prefetch data. The method may include obtaining prefetch information associated with prefetch data of a virtual machine. The method may also include defragmenting, based on the prefetch information, the prefetch data on physical storage. The prefetch information may include a starting location and length of the prefetch data on a virtual disk. The prefetch information may include a geometry specification of the virtual disk. Defragmenting on physical storage may include placing the prefetch data contiguously on physical storage, placing the prefetch data in a fast-access segment of physical storage, and/or ordering the prefetch data according to the order in which it is accessed at system or application startup.
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: December 11, 2012
    Assignee: Symantec Corporation
    Inventors: Randall R. Cook, Brian Hernacki, Sourabh Satish, William E. Sobel
  • Publication number: 20120311270
    Abstract: A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses.
    Type: Application
    Filed: May 31, 2011
    Publication date: December 6, 2012
    Applicant: Illinois Institute of Technology
    Inventors: Xian-He Sun, Yong Chen, Huaiyu Zhu
  • Publication number: 20120297142
    Abstract: Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment.
    Type: Application
    Filed: May 20, 2011
    Publication date: November 22, 2012
    Inventors: Binny S. Gill, Haim Helman, Edi Shmueli
  • Patent number: 8307156
    Abstract: A rotating media storage device (RMSD) that adaptively modifies pre-read operations is disclosed. The RMSD schedules a pre-read data segment on a second track of disk, commands a movable head to seek to the second track, and if an on-track condition is not met for the scheduled pre-read data segment, modifies the pre-read operation. In one example, modifying the pre-read operation includes canceling the pre-read operation and then performing a read data operation.
    Type: Grant
    Filed: November 22, 2010
    Date of Patent: November 6, 2012
    Assignee: Western Digital Technologies, Inc.
    Inventors: Raffi Codilian, Gregory B. Thelin
  • Patent number: 8307164
    Abstract: Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies according to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized in order to reevaluate them. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: November 6, 2012
    Assignee: International Business Machines Corporation
    Inventors: Dan Pelleg, Eran Raichstein, Amir Ronen
  • Publication number: 20120265962
    Abstract: A method for data storage includes, in a storage device that communicates with a host over a storage interface for executing a storage command in a memory of the storage device, estimating an expected data under-run between fetching data for the storage command from the memory and sending the data over the storage interface. A data size to be prefetched from the memory, in order to complete uninterrupted execution of the storage command, is calculated in the storage device based on the estimated data under-run. The storage command is executed in the memory while prefetching from the memory data of at least the calculated data size.
    Type: Application
    Filed: April 5, 2012
    Publication date: October 18, 2012
    Applicant: ANOBIT TECHNOLOGIES LTD.
    Inventor: Arie Peled
  • Patent number: 8291172
    Abstract: A microprocessor includes first and second cache memories occupying distinct hierarchy levels, the second backing the first. A prefetcher monitors load operations and maintains a recent history of the load operations from a cache line and determines whether the recent history indicates a clear direction. The prefetcher prefetches one or more cache lines into the first cache memory when the recent history indicates a clear direction and otherwise prefetches the one or more cache lines into the second cache memory. The prefetcher also determines whether the recent history indicates the load operations are large and, other things being equal, prefetches a greater number of cache lines when large than small. The prefetcher also determines whether the recent history indicates the load operations are received on consecutive clock cycles and, other things being equal, prefetches a greater number of cache lines when on consecutive clock cycles than not.
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: October 16, 2012
    Assignee: VIA Technologies, Inc.
    Inventors: Rodney E. Hooker, Colin Eddy
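The placement decision in patent 8291172 above hinges on whether recent load history shows a "clear direction". A minimal C sketch of that test over a short offset history, routing prefetches to L1 when direction is clear and to L2 otherwise; the history length and the all-steps-agree criterion are illustrative.

```c
/* Decide prefetch destination from a short history of load offsets
 * within a cache line: consistent direction -> L1, unclear -> L2. */
#include <stdio.h>

#define HIST 4

typedef enum { TO_L1, TO_L2 } Dest;

Dest prefetch_destination(const int offs[HIST]) {
    int up = 0, down = 0;
    for (int i = 1; i < HIST; i++) {
        if (offs[i] > offs[i - 1]) up++;
        else if (offs[i] < offs[i - 1]) down++;
    }
    /* Direction is "clear" only if every step moved the same way. */
    return (up == HIST - 1 || down == HIST - 1) ? TO_L1 : TO_L2;
}

int main(void) {
    int ascending[HIST] = {0, 8, 16, 24};
    int mixed[HIST]     = {0, 16, 8, 24};
    printf("ascending -> %s\n",
           prefetch_destination(ascending) == TO_L1 ? "L1" : "L2");
    printf("mixed     -> %s\n",
           prefetch_destination(mixed) == TO_L1 ? "L1" : "L2");
    return 0;
}
```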
  • Patent number: 8255634
    Abstract: Apparatus and methods for improved efficiency in accessing meta-data in a storage controller of a virtualized storage system. Features and aspects hereof walk/retrieve meta-data for one or more other I/O requests when retrieving meta-data for a first I/O request. The meta-data may include mapping information for mapping logical addresses of the virtual volume. Meta-data may also include meta-data associated with higher-level, enhanced data services provided by or in conjunction with the storage system. Enhanced data services may include features for synchronous mirroring of a volume and/or management of time-based snapshots of the content of a virtual volume.
    Type: Grant
    Filed: August 11, 2010
    Date of Patent: August 28, 2012
    Assignee: LSI Corporation
    Inventor: Howard Young
  • Patent number: 8255631
    Abstract: A method, processor, and data processing system for implementing a framework for priority-based scheduling and throttling of prefetching operations. A prefetch engine (PE) assigns a priority to a first prefetch stream, indicating a relative priority for scheduling prefetch operations of the first prefetch stream. The PE monitors activity within the data processing system and dynamically updates the priority of the first prefetch stream based on the activity (or lack thereof). Low priority streams may be discarded. The PE also schedules prefetching in a priority-based scheduling sequence that corresponds to the priority currently assigned to the scheduled active streams. When there are no prefetches within a prefetch queue, the PE triggers the active streams to provide prefetches for issuing. The PE determines when to throttle prefetching, based on the current usage level of resources relevant to completing the prefetch.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: August 28, 2012
    Assignee: International Business Machines Corporation
    Inventors: Lei Chen, Lixin Zhang
  • Publication number: 20120216008
    Abstract: A method for migrating extents between extent pools in a tiered storage architecture maintains a data access profile for an extent over a period of time. Using the data access profile, the method generates an extent profile graph that predicts data access rates for the extent into the future. The slope of the extent profile graph is calculated and used to determine whether the extent will reach a migration threshold within a specified “look-ahead” time. If so, the method calculates a migration window that allows the extent to be migrated prior to reaching the migration threshold. In certain embodiments, the method determines the overall performance impact on the source extent pool and destination extent pool during the migration window. If the overall performance impact is below a designated impact threshold, the method migrates the extent during the migration window.
    Type: Application
    Filed: April 24, 2012
    Publication date: August 23, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul A. Jennas, Larry Juarez, David Montgomery, Todd C. Sorenson
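Publication 20120216008 above extrapolates an extent's access-rate trend and migrates early if the trend crosses a threshold within the look-ahead window. The C sketch below uses a least-squares slope over hourly samples; the threshold, window, and sample data are illustrative numbers.

```c
/* Fit the recent access-rate trend, extrapolate over the look-ahead
 * window, and flag the extent for migration if it would cross the
 * threshold. All constants are illustrative. */
#include <stdio.h>

#define THRESHOLD 1000.0   /* accesses/hour that justify a faster tier */
#define LOOKAHEAD 24.0     /* hours to look ahead */

/* Least-squares slope of rate samples taken at t = 0,1,...,n-1. */
double slope(const double *rate, int n) {
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (int t = 0; t < n; t++) {
        sx += t; sy += rate[t]; sxy += t * rate[t]; sxx += (double)t * t;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main(void) {
    double rate[] = {400, 460, 510, 580, 640};   /* rising access rate */
    int n = sizeof rate / sizeof rate[0];
    double m = slope(rate, n);                   /* 60 accesses/hr^2 here */
    double predicted = rate[n - 1] + m * LOOKAHEAD;
    printf("slope %.1f/hr, predicted %.0f in %.0fh -> %s\n",
           m, predicted, LOOKAHEAD,
           predicted >= THRESHOLD ? "schedule migration window"
                                  : "stay in current pool");
    return 0;
}
```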
  • Patent number: 8250307
    Abstract: According to a method of data processing, a memory controller receives a prefetch load request from a processor core of a data processing system. The prefetch load request specifies a requested line of data. In response to receipt of the prefetch load request, the memory controller determines by reference to a stream of demand requests how much data is to be supplied to the processor core in response to the prefetch load request. In response to the memory controller determining to provide less than all of the requested line of data, the memory controller provides less than all of the requested line of data to the processor core.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Gheorghe C. Cascaval, Balaram Sinharoy, William E. Speight, Lixin Zhang
  • Patent number: 8209488
    Abstract: A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: June 26, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
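Patent 8209488 above populates a prefetch table with array-address/data-pointer pairs and prefetches once the pointer values show a predictable pattern. The C sketch below uses the simplest such predictor, a constant delta between consecutive pointers; the table layout and predictor are illustrative assumptions.

```c
/* Record (array address, pointer value) pairs; once consecutive pointers
 * show a fixed delta, predict and prefetch the next pointee. */
#include <stdio.h>

#define TBL 16

typedef struct {
    unsigned long array_addr;   /* address of the array slot read */
    unsigned long ptr_value;    /* pointer loaded from that slot  */
} PrefetchEntry;

static PrefetchEntry tbl[TBL];
static int nentries = 0;

/* Return a predicted next pointer target to prefetch, or 0 if no
 * pattern is established yet. */
unsigned long observe(unsigned long array_addr, unsigned long ptr) {
    if (nentries < TBL)
        tbl[nentries++] = (PrefetchEntry){array_addr, ptr};
    if (nentries >= 3) {
        long d1 = (long)(tbl[nentries-1].ptr_value - tbl[nentries-2].ptr_value);
        long d2 = (long)(tbl[nentries-2].ptr_value - tbl[nentries-3].ptr_value);
        if (d1 == d2 && d1 != 0)
            return tbl[nentries-1].ptr_value + (unsigned long)d1;
    }
    return 0;
}

int main(void) {
    /* Pointers 0x100 apart, as if objects were allocated in sequence. */
    observe(0x5000, 0x9000);
    observe(0x5008, 0x9100);
    unsigned long next = observe(0x5010, 0x9200);
    if (next) printf("prefetch cache line at %#lx\n", next);
    return 0;
}
```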
  • Publication number: 20120144125
    Abstract: A prefetch data machine instruction having an M field performs a function on a cache line of data specifying an address of an operand. The operation comprises either prefetching a cache line of data from memory to a cache or reducing the access ownership of store and fetch or fetch only of the cache line in the cache or a combination thereof. The address of the operand is either based on a register value or the program counter value pointing to the prefetch data machine instruction.
    Type: Application
    Filed: January 6, 2012
    Publication date: June 7, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Timothy J. Slegel
  • Publication number: 20120131269
    Abstract: An adaptive memory system is provided for improving the performance of an external computing device. The adaptive memory system includes a single controller, a first memory type (e.g., Static Random Access Memory or SRAM), a second memory type (e.g., Dynamic Random Access Memory or DRAM), a third memory type (e.g., Flash), an internal bus system, and an external bus interface. The single controller is configured to: (i) communicate with all three memory types using the internal bus system; (ii) communicate with the external computing device using the external bus interface; and (iii) allocate cache-data storage assignment to a storage space within the first memory type, and after the storage space within the first memory type is determined to be full, allocate cache-data storage assignment to a storage space within the second memory type.
    Type: Application
    Filed: January 31, 2012
    Publication date: May 24, 2012
    Applicant: MOBILE SEMICONDUCTOR CORPORATION
    Inventors: Louis Cameron Fisher, Stephen V.R. Hellriegel, Mohammad S. Ahmadnia
  • Patent number: 8180951
    Abstract: A memory system for transmitting data to and receiving data from a host apparatus includes a semiconductor memory and an access-controlling part. The semiconductor memory has storage areas identified by physical addresses, stores data in each of the storage areas, and performs data writes in accordance with requests made by the host apparatus. The access-controlling part selects a recommended address, which is recommended to be used in a next data write, on the basis of operation information about a factor that influences the time consumed for data writes in the semiconductor memory, and outputs the recommended address to the host apparatus.
    Type: Grant
    Filed: March 16, 2007
    Date of Patent: May 15, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Takashi Oshima
  • Patent number: 8166277
    Abstract: A technique for performing indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content of a memory at the first memory address is then fetched. A second memory address is determined from the content of the memory at the first memory address. Finally, a data block (e.g., a cache line) including data at the second memory address is fetched (e.g., from the memory or another memory).
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
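Patent 8166277 above (and the three related patents that follow, which add an offset step and further indirection levels) performs a two-step fetch: load the pointer at the first address, then fetch the block it points to. A minimal C sketch in which a flat array stands in for memory and address translation; everything here is an illustrative model.

```c
/* Indirect data prefetch: step 1 loads the pointer; step 2 fetches the
 * data block containing the pointer's target. */
#include <stdio.h>

#define MEM_WORDS 64
#define BLOCK     8          /* words per fetched data block */

static unsigned long memory[MEM_WORDS];

void indirect_prefetch(unsigned long ptr_addr) {
    unsigned long target = memory[ptr_addr];          /* content = 2nd addr */
    unsigned long block  = target - target % BLOCK;   /* align to block */
    printf("pointer @%lu -> target %lu: prefetch words %lu..%lu\n",
           ptr_addr, target, block, block + BLOCK - 1);
}

int main(void) {
    memory[3] = 42;          /* memory[3] holds a pointer to word 42 */
    indirect_prefetch(3);
    return 0;
}
```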
  • Patent number: 8161264
    Abstract: A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content, that is included in a first data block (e.g., a first cache line) of a memory, at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
  • Patent number: 8161265
    Abstract: A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or another memory).
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
  • Patent number: 8161263
    Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
  • Patent number: 8156286
    Abstract: A microprocessor includes a cache memory, a prefetch unit, and detection logic. The prefetch unit may be configured to monitor memory accesses that miss in the cache and to determine whether to prefetch one or more blocks of memory from a system memory based upon previous memory accesses. The prefetch unit may be further configured to use addresses of the memory accesses that miss to calculate each next memory block to prefetch. The detection logic may be configured to provide a notification to the prefetch unit in response to detecting a memory access instruction including a particular hint. In response to receiving the notification, the prefetch unit may be configured to inhibit using an address associated with the memory access instruction including the particular hint, when calculating subsequent memory blocks to prefetch.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: April 10, 2012
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Thomas M. Deneau