Look-ahead Patents (Class 711/137)
  • Patent number: 10108339
    Abstract: An operating system of a computational device manages access of a plurality of applications to a solid state drive. Separate bands are maintained in the solid state drive for storing writes of at least two different applications of the plurality of applications. Additionally, in other embodiments, a virtual machine manager of a computational device manages access of a plurality of virtual machines to a solid state drive. Separate bands are maintained in the solid state drive for storing writes of at least two different virtual machines of the plurality of virtual machines.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: October 23, 2018
    Assignee: INTEL CORPORATION
    Inventors: Gavin F. Paes, Sanjeev N. Trika
  • Patent number: 10078514
    Abstract: A technique for operating a processor includes allocating an entry in a prefetch filter queue (PFQ) for a cache line address (CLA) in response to the CLA missing in an upper level instruction cache. In response to the CLA subsequently hitting in the upper level instruction cache, an associated prefetch value for the entry in the PFQ is updated. In response to the entry being aged-out of the PFQ, an entry in a backing array for the CLA and the associated prefetch value is allocated. In response to subsequently determining that prefetching is required for the CLA, the backing array is accessed to determine the associated prefetch value for the CLA. A cache line at the CLA and a number of sequential cache lines specified by the associated prefetch value in the backing array are then prefetched into the upper level instruction cache.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: September 18, 2018
    Assignee: International Business Machines Corporation
    Inventors: Richard J. Eickemeyer, Sheldon B. Levenstein, David S. Levitan, Mauricio J. Serrano, Brian W. Thompto
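The PFQ/backing-array flow in this abstract can be sketched as a small toy model. The queue capacity, the dictionary-based backing array, and the method names below are illustrative assumptions, not details taken from the patent:

```python
from collections import OrderedDict

class PrefetchFilterQueue:
    """Toy model: learn per-line prefetch depths, consult them on later prefetches."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.pfq = OrderedDict()   # cache line address -> associated prefetch value
        self.backing = {}          # entries aged out of the PFQ

    def on_icache_miss(self, cla):
        # Allocate a PFQ entry when a cache line address misses in the upper cache.
        if cla not in self.pfq:
            if len(self.pfq) >= self.capacity:
                old_cla, value = self.pfq.popitem(last=False)  # age out the oldest
                self.backing[old_cla] = value
            self.pfq[cla] = 0

    def on_icache_hit(self, cla):
        # Bump the associated prefetch value on a subsequent hit to the same line.
        if cla in self.pfq:
            self.pfq[cla] += 1

    def lines_to_prefetch(self, cla):
        # The line itself plus the learned number of sequential lines
        # (sequential lines modeled here as cla+1, cla+2, ... for simplicity).
        depth = self.backing.get(cla, 0)
        return [cla + i for i in range(depth + 1)]
```

With a capacity of one, a line that hits twice before aging out yields a prefetch depth of two sequential lines when it is next prefetched.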
  • Patent number: 10067792
    Abstract: Technologies are generally described for systems, devices and methods effective to select program instructions for a hardware finite automaton on a multi-core processor that includes two or more cores. A hardware finite automata manager may identify executable instructions associated with a particular one of the cores of the multi-core processor. The hardware finite automata manager may determine that the hardware finite automaton is available to be used. The hardware finite automata manager, in response to the determination that the hardware finite automaton is available, may select at least one program instruction based on the executable instructions. The at least one program instruction may be configured to modify the hardware finite automaton to pre-fetch data. The hardware finite automata manager may transmit the at least one program instruction to the hardware finite automaton.
    Type: Grant
    Filed: February 12, 2014
    Date of Patent: September 4, 2018
    Assignee: Empire Technology Development LLC
    Inventor: Ezekiel Kruglick
  • Patent number: 10067896
    Abstract: Methods and hardware data structures are provided for tracking ordered transactions in a multi-transactional hardware design comprising one or more slaves configured to receive transaction requests from a plurality of masters. The data structure includes one or more counters for keeping track of the number of in-flight transactions; a table that keeps track of the age of each of the in-flight transactions for each master using the one or more counters; and control logic that verifies that a transaction response for an in-flight transaction for a particular master has been issued by the slave in a predetermined order based on the tracked age for the in-flight transaction in the table.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 4, 2018
    Assignee: Imagination Technologies Limited
    Inventor: Ashish Darbari
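The age tracking described in this abstract can be modeled with a per-master queue whose positions encode relative age, standing in for the patent's counters and table. The class and method names are illustrative assumptions:

```python
from collections import deque

class OrderTracker:
    """Toy checker: verify each master's transaction responses arrive oldest-first."""
    def __init__(self):
        self.inflight = {}   # master id -> deque of in-flight transaction ids

    def on_request(self, master, txn):
        # Record the transaction; its position in the deque is its relative age.
        self.inflight.setdefault(master, deque()).append(txn)

    def on_response(self, master, txn):
        # A response must match the oldest in-flight transaction for that master.
        queue = self.inflight.get(master)
        if not queue or queue[0] != txn:
            raise ValueError(f"out-of-order response {txn} for master {master}")
        queue.popleft()
```

A response for any transaction other than the oldest in-flight one for its master is flagged as a protocol violation.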
  • Patent number: 10069698
    Abstract: A fault-tolerant monitoring apparatus is arranged to monitor physical performance properties of a plurality of networked computing elements, each element including a processing unit and individual memory. The monitoring apparatus comprises a plurality of measurer apparatuses, each arranged to measure the physical performance properties of a single computing element, the physical performance properties being stored as local information in the individual memory of the computing element in which the measurement is made; and one or more collector apparatuses arranged to control collection of remote information representing physical performance properties from individual memory in a plurality of the computing elements; and storage of the remote physical performance information as replicate information in the individual memory of another computing element; wherein the remote physical performance information is collected using third party access.
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: September 4, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Michael Li
  • Patent number: 10061703
    Abstract: Prevention of a prefetch memory operation from causing a transaction to abort. A local processor receives a prefetch request from a remote processor. A processor determines whether the prefetch request conflicts with a transaction of the local processor. A processor responds to at least one of i) a determination that the local processor has no transaction, and ii) a determination that the prefetch request does not conflict with a transaction, by providing the requested prefetch data. A processor responds to a determination that the prefetch request conflicts with a transaction by suppressing processing of the prefetch request.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: August 28, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 10042749
    Abstract: Prevention of a prefetch memory operation from causing a transaction to abort. A local processor receives a prefetch request from a remote processor. A processor determines whether the prefetch request conflicts with a transaction of the local processor. A processor responds to at least one of i) a determination that the local processor has no transaction, and ii) a determination that the prefetch request does not conflict with a transaction, by providing the requested prefetch data. A processor responds to a determination that the prefetch request conflicts with a transaction by suppressing processing of the prefetch request.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: August 7, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 10031785
    Abstract: For predictive computing resource allocation in a distributed environment, a model module generating a model of computing resource usage in a distributed computer system having a plurality of geographically distributed nodes organized into a plurality of clusters, a demand module predicting future demand for computing resources, a cost module calculating an operation cost for each computing resource, an available resource module identifying a set of available computing resources in the computer system, a resource set module that determines a minimum cost set of computer resources capable of meeting the predicted demand based on the set of available computing resources and on operating costs, and an activation module that determines whether to activate or deactivate each of the plurality of nodes based on the set of computer resources capable of meeting the predicted demand.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: July 24, 2018
    Assignee: International Business Machines Corporation
    Inventors: Emmanuel B. Gonzalez, Shaun E. Harrington, Harry McGregor, Christopher B. Moore
  • Patent number: 10031850
    Abstract: A data storage device includes a controller, a non-volatile memory, and a buffer accessible to the controller. The buffer is configured to store data retrieved from the non-volatile memory to be accessible to a host device in response to receiving from the host device one or more requests for read access to the non-volatile memory while the data storage device is operatively coupled to the host device. The controller is configured to read an indicator of cached data in response to receiving a request for read access to the non-volatile memory. The request includes a data identifier. In response to the indicator of cached data not indicating that data corresponding to the data identifier is in the buffer, the controller is configured to retrieve data corresponding to the data identifier as well as additional data from the non-volatile memory and to write the data corresponding to the data identifier and the additional data to the buffer.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: July 24, 2018
    Assignee: Sandisk Technologies LLC
    Inventor: Reuven Elhamias
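The miss-path behavior in this abstract — fetch the requested data plus additional data into the buffer — can be sketched as follows. Treating the "additional data" as the next few sequential identifiers is an assumption for illustration; the integer identifiers and `read_ahead` parameter are likewise hypothetical:

```python
class ReadAheadBuffer:
    """Toy model: on a buffer miss, fetch the requested data plus read-ahead data."""
    def __init__(self, nvm, read_ahead=2):
        self.nvm = nvm               # non-volatile memory modeled as: data id -> data
        self.read_ahead = read_ahead
        self.buffer = {}             # buffered data; keys act as the cached-data indicator

    def read(self, data_id):
        # Consult the indicator of cached data; on a miss, retrieve the
        # requested data and the following read-ahead entries from the NVM.
        if data_id not in self.buffer:
            for i in range(self.read_ahead + 1):
                key = data_id + i
                if key in self.nvm:
                    self.buffer[key] = self.nvm[key]
        return self.buffer.get(data_id)
```

After the first miss, subsequent sequential reads are served from the buffer without touching the non-volatile memory.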
  • Patent number: 10019359
    Abstract: Described are techniques for processing I/O operations. A read operation is received to read first data from a first location. It is determined whether the read operation is a read miss and whether non-location metadata for the first location is stored in cache. Responsive to determining that the read operation is a read miss and that the non-location metadata for the first location is not stored in cache, first processing is performed that includes issuing concurrently a first read request to read the first data from physical storage and a second read request to read the non-location metadata for the first location from physical storage.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: July 10, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Andrew Chanler, Michael Scharland, Gabriel BenHanokh, Arieh Don
  • Patent number: 10013356
    Abstract: The disclosed embodiments relate to a system that generates prefetches for a stream of data accesses with multiple strides. During operation, while a processor is generating the stream of data accesses, the system examines a sequence of strides associated with the stream of data accesses. Next, upon detecting a pattern having a single constant stride in the examined sequence of strides, the system issues prefetch instructions to prefetch a sequence of data cache lines consistent with the single constant stride. Similarly, upon detecting a recurring pattern having two or more different strides in the examined sequence of strides, the system issues prefetch instructions to prefetch a sequence of data cache lines consistent with the recurring pattern having two or more different strides.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: July 3, 2018
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventor: Yuan C. Chou
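The two detection cases in this abstract — a single constant stride, and a recurring pattern of two strides — can be sketched directly. The function names and the minimum-history heuristic are illustrative assumptions:

```python
def detect_stride_pattern(addresses):
    """Return [s] for a constant stride, [s0, s1] for an alternating pair, else None."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    if not strides:
        return None
    if all(s == strides[0] for s in strides):
        return [strides[0]]                      # single constant stride
    if len(strides) >= 4:
        pair = strides[:2]
        if all(strides[i] == pair[i % 2] for i in range(len(strides))):
            return pair                          # recurring two-stride pattern
    return None

def next_prefetch_addresses(addresses, count=4):
    """Extend the detected pattern to produce the next addresses to prefetch."""
    pattern = detect_stride_pattern(addresses)
    if pattern is None:
        return []
    out, addr, i = [], addresses[-1], len(addresses) - 1
    for _ in range(count):
        addr += pattern[i % len(pattern)]  # continue the pattern in phase
        out.append(addr)
        i += 1
    return out
```

A stream striding 8, 4, 8, 4, ... is recognized as the pair [8, 4] and extended in phase, rather than being rejected for lacking a single constant stride.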
  • Patent number: 10007442
    Abstract: Methods, systems, and computer readable media for automatically deriving hints from storage device accesses and from file system metadata and for utilizing the hints to optimize utilization of the memory storage device are provided. One method includes analyzing an input/output operation involving non-volatile memory or file system metadata. The method further includes automatically deriving, based on results from the analyzing, a hint regarding an expected access pattern to the non-volatile memory. The method further includes using the hint to optimize utilization of the non-volatile memory.
    Type: Grant
    Filed: August 20, 2014
    Date of Patent: June 26, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: Judah Gamliel Hahn, Joseph Robert Meza, Daniel Edward Tuers
  • Patent number: 10007608
    Abstract: A method to store objects in a memory cache is disclosed. A request is received from an application to store an object in a memory cache associated with the application. The object is stored in a cache region of the memory cache based on an identification that the object has no potential for storage in a shared memory cache and a determination that the cache region is associated with a storage policy that specifies that objects to be stored in the cache region are to be stored in a local memory cache and that a garbage collector is not to remove objects stored in the cache region from the local memory cache.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: June 26, 2018
    Assignee: SAP SE
    Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio Petev
  • Patent number: 10007616
    Abstract: In an embodiment, an apparatus includes a cache memory and a control circuit. The control circuit may be configured to pre-fetch and store a first quantity of instruction data in response to a determination that a first pre-fetch operation request is received after a reset and prior to a first end condition. The first end condition may depend on an amount of unused storage in the cache memory. The control circuit may be further configured to pre-fetch and store a second quantity of instruction data in response to a determination that a second pre-fetch operation request is received after the first end condition. The second quantity may be less than the first quantity.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: June 26, 2018
    Assignee: Apple Inc.
    Inventors: Brett S. Feero, David J. Williamson, Jonathan J. Tyler, Mary D. Brown
  • Patent number: 9996350
    Abstract: Methods and apparatuses relating to a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache. In one embodiment, a hardware processor includes a decoder to decode a prefetch instruction to prefetch a multidimensional block of elements from a multidimensional array into a cache, wherein at least one operand of the prefetch instruction is to indicate a system memory address of an element of the multidimensional block of elements, a stride of the multidimensional block of elements, and boundaries of the multidimensional block of elements, and an execution unit to execute the prefetch instruction to generate system memory addresses of the other elements of the multidimensional block of elements, and load the multidimensional block of elements into the cache from the system memory addresses.
    Type: Grant
    Filed: December 27, 2014
    Date of Patent: June 12, 2018
    Assignee: INTEL CORPORATION
    Inventors: Victor Lee, Mikhail Smelyanskiy, Alexander Heinecke
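The address generation this abstract describes — expanding a base address, a stride, and block boundaries into the addresses of every element — can be sketched for the two-dimensional case. The row/column layout and parameter names are assumptions for illustration:

```python
def block_addresses(base, stride, rows, row_bytes, elem_size=8):
    """Generate element addresses for a 2-D block: `rows` rows of `row_bytes`
    bytes each, with consecutive rows separated by `stride` bytes in memory."""
    addrs = []
    for r in range(rows):
        row_start = base + r * stride          # stride jumps between rows
        addrs.extend(range(row_start, row_start + row_bytes, elem_size))
    return addrs
```

A hardware implementation would stream these addresses to the cache fill path rather than materialize a list, but the arithmetic is the same.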
  • Patent number: 9977744
    Abstract: A memory system includes a memory device of lower read operation speed; a memory cache of higher read operation speed; and a controller suitable for: setting, as a prefetch pattern, one of the access patterns to the memory device, each pattern defined by a pair of former and latter addresses provided to the memory system within a set input time interval; performing a prefetch operation that caches data corresponding to the latter address from the memory device into the memory cache according to the prefetch pattern; and reading the cached data from the memory cache in response to a read command provided with the latter address of the prefetch pattern.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: May 22, 2018
    Assignee: SK Hynix Inc.
    Inventor: Hae-Gi Choi
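The former/latter pair learning in this abstract can be modeled with a single-step correlation table. The one-deep history and the class name are simplifying assumptions; a real controller would track many candidate pairs concurrently:

```python
class PairPatternPrefetcher:
    """Toy model: learn (former, latter) address pairs seen within a time
    window, then prefetch the latter when the former is accessed again."""
    def __init__(self, window):
        self.window = window
        self.pairs = {}      # former address -> latter address
        self.prev = None     # (address, time) of the previous access

    def access(self, addr, t):
        # Learn a pair if this access follows the previous one closely enough.
        if self.prev is not None:
            prev_addr, prev_t = self.prev
            if t - prev_t <= self.window:
                self.pairs[prev_addr] = addr
        # If this address is a known "former", return the "latter" to prefetch.
        prefetched = self.pairs.get(addr)
        self.prev = (addr, t)
        return prefetched
```

Once the pair (0x10, 0x20) is observed within the window, any later access to 0x10 triggers a prefetch of 0x20.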
  • Patent number: 9971694
    Abstract: In an embodiment, a processor may implement an access map-pattern match (AMPM)-based prefetch circuit with features designed to improve prefetching accuracy and/or reduce power consumption. In an embodiment, the prefetch circuit may be configured to detect that pointer reads are occurring (e.g. "pointer chasing"). The prefetch circuit may be configured to increase the frequency at which prefetch requests are generated for an access map in which pointer read activity is detected, compared to the frequency at which the prefetch requests would be generated if pointer read activity were not detected. In an embodiment, the prefetch circuit may also detect access maps that are store-only, and may reduce the frequency of prefetches for store-only access maps as compared to the frequency for load-only or load/store maps.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: May 15, 2018
    Assignee: Apple Inc.
    Inventors: Stephan G. Meier, Mridul Agarwal
  • Patent number: 9971713
    Abstract: A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that also improves soft-error resilience, and supports DMA functionality allowing for parallel-processing message passing.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: May 15, 2018
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Sameh Asaad, Ralph E. Bellofatto, Michael A. Blocksome, Matthias A. Blumrich, Peter Boyle, Jose R. Brunheroto, Dong Chen, Chen-Yong Cher, George L. Chiu, Norman Christ, Paul W. Coteus, Kristan D. Davis, Gabor J. Dozsa, Alexandre E. Eichenberger, Noel A. Eisley, Matthew R. Ellavsky, Kahn C. Evans, Bruce M. Fleischer, Thomas W. Fox, Alan Gara, Mark E. Giampapa, Thomas M. Gooding, Michael K. Gschwind, John A. Gunnels, Shawn A. Hall, Rudolf A. Haring, Philip Heidelberger, Todd A. Inglett, Brant L. Knudson, Gerard V. Kopcsay, Sameer Kumar, Amith R. Mamidala, James A. Marcella, Mark G. Megerian, Douglas R. Miller, Samuel J. Miller, Adam J. Muff, Michael B. Mundy, John K. O'Brien, Kathryn M. O'Brien, Martin Ohmacht, Jeffrey J. Parker, Ruth J. Poole, Joseph D. Ratterman, Valentina Salapura, David L. Satterfield, Robert M. Senger, Burkhard Steinmacher-Burow, William M. Stockdell, Craig B. Stunkel, Krishnan Sugavanam, Yutaka Sugawara, Todd E. Takken, Barry M. Trager, James L. Van Oosten, Charles D. Wait, Robert E. Walkup, Alfred T. Watson, Robert W. Wisniewski, Peng Wu
  • Patent number: 9972379
    Abstract: First and second read requests are received. First data is fetched in response to the first read request. The fetched first data is then stored. The fetched first data corresponds to an address of the first read request. The fetched first data is returned in response to the second read request.
    Type: Grant
    Filed: July 26, 2013
    Date of Patent: May 15, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Gregg B. Lesartre
  • Patent number: 9973559
    Abstract: Systems and methods for presenting content streams to a client device are provided. In some aspects, a method includes providing an indicator of a plurality of content streams to a client device. Each of the plurality of content streams is associated with a variant feature of content to be delivered to the client device. The method includes monitoring one or more requests, from the client device, for at least one of the plurality of content streams based on the variant feature of the content associated with each of the requested plurality of content streams. The method also includes modifying the indicator of the plurality of content streams based on the monitored one or more requests. The method also includes providing the modified indicator of the plurality of content streams to the client device.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: May 15, 2018
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventor: Brian Allan Heng
  • Patent number: 9959210
    Abstract: In various embodiments, a storage device includes a magnetic media, a cache memory, and a drive controller. In embodiments, the drive controller is configured to establish a portion of the cache memory as an archival zone having a cache policy to maximize write hits. The drive controller is further configured to pre-erase the archival zone, direct writes from a host to the archival zone, and flush writes from the archival zone to the magnetic media. In embodiments, the drive controller is configured to establish a portion of the cache memory as a retrieval zone having a cache policy to maximize read hits. The drive controller is further configured to pre-fetch data from the magnetic media to the retrieval zone, transfer data from the retrieval zone to a host upon request by the host, and transfer read ahead data to the retrieval zone to replace data transferred to the host.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: May 1, 2018
    Assignee: DELL PRODUCTS, LP
    Inventors: Munif F. Farhan, William F. Sauber, Dina A. Eldin
  • Patent number: 9946654
    Abstract: A method for prefetching data into a cache is provided. The method allocates an outstanding request buffer (“ORB”). The method stores in an address field of the ORB an address and a number of blocks. The method issues prefetch requests for a degree number of blocks starting at the address. When a prefetch response is received for all the prefetch requests, the method adjusts the address of the next block to prefetch and adjusts the number of blocks remaining to be retrieved and then issues prefetch requests for a degree number of blocks starting at the adjusted address. The prefetching pauses when a maximum distance between the reads of the prefetched blocks and the last prefetched block is reached. When a read request for a prefetched block is received, the method resumes prefetching when a resume criterion is satisfied.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: April 17, 2018
    Assignee: Cray Inc.
    Inventors: Sanyam Mehta, James Robert Kohn, Daniel Jonathan Ernst, Heidi Lynn Poxon, Luiz DeRose
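The degree/distance pacing in this abstract — issue batches of `degree` blocks, pause when the prefetch front runs too far ahead of the demand reads — can be sketched as a single decision function. The parameter names mirror the abstract; everything else is an illustrative assumption:

```python
def issue_prefetches(last_read, next_block, end, degree, max_distance):
    """Return the blocks to prefetch now: up to `degree` blocks starting at
    `next_block`, pausing once the front is `max_distance` past `last_read`."""
    issued = []
    while len(issued) < degree and next_block < end:
        if next_block - last_read > max_distance:
            break                          # too far ahead of the reads: pause
        issued.append(next_block)
        next_block += 1
    return issued
```

When a demand read later advances `last_read`, calling the function again resumes prefetching from where it paused.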
  • Patent number: 9934825
    Abstract: According to one embodiment, a semiconductor device includes, for example, a circuit board, a plurality of elements, a plurality of controllers, and a first signal line. The elements are provided on the circuit board, and each includes a memory. Each controller is configured to control reads of data from, and writes of data into, the memory. A control signal is transmitted through the first signal line, which is used in common by the controllers.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: April 3, 2018
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventors: Manabu Matsumoto, Katsuya Murakami, Koichi Nagai
  • Patent number: 9934148
    Abstract: A memory module stores memory access metadata reflecting information about memory accesses to the memory module. The memory access metadata can indicate the number of times a particular unit of data (e.g., a row of data, a unit of data corresponding to a cache line, and the like) has been read, written, had one or more of its bits flipped, and the like. Modifications to the embedded access metadata can be made by a control module at the memory module itself, thereby reducing overhead at a processor core. In addition, the control module can be configured to record different access metadata for different memory locations of the memory module.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: April 3, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Roberts, Sergey Blagodurov
  • Patent number: 9921972
    Abstract: An apparatus and method for implementing a heterogeneous memory subsystem is described. For example, one embodiment of a processor comprises: memory mapping logic to subdivide a system memory space into a plurality of memory chunks and to map the memory chunks across a first memory and a second memory, the first memory having a first set of memory access characteristics and the second memory having a second set of memory access characteristics different from the first set of memory access characteristics; and dynamic remapping logic to swap memory chunks between the first and second memories based, at least in part, on a detected frequency with which the memory chunks are accessed.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: March 20, 2018
    Assignee: Intel Corporation
    Inventors: Christopher B. Wilkerson, Alaa R. Alameldeen, Zeshan A. Chishti, Jaewoong Sim
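The frequency-based remapping policy this abstract describes can be sketched as a planning step: keep the most-accessed chunks in the fast tier and swap the rest out. Treating "detected frequency" as simple access counts and the top-k selection rule are illustrative assumptions:

```python
def plan_swaps(counts, in_fast, capacity):
    """Given per-chunk access counts, pick the `capacity` hottest chunks for
    the fast memory tier. Returns (move_in, move_out) chunk-id lists."""
    hot = set(sorted(counts, key=counts.get, reverse=True)[:capacity])
    move_in = sorted(hot - in_fast)    # hot chunks currently in slow memory
    move_out = sorted(in_fast - hot)   # cooled-off chunks to evict to slow memory
    return move_in, move_out
```

Hardware remapping logic would additionally rate-limit swaps and update the chunk-to-address map, which this sketch omits.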
  • Patent number: 9916259
    Abstract: A system and method for low latency and higher bandwidth communication between a central processing unit (CPU) and an accelerator is disclosed. When the CPU updates a copy of data stored at a shared memory, the CPU also sends an "invalidate" command to a cache coherent interconnect (CCI). The CCI forwards the invalidate command to a dedicated cache register (DCR). The DCR marks its copy of the data as "out-of-date" and requests an up-to-date copy of the data from the CCI. The CCI then retrieves up-to-date data for the DCR. When the DCR receives the up-to-date data from the CCI, the DCR replaces the out-of-date data with the up-to-date data, and marks the up-to-date data with the status of "valid." The DCR can then provide data to an accelerator with a status of "out-of-date" or "valid."
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: March 13, 2018
    Assignee: Waymo LLC
    Inventors: Grace Nordin, Daniel Rosenband
  • Patent number: 9904624
    Abstract: In an embodiment, a system may include multiple processors and a cache coupled to the processors. Each processor includes a data cache and a prefetch circuit that may be configured to generate prefetch requests. Each processor may also generate memory operations responsive to cache misses in the data cache. Each processor may transmit the prefetch requests and memory operations to the cache. The cache may queue the memory operations and prefetch requests, and may be configured to detect, on a per-processor basis, occupancy in the queue of memory requests and low confidence prefetch requests from the processor. The cache may determine if the per-processor occupancies exceed one or more thresholds, and may generate a throttle control to the processors responsive to the occupancies. In an embodiment, the cache may generate the throttle control responsive to a history of the last N samples of the occupancies.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: February 27, 2018
    Assignee: Apple Inc.
    Inventors: Tyler J. Huberty, Stephan G. Meier, Khubaib Khubaib
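The per-processor throttle decision in this abstract — compare a history of the last N occupancy samples against thresholds — can be sketched as one function. Averaging the samples and requiring both conditions are illustrative readings of "based on a history" and "one or more thresholds":

```python
def throttle_core(occupancy_samples, low_conf_samples, occ_limit, low_conf_limit):
    """Decide whether to throttle one processor's prefetching, given recent
    samples of its queued memory requests and low-confidence prefetches."""
    avg_occ = sum(occupancy_samples) / len(occupancy_samples)
    avg_low = sum(low_conf_samples) / len(low_conf_samples)
    # Throttle only when the queue is busy AND much of it is low-confidence work.
    return avg_occ > occ_limit and avg_low > low_conf_limit
```

Gating on low-confidence occupancy keeps demand misses and high-confidence prefetches from being penalized when the queue is merely busy.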
  • Patent number: 9898296
    Abstract: Processing of an instruction fetch from an instruction cache is provided, which includes: determining whether the next instruction fetch is from a same address page as a last instruction fetch from the instruction cache; and based, at least in part, on determining that the next instruction fetch is from the same address page, suppressing for the next instruction fetch an instruction address translation table access, and comparing for an address match results of an instruction directory access for the next instruction fetch with buffered results of a most-recent, instruction address translation table access for a prior instruction fetch from the instruction cache.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: February 20, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 9891972
    Abstract: Embodiments related to managing lazy runahead operations at a microprocessor are disclosed. For example, an embodiment of a method for operating a microprocessor described herein includes identifying a primary condition that triggers an unresolved state of the microprocessor. The example method also includes identifying a forcing condition that compels resolution of the unresolved state. The example method also includes, in response to identification of the forcing condition, causing the microprocessor to enter a runahead mode.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: February 13, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Magnus Ekman, Ross Segelken, Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Darrell D. Boggs
  • Patent number: 9886384
    Abstract: The present examples relate to prefetching, and to a cache control device for prefetching and a prefetching method using the cache control device, wherein the cache control device analyzes a memory access pattern of program code, inserts, into the program code, a prefetching command generated by encoding the analyzed access pattern, and executes the prefetching command inserted into the program code in order to prefetch data into a cache, thereby maximizing prefetching efficiency.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: February 6, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jun-Kyoung Kim, Dong-Hoon Yoo, Jeong-Wook Kim, Soo-Jung Ryu
  • Patent number: 9880940
    Abstract: A system on chip (SoC) includes a central processing unit (CPU), an intellectual property (IP) block, and a memory management unit (MMU). The CPU is configured to set a prefetch direction corresponding to a working set of data. The IP block is configured to process the working set of data. The MMU is configured to prefetch a next page table entry from a page table based on the prefetch direction during address translation between a virtual address of the working set of data and a physical address.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: January 30, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kwan Ho Kim, Seok Min Kim
  • Patent number: 9870156
    Abstract: According to one embodiment, a memory system is provided wherein an interruption generating unit generates an interruption signal for one or more commands executed by a transfer executing unit when an end number counter is greater than or equal to a first threshold. A transfer type conjecturing unit determines whether the transfer type of a first command to be executed after transmission of the interruption signal is sequential transfer or random transfer, and sets the first threshold to a different value depending on whether sequential or random transfer is determined.
    Type: Grant
    Filed: August 6, 2014
    Date of Patent: January 16, 2018
    Assignee: Toshiba Memory Corporation
    Inventor: Yuki Nagata
  • Patent number: 9870209
    Abstract: A processor includes a resource scheduler, a dispatcher, and a memory execution unit. The memory execution unit includes logic to identify an executed, unretired store operation in a memory ordered buffer, determine that the store operation is speculative, determine whether an associated cache line in a data cache is non-speculative, and determine whether to block a write of the store operation results to the data cache based upon the determination that the store operation is speculative and a determination that the associated cache line is non-speculative.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: January 16, 2018
    Assignee: Intel Corporation
    Inventors: John H. Kelm, Demos Pavlou, Mirem Hyuseinova
  • Patent number: 9830266
    Abstract: Described are techniques for processing a data operation in a data storage system. A front-end component of the data storage system receives the data operation. In response to receiving the data operation, the front-end component performs first processing. The first processing includes determining whether the data operation is a read operation requesting to read a data portion which results in a cache miss; and if said determining determines that the data operation is a read operation resulting in a cache miss, performing read miss processing. Read miss processing includes sequential stream recognition processing performed by the front-end component to determine whether the data portion is included in a sequential stream.
    Type: Grant
    Filed: January 16, 2014
    Date of Patent: November 28, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Rong Yu, Orit Levin-Michael, John W. Lefferts, Pei-Ching Hwang, Peng Yin, Yechiel Yochai, Dan Aharoni, Qun Fan, Stephen Richard Ives
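
    The front-end sequential stream recognition step could look roughly like the sketch below. The stream-table layout and the "three consecutive misses" promotion rule are assumptions made for the example, not EMC's actual algorithm:

    ```python
    # Illustrative sequential-stream recognizer driven by read-miss events:
    # a miss that lands on the block a tracked stream expects next extends
    # that stream; a long enough run marks the portion as part of a stream.

    class StreamDetector:
        def __init__(self, promote_after=3):
            self.streams = {}              # expected next block -> run length
            self.promote_after = promote_after

        def on_read_miss(self, block):
            """Return True if this miss belongs to a recognized sequential
            stream (run length reached the promotion threshold)."""
            run = self.streams.pop(block, 0) + 1
            self.streams[block + 1] = run  # next expected block in the run
            return run >= self.promote_after
    ```

    Once a data portion is recognized as part of a stream, the front-end component could trigger read-ahead for subsequent blocks instead of taking further misses.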
  • Patent number: 9824016
    Abstract: A device includes a memory and a processor coupled to the memory. The processor includes a cache memory and is configured to hold a memory access instruction for accessing the memory and a prefetch instruction for prefetching from the memory, to determine whether the data targeted by the memory access instruction is held in the cache memory, and, when that data is held in the cache memory and a corresponding prefetch instruction for the memory access instruction is held in the processor, to suppress execution of the corresponding prefetch instruction.
    Type: Grant
    Filed: January 11, 2016
    Date of Patent: November 21, 2017
    Assignee: FUJITSU LIMITED
    Inventor: Shigeru Kimura
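
    The suppression decision reduces to a simple predicate, sketched below with an assumed single-level cache model and invented names:

    ```python
    # Minimal sketch of hit-suppressed prefetching: if a memory access
    # already hits in the cache and a prefetch for the same line is still
    # pending in the processor, the prefetch is redundant and is dropped.

    def should_execute_prefetch(access_addr, cache, pending_prefetch_addrs):
        """Return False (suppress) when the access hits in the cache and a
        corresponding prefetch for the same address is held in-flight."""
        hit = access_addr in cache
        has_pending = access_addr in pending_prefetch_addrs
        return not (hit and has_pending)
    ```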
  • Patent number: 9817764
    Abstract: A processor includes a first prefetcher that prefetches data in response to memory accesses and a second prefetcher that prefetches data in response to memory accesses. Each of the memory accesses has an associated memory access type (MAT) of a plurality of predetermined MATs. The processor also includes a table that holds first scores that indicate effectiveness of the first prefetcher to prefetch data with respect to the plurality of predetermined MATs and second scores that indicate effectiveness of the second prefetcher to prefetch data with respect to the plurality of predetermined MATs. The first and second prefetchers selectively defer to one another with respect to data prefetches based on their relative scores in the table and the associated MATs of the memory accesses.
    Type: Grant
    Filed: December 14, 2014
    Date of Patent: November 14, 2017
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD
    Inventors: Rodney E. Hooker, Douglas R. Reed, John Michael Greer, Colin Eddy
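
    The score-table arbitration between the two prefetchers can be sketched as below. The score values and the tie-break in favor of the first prefetcher are illustrative assumptions:

    ```python
    # Hedged sketch of two prefetchers deferring to each other based on
    # per-MAT (memory access type) effectiveness scores held in a table.

    scores = {
        # MAT -> (prefetcher A score, prefetcher B score), assumed values
        "load":  (9, 4),
        "store": (2, 7),
        "code":  (5, 5),
    }

    def active_prefetcher(mat):
        """The lower-scoring prefetcher defers for this access type;
        ties go to A here purely by assumption."""
        a, b = scores[mat]
        return "A" if a >= b else "B"
    ```

    In hardware the scores would presumably be updated as prefetch accuracy is measured, so the deferral adapts as workload behavior shifts.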
  • Patent number: 9817761
    Abstract: A method for optimization of host sequential reads based on volume of data includes, at a mass data storage device, pre-fetching a first volume of predicted data associated with an identified read data stream from a data store into a buffer memory different from the data store. A request for data from the read data stream is received from a host. In response, the requested data is provided to the host from the buffer memory. While providing the requested data to the host from the buffer memory, it is determined whether a threshold volume of data has been provided to the host from the buffer memory. If so, a second volume of predicted data associated with the identified read data stream is pre-fetched from the data store into the buffer memory. If not, additional predicted data is not pre-fetched from the data store.
    Type: Grant
    Filed: January 6, 2012
    Date of Patent: November 14, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Koren Ben-Shemesh, Yan Nosovitsky
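
    The volume-triggered refill loop can be sketched as follows; the byte counts, names, and single-stream model are assumptions, not SanDisk's values:

    ```python
    # Sketch of volume-triggered read-ahead: the next predicted chunk of the
    # stream is fetched only after the host has consumed a threshold volume
    # from the buffer, so prefetch work tracks actual consumption.

    class ReadAheadBuffer:
        def __init__(self, threshold_bytes, chunk_bytes):
            self.threshold = threshold_bytes
            self.chunk = chunk_bytes
            self.served_since_fetch = 0
            self.prefetched_chunks = 1      # first volume fetched up front

        def serve(self, nbytes):
            """Account for data handed to the host; trigger a prefetch of
            the next predicted volume once the threshold is reached."""
            self.served_since_fetch += nbytes
            if self.served_since_fetch >= self.threshold:
                self.served_since_fetch = 0
                self.prefetched_chunks += 1  # fetch next predicted volume
                return True                  # a prefetch was triggered
            return False
    ```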
  • Patent number: 9817466
    Abstract: A data processing apparatus has control circuitry for detecting whether a current micro-operation to be processed by a processing pipeline would give the same result as an earlier micro-operation. If so, then the current micro-operation is passed through the processing pipeline, with at least one pipeline stage passed by the current micro-operation being placed in a power saving state during a processing cycle in which the current micro-operation is at that pipeline stage. The result of the earlier micro-operation is then output as a result of said current micro-operation. This allows power consumption to be reduced by not repeating the same computation.
    Type: Grant
    Filed: March 20, 2015
    Date of Patent: November 14, 2017
    Assignee: ARM Limited
    Inventors: Isidoros Sideris, Daren Croxford, Andrew Burdass
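
    A software analogue of this result-reuse idea is memoization of the previous operation, sketched below. The single-entry history and all names are assumptions; the hardware mechanism additionally power-gates the skipped pipeline stages:

    ```python
    # Illustrative result reuse: when an operation repeats with identical
    # inputs, skip the computation (the stage a real pipeline would place
    # in a power-saving state) and replay the stored result instead.

    class ResultReuse:
        def __init__(self):
            self.last = None    # ((op, operands), result) of the previous uop

        def execute(self, op, operands, compute):
            """Return (result, reused) for the micro-operation."""
            if self.last and self.last[0] == (op, operands):
                return self.last[1], True    # reused: computation skipped
            result = compute(*operands)
            self.last = ((op, operands), result)
            return result, False
    ```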
  • Patent number: 9811469
    Abstract: Technologies are generally described for methods and systems effective to access data in a cache. In an example, a method to access data in a cache may include processing a first request for data at a first memory address related to first data in a memory. The method may further include retrieving the first data from the memory. The method may further include storing the first data in a first cache line in the cache. The method may further include processing a second request for data at a second memory address related to second data in the memory. The method may further include retrieving the second data from the memory. The method may further include selecting a second cache line in the cache to store the second data based on the storage of the first data. The method may further include storing the second data in the second cache line.
    Type: Grant
    Filed: January 29, 2014
    Date of Patent: November 7, 2017
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Sriram Vajapeyam
  • Patent number: 9811405
    Abstract: A method obtains at least part of a file from a dispersed storage network (DSN) memory, and stores it in a data object cache. When the file is changed, a determination is made about where to store the changed file portions: in the data object cache or in the DSN. The changed file portions, for example a new copy of the part of the file obtained from the DSN, are encoded utilizing an error coding dispersal storage function, and stored in either the data object cache, or in the DSN memory.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: November 7, 2017
    Assignee: International Business Machines Corporation
    Inventors: Manish Motwani, Ilya Volvovski
  • Patent number: 9798754
    Abstract: An embodiment is described in which a memory device stores a record of I/O accesses to data blocks, where each access record indicates which data block was accessed and during which time period the access occurred. A memory-efficient data structure (MEDS) may be generated and stored in a cache or storage device, and the access data moved from the memory device into the MEDS. The MEDS represents blocks that were accessed during a particular time period. When a second data block is accessed, a query function is applied to the second block's identifier to return a value based on data stored in the MEDS. The return value from the query function indicates whether the second data block was accessed during the particular time period associated with the MEDS. A storage management action is performed based on whether the second data block was accessed during the particular time period.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: October 24, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Philip Shilane, Grant Wallace
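
    One plausible choice of memory-efficient data structure for per-period membership queries is a Bloom filter; the abstract does not mandate that choice, so the sketch below is an assumption:

    ```python
    # A Bloom filter as a candidate MEDS: records block accesses for one
    # time period in a fixed bit array; queries can give false positives
    # but never false negatives. Sizes and hash count are assumptions.

    import hashlib

    class PeriodBloomFilter:
        def __init__(self, nbits=1024, nhashes=3):
            self.nbits, self.nhashes = nbits, nhashes
            self.bits = 0

        def _positions(self, block_id):
            # Derive nhashes bit positions from the block identifier.
            for i in range(self.nhashes):
                h = hashlib.sha256(f"{i}:{block_id}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.nbits

        def record_access(self, block_id):
            for p in self._positions(block_id):
                self.bits |= 1 << p

        def was_accessed(self, block_id):
            """The query function: True may be a false positive; False is
            definitive (block was not accessed in this period)."""
            return all(self.bits >> p & 1 for p in self._positions(block_id))
    ```

    A storage management action (e.g. tiering or eviction) could then key off the query result without retaining the full access log.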
  • Patent number: 9798574
    Abstract: Examples are disclosed for composing memory resources across devices. In some examples, memory resources associated with executing one or more applications by circuitry at two separate devices may be composed across the two devices. The circuitry may be capable of executing the one or more applications using a two-level memory (2LM) architecture including a near memory and a far memory. In some examples, the near memory may include near memories separately located at the two devices and a far memory located at one of the two devices. The far memory may be used to migrate one or more copies of memory content between the separately located near memories in a manner transparent to an operating system for the first device or the second device. Other examples are described and claimed.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: October 24, 2017
    Assignee: INTEL CORPORATION
    Inventors: Neven M Abou Gazala, Paul S. Diefenbaugh, Nithyananda S. Jeganathan, Eugene Gorbatov
  • Patent number: 9781203
    Abstract: An example method for synchronizing data in accordance with aspects of the present disclosure includes monitoring a set of attributes at a plurality of devices on a network, selecting a group of data for synchronization based on the monitored set of attributes, assigning priority levels to each selected data item and each device, prioritizing the synchronization operations to be performed on the group of selected data based on the priority levels, and synchronizing the group of selected data in accordance with that prioritization.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: October 3, 2017
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Roque Luis Scheer, Mauricio Nunes Porto, Soma Sundaram Santhiveeran
  • Patent number: 9767057
    Abstract: Methods and hardware data structures are provided for tracking ordered transactions in a multi-transactional hardware design comprising one or more slaves configured to receive transaction requests from a plurality of masters. The data structure includes one or more counters for keeping track of the number of in-flight transactions; a table that keeps track of the age of each of the in-flight transactions for each master using the one or more counters; and control logic that verifies that a transaction response for an in-flight transaction for a particular master has been issued by the slave in a predetermined order based on the tracked age for the in-flight transaction in the table.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: September 19, 2017
    Assignee: Imagination Technologies Limited
    Inventor: Ashish Darbari
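
    The age-tracking check can be sketched as below; the per-master counters match the abstract, while the table shape and the oldest-first ordering rule are assumptions for the example:

    ```python
    # Sketch of ordered-transaction tracking: each master's in-flight
    # transactions get monotonically increasing ages, and the control
    # logic verifies a response targets that master's oldest transaction.

    class OrderTracker:
        def __init__(self):
            self.counters = {}   # master -> next age to assign
            self.inflight = {}   # (master, txn_id) -> age

        def issue(self, master, txn_id):
            age = self.counters.get(master, 0)
            self.counters[master] = age + 1
            self.inflight[(master, txn_id)] = age

        def respond(self, master, txn_id):
            """True iff the slave answered this master's transactions in
            the predetermined (oldest-first) order."""
            age = self.inflight.pop((master, txn_id))
            oldest = min((a for (m, _), a in self.inflight.items()
                          if m == master), default=age + 1)
            return age < oldest
    ```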
  • Patent number: 9740619
    Abstract: A method is disclosed in which a client accesses a cache for the value of an object based on an object identification (ID) and initiates a request to a cache loader if the cache does not include a value for the object. The cache loader performs a lookup in an object table for the object ID corresponding to the object, retrieves from an execution context table a vector of execution context IDs that correspond to the object IDs looked up in the object table, and performs an execution context lookup in the execution context table for every retrieved execution context ID in the vector to retrieve object IDs from an object vector.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: August 22, 2017
    Assignee: Aggregate Knowledge, Inc.
    Inventors: Gian-Paolo Musumeci, Kristopher C. Wehner
  • Patent number: 9740611
    Abstract: A method, a device, and a non-transitory computer readable medium for performing memory management in a graphics processing unit are presented. Hints about the memory usage of an application are provided to a page manager. At least one runtime memory usage pattern of the application is sent to the page manager. Data is swapped into and out of a memory by analyzing the hints and the at least one runtime memory usage pattern.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: August 22, 2017
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Yair Shachar, Einav Raizman-Kedar, Evgeny Pinchuk
  • Patent number: 9734072
    Abstract: Provided is an integrated circuit that includes a first prefetcher component communicatively coupled to a processor and a second prefetcher component communicatively coupled to a memory controller. The first prefetcher component is configured to send prefetch requests to the memory controller. The second prefetcher component is configured to access prefetch data based on the prefetch requests and to store the prefetch data in a prefetch cache of the memory controller.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: August 15, 2017
    Assignee: MACOM CONNECTIVITY SOLUTIONS, LLC
    Inventor: Kjeld Svendsen
  • Patent number: 9734092
    Abstract: Methods and systems for securing sensitive data from security risks associated with direct memory access (“DMA”) by input/output (“I/O”) devices are provided. An enhanced software cryptoprocessor system secures sensitive data using various techniques, including (1) protecting sensitive data by preventing DMA by an I/O device to the portion of the cache that stores the sensitive data, (2) protecting device data by preventing cross-device access to device data using DMA isolation, and (3) protecting the cache by preventing the pessimistic eviction of cache lines on DMA writes to main memory.
    Type: Grant
    Filed: March 19, 2015
    Date of Patent: August 15, 2017
    Assignee: Facebook, Inc.
    Inventors: Oded Horovitz, Sahil Rihan, Stephen A. Weis, Carl A. Waldspurger
  • Patent number: 9715416
    Abstract: Adaptive queued locking for control of speculative execution is disclosed. An example apparatus includes a lock to: enforce a first quota to control a number of threads allowed to concurrently speculatively execute after being placed in a queue; and in response to the first quota not having been reached, enable a first thread from the queue to speculatively execute; and an adjuster to change a first value of the first quota based on a result of the speculative execution of the first thread.
    Type: Grant
    Filed: June 3, 2015
    Date of Patent: July 25, 2017
    Assignee: Intel Corporation
    Inventors: Shou C. Chen, Andreas Kleen
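
    The quota-gated admission plus adjuster can be sketched as a counter-guarded gate; the +1/-1 adjustment policy and the absence of real thread synchronization are simplifying assumptions:

    ```python
    # Minimal sketch of an adaptive speculation quota: admit queued threads
    # to speculative execution while under quota, then grow the quota when
    # speculation commits and shrink it when speculation aborts.

    class AdaptiveQuota:
        def __init__(self, quota=2):
            self.quota = quota
            self.running = 0

        def try_enter(self):
            """Admit a queued thread to speculative execution if the
            quota has not been reached."""
            if self.running < self.quota:
                self.running += 1
                return True
            return False

        def finish(self, committed):
            self.running -= 1
            # Adjuster: change the quota based on the speculation result.
            self.quota = self.quota + 1 if committed else max(1, self.quota - 1)
    ```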
  • Patent number: 9710392
    Abstract: Embodiments are described for methods and systems for mapping virtual memory pages to physical memory pages by analyzing a sequence of memory-bound accesses to the virtual memory pages, determining a degree of contiguity between the accessed virtual memory pages, and mapping sets of the accessed virtual memory pages to respective single physical memory pages. Embodiments are also described for a method for increasing locality of memory accesses to DRAM in virtual memory systems by analyzing a pattern of virtual memory accesses to identify contiguity of accessed virtual memory pages, predicting contiguity of the accessed virtual memory pages based on the pattern, and mapping the identified and predicted contiguous virtual memory pages to respective single physical memory pages.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: July 18, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Syed Ali Jafri, Yasuko Eckert, Srilatha Manne, Mithuna S Thottethodi
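
    The contiguity-detection step of the analysis can be sketched as grouping an observed set of virtual page numbers into maximal runs; the run-splitting rule here is an assumption, and the patent's predictor is more involved:

    ```python
    # Sketch of finding runs of contiguous virtual pages in an access
    # trace, so each run could be backed by a single larger physical
    # mapping to increase DRAM locality.

    def contiguous_runs(vpns):
        """Group a set of accessed virtual page numbers into maximal
        contiguous runs, in ascending order."""
        runs, run = [], []
        for vpn in sorted(set(vpns)):
            if run and vpn != run[-1] + 1:
                runs.append(run)          # gap found: close current run
                run = []
            run.append(vpn)
        if run:
            runs.append(run)
        return runs
    ```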