Look-ahead Patents (Class 711/137)
-
Patent number: 11397685
Abstract: There is provided a data processing apparatus and method for storing a plurality of prediction cache entries in a prediction cache with associativity greater than one comprising a plurality of prediction cache ways, each of the plurality of prediction entries defining an association between a prediction cache lookup address and target information; and storing a plurality of stream entries, each stream entry corresponding to a sequence of prediction cache lookup addresses and comprising: a stream identifier defined by two or more sequential prediction cache lookup addresses of the sequence, and a plurality of sequential way predictions, each way prediction of the plurality of sequential way predictions defining, for a given position in the sequence of prediction cache lookup addresses, a prediction cache way to be looked up in the prediction cache to identify a prediction entry associated with the prediction cache lookup address at the given position in the sequence.
Type: Grant
Filed: February 24, 2021
Date of Patent: July 26, 2022
Assignee: Arm Limited
Inventors: Yasuo Ishii, Chang Joo Lee
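A minimal sketch of the idea behind this entry: a stream table keyed by two consecutive lookup addresses, storing one predicted way per position in the sequence. The class and field names here are illustrative assumptions, not the patented implementation.

```python
# Hypothetical way-predicted stream table: a stream is identified by a pair
# of consecutive prediction-cache lookup addresses, and stores one predicted
# cache way per position in the address sequence.

class StreamTable:
    def __init__(self):
        self.streams = {}  # (addr_a, addr_b) -> list of per-position way predictions

    def record(self, addresses, ways):
        """Learn a stream, identifying it by its first two lookup addresses."""
        stream_id = (addresses[0], addresses[1])
        self.streams[stream_id] = list(ways)

    def predict_way(self, stream_id, position):
        """Return the predicted cache way at a position in the stream, or None."""
        ways = self.streams.get(stream_id)
        if ways is None or position >= len(ways):
            return None  # no prediction: fall back to looking up all ways
        return ways[position]

table = StreamTable()
table.record([0x100, 0x140, 0x180, 0x1C0], ways=[2, 0, 3, 1])
assert table.predict_way((0x100, 0x140), 2) == 3   # third access hits way 3
assert table.predict_way((0x200, 0x240), 0) is None  # unknown stream
```

The payoff of this structure is that a hit on the way prediction lets the cache probe a single way instead of all ways, saving lookup energy.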
-
Patent number: 11379379
Abstract: Described is a computing system and method for differential cache block sizing for computing systems. The method for differential cache block sizing includes determining, upon a cache miss at a cache, a number of available cache blocks given a payload length of the main memory and a cache block size for the last level cache, generating a main memory request including at least one indicator for a missed cache block and any available cache blocks, sending the main memory request to the main memory to obtain data associated with the missed cache block and each of the any available cache blocks, storing the data received for the missed cache block in the cache, and storing the data received for each of the any available cache blocks in the cache depending on a cache replacement algorithm.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 5, 2022
Assignee: Marvell Asia Pte, Ltd.
Inventors: Shubhendu Mukherjee, David Asher, Thomas F. Hummel
-
Patent number: 11379372
Abstract: Memory prefetching in a processor comprises: identifying, in response to memory access instructions, a pattern of addresses; in response to a first memory access request corresponding to a sub-pattern of the pattern of addresses, prefetching a first address that is offset from the sub-pattern of addresses by a first lookahead value, wherein the first address is part of the pattern; measuring a memory access latency; determining, based on the memory access latency, a second lookahead value, wherein the second lookahead value is different from the first lookahead value; and in response to a second memory access request corresponding to the sub-pattern of the pattern of addresses, prefetching a second address, wherein the second address is part of the pattern, and wherein the second address is offset from the sub-pattern of addresses by the second lookahead value.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 5, 2022
Assignee: Marvell Asia Pte, Ltd.
Inventor: Shubhendu Sekhar Mukherjee
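The core intuition of latency-adaptive lookahead can be sketched briefly: the longer memory takes to respond, the further ahead of the demand stream a prefetch must be issued to arrive in time. The function names, bounds, and timing model below are illustrative assumptions, not the patented method.

```python
# Illustrative sketch: scale the prefetch lookahead with measured memory
# latency so prefetched data lands in the cache before the demand access.

def lookahead_for_latency(latency_ns, access_interval_ns, min_ahead=1, max_ahead=16):
    """Choose how many strides ahead to prefetch so data arrives before use."""
    needed = -(-latency_ns // access_interval_ns)  # ceiling division
    return max(min_ahead, min(max_ahead, needed))

def prefetch_addr(pattern_base, stride, lookahead):
    """Address offset from the matched sub-pattern by the lookahead value."""
    return pattern_base + stride * lookahead

assert lookahead_for_latency(200, 50) == 4
assert lookahead_for_latency(900, 50) == 16   # capped at max_ahead
assert prefetch_addr(0x1000, 64, 4) == 0x1000 + 64 * 4
```

Re-measuring latency and recomputing the lookahead between requests is what yields the "first" and "second" lookahead values the abstract describes.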
-
Patent number: 11372646
Abstract: A computer-implemented method includes fetching a fetch-packet containing a first hyper-block from a first address of a memory. The fetch-packet contains a bitwise distance from an entry point of the first hyper-block to a predicted exit point. The method further includes executing a first branch instruction of the first hyper-block. The first branch instruction corresponds to a first exit point. The first branch instruction includes an address corresponding to an entry point of a second hyper-block. The method also includes storing, responsive to executing the first branch instruction, a bitwise distance from the entry point of the first hyper-block to the first exit point. The method further includes moving a program counter from the first exit point of the first hyper-block to the entry point of the second hyper-block.
Type: Grant
Filed: November 14, 2019
Date of Patent: June 28, 2022
Assignee: Texas Instruments Incorporated
Inventors: Kai Chirca, Timothy D. Anderson, David E. Smith, Jr., Paul D. Gauvreau
-
Patent number: 11366749
Abstract: A storage system has a volatile memory, a non-volatile memory, and a controller. The controller of the storage system can implement various mechanisms for improving random read performance. These mechanisms include improved read prediction cache management, using a pattern length for read prediction, and a time-based enhancement for read prediction. Each of these mechanisms can be used alone or in combination with some or all of the other mechanisms.
Type: Grant
Filed: February 23, 2021
Date of Patent: June 21, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Ariel Navon, Eran Sharon
-
Patent number: 11360902
Abstract: A method for managing a readahead cache in a memory subsystem based on one or more active streams of read commands is described. The method includes receiving a read command that requests data from a memory component and determining whether the read command is part of an active stream of read commands based on a comparison of a set of addresses of the read command with one or more of (1) a command history table, which stores a set of command entries that each correspond to a received read command that has not been associated with an active stream, or (2) an active stream table, which stores a set of stream entries that each correspond to an active stream of read commands. The method further includes modifying a stream entry in the set of stream entries in response to determining that the read command is part of an active stream.
Type: Grant
Filed: November 24, 2020
Date of Patent: June 14, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventor: David A. Palmer
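The two-table scheme in this abstract can be sketched in a few lines: unmatched reads wait in a history table, and two sequential reads promote into a stream entry that later reads can extend. The class, field names, and return labels are assumptions for illustration.

```python
# Illustrative sketch of readahead stream detection with a command history
# table (reads not yet in a stream) and an active stream table, loosely
# following the abstract's structure.

class StreamDetector:
    def __init__(self):
        self.history = []   # (start, length) of reads not yet in any stream
        self.streams = []   # one dict per active stream: next expected address

    def on_read(self, start, length):
        # Does this read extend an active stream?
        for s in self.streams:
            if start == s["next"]:
                s["next"] = start + length   # modify the matching stream entry
                return "stream"
        # Does it pair with a historical read to form a new stream?
        for (h_start, h_len) in self.history:
            if start == h_start + h_len:
                self.streams.append({"next": start + length})
                return "new-stream"
        self.history.append((start, length))
        return "random"

d = StreamDetector()
assert d.on_read(100, 8) == "random"       # first read: goes to history
assert d.on_read(108, 8) == "new-stream"   # sequential with history: new stream
assert d.on_read(116, 8) == "stream"       # extends the active stream
```

Once a read is classified as part of a stream, the readahead cache can speculatively fetch the stream's next expected addresses.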
-
Patent number: 11347649
Abstract: A caching system including a first sub-cache; a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache; and a cache controller configured to receive two or more cache commands, determine a conflict exists between the received two or more cache commands, determine a conflict resolution between the received two or more cache commands, and send the two or more cache commands to the first sub-cache and the second sub-cache.
Type: Grant
Filed: May 22, 2020
Date of Patent: May 31, 2022
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
-
Patent number: 11334285
Abstract: A method of optimising a service rate of a buffer in a computer system having memory stores of first and second type is described. The method selectively services the buffer by routing data to each of the memory stores of the first type and the second type based on read/write capacity of the memory store of the first type.
Type: Grant
Filed: June 25, 2020
Date of Patent: May 17, 2022
Assignee: CORVIL LIMITED
Inventors: Guofeng Li, Ken Jinks, Ian Dowse, Alex Caldas Peixoto, Franciszek Korta
-
Patent number: 11334485
Abstract: A computer system for dynamic enforcement of store atomicity includes multiple processor cores, local cache memory for each processor core, a shared memory, a separate store buffer for each processor core for executed stores that are not yet performed, and a coherence mechanism. A first processor core load on a first processor core receives a value at a first time from a first processor core store in the store buffer and prevents any other first processor core load younger than the first processor core load in program order from committing until a second time when the first processor core store is performed. Between the first time and the second time, any load younger in program order than the first processor core load and having an address matched by coherence invalidation or an address matched by an eviction is squashed.
Type: Grant
Filed: December 16, 2019
Date of Patent: May 17, 2022
Assignee: ETA SCALE AB
Inventors: Stefanos Kaxiras, Alberto Ros
-
Patent number: 11327891
Abstract: Provided is a method of adjusting prefetching operations, the method including setting a prefetching distance, accessing a prefetching-trigger key, determining that a target key is outside of the prefetching distance from the prefetching-trigger key, increasing the prefetching distance, and successfully fetching a subsequent target key of a subsequent prefetching-trigger key from a prefetching read-ahead buffer.
Type: Grant
Filed: May 29, 2020
Date of Patent: May 10, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Heekwon Park, Ho bin Lee, Ilgu Hong, Yang Seok Ki
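The distance-adjustment step described above can be sketched as follows. The growth policy (doubling) and the cap are assumptions for illustration; the abstract only says the distance is increased when the target key falls outside it.

```python
# Illustrative sketch: widen the prefetching distance whenever the target key
# lies beyond trigger + distance, so the next fetch lands in the read-ahead
# buffer. Doubling and the max cap are assumed, not from the patent.

def adjust_distance(trigger_index, target_index, distance, max_distance=64):
    """Grow the distance if the target lies beyond trigger + distance."""
    if target_index - trigger_index > distance:
        distance = min(max_distance, distance * 2)
    return distance

d = 4
d = adjust_distance(trigger_index=10, target_index=20, distance=d)
assert d == 8    # target was 10 keys ahead: distance doubled
d = adjust_distance(trigger_index=10, target_index=20, distance=d)
assert d == 16   # still outside: doubled again
d = adjust_distance(trigger_index=10, target_index=14, distance=d)
assert d == 16   # target within distance: unchanged
```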
-
Patent number: 11321402
Abstract: Indices or data structures used by an enterprise search system are stored across heterogeneous storage devices. One or more characteristics associated with a data structure and one or more characteristics associated with a search query operator supported by the data structure are considered when determining which storage device should store each data structure.
Type: Grant
Filed: August 31, 2017
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC.
Inventors: Olaf René Birkeland, Geir Inge Kristengård, Lars Greger Nordland Hagen
-
Patent number: 11314752
Abstract: A computer system includes a first computer and a second computer. The second computer includes a minimum analysis dataset, in which a data item serving as an analysis target and a repetition unit are defined in advance for each analysis target, and an agent. The agent receives an analysis target data fetching designation including the minimum analysis dataset, a repetition range of repeating acquisition of data, and a repetition unit. The agent generates a first process that acquires data from the first computer and a first instance that executes processing within the first process on the basis of the repetition range and the repetition unit, and activates the first instance to acquire the accumulated data from the first computer. When the processing of the first instance is completed, the agent generates a second process that executes analysis processing and a second instance that executes processing within the second process.
Type: Grant
Filed: April 18, 2018
Date of Patent: April 26, 2022
Assignee: HITACHI, LTD.
Inventors: Ken Sugimoto, Yoshiki Matsuura, Kei Tanimoto
-
Patent number: 11314637
Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore, requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until discarded to free up tracking space for new requests.
Type: Grant
Filed: May 29, 2020
Date of Patent: April 26, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Frank R. Dropps, Thomas McGee, Michael Malewicki
-
Patent number: 11314645
Abstract: In a cache stash relay, first data, from a producer device, is stashed in a shared cache of a data processing system. The first data is associated with first data addresses in a shared memory of the data processing system. An address pattern of the first data addresses is identified. When a request for second data, associated with a second data address, is received from a processing unit of the data processing system, any data associated with data addresses in the identified address pattern are relayed from the shared cache to a local cache of the processing unit if the second data address is in the identified address pattern. The relaying may include pushing the data from the shared cache to the local cache, or a pre-fetcher of the processing unit pulling the data from the shared cache to the local cache in response to a message.
Type: Grant
Filed: December 16, 2020
Date of Patent: April 26, 2022
Assignee: Arm Limited
Inventors: Curtis Glenn Dunham, Jonathan Curtis Beard
-
Patent number: 11307802
Abstract: A computer-implemented method manages I/O queues in a multi-tier storage system. The method includes identifying a set of subsystems in a multi-tier storage system, where each subsystem in the set of subsystems is communicatively connected to the storage system via a non-volatile memory express (NVMe) protocol and correlated to a tier of the multi-tier storage system. The method includes determining a workload of each extent, wherein each extent of the set of extents is stored on one subsystem and the extents are accessed by an application. The method further includes mapping, based on the workload of each extent, each extent to a core of the plurality of cores, wherein the mapping is configured such that each core is balanced. The method includes establishing, based on the mapping, an IOQ for each extent, wherein the IOQ is processed by the core to which it is mapped.
Type: Grant
Filed: February 21, 2020
Date of Patent: April 19, 2022
Assignee: International Business Machines Corporation
Inventors: Kushal Patel, Sarvesh S. Patel, Subhojit Roy
-
Patent number: 11307854
Abstract: A processor of an aspect includes a decode unit to decode an instruction. The instruction is to indicate destination memory address information. An execution unit is coupled with the decode unit. The execution unit, in response to the decode of the instruction, is to store memory addresses, for at least all initial writes to corresponding data items, which are to occur after the instruction in original program order, to a memory address log. A start of the memory address log is to correspond to the destination memory address information. Other processors, methods, systems, and instructions are also disclosed.
Type: Grant
Filed: February 7, 2018
Date of Patent: April 19, 2022
Assignee: Intel Corporation
Inventors: Kshitij Doshi, Roman Dementiev, Vadim Sukhomlinov
-
Patent number: 11308554
Abstract: Systems 100, 1000, methods, and machine-interpretable programming or other instruction products for the management of data transmission by multiple networked computing resources 106, 1106. In particular, the disclosure relates to the synchronization of related requests for transmitting data using distributed network resources.
Type: Grant
Filed: May 21, 2020
Date of Patent: April 19, 2022
Assignee: ROYAL BANK OF CANADA
Inventors: Daniel Aisen, Bradley Katsuyama, Robert Park, John Schwall, Richard Steiner, Allen Zhang, Thomas L. Popejoy
-
Patent number: 11301386
Abstract: Disclosed is a computer implemented method and system to dynamically adjust prefetch depth, the method comprising identifying a first prefetch stream, wherein the first prefetch stream is identified in a prefetch request queue (PRQ), and wherein the first prefetch stream includes a first prefetch depth. The method also comprises determining a number of inflight prefetches, and comparing a number of prefetch machines against the number of inflight prefetches, wherein each of the prefetch machines is configured to monitor one prefetch request. The method further includes adjusting, in response to the comparing, the first prefetch depth of the first prefetch stream.
Type: Grant
Filed: August 1, 2019
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Mohit Karve, Vivek Britto, George W. Rohrbaugh, III
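The comparison step above can be sketched as a simple throttle: each prefetch machine tracks one outstanding request, so the ratio of inflight prefetches to machines indicates whether a stream's depth can grow. The thresholds and step sizes below are assumptions for illustration, not from the patent.

```python
# Illustrative depth-adjustment sketch: deepen a stream when prefetch
# machines are idle, back off when they are saturated.

def adjust_depth(depth, inflight, machines, min_depth=1, max_depth=8):
    """Compare inflight prefetches to available machines and adjust depth."""
    if inflight >= machines:          # every tracker busy: reduce depth
        return max(min_depth, depth - 1)
    if inflight < machines // 2:      # plenty of spare trackers: go deeper
        return min(max_depth, depth + 1)
    return depth                      # moderate load: hold steady

assert adjust_depth(depth=4, inflight=16, machines=16) == 3
assert adjust_depth(depth=4, inflight=3, machines=16) == 5
assert adjust_depth(depth=4, inflight=10, machines=16) == 4
```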
-
Patent number: 11294595
Abstract: An adaptive-feedback-based read-look-ahead management system and method are provided. In one embodiment, a method for stream management is presented that is performed in a storage system. The method comprises performing a read look ahead operation for each of a plurality of streams; determining a success rate of the read look ahead operation of each of the plurality of streams; and allocating more of the memory for a stream that has a success rate above a threshold than for a stream that has a success rate below the threshold. Other embodiments are provided.
Type: Grant
Filed: December 18, 2018
Date of Patent: April 5, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Ariel Navon, Alexander Bazarsky
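The allocation policy can be sketched as weighting buffer space by each stream's hit rate. The doubling weight and the threshold value are assumptions for illustration; the abstract only requires that high-success streams receive more memory than low-success ones.

```python
# Illustrative sketch: split read-look-ahead buffer blocks among streams,
# giving streams with a success rate above the threshold a double weight.

def allocate_rla(total_blocks, success_rates, threshold=0.5):
    """Return a per-stream block allocation favoring successful streams."""
    weights = [2 if r > threshold else 1 for r in success_rates]
    total = sum(weights)
    return [total_blocks * w // total for w in weights]

alloc = allocate_rla(total_blocks=12, success_rates=[0.9, 0.2, 0.7])
assert alloc == [4, 2, 4]
assert alloc[0] > alloc[1]  # high-success stream gets more memory
```

Feeding measured hit rates back into the allocation on each interval is what makes the scheme adaptive.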
-
Patent number: 11282095
Abstract: In some embodiments, apparatuses and methods are provided to enable wide access to numerous different previously compiled forecast models. In some embodiments, a system is provided that enables wide access to forecasting, comprising: a forecast model database that maintains numerous different forecast models that when run produce resulting forecast data relevant to making business decisions; and a forecasting interface system configured to receive multiple different forecast requests for forecast request data, which comprises a forecast model index comprising identifiers of the numerous different predefined forecast models and, for each of the numerous different forecast models, relevance characteristics, wherein the forecasting interface system selects, for each received forecast request, a forecast model of the numerous different forecast models based on a relationship between the corresponding forecast request data and the relevance characteristics.
Type: Grant
Filed: November 11, 2019
Date of Patent: March 22, 2022
Assignee: Walmart Apollo, LLC
Inventors: Christopher M. Johnson, Ting Li
-
Patent number: 11281585
Abstract: Systems, apparatuses, and methods related to memory systems and operation are described. A memory system may be coupled to a processor, which includes a memory controller. The memory controller may determine whether targeting of first data and second data by the processor to perform an operation results in processor-side cache misses. When targeting of the first data and the second data result in processor-side cache misses, the memory controller may determine a single memory access request that requests return of both the first data and the second data and instruct the processor to output the single memory access request to a memory system via one or more data buses coupled between the processor and the memory system to enable processing circuitry implemented in the processor to perform the operation based at least in part on the first data and the second data when returned from the memory system.
Type: Grant
Filed: May 31, 2019
Date of Patent: March 22, 2022
Assignee: Micron Technology, Inc.
Inventor: Harold Robert George Trout
-
Patent number: 11281502
Abstract: A method for dispatching tasks on processor cores based on memory access efficiency is disclosed. The method identifies a task and a memory area to be accessed by the task. The method may use one or more of a compiler, code knowledge, and run-time statistics to identify the memory area that is accessed by the task. The method identifies multiple processor cores that are candidates to execute the task and identifies a particular processor core from the multiple processor cores that provides most efficient access to the memory area. The method dispatches the task to execute on the particular processor core that is deemed most efficient. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 22, 2020
Date of Patent: March 22, 2022
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Matthew J. Kalos, Kevin J. Ash, Trung N. Nguyen
-
Patent number: 11275509
Abstract: A computer system comprising: a data storage medium comprising a plurality of storage devices configured to store data; and a data storage controller coupled to the data storage medium; wherein the data storage controller is configured to: receive read and write requests targeted to the data storage medium; schedule said read and write requests for processing by said plurality of storage devices; detect that a given device of the plurality of devices is exhibiting an unscheduled behavior comprising variable performance by one or more of the plurality of storage devices, wherein the variable performance comprises at least one of a relatively high response latency or relatively low throughput; and schedule one or more reactive operations in response to detecting the occurrence of the unscheduled behavior, said one or more reactive operations being configured to cause the given device to enter a known state.
Type: Grant
Filed: February 4, 2019
Date of Patent: March 15, 2022
Assignee: Pure Storage, Inc.
Inventors: John Colgrove, Craig Harmer, John Hayes, Bo Hong, Ethan Miller, Feng Wang
-
Patent number: 11263133
Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
Type: Grant
Filed: March 12, 2019
Date of Patent: March 1, 2022
Assignee: Arm Limited
Inventors: Andreas Lars Sandberg, Stephan Diestelhorst, Nikos Nikoleris, Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre
-
Patent number: 11263138
Abstract: An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines, each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched.
Type: Grant
Filed: October 31, 2018
Date of Patent: March 1, 2022
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Miles Robert Dooley, Michael Filippo
-
Patent number: 11256626
Abstract: Apparatus, method, and system for enhancing data prefetching based on non-uniform memory access (NUMA) characteristics are described herein. An apparatus embodiment includes a system memory, a cache, and a prefetcher. The system memory includes multiple memory regions, at least some of which are associated with different NUMA characteristics (access latency, bandwidth, etc.) than others. Each region is associated with its own set of prefetch parameters that are set in accordance with its respective NUMA characteristics. The prefetcher monitors data accesses to the cache and generates one or more prefetch requests to fetch data from the system memory to the cache based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. The set of prefetch parameters may include prefetch distance, training-to-stable threshold, and throttle threshold.
Type: Grant
Filed: April 1, 2020
Date of Patent: February 22, 2022
Assignee: Intel Corporation
Inventors: Wim Heirman, Ibrahim Hur, Ugonna Echeruo, Stijn Eyerman, Kristof Du Bois
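The per-region parameter idea can be sketched as a lookup from memory region to a parameter set. The region layout and the parameter values below are invented for the example; the parameter names mirror the three the abstract lists.

```python
# Illustrative sketch: each memory region carries its own prefetch parameters
# tuned to its NUMA characteristics, and the prefetcher selects the set for
# the region being fetched. All values here are made up for the example.

from dataclasses import dataclass

@dataclass
class PrefetchParams:
    distance: int          # how far ahead to prefetch
    train_to_stable: int   # accesses before a stream is trusted
    throttle: int          # cap on outstanding prefetches

# Region base address -> parameters (far/high-latency regions get a longer distance)
regions = {
    0x0000_0000: PrefetchParams(distance=4,  train_to_stable=2, throttle=8),  # near
    0x8000_0000: PrefetchParams(distance=16, train_to_stable=4, throttle=4),  # far
}

def params_for(addr):
    """Find the parameters of the region containing this address."""
    base = max(b for b in regions if b <= addr)
    return regions[base]

assert params_for(0x1000).distance == 4
assert params_for(0x9000_0000).distance == 16
```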
-
Patent number: 11256623
Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into cache. Since the target device can therefore decide whether to respond to the trigger or not, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
Type: Grant
Filed: February 8, 2017
Date of Patent: February 22, 2022
Assignee: ARM LIMITED
Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce, Michael Filippo, Paul Gilbert Meyer, Alex James Waugh, Geoffray Matthieu Lacourba
-
Patent number: 11249909
Abstract: Systems and methods to predict and prefetch a cache access based on a delta pattern are disclosed. The delta pattern may comprise a sequence of differences between first and second cache accesses within a page. In one example, a processor includes execution circuitry to extract a delta history corresponding to a delta pattern associated with one or more previous cache accesses corresponding to a page of memory. The processor execution circuitry further generates a bucketed delta history based on the delta history corresponding to the page of memory and selects a prediction entry based on the bucketed delta history. The processor execution circuitry then identifies one or more prefetch candidates based on a confidence threshold, with the confidence threshold indicating one or more probable delta patterns, and filters the one or more prefetch candidates. Prefetch circuitry of the processor then predicts and prefetches a cache access based on the one or more prefetch candidates.
Type: Grant
Filed: December 28, 2018
Date of Patent: February 15, 2022
Assignee: Intel Corporation
Inventors: Hanna Alam, Joseph Nuzman
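Delta-history prediction can be sketched as a table from recent delta history to a counted next-delta candidate. The fixed history length, table structure, and confidence test below are simplifying assumptions; the patented scheme also buckets histories and filters candidates in ways this sketch omits.

```python
# Illustrative delta-history prefetch sketch: record deltas between successive
# cache-line offsets within a page, then predict the next delta from the most
# recent history and a simple confidence (count) threshold.

from collections import defaultdict

class DeltaPrefetcher:
    def __init__(self, history_len=2):
        self.history_len = history_len
        self.table = defaultdict(lambda: defaultdict(int))  # history -> {delta: count}

    def train(self, offsets):
        """Learn (history -> next delta) counts from a sequence of offsets."""
        deltas = [b - a for a, b in zip(offsets, offsets[1:])]
        for i in range(len(deltas) - self.history_len):
            key = tuple(deltas[i:i + self.history_len])
            self.table[key][deltas[i + self.history_len]] += 1

    def predict(self, recent_deltas, min_count=1):
        """Return the most likely next delta, or None below the threshold."""
        candidates = self.table.get(tuple(recent_deltas[-self.history_len:]))
        if not candidates:
            return None
        delta, count = max(candidates.items(), key=lambda kv: kv[1])
        return delta if count >= min_count else None

p = DeltaPrefetcher()
p.train([0, 1, 2, 4, 5, 6, 8])   # repeating delta pattern 1, 1, 2
assert p.predict([1, 1]) == 2
assert p.predict([5, 7]) is None  # unseen history: no prefetch
```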
-
Patent number: 11243884
Abstract: A method of prefetching target data includes, in response to detecting a lock-prefixed instruction for execution in a processor, determining a predicted target memory location for the lock-prefixed instruction based on control flow information associating the lock-prefixed instruction with the predicted target memory location. Target data is prefetched from the predicted target memory location to a cache coupled with the processor, and after completion of the prefetching, the lock-prefixed instruction is executed in the processor using the prefetched target data.
Type: Grant
Filed: November 13, 2018
Date of Patent: February 8, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: Susumu Mashimo, John Kalamatianos
-
Patent number: 11243885
Abstract: Provided are a computer program product, system, and method for providing track access reasons for track accesses resulting in the release of prefetched cache resources for the track. A first request for a track is received from a process for which prefetched cache resources to a cache are held for a second request for the track that is expected. A track access reason is provided for the first request specifying a reason for the first request. The prefetched cache resources are released before the second request to the track is received. Indication is made in an unexpected released track list of the track and the track access reason for the first request.
Type: Grant
Filed: August 4, 2020
Date of Patent: February 8, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Beth Ann Peterson, Chung Man Fung, Matthew J. Kalos, Warren Keith Stanley, Matthew J. Ward
-
Patent number: 11231928
Abstract: Devices and techniques are disclosed herein for more efficiently exchanging large amounts of data between a host and a storage system. In an example, a large read operation can include receiving a pre-fetch command, a parameter list, and a read command at a storage system. In certain examples, the pre-fetch command can provide an indication of the length of the parameter list, and the parameter list can provide location identifiers of the storage system from which the read command can sense the read data.
Type: Grant
Filed: October 3, 2019
Date of Patent: January 25, 2022
Assignee: Micron Technology, Inc.
Inventors: Qing Liang, Nadav Grosz
-
Patent number: 11232533
Abstract: Embodiments are generally directed to memory prefetching in a multi-GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.
Type: Grant
Filed: March 15, 2019
Date of Patent: January 25, 2022
Assignee: INTEL CORPORATION
Inventors: Joydeep Ray, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Nicolas Galoppo von Borries, Varghese George, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran
-
Patent number: 11228658
Abstract: Systems and methods for processing requests to execute a program code of a user use a message queue service to store requests when there are not enough resources to process the requests. The message queue service determines whether a request to be queued is associated with data that the program code needs in order to process the request. If so, the message queue service locates and retrieves the data and stores the data in a cache storage that provides faster access by the program code to the pre-fetched data. This provides faster execution of asynchronous instances of the program code.
Type: Grant
Filed: January 6, 2020
Date of Patent: January 18, 2022
Assignee: Amazon Technologies, Inc.
Inventor: Nima Sharifi Mehr
-
Patent number: 11227220
Abstract: Methods and systems for automatically discovering data types required by a computer-based rule engine for evaluating a transaction request are presented. Multiple potential paths for evaluating the transaction request according to the rule engine are determined. An abstract syntax tree may be generated based on the rule engine to determine the multiple potential paths. Based on an initial set of data extracted from the transaction request, one or more potential paths that are determined to be irrelevant to evaluating the transaction request are identified. Types of data required to evaluate the transaction request according to the remaining potential paths are determined. Only data that corresponds to the determined types of data is retrieved to evaluate the transaction request.
Type: Grant
Filed: December 15, 2017
Date of Patent: January 18, 2022
Assignee: PayPal, Inc.
Inventors: Srinivasan Manoharan, Sahil Dahiya, Vinesh Chirakkil, Gurinder Grewal, Harish Nalagandla, Christopher S. Purdum, Girish Sharma
-
Patent number: 11221762
Abstract: A processor includes a first memory interface to be coupled to a plurality of memory module sockets located off-package, a second memory interface to be coupled to a non-volatile memory (NVM) socket located off-package, and a multi-level memory controller (MLMC). The MLMC is to: control the memory modules disposed in the plurality of memory module sockets as main memory in a one-level memory (1LM) configuration; detect a switch from a 1LM mode of operation to a two-level memory (2LM) mode of operation in response to a basic input/output system (BIOS) detection of a low-power memory module disposed in one of the memory module sockets and a NVM device disposed in the NVM socket in a 2LM configuration; and control the low-power memory module as cache in the 2LM configuration in response to detection of the switch from the 1LM mode of operation to the 2LM mode of operation.
Type: Grant
Filed: February 13, 2019
Date of Patent: January 11, 2022
Assignee: Intel Corporation
Inventors: Joydeep Ray, Varghese George, Inder M. Sodhi, Jeffrey R. Wilcox
-
Patent number: 11210093
Abstract: Devices and techniques are disclosed herein for more efficiently exchanging large amounts of data between a host and a storage system. In an example, a read command can optionally include a read-type indicator. The read-type indicator can allow for exchange of a large amount of data between the host and the storage system using a single read command.
Type: Grant
Filed: September 9, 2019
Date of Patent: December 28, 2021
Assignee: Micron Technology, Inc.
Inventors: Qing Liang, Nadav Grosz
-
Patent number: 11200057
Abstract: An arithmetic processing apparatus includes: a memory; and a processor coupled to the memory, wherein the processor: detects whether intervals of a plurality of addresses to be accessed by a memory access instruction that performs memory access to the plurality of addresses by a single instruction are all the same; decodes the memory access instruction as the single instruction when detecting that the intervals are all the same; decodes the memory access instruction as a plurality of instructions when detecting that the intervals are not all the same; and performs the memory access in accordance with the single instruction or the plurality of instructions.
Type: Grant
Filed: April 27, 2018
Date of Patent: December 14, 2021
Assignee: FUJITSU LIMITED
Inventor: Shingo Watanabe
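The decode decision described here reduces to a uniform-stride check, which can be sketched as (a simplified model, not Fujitsu's hardware logic):

```python
def decode_gather(addresses):
    """Return 'single' when every interval between consecutive addresses is
    identical (one strided access covers all elements), else 'multiple'
    (the instruction is cracked into one access per element)."""
    strides = {b - a for a, b in zip(addresses, addresses[1:])}
    return "single" if len(strides) <= 1 else "multiple"

decode_gather([0, 8, 16, 24])   # uniform stride of 8 -> 'single'
decode_gather([0, 8, 20])       # strides 8 and 12 -> 'multiple'
```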
-
Patent number: 11200500
Abstract: Methods and systems for using machine learning to automatically determine a data loading configuration for a computer-based rule engine are presented. The computer-based rule engine is configured to use rules to evaluate incoming transaction requests. Data of various data types may be required by the rule engine when evaluating the incoming transaction requests. The data loading configuration specifies pre-loading data associated with at least a first data type and lazy-loading data associated with at least a second data type. Statistical data such as use rates and loading times associated with the various data types may be supplied to a machine learning module to determine a particular loading configuration for the various data types. The computer-based rule engine then loads data according to the data loading configuration when evaluating a subsequent transaction request.
Type: Grant
Filed: March 30, 2018
Date of Patent: December 14, 2021
Assignee: PayPal, Inc.
Inventors: Srinivasan Manoharan, Vinesh Chirakkil, Jun Zhu, Christopher S. Purdum, Sahil Dahiya, Gurinder Grewal, Harish Nalagandla, Girish Sharma
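The preload-versus-lazy decision from use rates and loading times can be sketched with a simple cost heuristic standing in for the patent's machine learning module (the threshold and field names are assumptions for illustration):

```python
def loading_config(stats, threshold=50.0):
    """stats: data_type -> (use_rate in [0,1], avg load_time in ms).
    Pre-load a type when its expected on-demand cost (use_rate * load_time,
    the latency a lazy load would add per request, weighted by how often
    the data is actually used) exceeds the threshold; lazy-load otherwise."""
    return {
        dtype: "preload" if use_rate * load_time >= threshold else "lazy"
        for dtype, (use_rate, load_time) in stats.items()
    }

loading_config({
    "credit_history": (0.9, 120),      # used often, slow -> worth pre-loading
    "device_fingerprint": (0.1, 80),   # rarely used -> lazy-load
})
```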
-
Patent number: 11194504
Abstract: Efficient pre-reading is performed in data transmission and reception between an Edge node and a Core node. An information processing device includes a storage device, outputs client request data based on a request of a client, and stores predetermined pre-read data in the storage device before the request of the client. The device includes: a relevance calculation module configured to calculate relevance between data based on an access history of the data; and a pre-reading and deletion module configured to determine data to be deleted from the storage device using the relevance when data having predetermined relevance with the client request data is to be stored to the storage device as the pre-read data and a storage capacity of the storage device is insufficient if at least one of the client request data and the pre-read data is to be stored to the storage device.
Type: Grant
Filed: April 7, 2020
Date of Patent: December 7, 2021
Assignee: HITACHI, LTD.
Inventors: Kazumasa Matsubara, Mitsuo Hayasaka
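The relevance-driven eviction step can be sketched as below. This is a hypothetical reading of the abstract: when storing the requested data plus its pre-read companion would overflow capacity, the item least relevant to the current request is deleted first.

```python
def store_with_preread(store, requested, preread, capacity, relevance):
    """store: set of cached items. Insert the requested item and its
    pre-read companion; when capacity is insufficient, evict the stored
    item with the lowest relevance to the requested data (never the
    requested item itself)."""
    for item in (requested, preread):
        if item in store:
            continue
        if len(store) >= capacity:
            victim = min(store - {requested},
                         key=lambda x: relevance(requested, x))
            store.discard(victim)
        store.add(item)
    return store

# Relevance scores derived (hypothetically) from co-access history
rel = {"a": 5, "b": 1, "c": 3}
store_with_preread({"a", "b", "c"}, "d", "e", 3, lambda r, x: rel.get(x, 0))
```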
-
Patent number: 11188319
Abstract: The present application is directed towards systems and methods for identifying and grouping code objects into functional areas with boundaries crossed by entry points. An analysis agent may select a first functional area of a source installation of an application to be transformed to a target installation of the application from a plurality of functional areas of the source installation, each functional area comprising a plurality of associated code objects; and identify a first subset of the plurality of associated code objects of the first functional area having associations only to other code objects of the first functional area, and a second subset of the plurality of associated code objects of the first functional area having associations to code objects in additional functional areas, the second subset comprising entry points of the first functional area.
Type: Grant
Filed: June 30, 2020
Date of Patent: November 30, 2021
Assignee: SMARTSHIFT TECHNOLOGIES, INC.
Inventors: Albrecht Gass, Stefan Hetges, Nikolaos Faradouris, Oliver Flach
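The two-subset split the abstract describes is a simple partition over an association graph; a minimal sketch (data shapes are assumptions, not SmartShift's representation):

```python
def split_entry_points(area_objects, associations):
    """area_objects: set of code-object ids in one functional area.
    associations: obj -> set of objects it is associated with.
    Returns (internal, entry_points): objects whose associations stay
    inside the area, and objects with at least one cross-area association."""
    internal, entries = set(), set()
    for obj in area_objects:
        if associations.get(obj, set()) <= area_objects:  # subset test
            internal.add(obj)
        else:
            entries.add(obj)   # crosses the area boundary -> entry point
    return internal, entries

# "b" is associated with "c", which lives in another functional area
split_entry_points({"a", "b"}, {"a": {"b"}, "b": {"c"}})
```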
-
Patent number: 11176045
Abstract: In an embodiment, a processor includes a plurality of prefetch circuits configured to prefetch data into a data cache. A primary prefetch circuit may be configured to generate first prefetch requests in response to a demand access, and may be configured to invoke a second prefetch circuit in response to the demand access. The second prefetch circuit may implement a different prefetch mechanism than the first prefetch circuit. If the second prefetch circuit reaches a threshold confidence level in prefetching for the demand access, the second prefetch circuit may communicate an indication to the primary prefetch circuit. The primary prefetch circuit may reduce a number of prefetch requests generated for the demand access responsive to the communication from the second prefetch circuit.
Type: Grant
Filed: March 27, 2020
Date of Patent: November 16, 2021
Assignee: Apple Inc.
Inventors: Stephan G. Meier, Tyler J. Huberty, Nikhil Gupta
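The throttling interaction between the two prefetchers can be modeled in a few lines. The halving policy and next-line pattern below are illustrative assumptions; the patent only says the primary reduces its request count after the confidence signal.

```python
class PrimaryPrefetcher:
    """Sketch: issues `degree` next-line prefetches per demand access and
    halves that number once the secondary prefetcher signals that it has
    reached its confidence threshold."""
    def __init__(self, degree=4):
        self.degree = degree
        self.secondary_confident = False

    def on_secondary_confident(self):
        # indication communicated by the second prefetch circuit
        self.secondary_confident = True

    def prefetch_requests(self, addr, line=64):
        n = self.degree // 2 if self.secondary_confident else self.degree
        return [addr + line * i for i in range(1, n + 1)]
```

Once the secondary circuit covers the stream, the primary backs off, avoiding duplicate requests for the same demand access.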
-
Patent number: 11169923
Abstract: The method for performing read-ahead operations in data storage systems is disclosed and includes determining a sequential address space interval of a request and a time of the request, placing the data into a read-ahead interval list if the address space interval exceeds a threshold, and placing the data about request intervals having a length shorter than the threshold into a random request interval list, identifying a partial overlap between the address space interval of the current request and the interval stored in one of the lists, verifying whether the length of the address space interval exceeds a threshold and, if so, placing the data about this sequential interval into the read-ahead interval list, and performing read-ahead of data.
Type: Grant
Filed: July 16, 2018
Date of Patent: November 9, 2021
Assignee: RAIDIX
Inventors: Evgeny Evgenievich Anastasiev, Svetlana Viktorovna Lazareva
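The interval bookkeeping above can be sketched as follows; the merge rule and half-open interval representation are assumptions made for illustration:

```python
def place_interval(interval, seq_list, rand_list, threshold):
    """Merge the new request interval with a partially overlapping interval
    already stored in either list (at most one per list), then route the
    result by length: >= threshold -> read-ahead (sequential) list,
    shorter -> random request interval list."""
    start, end = interval
    for lst in (seq_list, rand_list):
        for i, (s, e) in enumerate(lst):
            if s <= end and start <= e:          # partial overlap
                start, end = min(s, start), max(e, end)
                del lst[i]
                break
    target = seq_list if end - start >= threshold else rand_list
    target.append((start, end))
    return (start, end)

# A short "random" interval grows past the threshold and is promoted
seq, rand = [], [(0, 10)]
place_interval((8, 20), seq, rand, threshold=16)
```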
-
Patent number: 11170463
Abstract: Methods, systems, apparatus, and articles of manufacture to reduce memory latency when fetching pixel kernels are disclosed. An example apparatus includes a prefetch kernel retriever to generate a block tag based on a first request from a hardware accelerator, the first request including first coordinates of a first pixel disposed in a first image block, a memory interface engine to store the first image block including a plurality of pixels including the pixel in a cache storage based on the block tag, and a kernel retriever to access two or more memory devices included in the cache storage in parallel to transfer a plurality of image blocks including the first image block when a second request is received including second coordinates of a second pixel disposed in the first image block.
Type: Grant
Filed: May 18, 2018
Date of Patent: November 9, 2021
Assignee: MOVIDIUS LIMITED
Inventors: Richard Boyd, Richard Richmond
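The block tag derived from pixel coordinates amounts to quantizing the coordinates by the block dimensions, so two pixels in the same image block share a tag. A minimal sketch (the 16x16 block size is an assumed parameter, not taken from the patent):

```python
def block_tag(x, y, block_w=16, block_h=16):
    """Tag identifying the image block containing pixel (x, y): integer
    block coordinates obtained by dividing out the block dimensions."""
    return (x // block_w, y // block_h)

block_tag(17, 5)    # (1, 0): second block along x, first along y
```

A second request whose pixel maps to the same tag can then be served from the cached block instead of main memory.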
-
Patent number: 11163683
Abstract: Disclosed is a computer implemented method to dynamically adjust prefetch depth, the method comprising sending, to a first prefetch machine, a first prefetch request configured to fetch a first data address from a first stream at a first depth to a lower level cache. The method also comprises sending, to a second prefetcher, a second prefetch request configured to fetch the first data address from the first stream at a second depth to a highest-level cache. The method further comprises determining the first data address is not in the lower level cache, determining, that the first prefetch request is in the first prefetch machine, and determining, in response to the first prefetch request being in the first prefetch machine, that the first stream is at steady state. The method comprises adjusting, in response to determining that the first stream is at steady state, the first depth.
Type: Grant
Filed: August 1, 2019
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Mohit Karve, Edmund Joseph Gieske, Vivek Britto, George W. Rohrbaugh, III
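The steady-state test reduces to: the demand address misses the lower-level cache while a prefetch for it is still in flight, meaning the prefetcher is running but not far enough ahead. The sketch below assumes the adjustment is a depth increase, which the abstract implies but does not state:

```python
def adjust_depth(addr, lower_cache, prefetch_machines, depth, max_depth=8):
    """If addr missed the lower-level cache but an in-flight prefetch for it
    exists in a prefetch machine, the stream is at steady state and late:
    deepen the prefetch distance (hypothetical policy; max_depth assumed)."""
    if addr not in lower_cache and addr in prefetch_machines:
        return min(depth + 1, max_depth)
    return depth

adjust_depth(100, lower_cache=set(), prefetch_machines={100}, depth=4)
```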
-
Patent number: 11157283
Abstract: A graphics processing device comprises a set of compute units to execute multiple threads of a workload, a cache coupled with the set of compute units, and a prefetcher to prefetch instructions associated with the workload. The prefetcher is configured to use a thread dispatch command that is used to dispatch threads to execute a kernel to prefetch instructions, parameters, and/or constants that will be used during execution of the kernel. Prefetch operations for the kernel can then occur concurrently with thread dispatch operations.
Type: Grant
Filed: January 9, 2019
Date of Patent: October 26, 2021
Assignee: Intel Corporation
Inventors: James Valerio, Vasanth Ranganathan, Joydeep Ray, Pradeep Ramani
-
Patent number: 11146656
Abstract: In some embodiments, an electronic device is disclosed for intelligently prefetching data via a computer network. The electronic device can include a device housing, a user interface, a memory device, and a hardware processor. The hardware processor can: communicate via a communication network; determine that the hardware processor is expected to be unable to communicate via the communication network; responsive to determining that the hardware processor is expected to be unable to communicate via the communication network, determine prefetch data to request prior to the hardware processor being unable to communicate via the communication network; request the prefetch data; receive and store the prefetch data prior to the hardware processor being unable to communicate via the communication network; and subsequent to the hardware processor being unable to communicate via the communication network, process the prefetch data with an application responsive to processing a first user input with the application.
Type: Grant
Filed: December 18, 2020
Date of Patent: October 12, 2021
Assignee: Tealium Inc.
Inventors: Craig P. Rouse, Harry Cassell, Christopher B. Slovak
-
Patent number: 11144574
Abstract: A temporal DB that stores data having been stored in a DB of a mainframe is provided in a DB dedicated device 20. During a DB update, when an application on a mainframe issues an update SQL, a DBMS updates the DB and stores an update log, and an update-log capturing unit periodically reads out the update log. In the DB dedicated device 20, an update-log applying unit updates the temporal DB based on the update log. During DB reference, when the application on the mainframe issues an inquiry SQL with inquiry target time attached, the DBMS transfers the inquiry SQL to the inquiry processing unit. In the DB dedicated device, the inquiry processing unit inquires the temporal DB about data for the inquiry target time and returns an inquiry result to the DBMS.
Type: Grant
Filed: November 11, 2015
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Ritsuko Boh, Noriaki Kohno
-
Patent number: 11138116
Abstract: A network interface device comprises a programmable interface configured to provide a device interface with at least one bus between the network interface device and a host device. The programmable interface is programmable to support a plurality of different types of a device interface.
Type: Grant
Filed: July 29, 2019
Date of Patent: October 5, 2021
Assignee: XILINX, INC.
Inventors: Steven L. Pope, Dmitri Kitariev, David J. Riddoch, Derek Roberts, Neil Turton
-
Patent number: 11119694
Abstract: The invention discloses a solid-state drive control device and a learning-based solid-state drive data access method, wherein the method comprises the steps of: presetting a hash table, the hash table comprising more than one hash value, each hash value being used to record and represent data characteristics of data pages in the solid-state drive; obtaining an I/O data stream of the solid-state drive, and obtaining a hash value corresponding to the I/O data stream in the hash table; predicting data pages and/or a sequence of data pages that are about to be accessed with a preset first learning model; and prefetching data in the solid-state drive based on an output result of the first learning model. Through the embodiment of the present invention, when predicting prefetched data, learning can be performed in real time to adapt to different application categories and access modes through adaptively adjusted parameters, so that better data prefetching performance can be obtained.
Type: Grant
Filed: January 17, 2019
Date of Patent: September 14, 2021
Assignee: SHENZHEN DAPU MICROELECTRONICS CO., LTD.
Inventors: Jing Yang, Haibo He, Qing Yang
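The hash-keyed prediction step can be sketched with a simple table predictor: a hash of the recent page-access window keys a table of the page that followed it. This stands in for the patent's learning model, which adapts its parameters online; the window size is an assumed parameter.

```python
class PagePrefetcher:
    """Sketch: hash the last `window` page accesses and remember which page
    followed that context; predict() proposes the remembered successor of
    the current context as the prefetch candidate."""
    def __init__(self, window=2):
        self.window = window
        self.table = {}     # hash(context) -> next page observed
        self.history = []   # sliding window of recent page ids

    def access(self, page):
        if len(self.history) == self.window:
            self.table[hash(tuple(self.history))] = page
        self.history = (self.history + [page])[-self.window:]

    def predict(self):
        return self.table.get(hash(tuple(self.history)))

p = PagePrefetcher(window=2)
for page in [1, 2, 3, 1, 2]:
    p.access(page)
p.predict()   # context (1, 2) was previously followed by page 3
```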
-
Patent number: 11119925
Abstract: Apparatus comprising cache storage and a method of operating such a cache storage are provided. Data blocks in the cache storage have capability metadata stored in association therewith identifying whether the data block specifies a capability or a data value. At least one type of capability is a bounded pointer. Responsive to a write to a data block in the cache storage a capability metadata modification marker is set in association with the data block, indicative of whether the capability metadata associated with the data block has changed since the data block was stored in the cache storage. This supports the security of the system, such that modification of the use of a data block from a data value to a capability cannot take place unless intended. Efficiencies may also result when capability metadata is stored separately from other data in memory, as fewer accesses to memory can be made.
Type: Grant
Filed: April 19, 2018
Date of Patent: September 14, 2021
Assignee: Arm Limited
Inventors: Stuart David Biles, Graeme Peter Barnes
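A minimal software model of the marker behavior, assuming the marker is set when a write flips the capability tag (the abstract describes the marker as indicating whether the metadata changed since the block was stored; the class shape is an illustration, not Arm's design):

```python
class CapabilityCacheLine:
    """Sketch: a cache line carrying one capability-metadata bit plus a
    modification marker that records whether that bit has changed since
    the line was filled."""
    def __init__(self, data, is_capability):
        self.data = data
        self.is_capability = is_capability   # capability metadata tag
        self.metadata_modified = False       # marker: tag changed since fill

    def write(self, data, is_capability):
        if is_capability != self.is_capability:
            self.metadata_modified = True    # data value <-> capability flip
        self.data = data
        self.is_capability = is_capability
```

On writeback, a clean marker means the separately stored capability metadata in memory need not be updated, saving an access.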