Look-ahead Patents (Class 711/137)
  • Patent number: 10719441
    Abstract: An electronic device handles memory access requests for data in a memory. The electronic device includes a memory controller for the memory, a last-level cache memory, a request generator, and a predictor. The predictor determines a likelihood that a cache memory access request for data at a given address will hit in the last-level cache memory. Based on the likelihood, the predictor determines: whether a memory access request is to be sent by the request generator to the memory controller for the data in parallel with the cache memory access request being resolved in the last-level cache memory, and, when the memory access request is to be sent, a type of memory access request that is to be sent. When the memory access request is to be sent, the predictor causes the request generator to send a memory request of the type to the memory controller.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: July 21, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Jieming Yin, Yasuko Eckert, Matthew R. Poremba, Steven E. Raasch, Doug Hunt
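The decision described in this entry — using a predicted hit likelihood to decide whether, and what kind of, memory request to send in parallel with the last-level cache lookup — can be sketched as follows. The threshold values and request-type names are assumptions for illustration; the abstract does not disclose them.

```python
def plan_memory_access(hit_probability, low=0.2, high=0.8):
    """Decide whether to issue a memory request in parallel with the
    last-level cache lookup, based on predicted hit likelihood.
    Thresholds and request types are illustrative assumptions."""
    if hit_probability >= high:
        return None                # likely LLC hit: no parallel request
    if hit_probability <= low:
        return "full_read"         # likely miss: fetch data in parallel
    return "activate_only"         # uncertain: prepare memory, defer the read

assert plan_memory_access(0.9) is None
assert plan_memory_access(0.1) == "full_read"
```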
  • Patent number: 10719445
    Abstract: Systems and methods are disclosed for permitting flexible use of volatile memory for storing read command prediction data in a memory device, or in a host memory buffer accessible by the memory device, while preserving accuracy in predicting read commands and pre-fetching data. The read command prediction data may be in the form of a history pattern match table having entries indexed to a search sequence of one or more commands historically preceding the read command in the indexed table entry. A host trigger requesting the limited volatile memory space, a detected lower power state, or a memory device-initiated need may trigger generation and subsequent use of a smaller table for the prediction process while the larger table is released. The memory device may later regenerate the larger table when more space in the volatile memory becomes available.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: July 21, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Ariel Navon, Shay Benisty, Alex Bazarsky
  • Patent number: 10713750
    Abstract: An apparatus to facilitate cache replacement is disclosed. The apparatus includes a cache memory and cache replacement logic to manage data in the cache memory. The cache replacement logic includes tracking logic to track addresses accessed at the cache memory and replacement control logic to monitor the tracking logic and apply a replacement policy based on information received from the tracking logic.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: July 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Altug Koker, Joydeep Ray, Abhishek R. Appu, Vasanth Ranganathan
  • Patent number: 10713187
    Abstract: A memory controller comprises memory access circuitry configured to initiate a data access of data stored in a memory in response to a data access hint message received from another node in data communication with the memory controller; to access data stored in the memory in response to a data access request received from another node in data communication with the memory controller and to provide the accessed data as a data access response to the data access request.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: July 14, 2020
    Assignee: ARM Limited
    Inventors: Michael Filippo, Jamshed Jalal, Klas Magnus Bruce, Paul Gilbert Meyer, David Joseph Hawkins, Phanindra Kumar Mannava, Joseph Michael Pusdesris
  • Patent number: 10705762
    Abstract: Systems, apparatuses, and methods related to memory systems and their operation are described. A memory system may be communicatively coupled to a processor via data buses. The memory system may include a memory array that stores first data at a first storage location and second data at a second storage location. The memory system may include a memory controller, which receives a memory access request that requests return of the first data and the second data, determines a data access pattern resulting from the memory access request, determines an access pointer that identifies the first storage location of the first data and the second storage location of the second data, and instructs the memory system to use the access pointer to identify and output the first data and the second data via the data buses to enable the processor to perform an operation based on the first data and the second data.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 7, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Harold Robert George Trout
  • Patent number: 10691593
    Abstract: Techniques for implementing an apparatus, which includes a memory system that provides data storage via multiple hierarchical memory levels, are provided. The memory system includes a cache that implements a first memory level and a memory array that implements a second memory level higher than the first memory level. Additionally, the memory system includes one or more memory controllers that determine a predicted data access pattern expected to occur during an upcoming control horizon, based at least in part on first context of first data to be stored in the memory system, second context of second data previously stored in the memory system, or both, and that control which of the multiple hierarchical memory levels implemented in the memory system store the first data, the second data, or both, based at least in part on the predicted data access pattern.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: June 23, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Anton Korzh
  • Patent number: 10691472
    Abstract: An object of the present invention is to provide a user interface execution apparatus and a user interface designing apparatus that can estimate, at design time, the maximum size of a storage area for storing data to be prefetched, and that can present updated data to the user even when the prefetched data is updated after the prefetch. A user interface execution apparatus in the present invention includes a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of: transitioning a state of the user interface execution apparatus; issuing a prefetch request for data; storing the data; generating the code from an interface definition and a state transition definition; and selecting, before transitioning the state, data to be prefetched based on a difference between a data obtaining interface used in a state before the transition and a data obtaining interface used in a state after the transition.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: June 23, 2020
    Assignee: Mitsubishi Electric Corporation
    Inventors: Kohei Tanaka, Yoshiaki Kitamura, Akira Toyooka, Mitsuo Shimotani, Yukio Goto
  • Patent number: 10684949
    Abstract: A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a FLC controller and a cache. The cache is configured as a FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: June 16, 2020
    Assignee: FLC Global, Ltd.
    Inventor: Sehat Sutardja
  • Patent number: 10678442
    Abstract: Aspects of the present disclosure generally relate to storage devices and methods of operating the same. In one aspect, a storage device includes a disk, and a head configured to write data to and read data from the disk. The storage device also includes a controller configured to receive a read command in a host command queue, store the read command in a disk queue, and determine whether the host command queue is full of pending read commands, including the received read command. If the host command queue is full of pending read commands, the controller forces execution of one of the pending read commands.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: June 9, 2020
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takeyori Hara, Richard M. Ehrlich, Siri S. Weerasooriya
  • Patent number: 10671361
    Abstract: Technologies relating to determining data variable dependencies to facilitate code execution are disclosed. An example method includes: identifying a set of programming statements having a plurality of data parameters; identifying first data parameters associated with a first programming statement in the set of programming statements; determining one or more parameter dependencies associated with the first data parameters; and determining, based on the one or more parameter dependencies, a first execution performance of the first programming statement. The method may further include: determining a second execution performance of the second programming statement and scheduling execution of the first programming statement and of the second programming statement based on the first and second execution performances.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: June 2, 2020
    Assignee: PayPal, Inc.
    Inventors: Xin Li, Weijia Deng, Shuan Yang, Feng Chen, Jin Yao, Zhijun Ling, Yunfeng Li, Xiaohan Yun, Yang Yu
  • Patent number: 10671536
    Abstract: A method and apparatus are provided for pre-fetching data into a cache using a hardware element that includes registers for receiving a reference for an initial pre-fetch and a stride-indicator. The initial pre-fetch reference allows for direct pre-fetch of a first portion of memory. A stride-indicator is also received and is used along with the initial pre-fetch reference in order to generate a new pre-fetch reference. The new pre-fetch reference is used to fetch a second portion of memory.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: June 2, 2020
    Inventors: Ananth Jasty, Indraneil Gokhale
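The register pair described in this entry — an initial pre-fetch reference plus a stride-indicator generating successive pre-fetch references — can be sketched as below. Register names and the byte stride are illustrative assumptions.

```python
class StridePrefetcher:
    """Sketch of a stride prefetcher: an initial reference register
    plus a stride-indicator register generate each new pre-fetch
    reference from the previous one."""
    def __init__(self, initial_reference, stride):
        self.next_reference = initial_reference  # direct first pre-fetch
        self.stride = stride                     # stride-indicator

    def next_prefetch(self):
        ref = self.next_reference
        self.next_reference += self.stride       # generate new reference
        return ref

pf = StridePrefetcher(initial_reference=0x1000, stride=64)
assert pf.next_prefetch() == 0x1000   # first portion of memory
assert pf.next_prefetch() == 0x1040   # second portion, one stride later
```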
  • Patent number: 10664280
    Abstract: A fetch ahead branch target buffer is used by a branch predictor to determine a target address for a branch instruction based on a fetch pointer for a previous fetch bundle, i.e. a fetch bundle which is fetched prior to a fetch bundle which includes the branch instruction. An entry in the fetch ahead branch target buffer corresponds to one branch instruction and comprises a data portion identifying the target address of that branch instruction. In various examples, an entry also comprises a tag portion which stores data identifying the fetch pointer by which the entry is indexed. Branch prediction is performed by matching an index generated using a received fetch pointer to the tag portions to identify a matching entry and then determining the target address for the branch instruction from the data portion of the matching entry.
    Type: Grant
    Filed: November 9, 2015
    Date of Patent: May 26, 2020
    Assignee: MIPS Tech, LLC
    Inventors: Parthiv Pota, Sanjay Patel, Sudhakar Ranganathan
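The fetch-ahead branch target buffer in this entry — entries indexed by the fetch pointer of the *previous* fetch bundle, each holding a tag portion and a target-address data portion — can be sketched as below. The index/tag split and table size are assumptions, not taken from the patent.

```python
class FetchAheadBTB:
    """Sketch of a fetch-ahead BTB: indexed by the previous bundle's
    fetch pointer; each entry stores a tag plus the branch target."""
    def __init__(self, num_entries=256):
        self.num_entries = num_entries
        self.entries = {}   # index -> (tag, target address)

    def _index_and_tag(self, fetch_pointer):
        # low bits index the table, high bits form the tag (assumption)
        return fetch_pointer % self.num_entries, fetch_pointer // self.num_entries

    def update(self, prev_fetch_pointer, target):
        idx, tag = self._index_and_tag(prev_fetch_pointer)
        self.entries[idx] = (tag, target)

    def predict(self, fetch_pointer):
        idx, tag = self._index_and_tag(fetch_pointer)
        entry = self.entries.get(idx)
        if entry and entry[0] == tag:   # tag match confirms the fetch pointer
            return entry[1]
        return None

btb = FetchAheadBTB()
btb.update(prev_fetch_pointer=0x4000, target=0x8000)
assert btb.predict(0x4000) == 0x8000   # matching entry yields the target
assert btb.predict(0x4040) is None     # no matching entry
```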
  • Patent number: 10656840
    Abstract: Systems, methods and/or devices are used to enable real-time I/O pattern recognition to enhance performance and endurance of a storage device. In one aspect, the method includes (1) at a storage device, receiving from a host a plurality of input/output (I/O) requests, the I/O requests specifying operations to be performed in a plurality of regions in a logical address space of the host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) maintaining a history of I/O request patterns in the region for a predetermined time period, and (b) using the history of I/O request patterns in the region to adjust subsequent I/O processing in the region.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: May 19, 2020
    Assignee: SanDisk Technologies LLC
    Inventors: Dharani Kotte, Akshay Mathur, Chayan Biswas, Baskaran Kannan, Sumant K. Patro
  • Patent number: 10657059
    Abstract: Controlling a rate of prefetching based on bus bandwidth. A determination is made as to whether a rate of prefetching data from memory into a cache is to be changed. This determination is based on bus utilization. Based on determining that the rate is to be changed, the rate of prefetching is changed.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Chung-Lung K. Shum
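The control loop in this entry — changing the prefetch rate based on bus utilization — can be sketched as a watermark scheme. The watermark values and rate bounds are assumptions; the abstract only states that the rate changes with bus utilization.

```python
def adjust_prefetch_rate(current_rate, bus_utilization,
                         high_water=0.85, low_water=0.50,
                         min_rate=1, max_rate=8):
    """Sketch of bandwidth-aware prefetch throttling: lower the rate
    when the bus is busy, raise it when the bus is idle."""
    if bus_utilization > high_water:
        return max(min_rate, current_rate - 1)   # bus congested: back off
    if bus_utilization < low_water:
        return min(max_rate, current_rate + 1)   # bus idle: prefetch more
    return current_rate                          # within band: no change

assert adjust_prefetch_rate(4, 0.95) == 3
assert adjust_prefetch_rate(4, 0.30) == 5
assert adjust_prefetch_rate(4, 0.70) == 4
```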
  • Patent number: 10649776
    Abstract: Systems and methods for predicting read commands and pre-fetching data when a memory device is receiving random read commands to non-sequentially addressed data locations are disclosed. A limited length search sequence of prior read commands is generated and that search sequence is then converted into an index value in a predetermined set of index values. A history pattern match table having entries indexed to that predetermined set of index values contains a plurality of read commands that have previously followed the search sequence represented by the index value. The index value is obtained via application of a many-to-one algorithm to the search sequence. The index value obtained from the search sequence may be used to find, and pre-fetch data for, a plurality of next read commands in the table that previously followed a search sequence having that index value.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: May 12, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Ariel Navon, Eran Sharon, Idan Alrod
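The scheme in this entry — reducing a limited-length search sequence of prior reads to an index via a many-to-one algorithm, then looking up previously observed next reads in a history pattern match table — can be sketched as below. The specific hash function and table size are assumptions.

```python
class HistoryPatternMatchTable:
    """Sketch of a history pattern match table: a short search sequence
    of prior read addresses is reduced to an index by a many-to-one
    function; the entry stores reads that previously followed it."""
    def __init__(self, num_indexes=1024, seq_len=4):
        self.num_indexes = num_indexes
        self.seq_len = seq_len
        self.table = {}   # index -> list of next read addresses

    def _index(self, search_sequence):
        # many-to-one reduction: any hash taken mod the table size
        return hash(tuple(search_sequence[-self.seq_len:])) % self.num_indexes

    def record(self, search_sequence, next_read):
        self.table.setdefault(self._index(search_sequence), []).append(next_read)

    def predict(self, search_sequence):
        # pre-fetch candidates: reads that previously followed this index
        return self.table.get(self._index(search_sequence), [])

hpm = HistoryPatternMatchTable()
hpm.record([10, 70, 20, 90], next_read=55)
assert hpm.predict([10, 70, 20, 90]) == [55]
```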
  • Patent number: 10635592
    Abstract: Controlling a rate of prefetching based on bus bandwidth. A determination is made as to whether a rate of prefetching data from memory into a cache is to be changed. This determination is based on bus utilization. Based on determining that the rate is to be changed, the rate of prefetching is changed.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: April 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Chung-Lung K. Shum
  • Patent number: 10621096
    Abstract: Implementations described and claimed herein provide a method and system for managing execution of commands for a storage device, the method comprising identifying individual streams processing read ahead operations in a storage controller, determining an amount of read ahead data that each individual stream is processing in the read ahead operations, determining a total amount of read cache available for the storage controller, and determining a total amount of read ahead data that all the individual streams are processing in the read ahead operations.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: April 14, 2020
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Zachary D. Traut, Michael D. Barrell
  • Patent number: 10621099
    Abstract: An apparatus, method, and system for enhancing data prefetching based on non-uniform memory access (NUMA) characteristics are described herein. An apparatus embodiment includes a system memory, a cache, and a prefetcher. The system memory includes multiple memory regions, at least some of which are associated with different NUMA characteristics (access latency, bandwidth, etc.) than others. Each region is associated with its own set of prefetch parameters that are set in accordance with its respective NUMA characteristics. The prefetcher monitors data accesses to the cache and generates one or more prefetch requests to fetch data from the system memory to the cache based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. The set of prefetch parameters may include prefetch distance, training-to-stable threshold, and throttle threshold.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Wim Heirman, Ibrahim Hur, Ugonna Echeruo, Stijn Eyerman, Kristof Du Bois
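The per-region parameter sets named in this entry (prefetch distance, training-to-stable threshold, throttle threshold) can be sketched as a lookup keyed by address range. The region boundaries and parameter values below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PrefetchParams:
    prefetch_distance: int     # how far ahead of demand accesses to fetch
    training_to_stable: int    # accesses before a stream is trusted
    throttle_threshold: float  # accuracy below which to throttle

# One parameter set per NUMA memory region; values are assumptions.
REGION_PARAMS = [
    ((0x00000000, 0x7FFFFFFF), PrefetchParams(16, 2, 0.4)),  # local, low latency
    ((0x80000000, 0xFFFFFFFF), PrefetchParams(4, 6, 0.7)),   # remote, high latency
]

def params_for(address):
    """Select the prefetch parameter set for the region containing address."""
    for (start, end), params in REGION_PARAMS:
        if start <= address <= end:
            return params
    raise ValueError("address outside known memory regions")

assert params_for(0x1000).prefetch_distance == 16
assert params_for(0x90000000).prefetch_distance == 4
```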
  • Patent number: 10614033
    Abstract: Embodiments are directed to managing data in a file system. A pre-fetch engine may receive requests from a client of the file system, which includes a pre-fetch storage tier and a file storage tier of storage devices. The pre-fetch engine determines a pre-fetch policy based on the requests such that the pre-fetch policy determines which blocks to copy to the pre-fetch storage tier. And, the pre-fetch policy may be associated with a score model that includes score rules where one of the rules may be associated with a client score. The pre-fetch engine may obtain scores associated with the score rules such that the scores are based on previous requests made by the client. In response to scores exceeding a threshold value, the pre-fetch engine may copy the blocks to the pre-fetch storage tier. The pre-fetch engine may update the scores based on the performance of the pre-fetch policy.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: April 7, 2020
    Assignee: Qumulo, Inc.
    Inventors: Thomas Gregory Rothschilds, Thomas R. Unger, Eric E. Youngblut, Peter J. Godman
  • Patent number: 10606752
    Abstract: Embodiments include a method and system for coordinating cache management for an exclusive cache hierarchy. The method and system may include managing, by a coordinated cache logic section, a level three (L3) cache, a level two (L2) cache, and/or a level one (L1) cache. Managing the L3 cache and the L2 cache may include coordinating a cache block replacement policy among the L3 cache and the L2 cache by filtering data with lower reuse probability from data with higher reuse probability. The method and system may include tracking reuse patterns of demand requests separately from reuse patterns of prefetch requests. Accordingly, a coordinated cache management policy may be built across multiple levels of a cache hierarchy, rather than a cache replacement policy within one cache level. Higher-level cache behavior may be used to guide lower-level cache allocation, bringing greater visibility of cache behavior to exclusive last level caches (LLCs).
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: March 31, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yingying Tian, Tarun Nakra, Khang Nguyen, Ravikanth Reddy, Edwin Silvera
  • Patent number: 10592154
    Abstract: Accessing a portion of data that was previously migrated to a cloud service includes initiating a recall of the data from the cloud service in response to the data residing entirely on the cloud service, determining if the portion of the data is stored on the storage device, retrieving cloud objects from the cloud service corresponding to the portion of the data in response to the portion of the data being unavailable on the storage device, and accessing the portion of the data on the storage device while cloud objects corresponding to other portions of the data are being transferred from the cloud service to the storage device. The host may receive a migrated status indicator in response to the data existing entirely on the cloud service. Initiating the recall may include modifying metadata to indicate that the data is available for access by the host.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: March 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Douglas E. LeCrone, Brett A. Quinn
  • Patent number: 10593418
    Abstract: Examples of the present disclosure provide apparatuses and methods related to performing comparison operations in a memory. An example apparatus might include a first group of memory cells coupled to a first access line and configured to store a first element. An example apparatus might also include a second group of memory cells coupled to a second access line and configured to store a second element. An example apparatus might also include sensing circuitry configured to compare the first element with the second element by performing a number of AND operations, OR operations, SHIFT operations, and INVERT operations without transferring data via an input/output (I/O) line.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: March 17, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Sanjay Tiwari
  • Patent number: 10592249
    Abstract: Processing of an instruction fetch from an instruction cache is provided, which includes: determining whether the next instruction fetch is from a same address page as a last instruction fetch from the instruction cache; and based, at least in part, on determining that the next instruction fetch is from the same address page, suppressing for the next instruction fetch an instruction address translation table access, and comparing for an address match results of an instruction directory access for the next instruction fetch with buffered results of a most-recent, instruction address translation table access for a prior instruction fetch from the instruction cache.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: March 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10579530
    Abstract: In an embodiment, a processor includes a plurality of cores, with at least one core including prefetch logic. The prefetch logic comprises circuitry to: receive a prefetch request; compare the received prefetch request to a plurality of entries of a prefetch filter cache; and in response to a determination that the received prefetch request matches one of the plurality of entries of the prefetch filter cache, drop the received prefetch request. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Stanislav Shwartsman, Ron Rais
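The filter in this entry — dropping a prefetch request that matches an entry of a prefetch filter cache — can be sketched as a small LRU set of recently issued prefetch addresses. The LRU policy and capacity are assumptions; the abstract only specifies the compare-and-drop behavior.

```python
from collections import OrderedDict

class PrefetchFilter:
    """Sketch of a prefetch filter cache: remember recently issued
    prefetch addresses and drop a repeat request instead of sending
    it to the memory subsystem."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.recent = OrderedDict()   # address -> None, in LRU order

    def should_issue(self, address):
        if address in self.recent:
            self.recent.move_to_end(address)
            return False              # matches a filter entry: drop it
        self.recent[address] = None
        if len(self.recent) > self.capacity:
            self.recent.popitem(last=False)   # evict least-recent entry
        return True

f = PrefetchFilter()
assert f.should_issue(0x2000) is True    # first request: issue it
assert f.should_issue(0x2000) is False   # duplicate: dropped
```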
  • Patent number: 10579531
    Abstract: A system for prefetching data for a processor includes a processor core, a memory configured to store information for use by the processor core, a cache memory configured to fetch and store information from the memory, and a prefetch circuit. The prefetch circuit may be configured to issue a multi-group prefetch request to retrieve information from the memory to store in the cache memory using a predicted address. The multi-group prefetch request may include a depth value indicative of a number of fetch groups to retrieve. The prefetch circuit may also be configured to generate an accuracy value based on a cache hit rate of prefetched information over a particular time interval, and to modify the depth value based on the accuracy value.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: March 3, 2020
    Assignee: Oracle International Corporation
    Inventors: Hyunjin Abraham Lee, Yuan Chou, John Pape
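The feedback loop in this entry — an accuracy value computed from the cache hit rate of prefetched lines, used to modify the depth (number of fetch groups) of a multi-group prefetch request — can be sketched as below. The accuracy thresholds and depth bounds are assumptions.

```python
def update_depth(depth, hits, issued, low=0.5, high=0.8,
                 min_depth=1, max_depth=8):
    """Sketch of accuracy-based depth control: the hit rate of
    prefetched lines over an interval raises or lowers the number
    of fetch groups requested."""
    if issued == 0:
        return depth
    accuracy = hits / issued
    if accuracy > high:
        return min(max_depth, depth + 1)   # prefetches are useful: go deeper
    if accuracy < low:
        return max(min_depth, depth - 1)   # mostly wasted: back off
    return depth

assert update_depth(4, hits=90, issued=100) == 5
assert update_depth(4, hits=20, issued=100) == 3
assert update_depth(4, hits=60, issued=100) == 4
```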
  • Patent number: 10565117
    Abstract: Techniques relate to handling outstanding cache miss prefetches. A processor pipeline recognizes that a prefetch cancelling instruction is being executed. In response to recognizing that the prefetch cancelling instruction is being executed, all outstanding prefetches are evaluated according to a criterion as set forth by the prefetch cancelling instruction in order to select qualified prefetches. In response to evaluating, a cache subsystem is communicated with to cause cancelling of the qualified prefetches that fit the criterion. In response to successful cancelling of the qualified prefetches, a local cache is prevented from being updated from the qualified prefetches.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: February 18, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 10564853
    Abstract: Systems and methods for determining locality of an incoming command relative to previously identified write or read streams are disclosed. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into multiple submission queues. The memory device fetches the commands from the multiple submission queues, which results in the incoming commands being interspersed. In order to determine whether the incoming commands should be assigned to previously identified read or write streams, the locality of the incoming commands relative to the previously identified read or write streams is analyzed. One example of locality is proximity in address space. In response to determining locality, the incoming commands are assigned to the various streams.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: February 18, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Vitali Linkovsky, Shay Benisty, William Guthrie, Scheheresade Virani
  • Patent number: 10558560
    Abstract: Processing prefetch memory operations and transactions. A local processor receives a prefetch request from a remote processor. Prior to execution of the prefetch request, it is determined whether a priority of the remote processor is greater than a priority of the local processor. The prefetch request is executed in response to a determination that the priority of the remote processor is greater than the priority of the local processor. Prefetch data produced by execution of the prefetch request is provided to the remote processor.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: February 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 10558578
    Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determine whether a request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified to a demand request.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: February 11, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Oluleye Olorode, Ramakrishnan Venkatasubramanian
  • Patent number: 10552155
    Abstract: Livelock recovery circuits configured to detect livelock in a processor, and cause the processor to transition to a known safe state when livelock is detected. The livelock recovery circuits include detection logic configured to detect that the processor is in livelock when the processor has illegally repeated an instruction; and transition logic configured to cause the processor to transition to a safe state when livelock has been detected by the detection logic.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: February 4, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Ashish Darbari, Iain Singleton
  • Patent number: 10545787
    Abstract: Examples are disclosed for composing memory resources across devices. In some examples, memory resources associated with executing one or more applications by circuitry at two separate devices may be composed across the two devices. The circuitry may be capable of executing the one or more applications using a two-level memory (2LM) architecture including a near memory and a far memory. In some examples, the near memory may include near memories separately located at the two devices and a far memory located at one of the two devices. The far memory may be used to migrate one or more copies of memory content between the separately located near memories in a manner transparent to an operating system for the first device or the second device. Other examples are described and claimed.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: January 28, 2020
    Assignee: INTEL CORPORATION
    Inventors: Neven M Abou Gazala, Paul S. Diefenbaugh, Nithyananda S. Jeganathan, Eugene Gorbatov
  • Patent number: 10540278
    Abstract: According to one embodiment, a memory system includes first and second memories, and a controller configured to switch between first and second modes, search whether data of a logical address associated with a read command is stored in the first memory in the first mode, and read the data from the second memory without searching whether the data of the logical address associated with the read command is stored in the first memory in the second mode.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: January 21, 2020
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Naoya Fukuchi
  • Patent number: 10540114
    Abstract: A method, computer program product, and computer system for receiving, by a computing device, an I/O request. A bucket for the I/O request may be allocated. An offset and mapping information of the I/O request may be written into a log. The offset and mapping information of the I/O request may be written into a tree structure. Garbage collection for the tree structure may be executed to reuse the bucket.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: January 21, 2020
    Assignee: EMC IP Holding Company, LLC
    Inventors: Shuo Lv, Wilson Hu, Huan Chen, Zhiqiang Li
  • Patent number: 10534723
    Abstract: A system, method and computer program product are provided for conditionally eliminating a memory read request. In use, a memory read request is identified. Additionally, it is determined whether the memory read request is an unnecessary memory read request. Further, the memory read request is conditionally eliminated, based on the determination.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: January 14, 2020
    Assignee: Mentor Graphics Corporation
    Inventors: Nikhil Tripathi, Venky Ramachandran, Malay Haldar, Sumit Roy, Anmol Mathur, Abhishek Roy, Mohit Kumar
  • Patent number: 10534713
    Abstract: Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by determining an evaluation of the prefetch request prior to execution of the program instructions included in the prefetch request. The evaluation is based, at least in part, on (i) a comparison of a priority of the prefetch request with a priority of the transaction and (ii) a condition that exists in one or both of the local processor and the remote processor. Based on the evaluation, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: January 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 10528489
    Abstract: The present disclosure provides methods, apparatuses, and systems for implementing and operating a memory module, for example, in a computing device that includes a network interface, which is coupled to a network to enable communication with a client device, and processing circuitry, which is coupled to the network interface via a data bus and programmed to perform operations based on user inputs received from the client device. The memory module includes memory devices, which may be non-volatile memory or volatile memory, and a memory controller coupled between the data bus and the memory devices. The memory controller may be programmed to determine when the processing circuitry is expected to request a data block and control data storage in the memory devices.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: January 7, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Richard C. Murphy
  • Patent number: 10528289
    Abstract: A data storage device includes a flash memory and a controller. The flash memory is utilized to store data. The controller is coupled to the flash memory to receive at least one read command transmitted from a host, and reads the data stored in the flash memory according to the read command. The controller determines whether or not the length of the read command is greater than a first predetermined value. If the length is greater than the first predetermined value, the controller arranges the read command on a sequential queue. If the length is not greater than the first predetermined value, the controller arranges the read command on a random queue. The controller executes read commands from the random queue at high priority.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: January 7, 2020
    Assignee: Silicon Motion, Inc.
    Inventor: Yu-Chih Lin
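    The queueing policy this abstract describes can be sketched as follows — a minimal model, assuming a hypothetical `ReadCommandScheduler` with an illustrative length threshold (the patent's "first predetermined value"):

    ```python
    from collections import deque

    class ReadCommandScheduler:
        """Routes read commands to a sequential or random queue by length,
        then services the random queue at higher priority. Names and the
        threshold value are illustrative, not from the patent."""

        def __init__(self, length_threshold=8):
            self.length_threshold = length_threshold  # "first predetermined value"
            self.sequential_queue = deque()
            self.random_queue = deque()

        def submit(self, command):
            # Commands longer than the threshold are treated as sequential reads.
            if command["length"] > self.length_threshold:
                self.sequential_queue.append(command)
            else:
                self.random_queue.append(command)

        def next_command(self):
            # Random (short) reads are executed at high priority.
            if self.random_queue:
                return self.random_queue.popleft()
            if self.sequential_queue:
                return self.sequential_queue.popleft()
            return None
    ```

    Short reads thus bypass long streaming reads already queued ahead of them, which is the latency benefit the abstract claims.
    
    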
  • Patent number: 10514745
    Abstract: Examples include techniques to predict memory bandwidth demand for a storage or memory device. Examples include receiving an access request to remotely access a storage device and gathering information used to predict a memory bandwidth demand for subsequent access requests to the storage device. Adjustments to power supplied to the storage device may be caused based on the predicted memory bandwidth demand. The adjustments may load balance power among a plurality of storage devices remotely accessible through a network fabric, the plurality of storage devices including the storage device.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: December 24, 2019
    Assignee: Intel Corporation
    Inventor: Francesc Guim Bernat
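    One way to realize the prediction step the abstract mentions is an exponentially weighted moving average over observed request sizes, mapped to a power level — a sketch under that assumption; the class name, smoothing factor, and watermark are illustrative:

    ```python
    class BandwidthPredictor:
        """EWMA of observed access sizes, used to pick a power level for a
        remotely accessed storage device (illustrative heuristic)."""

        def __init__(self, alpha=0.5):
            self.alpha = alpha            # smoothing factor for the EWMA
            self.predicted_bytes = 0.0    # current bandwidth-demand estimate

        def observe(self, bytes_requested):
            # Blend the newest observation into the running estimate.
            self.predicted_bytes = (self.alpha * bytes_requested
                                    + (1 - self.alpha) * self.predicted_bytes)
            return self.predicted_bytes

        def power_level(self, low_watermark=4096):
            # Reduce supplied power when predicted demand is low, freeing
            # power budget to load-balance toward busier devices.
            return "low" if self.predicted_bytes < low_watermark else "high"
    ```

    
    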
  • Patent number: 10514920
    Abstract: A processor includes a processing core that detects a predetermined program is running on the processor and looks up a prefetch trait associated with the predetermined program running on the processor, wherein the prefetch trait is either exclusive or shared. The processor also includes a hardware data prefetcher that performs hardware prefetches for the predetermined program using the prefetch trait. Alternatively, the processing core loads each of one or more range registers of the processor with a respective address range in response to detecting that the predetermined program is running on the processor. Each of the one or more address ranges has an associated prefetch trait, wherein the prefetch trait is either exclusive or shared. The hardware data prefetcher performs hardware prefetches for the predetermined program using the prefetch traits associated with the address ranges loaded into the range registers.
    Type: Grant
    Filed: February 18, 2015
    Date of Patent: December 24, 2019
    Assignee: VIA TECHNOLOGIES, INC.
    Inventors: Rodney E. Hooker, Albert J. Loper, John Michael Greer
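    The second variant in the abstract — range registers each carrying an address range with an associated prefetch trait — can be modeled as a simple lookup table. This is a sketch; the class and method names are illustrative:

    ```python
    class RangeRegisters:
        """Per-address-range prefetch traits: each loaded register holds
        (start, end, trait), where the trait is either 'exclusive' or
        'shared', as in the abstract. Names are illustrative."""

        def __init__(self):
            self.ranges = []

        def load(self, start, end, trait):
            # Load one range register with an address range and its trait.
            assert trait in ("exclusive", "shared")
            self.ranges.append((start, end, trait))

        def prefetch_trait(self, address, default="shared"):
            # The hardware prefetcher consults the registers to choose the
            # trait for a prefetch to this address.
            for start, end, trait in self.ranges:
                if start <= address < end:
                    return trait
            return default
    ```

    
    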
  • Patent number: 10509726
    Abstract: A processor includes an execution unit to execute instructions to load indices from an array of indices, optionally perform scatters, and prefetch (to a specified cache) contents of target locations for future scatters from arbitrary locations in memory. The execution unit includes logic to load, for each target location of a scatter or prefetch operation, an index value to be used in computing the address in memory for the operation. The index value may be retrieved from an array of indices identified for the instruction. The execution unit includes logic to compute the addresses based on the sum of a base address specified for the instruction, the index value retrieved for the location, and a prefetch offset (for prefetch operations), with optional scaling. The execution unit includes logic to retrieve data elements from contiguous locations in a source vector register specified for the instruction to be scattered to the memory.
    Type: Grant
    Filed: December 20, 2015
    Date of Patent: December 17, 2019
    Assignee: Intel Corporation
    Inventors: Indraneil M. Gokhale, Elmoustapha Ould-Ahmed-Vall, Charles R. Yount, Antonio C. Valles
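    The address computation the abstract describes — base address plus an optionally scaled index plus a prefetch offset (zero for the scatter itself) — can be written out directly. A sketch with an illustrative function name:

    ```python
    def gather_scatter_addresses(base, indices, scale=1, prefetch_offset=0):
        """Compute the memory addresses for a scatter or scatter-prefetch:
        base + scaled index + prefetch offset, per the abstract. A nonzero
        prefetch_offset targets locations for *future* scatters."""
        return [base + index * scale + prefetch_offset for index in indices]
    ```

    
    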
  • Patent number: 10496542
    Abstract: Systems and methods for determining an access pattern in a computing system. Accesses to a file may contain random accesses and sequential accesses. The file may be divided into multiple regions and the accesses to each region are tracked. The access pattern for each region can then be determined independently of the access patterns of other regions of the file.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: December 3, 2019
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Yamini Allu, Philip N. Shilane, Grant R. Wallace
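    The per-region tracking the abstract describes might look like the following sketch: the file is divided into fixed-size regions, and each region independently counts whether accesses continue from the previous access's end. The region size and classification rule are illustrative, not from the patent:

    ```python
    class RegionAccessTracker:
        """Divides a file into fixed-size regions and classifies each
        region's access pattern independently of the others."""

        def __init__(self, region_size=1 << 20):
            self.region_size = region_size
            self.last_end = {}      # region id -> end offset of last access
            self.sequential = {}    # region id -> sequential-access count
            self.random = {}        # region id -> random-access count

        def record(self, offset, length):
            region = offset // self.region_size
            last = self.last_end.get(region)
            if last is not None and offset == last:
                # Access starts where the previous one ended: sequential.
                self.sequential[region] = self.sequential.get(region, 0) + 1
            elif last is not None:
                self.random[region] = self.random.get(region, 0) + 1
            self.last_end[region] = offset + length

        def pattern(self, offset):
            region = offset // self.region_size
            if self.sequential.get(region, 0) >= self.random.get(region, 0):
                return "sequential"
            return "random"
    ```

    A file streamed in one region while another region is probed randomly yields different classifications per region, which is the point of dividing the file.
    
    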
  • Patent number: 10489302
    Abstract: An emulated input/output memory management unit (IOMMU) includes a management processor to perform page table translation in software. The emulated IOMMU can also include a hardware input/output translation lookaside buffer (IOTLB) to store translations between virtual addresses and physical memory addresses. When a translation from a virtual address to a physical address is not found in the IOTLB for an I/O request, the translation can be generated by the management processor using page tables from a memory and can be stored in the IOTLB. Some embodiments can be used to emulate interrupt translation service for message based interrupts for an interrupt controller.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: November 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Leah Shalev, Nafea Bshara
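    The miss path the abstract describes — hardware IOTLB lookup, falling back to a software page-table walk by the management processor, then installing the translation — can be sketched as follows. The class name, FIFO fill policy, and capacity are illustrative assumptions:

    ```python
    class EmulatedIOTLB:
        """Hardware IOTLB backed by software page-table translation: on a
        miss, the translation is generated from the page tables and stored
        in the IOTLB for reuse."""

        def __init__(self, page_tables, capacity=64):
            self.page_tables = page_tables  # virtual page -> physical page
            self.capacity = capacity
            self.tlb = {}                   # cached translations

        def translate(self, virtual_page):
            if virtual_page in self.tlb:
                return self.tlb[virtual_page]          # IOTLB hit
            physical = self.page_tables[virtual_page]  # software walk on miss
            if len(self.tlb) >= self.capacity:
                self.tlb.pop(next(iter(self.tlb)))     # evict oldest entry
            self.tlb[virtual_page] = physical
            return physical
    ```

    
    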
  • Patent number: 10482019
    Abstract: Proposed are a storage apparatus and a control method thereof capable of improving response performance for read accesses with various access patterns. When data to be read is not retained in a data buffer memory, upon staging the data to be read, a processor performs sequential learning by observing the access pattern of read accesses from the host apparatus both in units of blocks of a predetermined size and in units of slots configured from a plurality of the blocks, and expands the data range to be staged as needed based on the result of the sequential learning.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: November 19, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Taku Adachi, Hisaharu Takeuchi
  • Patent number: 10474578
    Abstract: A system for prefetching data for a processor includes a processor core, a memory, a cache memory, and a prefetch circuit. The memory may be configured to store information for use by the processor core. The cache memory may be configured to issue a fetch request for information from the memory for use by the processor core. The prefetch circuit may be configured to issue a prefetch request for information from the memory to store in the cache memory using a predicted address, and to monitor, over a particular time interval, an amount of fetch requests from the cache memory and prefetch requests from the prefetch circuit. The prefetch circuit may also be configured to disable prefetch requests from the memory for a subsequent time interval in response to a determination that the amount satisfies a threshold amount.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: November 12, 2019
    Assignee: Oracle International Corporation
    Inventors: Hyunjin Abraham Lee, Yuan Chou, John Pape
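    The monitoring loop this abstract describes reduces to counting combined fetch and prefetch traffic per interval and gating prefetch for the next interval. A minimal sketch, with an illustrative threshold and class name:

    ```python
    class PrefetchThrottle:
        """Counts demand fetches and prefetches over a time interval and
        disables prefetching for the next interval when the combined count
        meets the threshold, as the abstract describes."""

        def __init__(self, threshold=100):
            self.threshold = threshold
            self.count = 0
            self.prefetch_enabled = True

        def record_request(self):
            # Called for each fetch or prefetch issued during this interval.
            self.count += 1

        def end_interval(self):
            # If memory traffic satisfied the threshold, suppress prefetch
            # requests for the subsequent interval; otherwise re-enable them.
            self.prefetch_enabled = self.count < self.threshold
            self.count = 0
    ```

    
    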
  • Patent number: 10474577
    Abstract: Enabling a prefetch request to be controlled in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. One or more processors identify, based on a prefetch tag, a prefetch request that is associated with a prefetch instruction executed by a remote processor. The one or more processors generate the prefetch request in the remote processor according to a prefetch protocol. The prefetch request includes i) a description of at least one prefetch request operation and ii) prefetch request information. A local processor, of the one or more processors, receives the prefetch request from the remote processor.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: November 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 10467190
    Abstract: Disclosed herein are methods, systems, and processes to track access patterns of inodes, and to issue read-ahead instructions to pre-fetch inodes into memory. A location of a unit of metadata in a metadata storage area is determined. Another location in the metadata storage area that corresponds to a current metadata read operation is determined. Whether a metadata read-ahead operation can be performed is determined using the location of the unit of metadata and the other location. In response to a determination that the metadata read-ahead operation can be performed, the metadata read-ahead operation is issued.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: November 5, 2019
    Assignee: Veritas Technologies LLC
    Inventors: Bhautik Patel, Freddy James, Mitul Kothari, Anindya Banerjee
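    The decision the abstract describes — whether read-ahead is worthwhile given the metadata unit's location relative to the current read's location — could be a simple proximity test. A sketch under that assumption; the function name and window heuristic are illustrative:

    ```python
    def should_readahead(metadata_location, current_read_location, window=64):
        """Issue an inode read-ahead only if the unit of metadata lies a
        short distance *ahead* of the current metadata read, so the
        pre-fetch lands in memory before it is needed."""
        distance = metadata_location - current_read_location
        return 0 < distance <= window
    ```

    
    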
  • Patent number: 10467152
    Abstract: At a cache manager of a directed acyclic graph-based data analytic platform, from each of a plurality of monitor components on a plurality of worker nodes, statistics are obtained for a plurality of tasks, including which of the tasks have been processed and which are in a task queue. Each of the tasks has at least one associated distributed dataset. Each worker has a distributed dataset cache. A current stage directed acyclic graph is obtained from a directed acyclic graph scheduler component. For a given one of the tasks which has been processed, and for which it is determined that no other ones of the tasks depend on the at least one distributed dataset associated with the given one of the tasks, the distributed dataset is evicted from a corresponding one of the distributed dataset caches.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: November 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Min Li, Yandong Wang, Li Zhang
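    The eviction rule the abstract describes — drop a cached distributed dataset once no remaining task depends on it — can be sketched as a set computation over the task statistics the cache manager collects. The data shapes here are illustrative assumptions:

    ```python
    def evict_unneeded_datasets(cache, pending_tasks, deps):
        """Evict cached distributed datasets that no queued task still
        depends on. `cache` is a set of dataset ids; `deps` maps a task id
        to the dataset ids it reads (shapes are illustrative)."""
        still_needed = set()
        for task in pending_tasks:
            still_needed.update(deps.get(task, ()))
        # Datasets only consumed by already-processed tasks can be evicted.
        evicted = {ds for ds in cache if ds not in still_needed}
        cache -= evicted
        return evicted
    ```

    
    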
  • Patent number: 10462187
    Abstract: A network security system monitors, during a time period, data traffic transmitted between devices in a network to identify a plurality of commands transmitted between the devices. The network security system determines, from the plurality of commands, a first set of commands that were transmitted between a first device and a second device in the network. The network security system determines that the first set of commands includes a threshold number of commands from a first predetermined command group of a plurality of predetermined command groups. Each predetermined command group includes a listing of commands. The network security system generates a first policy based on the first predetermined command group.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: October 29, 2019
    Assignee: General Electric Company
    Inventor: Roderick Locke
  • Patent number: 10452395
    Abstract: A query is performed to obtain cache residency and/or other information regarding selected data. The data to be queried is data of a cache line, prefetched or otherwise. The capability includes a Query Cache instruction that obtains cache residency information and/or other information and returns an indication of the requested information.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: October 22, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Michael K. Gschwind, Christian Jacobi, Anthony Saporito, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 10452400
    Abstract: A processor includes a pipeline and a multi-bank Branch-Target Buffer (BTB). The pipeline is configured to process program instructions including branch instructions. The multi-bank BTB includes a plurality of BTB banks and is configured to store learned Target Addresses (TAs) of one or more of the branch instructions in the plurality of the BTB banks, to receive from the pipeline simultaneous requests to retrieve respective TAs, and to respond to the requests using the plurality of the BTB banks in the same clock cycle.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: October 22, 2019
    Assignee: CENTIPEDE SEMI LTD.
    Inventors: Avishai Tvila, Alberto Mandler
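    The banked lookup this abstract describes can be modeled by selecting a bank from low bits of the branch address, so that simultaneous requests mapping to different banks can be served in the same cycle. A sketch; the class name, bank count, and conflict handling are illustrative:

    ```python
    class MultiBankBTB:
        """Branch-target buffer split across banks selected by low bits of
        the branch PC; requests to distinct banks proceed in parallel."""

        def __init__(self, num_banks=4):
            self.num_banks = num_banks
            self.banks = [dict() for _ in range(num_banks)]

        def bank_of(self, branch_pc):
            return branch_pc % self.num_banks

        def learn(self, branch_pc, target):
            # Store a learned target address in the bank the PC maps to.
            self.banks[self.bank_of(branch_pc)][branch_pc] = target

        def lookup_many(self, pcs):
            # Simultaneous lookups succeed in one cycle only if each request
            # maps to a distinct bank; conflicting requests would serialize.
            banks_used = {self.bank_of(pc) for pc in pcs}
            conflict = len(banks_used) < len(pcs)
            targets = [self.banks[self.bank_of(pc)].get(pc) for pc in pcs]
            return targets, conflict
    ```

    
    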