Look-ahead Patents (Class 711/137)
-
Patent number: 11010299
Abstract: Systems and methods for pre-fetching data in a memory device are disclosed. The method may include receiving a current read command and determining whether the current read command is a random read command, for example based on a data chunk length identified by the current read command. The method may further include updating a prior read command data structure with the current read command, for random read commands; determining a predicted next read command from the prior read command data structure based on the current read command; and pre-fetching data associated with the predicted next read command. Functionality for prediction of next read commands, or pre-fetch of predicted next read commands, may be turned on or off based on resource availability or prediction success rate measurements.
Type: Grant
Filed: May 20, 2019
Date of Patent: May 18, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Ariel Navon, Eran Sharon, Idan Alrod
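The scheme described in the abstract — record which random read historically followed which, then use the current command to look up a likely successor and pre-fetch it — can be sketched in a few lines. This is an illustrative toy model, not the patented implementation; the class name `NextReadPredictor` and the length-based randomness heuristic are assumptions:

```python
class NextReadPredictor:
    """Toy model: remember which read address historically followed
    which, and use the current read to predict the next one."""

    def __init__(self, random_threshold=8):
        self.random_threshold = random_threshold  # short chunks treated as random reads
        self.history = {}      # prior read command data structure: addr -> next addr
        self.last_read = None

    def observe(self, address, length):
        if length > self.random_threshold:
            self.last_read = None   # long (sequential) read: break the chain
            return None
        if self.last_read is not None:
            self.history[self.last_read] = address  # update with current command
        self.last_read = address
        return self.history.get(address)   # predicted next read to pre-fetch, or None
```

A repeating pattern of random reads (10, 20, 30, 10, …) yields a correct prediction on the second pass; a real device would also gate this on resource availability and measured success rate, as the abstract notes.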
-
Patent number: 11003596
Abstract: The present disclosure provides methods, apparatuses, and systems for implementing and operating a memory module, for example, in a computing device that includes a network interface, which is coupled to a network to enable communication with a client device, and processing circuitry, which is coupled to the network interface via a data bus and programmed to perform operations based on user inputs received from the client device. The memory module includes memory devices, which may be non-volatile memory or volatile memory, and a memory controller coupled between the data bus and the memory devices. The memory controller may be programmed to determine when the processing circuitry is expected to request a data block and control data storage in the memory devices.
Type: Grant
Filed: November 25, 2019
Date of Patent: May 11, 2021
Assignee: Micron Technology, Inc.
Inventor: Richard C. Murphy
-
Patent number: 10999395
Abstract: Disclosed is a dynamically adaptable stream segment prefetcher for prefetching stream segments from different media streams with different segment name formats and with different positioning of the segment name iterator within the differing segment name formats. In response to receiving a client issued request for a particular segment of a particular media stream, the prefetcher identifies the segment name format and iterator location using a regular expression matching to the client issued request. The prefetcher then generates prefetch requests based on the segment name format and incrementing a current value for the iterator in the segment name of the client issued request.
Type: Grant
Filed: August 13, 2019
Date of Patent: May 4, 2021
Assignee: Verizon Digital Media Services Inc.
Inventor: Ravikiran Patil
-
Patent number: 10997080
Abstract: In a method for address table cache management, a first logical address associated with a first read command may be received. The first logical address may be associated with a first segment of an address mapping table. A second logical address associated with a second read command may then be received. The second logical address may be associated with a second segment of the address mapping table. A correlation metric associating the first segment to the second segment may be increased in response to receiving the first logical address before the second logical address. The first logical address and second logical address may each map to a physical address within the address mapping table, and a mapping table cache may be configured to store two or more segments. The mapping table cache may then be managed based on the correlation metric.
Type: Grant
Filed: February 11, 2020
Date of Patent: May 4, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Tomer Tzvi Eliash, Alex Bazarsky, Ariel Navon, Eran Sharon
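The idea of a correlation metric between mapping-table segments can be illustrated with a small model: each read bumps the correlation between the previously touched segment and the current one, and eviction prefers the segment least correlated with the current segment. The class name, the fixed segment size, and the symmetric eviction score are illustrative assumptions, not details from the patent:

```python
from collections import defaultdict

class MappingTableCache:
    """Toy mapping-table cache managed by a segment correlation metric."""

    def __init__(self, capacity=2, segment_size=100):
        self.capacity = capacity
        self.segment_size = segment_size
        self.cached = []                       # cached segment ids
        self.correlation = defaultdict(int)    # (seg_a, seg_b) -> metric
        self.prev_segment = None

    def read(self, logical_address):
        segment = logical_address // self.segment_size
        if self.prev_segment is not None and self.prev_segment != segment:
            # first address arrived before the second: increase the metric
            self.correlation[(self.prev_segment, segment)] += 1
        self.prev_segment = segment
        if segment not in self.cached:
            self.cached.append(segment)
            if len(self.cached) > self.capacity:
                self._evict()
        return segment

    def _evict(self):
        # Evict the cached segment least correlated with the current one.
        def score(seg):
            return (self.correlation[(self.prev_segment, seg)]
                    + self.correlation[(seg, self.prev_segment)])
        victim = min((s for s in self.cached if s != self.prev_segment), key=score)
        self.cached.remove(victim)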
-
Patent number: 10990403
Abstract: An apparatus is described, comprising processing circuitry to speculatively execute an earlier instruction and a later instruction by generating a prediction of an outcome of the earlier instruction and a prediction of an outcome of the later instruction, wherein the prediction of the outcome of the earlier instruction causes a first control flow path to be executed. The apparatus also comprises storage circuitry to store the outcome of the later instruction in response to the later instruction completing, and flush circuitry to generate a flush in response to the prediction of the outcome of the earlier instruction being incorrect. When re-executing the later instruction in a second control flow path following the flush, the processing circuitry is adapted to generate the prediction of the outcome of the later instruction as the outcome stored in the storage circuitry during execution of the first control flow path.
Type: Grant
Filed: January 27, 2020
Date of Patent: April 27, 2021
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Yasuo Ishii, Muhammad Umar Farooq
-
Patent number: 10977062
Abstract: A method and apparatus for starting a virtual machine. A specific implementation of the method comprises: acquiring, by a physical machine, a mirror image file required for starting a to-be-started target virtual machine from a distributed block storage system, in response to an entered instruction to start the target virtual machine; and starting the target virtual machine by using the mirror image file. The mirror image file required for starting the virtual machine is stored in the cloud-based distributed block storage system, and a virtual disk is mapped to the physical machine. When the physical machine needs to start the virtual machine, the mirror image file required for starting the virtual machine is acquired from the cloud-based distributed block storage system by reading the virtual disk.
Type: Grant
Filed: May 25, 2017
Date of Patent: April 13, 2021
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventor: Yu Zhang
-
Patent number: 10977041
Abstract: A method includes allocating a first entry in a global completion table (GCT) on a processor, responsive to a first instruction group being dispatched, where the first entry corresponds to the first instruction group. A data value applicable to the first instruction group is identified. An offset value applicable to the first instruction group is calculated by subtracting, from the data value, a base value previously written to a second entry of the GCT for a second instruction group. The offset value is written in the first entry of the GCT in lieu of the data value.
Type: Grant
Filed: February 27, 2019
Date of Patent: April 13, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Avery Francois, Richard Joseph Branciforte, Gregory William Alexander
-
Patent number: 10977036
Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry determines, from a read result of a load instruction, an address for a dependent load that is dependent on the read result, and directs a read request for the dependent load to a main memory to fetch the dependent load's data.
Type: Grant
Filed: September 30, 2016
Date of Patent: April 13, 2021
Assignee: Intel Corporation
Inventors: Patrick Lu, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Martin P. Dimitrov
-
Patent number: 10970225
Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements.
Type: Grant
Filed: October 3, 2019
Date of Patent: April 6, 2021
Assignee: Arm Limited
Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
-
Patent number: 10970082
Abstract: A startup accelerating method is provided. In response to determining that a login process of an application is started up, pre-fetched data corresponding to a main process of the application is obtained. The pre-fetched data is loaded into a cache, the pre-fetched data being obtained according to a historical startup procedure for the main process. In response to determining that a startup of the login process is completed or determining that the main process is started up, the pre-fetched data is obtained, and a startup procedure of the main process is completed according to the pre-fetched data loaded in the cache. In response to at least a portion of the total data remaining upon determining that the startup of the login process is completed or that the main process is started up, the remaining portion of the total data is not pre-fetched, the total data corresponding to the pre-fetched information.
Type: Grant
Filed: May 3, 2019
Date of Patent: April 6, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Xue Wei, Qianwen Jin, Wenqiang Wang, Xuyang Li, Kang Gao, Qiru Chen
-
Patent number: 10972574
Abstract: A method for stream-processing biomedical data includes receiving, by a file system on a computing device, a first request for access to at least a first portion of a file stored on a remotely located storage device. The method includes receiving, by the file system, a second request for access to at least a second portion of the file. The method includes determining, by a pre-fetching component executing on the computing device, whether the first request and the second request are associated with a sequential read operation. The method includes automatically retrieving, by the pre-fetching component, a third portion of the requested file, before receiving a third request for access to at least the third portion of the file, based on a determination that the first request and the second request are associated with the sequential read operation.
Type: Grant
Filed: April 26, 2017
Date of Patent: April 6, 2021
Assignee: Seven Bridges Genomics Inc.
Inventor: Nemanja Zbiljic
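The core decision in this abstract — two requests form a sequential read when the second starts where the first ended, and the following portion is then fetched speculatively — is simple enough to state directly. A minimal sketch, with requests modeled as `(offset, length)` pairs (the function names are illustrative):

```python
def is_sequential(req1, req2):
    """Two (offset, length) requests are sequential when the second
    starts exactly where the first ended."""
    off1, len1 = req1
    off2, _ = req2
    return off2 == off1 + len1

def next_prefetch(req1, req2):
    """Return the (offset, length) of the third portion to retrieve
    before it is requested, or None if the pattern is not sequential."""
    if not is_sequential(req1, req2):
        return None
    off2, len2 = req2
    return (off2 + len2, len2)   # speculatively fetch the following portion
```

For example, after requests `(0, 64)` and `(64, 64)` the prefetcher would retrieve `(128, 64)`; a non-contiguous second request suppresses the prefetch.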
-
Patent number: 10963388
Abstract: According to one general aspect, an apparatus may include a multi-tiered cache system that includes at least one upper cache tier relatively closer, hierarchically, to a processor and at least one lower cache tier relatively closer, hierarchically, to a system memory. The apparatus may include a memory interconnect circuit hierarchically between the multi-tiered cache system and the system memory. The apparatus may include a prefetcher circuit coupled with a lower cache tier of the multi-tiered cache system, and configured to issue a speculative prefetch request to the memory interconnect circuit for data to be placed into the lower cache tier. The memory interconnect circuit may be configured to cancel the speculative prefetch request if the data exists in an upper cache tier of the multi-tiered cache system.
Type: Grant
Filed: August 16, 2019
Date of Patent: March 30, 2021
Inventors: Vikas Sinha, Teik Tan, Tarun Nakra
-
Patent number: 10963258
Abstract: A data processing apparatus is provided that includes lookup circuitry to provide first prediction data in respect of a first block of instructions and second prediction data in respect of a second block of instructions. First processing circuitry provides a first control flow prediction in respect of the first block of instructions using the first prediction data and second processing circuitry provides a second control flow prediction in respect of the second block of instructions using the second prediction data. The first block of instructions and the second block of instructions collectively define a prediction block and the lookup circuitry uses a reference to the prediction block as at least part of an index to both the first prediction data and the second prediction data.
Type: Grant
Filed: October 9, 2018
Date of Patent: March 30, 2021
Assignee: Arm Limited
Inventors: Yasuo Ishii, Muhammad Umar Farooq, Chris Abernathy
-
Patent number: 10963430
Abstract: Shared workspaces with selective content item synchronization. In one embodiment, for example, a personal computing device is configured to send a request to a server of a cloud-based content management system to join a shared workspace. The personal computing device then receives content item metadata about content items associated with the shared workspace. The content item metadata allows a user of the personal computing device to browse a content item-folder hierarchy for the content items even if only some but not all of the content items have been downloaded and stored at the personal computing device.
Type: Grant
Filed: May 21, 2018
Date of Patent: March 30, 2021
Assignee: Dropbox, Inc.
Inventors: Marcio von Muhlen, George Milton Underwood, IV, Anthony DeVincenzi, Nils Bunger, Colin Dunn, Adam Polselli, Sam Jau, Nathan Borror
-
Patent number: 10949853
Abstract: Methods and systems are presented for providing concurrent data retrieval and risk processing while evaluating a risk source of an online service provider. Upon receiving a request to evaluate the risk source, a risk analysis module may initiate one or more risk evaluation sub-processes to evaluate the risk source. Each risk evaluation sub-process may require different data related to the risk source to perform the evaluation. The risk analysis module may simultaneously retrieve the data related to the risk source and perform the one or more risk evaluation sub-processes such that the risk analysis module may complete a risk evaluation sub-process whenever the data required by the risk evaluation sub-process is made available.
Type: Grant
Filed: November 7, 2018
Date of Patent: March 16, 2021
Assignee: PayPal, Inc.
Inventors: Srinivasan Manoharan, Vinesh Poruthikottu Chirakkil
-
Patent number: 10942854
Abstract: Methods, systems, and devices are described for wireless communications. A request for data located in a memory page of a memory array may be received at a device, and a value of a prefetch counter associated with the memory page may be identified. A portion of the memory page that includes the requested data may then be communicated between a memory array and memory bank of the device based on the value of the prefetch counter. For instance, the portion of the memory page may be selected based on the value of the prefetch counter. A second portion of the memory page may be communicated to a buffer of the device, and the value of the prefetch counter may be modified based on a relationship between the first portion of the memory page and the second portion of the memory page.
Type: Grant
Filed: May 9, 2018
Date of Patent: March 9, 2021
Assignee: Micron Technology, Inc.
Inventors: Robert Nasry Hasbun, Dean D. Gans, Sharookh Daruwalla
-
Patent number: 10936317
Abstract: A digital signal processor having at least one streaming address generator, each with dedicated hardware, for generating addresses for writing multi-dimensional streaming data that comprises a plurality of elements. Each at least one streaming address generator is configured to generate a plurality of offsets to address the streaming data, and each of the plurality of offsets corresponds to a respective one of the plurality of elements. The address of each of the plurality of elements is the respective one of the plurality of offsets combined with a base address.
Type: Grant
Filed: May 24, 2019
Date of Patent: March 2, 2021
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Timothy David Anderson, Duc Quang Bui, Joseph Zbiciak, Sahithi Krishna, Soujanya Narnur
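The address computation the abstract describes — per-element offsets combined with a base address for a multi-dimensional stream — can be modeled in software. The dimension/stride layout below (innermost dimension first, strides in elements) is an assumption for illustration, not the TI hardware's actual configuration format:

```python
def stream_offsets(dims, strides):
    """Generate one offset per stream element. dims gives the element
    count per dimension and strides the step per dimension, both listed
    innermost dimension first; the innermost index varies fastest."""
    def walk(level):
        if level < 0:
            yield 0
            return
        for i in range(dims[level]):
            for inner in walk(level - 1):
                yield i * strides[level] + inner
    yield from walk(len(dims) - 1)

def stream_addresses(base, dims, strides):
    # Each element's address is its offset combined with the base address.
    return [base + off for off in stream_offsets(dims, strides)]
```

A 3x2 stream with unit inner stride and a row stride of 10 starting at base 1000 visits 1000, 1001, 1010, 1011, 1020, 1021.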
-
Patent number: 10929297
Abstract: Providing control over processing of a prefetch request in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. A processor generates a prefetch request and a tag that dictates processing the prefetch request. A processor sends the prefetch request and the tag to a second processor. A processor generates a conflict indication based on whether a concurrent processing of the prefetch request and an atomic transaction by the second processor would generate a conflict with a memory access that is associated with the atomic transaction. Based on an analysis of the conflict indication and the tag, a processor processes (i) either the prefetch request or the atomic transaction, or (ii) both the prefetch request and the atomic transaction.
Type: Grant
Filed: September 26, 2019
Date of Patent: February 23, 2021
Assignee: International Business Machines Corporation
Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
-
Patent number: 10915244
Abstract: Communicating data with a medium is provided. A cache is provided for storing target data of a file identified by an access request from an application of a host. The cache is divided into a read cache, a write cache, and an index cache. Responsive to receiving the access request: the medium is loaded onto a drive using a file system; target data is stored to the write cache and to the read cache; and the index file stored in the index cache is updated to reflect position metadata about the target data stored in the write cache. Responsive to initiating unloading of the medium from the drive: the updated index file stored in the index cache is written to the index partition of the medium; and the target data stored in the write cache is written onto a data partition of the medium without using the file system.
Type: Grant
Filed: May 16, 2019
Date of Patent: February 9, 2021
Assignee: International Business Machines Corporation
Inventors: Takashi Ashida, Tohru Hasegawa, Hiroshi Itagaki, Shinsuke Mitsuma, Terue Watanabe
-
Patent number: 10909045
Abstract: A system, apparatus and method for accessing an electronic storage medium, such as a memory location storing a page table, or range table. A virtual address of the electronic storage medium is identified that corresponds to designated portions, such as a range of addresses of the electronic storage medium. The virtual address is translated to a corresponding physical address and one or more commands are identified as being excluded from execution in the designated portions of the electronic storage medium. This may be accomplished by using a routine such as mprotect( ). A fault indication, or decoration, is provided to meta-data associated with the physical address, which is associated with the designated portions of the electronic storage medium when excluded commands are provided to the physical address. A mechanism, such as hardware, is actuated when the fault is generated.
Type: Grant
Filed: December 20, 2018
Date of Patent: February 2, 2021
Assignee: Arm Limited
Inventors: Jonathan Curtis Beard, Curtis Glenn Dunham, Reiley Jeyapaul, Roxana Rusitoru
-
Patent number: 10901631
Abstract: A mechanism is provided in a data processing system comprising at least one processor and at least one memory. The at least one memory comprises instructions which are executed by the at least one processor and configure the processor to implement a read-ahead manager for adaptive read-ahead in log structured storage. The read-ahead manager determines a probability value P representing a probability to read into cache a temporal environment for a front-end read for a given segment in user space in a log structured storage. Responsive to performing a front-end read of a record of the given segment in the log structured storage, the read-ahead manager performs pre-fetch of the temporal environment for the record with probability P.
Type: Grant
Filed: May 22, 2019
Date of Patent: January 26, 2021
Assignee: International Business Machines Corporation
Inventors: Avraham Bab-Dinitz, Dorit Hakmon, Asaf Porat-Stoler, Yosef Shatsky
-
Patent number: 10884631
Abstract: A method for preloading data of a file, comprising the following steps: defining a plurality of bins of predetermined sizes in a file; for each input and/or output operation executed on the file, determining the bin involved in the operation; counting the number of input and/or output operations executed in each bin of the file by taking into account only a predetermined number of last operations on the whole file; and, when the sum of the operations counted in a bin is greater than a predetermined threshold, loading, in a memory medium, at least one area of the file determined on the basis of this bin.
Type: Grant
Filed: December 20, 2018
Date of Patent: January 5, 2021
Inventors: Simon Derr, Gaël Goret, Grégoire Pichon
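The counting scheme in this abstract maps naturally onto a sliding window: keep the bins of the last N operations, maintain per-bin counts, and mark a bin's file area for preloading when its count crosses the threshold. A sketch under those assumptions (class and attribute names are illustrative):

```python
from collections import Counter, deque

class BinPreloader:
    """Count I/O operations per file bin over the last `window`
    operations; a bin whose count exceeds `threshold` is marked for
    preloading into a memory medium."""

    def __init__(self, bin_size, window, threshold):
        self.bin_size = bin_size
        self.recent = deque(maxlen=window)   # bins of the last N operations
        self.counts = Counter()
        self.threshold = threshold
        self.preloaded = set()               # bins whose area would be loaded

    def record_io(self, offset):
        b = offset // self.bin_size
        if len(self.recent) == self.recent.maxlen:
            self.counts[self.recent[0]] -= 1   # oldest op falls out of the window
        self.recent.append(b)
        self.counts[b] += 1
        if self.counts[b] > self.threshold:
            self.preloaded.add(b)
        return b
```

Three operations at offsets 10, 20, 30 with a bin size of 100 all land in bin 0, which crosses a threshold of 2 and triggers the preload.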
-
Patent number: 10884637
Abstract: Some implementations relate to storage of data in a storage device with a plurality of chips. In some implementations, a computer-implemented method includes identifying a plurality of software applications that are configured to access data from the storage device, determining a data access pattern for each of the plurality of software applications, and based on the data access pattern, assigning a respective subset of the plurality of storage chips to each software application such that each storage chip is configured for access by a specific software application.
Type: Grant
Filed: March 29, 2019
Date of Patent: January 5, 2021
Assignee: Elastic Flash Inc.
Inventors: Darshan Rawal, Monish Suvarna, Arvind Pruthi
-
Patent number: 10877815
Abstract: A mechanism is described for facilitating localized load-balancing for processors in computing devices. A method of embodiments, as described herein, includes facilitating hosting, at a processor of a computing device, a local load-balancing mechanism. The method may further include monitoring balancing of loads at the processor and serving as a local scheduler to maintain de-centralized load-balancing at the processor and between the processor and other one or more processors.
Type: Grant
Filed: November 26, 2019
Date of Patent: December 29, 2020
Assignee: INTEL CORPORATION
Inventors: Prasoonkumar Surti, David Cowperthwaite, Abhishek R. Appu, Joydeep Ray, Vasanth Ranganathan, Altug Koker, Balaji Vembu
-
Patent number: 10877891
Abstract: A data processing system and a method of data processing are provided. The system comprises a first data processing agent, a second data processing agent, and a third data processing agent. Each of the second and third data processing agents have access to one or more caches. A messaging mechanism conveys a message from the first data processing agent to one of the second and third data processing agents specified as a message destination agent in the message. A stashing manager monitors the messaging mechanism and selectively causes data associated with the message to be cached for access by the message destination agent in a cache of the one or more caches in dependence on at least one parameter associated with the message and at least one stashing control parameter defined for a link from the first data processing agent to the message destination agent.
Type: Grant
Filed: October 2, 2018
Date of Patent: December 29, 2020
Assignee: ARM LIMITED
Inventors: Robert Gwilym Dimond, Eric Biscondi, Paul Stanley Hughes, Mario Torrecillas Rodriguez
-
Patent number: 10877896
Abstract: A method for managing a readahead cache in a memory subsystem based on one or more active streams of read commands is described. The method includes receiving a read command that requests data from a memory component and determining whether the read command is part of an active stream of read commands based on a comparison of a set of addresses of the read command with one or more of (1) a command history table, which stores a set of command entries that each correspond to a received read command that has not been associated with an active stream, or (2) an active stream table, which stores a set of stream entries that each corresponds to active streams of read commands. The method further includes modifying a stream entry in the set of stream entries in response to determining that the read command is part of an active stream.
Type: Grant
Filed: March 7, 2019
Date of Patent: December 29, 2020
Assignee: MICRON TECHNOLOGY, INC.
Inventor: David A. Palmer
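The two-table structure in this abstract can be modeled compactly: unmatched reads sit in a history table, and a read contiguous with a history entry promotes the pair to an active stream, while a read contiguous with a stream's end extends that stream. This is a toy sketch of that bookkeeping, not the Micron implementation; the contiguity rule and return values are assumptions:

```python
class StreamDetector:
    """Toy readahead stream detector with a command history table and an
    active stream table, as sketched from the abstract above."""

    def __init__(self):
        self.history = []   # (start, length) of reads not yet in any stream
        self.streams = []   # each active stream is [start, next_expected_addr]

    def on_read(self, start, length):
        end = start + length
        for stream in self.streams:             # read continues an active stream?
            if stream[1] == start:
                stream[1] = end                 # modify the stream entry
                return stream
        for i, (h_start, h_len) in enumerate(self.history):
            if h_start + h_len == start:        # contiguous with a history entry:
                del self.history[i]             # promote both reads to a stream
                stream = [h_start, end]
                self.streams.append(stream)
                return stream
        self.history.append((start, length))    # unmatched: record in history
        return None
```

Three contiguous 32-block reads starting at 0 produce one active stream covering [0, 96), which the subsystem could then use to size its readahead.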
-
Method, apparatus, and system for prefetching exclusive cache coherence state for store instructions
Patent number: 10877895
Abstract: A method, apparatus, and system for prefetching exclusive cache coherence state for store instructions is disclosed. An apparatus may comprise a cache and a gather buffer coupled to the cache. The gather buffer may be configured to store a plurality of cache lines, each cache line of the plurality of cache lines associated with a store instruction. The gather buffer may be further configured to determine whether a first cache line associated with a first store instruction should be allocated in the cache. If the first cache line associated with the first store instruction is to be allocated in the cache, the gather buffer is configured to issue a pre-write request to acquire exclusive cache coherency state to the first cache line associated with the first store instruction.
Type: Grant
Filed: August 27, 2018
Date of Patent: December 29, 2020
Assignee: Qualcomm Incorporated
Inventors: Luke Yen, Niket Choudhary, Pritha Ghoshal, Thomas Philip Speier, Brian Michael Stempel, William James McAvoy, Patrick Eibl
-
Patent number: 10871902
Abstract: Techniques are provided for adaptive look-ahead configuration for data prefetching based on request size and frequency. One method comprises performing the following steps: estimating an earning value for a particular portion based on an average size and frequency of past input/output requests for the particular portion; calculating a quota for the particular portion by normalizing the earning value for the particular portion of the storage system based on earning values of one or more additional portions of the storage system; obtaining a size of a look-ahead window for a new request based on the quota for the particular portion over a prefetch budget assigned to the storage system; and moving a requested data item and one or more additional data items within the look-ahead window from the storage system to the cache memory responsive to the requested data item and/or the additional data items within the look-ahead window not being in the cache memory.
Type: Grant
Filed: April 29, 2019
Date of Patent: December 22, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Jonas F. Dias, Rômulo Teixeira de Abreu Pinho, Adriana Bechara Prado, Vinícius Michel Gottin, Tiago Salviano Calmon, Eduardo Vera Sousa, Owen Martin
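The earning/quota/budget pipeline in this abstract reduces to a proportional split: each portion earns in proportion to the size and frequency of its past requests, and the prefetch budget is divided by those normalized earnings. The earning formula (size x frequency) and rounding are illustrative assumptions:

```python
def lookahead_sizes(stats, prefetch_budget):
    """stats: {portion: (avg_request_size, request_frequency)}.
    Earning value = avg size * frequency; each portion's quota is its
    normalized share of the budget, used as its look-ahead window size."""
    earnings = {p: size * freq for p, (size, freq) in stats.items()}
    total = sum(earnings.values())
    return {p: round(prefetch_budget * e / total) for p, e in earnings.items()}
```

With two portions where "b" sees three times the traffic of "a" at equal request sizes, a budget of 100 blocks splits 25/75, so requests to "b" get the larger look-ahead window.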
-
Patent number: 10866896
Abstract: A prefetch apparatus and a method of prefetching are presented. The prefetch apparatus monitors access requests, each having an access request address, and has request tracking storage to store region entries for regions of memory space which each span multiple access request addresses. The request tracking storage keeps access information for access requests received in their corresponding region entries. When a new region access request is received, which belongs to a new region for which there is no region entry, and when the request tracking storage has an adjacent region entry for which the access information shows that at least a predetermined number of the access request addresses have been accessed, a page mode region prefetching process is initiated for all access request addresses in the new region.
Type: Grant
Filed: September 30, 2015
Date of Patent: December 15, 2020
Assignee: ARM Limited
Inventors: Todd Rafacz, Huzefa Sanjeliwala
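The trigger condition here — a request to a brand-new region, when an adjacent region's entry shows dense access — is easy to model with per-region bitmaps. A toy sketch under assumed parameters (8-address regions, density threshold of 6); the class name and counters are illustrative:

```python
class RegionPrefetcher:
    """Track which addresses of each region were accessed; when a new
    region is first touched and an adjacent region is already densely
    accessed, initiate a page-mode prefetch of the whole new region."""

    def __init__(self, region_size=8, dense_threshold=6):
        self.region_size = region_size
        self.dense_threshold = dense_threshold
        self.regions = {}       # region id -> set of accessed offsets
        self.prefetched = set() # regions fetched by page-mode prefetch

    def access(self, address):
        region, offset = divmod(address, self.region_size)
        new_region = region not in self.regions
        self.regions.setdefault(region, set()).add(offset)
        if new_region:
            for adj in (region - 1, region + 1):
                accessed = self.regions.get(adj)
                if accessed and len(accessed) >= self.dense_threshold:
                    self.prefetched.add(region)   # page-mode region prefetch
        return region
```

After six distinct accesses in region 0, the first access to region 1 meets the adjacency condition and the whole region is prefetched.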
-
Patent number: 10860488
Abstract: A method is provided for use in a storage system to dynamically disable and enable prefetching, comprising: defining a first plurality of time windows; calculating a first plurality of weights; identifying a first plurality of values of a cache metric; calculating a prefetch score for a first type of data based on the first plurality of weights and the first plurality of caching metric values, the prefetch score being calculated by weighing each of the cache metric values based on a respective one of the first plurality of weights that corresponds to a same time window as the cache metric value; and when the prefetch score fails to meet a threshold, stopping prefetching of the first type of data, while continuing to prefetch a second type of data.
Type: Grant
Filed: July 31, 2019
Date of Patent: December 8, 2020
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Maher Kachmar, Philippe Armangau
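The score this abstract describes is a weighted combination of a cache metric observed over several time windows, compared against a threshold per data type. A minimal sketch, assuming the metric is a per-window hit ratio and the weights favor recent windows (both assumptions, the abstract does not fix either):

```python
def prefetch_score(hit_ratios, weights):
    """Weigh each window's cache metric by the weight for that window."""
    assert len(hit_ratios) == len(weights)
    return sum(r * w for r, w in zip(hit_ratios, weights)) / sum(weights)

def should_prefetch(hit_ratios, weights, threshold=0.3):
    # When the score fails to meet the threshold, prefetching stops for
    # this data type only; other types are scored independently.
    return prefetch_score(hit_ratios, weights) >= threshold
```

With ratios [0.2, 0.4] and weights [1, 3] the score is 0.35, so prefetching stays enabled at a 0.3 threshold; uniformly poor ratios would disable it for that type while another type keeps its own score.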
-
Patent number: 10855797
Abstract: Embodiments seek to improve web page loading time using server-machine-driven hint generation based on client-machine-driven feedback. For example, client computers having page renderers are in communication with content servers and hinting processors. The hinting processors can use hinting feedback from multiple page rendering instances to automatically generate hints for optimizing loading and/or rendering of those pages. In some implementations, in response to page requests from the page renderers, content servers can request hints from hinting processors and send those hints to the requesting page renderers for use in improving the page loading experience. In other implementations, in response to page requests from the page renderers, content servers can instruct the requesting page renderers to contact an appropriate hinting processor and to retrieve appropriate hints therefrom for use in improving the page loading experience.
Type: Grant
Filed: June 3, 2015
Date of Patent: December 1, 2020
Assignee: VIASAT, Inc.
Inventors: Peter Lepeska, David Lerner
-
Patent number: 10852968
Abstract: This application sets forth techniques for managing the allocation of memory storage space in a non-volatile memory to improve the operation of a camera application. A camera application monitors an amount of available memory storage space in the non-volatile memory. Responsive to various triggering events, the camera application compares the amount of available memory storage space to a threshold value. When the amount of available memory storage space is less than the threshold value, the camera application transmits a request to a background service to free additional memory storage space within a temporary data store associated with one or more applications installed on the computing device. The temporary data store provides a location for local data to improve the efficiency of the applications, which can be exploited by the camera application to free up memory to avoid a low-memory condition that could prevent the camera application from performing certain operations.
Type: Grant
Filed: September 20, 2018
Date of Patent: December 1, 2020
Assignee: Apple Inc.
Inventors: Kazuhisa Yanagihara, Benjamin P. Englert, Cameron S. Birse, Susan M. Grady
-
Patent number: 10846226
Abstract: Systems and methods for predicting read commands and pre-fetching data when a storage device is receiving random read commands to non-sequentially addressed data locations from a plurality of host sources are disclosed. A storage device having a memory with a plurality of separate prior read command data structures includes a controller having a next read command prediction module that separately predicts a next read command based on a received read command from the one of the plurality of prior read command data structures associated with the host from which the received command originated. The storage device then pre-fetches the data identified in the predicted next read command.
Type: Grant
Filed: January 28, 2019
Date of Patent: November 24, 2020
Assignee: Western Digital Technologies, Inc.
Inventors: Ariel Navon, Shay Benisty, Alex Bazarsky
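The key refinement over a single prediction table is keeping one prior-read table per host, so interleaved random reads from different hosts do not pollute each other's history. A toy sketch of that separation (class and method names are illustrative):

```python
class PerHostPredictor:
    """Toy per-host next-read predictor: each host gets its own
    prior-read-command table keyed by the command that originated it."""

    def __init__(self):
        self.tables = {}   # host id -> {address: next address}
        self.last = {}     # host id -> last address seen from that host

    def observe(self, host, address):
        table = self.tables.setdefault(host, {})
        prev = self.last.get(host)
        if prev is not None:
            table[prev] = address          # learn host-local succession
        self.last[host] = address
        return table.get(address)          # predicted next read for this host
```

A repeating pattern from host "A" stays predictable even when host "B"'s unrelated reads are interleaved, which is exactly the failure mode a shared table would have.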
-
Patent number: 10845995
Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
Type: Grant
Filed: June 30, 2017
Date of Patent: November 24, 2020
Assignee: Intel Corporation
Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan Charlie Tai
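One plausible shape for the window-comparison feedback loop the abstract describes is sketched below. The step size, bounds, and adjustment direction are my assumptions, not taken from the patent.

```python
# Illustrative sketch (parameters and direction are assumptions): compare the
# cache miss ratio across two time windows and nudge the misses-to-insertions
# ratio accordingly.
def adjust_insertion_ratio(prev_miss_ratio, curr_miss_ratio, ratio,
                           step=0.1, lo=0.1, hi=1.0):
    """Return a new misses-to-insertions ratio for the next window."""
    if curr_miss_ratio > prev_miss_ratio:
        ratio = min(hi, ratio + step)   # misses rose: insert a larger share
    elif curr_miss_ratio < prev_miss_ratio:
        ratio = max(lo, ratio - step)   # misses fell: back off on insertions
    return round(ratio, 3)
```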
-
Patent number: 10838864
Abstract: A miss in a cache by a thread in a wavefront is detected. The wavefront includes a plurality of threads that are executing a memory access request concurrently on a corresponding plurality of processor cores. A priority is assigned to the thread based on whether the memory access request is addressed to a local memory or a remote memory. The memory access request for the thread is performed based on the priority. In some cases, the cache is selectively bypassed depending on whether the memory access request is addressed to the local or remote memory. A cache block is requested in response to the miss. The cache block is biased towards a least recently used position in response to requesting the cache block from the local memory and towards a most recently used position in response to requesting the cache block from the remote memory.
Type: Grant
Filed: May 30, 2018
Date of Patent: November 17, 2020
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Michael W. Boyer, Onur Kayiran, Yasuko Eckert, Steven Raasch, Muhammad Shoaib Bin Altaf
-
Patent number: 10817426
Abstract: A variety of data processing apparatuses are provided in which stride determination circuitry determines a stride value as a difference between a current address and a previously received address. Stride storage circuitry stores an association between stride values determined by the stride determination circuitry and a frequency during a training period. Prefetch circuitry causes a further data value to be proactively retrieved from a further address. The further address is the current address modified by a stride value in the stride storage circuitry having a highest frequency during the training period. The variety of data processing apparatuses are directed towards improving efficiency by variously disregarding certain candidate stride values, considering additional further addresses for prefetching by using multiple stride values, using feedback to adjust the training process and compensating for page table boundaries.
Type: Grant
Filed: September 24, 2018
Date of Patent: October 27, 2020
Assignee: Arm Limited
Inventors: Krishnendra Nathella, Chris Abernathy, Huzefa Moiz Sanjeliwala, Dam Sunwoo, Balaji Vijayan
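The core mechanism, stride frequencies gathered during training and the most frequent stride used for the prefetch address, can be sketched as below. This is a minimal model of the idea, not the circuitry the patent claims.

```python
# Minimal sketch of frequency-trained stride prefetching: record how often
# each stride (difference between consecutive addresses) occurs during a
# training period, then prefetch from current address + most frequent stride.
from collections import Counter

class StridePrefetcher:
    def __init__(self):
        self.prev_addr = None
        self.stride_freq = Counter()

    def train(self, addr):
        """Observe an address during the training period."""
        if self.prev_addr is not None:
            self.stride_freq[addr - self.prev_addr] += 1
        self.prev_addr = addr

    def prefetch_addr(self, current_addr):
        """Address to proactively fetch, or None if untrained."""
        if not self.stride_freq:
            return None
        best_stride, _ = self.stride_freq.most_common(1)[0]
        return current_addr + best_stride
```

The refinements the abstract mentions (disregarding noisy candidate strides, prefetching with multiple strides, clamping at page boundaries) would layer on top of this basic loop.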
-
Patent number: 10802968
Abstract: An apparatus for processing memory requests from a functional unit in a computing system is disclosed. The apparatus may include an interface that may be configured to receive a request from the functional unit. Circuitry may be configured to initiate a speculative read access command to a memory in response to a determination that the received request is a request for data from the memory. The circuitry may be further configured to determine, in parallel with the speculative read access, if the speculative read will result in an ordering or coherence violation.
Type: Grant
Filed: May 6, 2015
Date of Patent: October 13, 2020
Assignee: Apple Inc.
Inventors: Sukalpa Biswas, Harshavardhan Kaushikkar, Munetoshi Fukami, Gurjeet S. Saund, Manu Gulati, Shinye Shiu
-
Patent number: 10789177
Abstract: The present disclosure provides methods, apparatuses, and systems for implementing and operating a memory module, for example, in a computing device that includes a network interface, which is coupled to a network to enable communication with a client device, and processing circuitry, which is coupled to the network interface via a data bus and programmed to perform operations based on user inputs received from the client device. The memory module includes memory devices, which may be non-volatile memory or volatile memory, and a memory controller coupled between the data bus and the memory devices. The memory controller may be programmed to determine when the processing circuitry is expected to request a data block and control data storage in the memory devices.
Type: Grant
Filed: July 20, 2018
Date of Patent: September 29, 2020
Assignee: Micron Technology, Inc.
Inventor: Richard C. Murphy
-
Patent number: 10778818
Abstract: Some demonstrative embodiments include apparatuses, systems and/or methods of controlling data flow over a communication network. For example, an apparatus may include a communication unit to communicate between first and second devices a transfer response, the transfer response in response to a transfer request, the transfer response including a transfer pending status indicating data is pending to be received at the second device, the communication unit is to communicate the transfer response regardless of whether a retry indicator of the transfer request represents a first request for transfer or a retried request.
Type: Grant
Filed: December 24, 2018
Date of Patent: September 15, 2020
Assignee: Apple Inc.
Inventors: Bahareh Sadeghi, Elad Levy, Oren Kedem, Rafal Wielicki, Marek Dabek
-
Patent number: 10776046
Abstract: In one implementation, a method includes receiving code associated with two or more cores of a storage array controller. The method further includes determining, by the storage array controller, that the code is executable and read-only. The method further includes loading, based on the determination, the code into two or more memory pages corresponding to the two or more cores, wherein each of the two or more memory pages is local to each of the two or more cores, respectively.
Type: Grant
Filed: July 6, 2018
Date of Patent: September 15, 2020
Assignee: PURE STORAGE, INC.
Inventors: Roland Dreier, Peter E. Kirkpatrick, Naveen Neelakantam
-
Patent number: 10776133
Abstract: Methods, systems, and devices for preemptively loading code dependencies are described. In some systems, an application server, which may be a software component of a user device, may perform a loading process for an application framework module (e.g., based on receiving an execution request for a corresponding application). To reduce the latency of loading the framework module, the application server may perform one or more preemptive non-framework network requests to retrieve code dependencies for the framework or the application code. These requests may be sent prior to the framework loading process, or in parallel with the framework loading process. The application server may receive the code dependencies in response, and may store these dependencies in a memory cache. When the framework loading process needs these code dependencies, the application server may efficiently access the dependencies locally in the memory cache rather than remotely requesting the dependencies over the network.
Type: Grant
Filed: January 25, 2018
Date of Patent: September 15, 2020
Assignee: salesforce.com, inc.
Inventors: Robert Ames, Xiaoyi Chen, Hiro Inami
-
Patent number: 10776321
Abstract: A scalable de-duplication file system divides the file system into data and metadata stores where each store is built on scale-out architectures. Each store is not a single module, but a collection of identical modules that together creates one large store. By scaling the metadata store and chunk store, the file system can be scaled linearly without compromising the file system performance. Deduplication logic identifies a chunk location for each stored chunk, and stores, for each identifier, an index of the chunk location associated with the corresponding identifier, such that the stored index for similar chunk ids points to the same chunk location. In this manner, duplicate chunks or blocks of data are referenced merely by pointers or indexes, rather than redundantly duplicating storage for each instantiation or copy of similar data.
Type: Grant
Filed: December 8, 2014
Date of Patent: September 15, 2020
Assignee: Trilio Data, Inc.
Inventors: Muralidhara R. Balcha, Giridhar Basava, Sanjay Baronia
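The chunk-index mechanism described here is the familiar content-addressed store pattern, which can be sketched briefly. The class below is an illustration of the concept, not the patented scale-out design.

```python
# Sketch of the dedup chunk index: identical chunks hash to the same chunk id,
# so the index maps every duplicate back to a single stored location and only
# one copy of the payload is ever written.
import hashlib

class ChunkStore:
    def __init__(self):
        self.locations = {}   # chunk id -> location of the stored bytes
        self.chunks = []      # backing store; the list index is the "location"

    def put(self, data: bytes) -> str:
        chunk_id = hashlib.sha256(data).hexdigest()
        if chunk_id not in self.locations:        # store the payload only once
            self.locations[chunk_id] = len(self.chunks)
            self.chunks.append(data)
        return chunk_id                            # duplicates share a pointer

    def get(self, chunk_id: str) -> bytes:
        return self.chunks[self.locations[chunk_id]]
```

In the patented system this index and the chunk store would each be spread across many identical modules; the single-process sketch only shows the pointer-sharing invariant.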
-
Patent number: 10754773
Abstract: A method for dynamically selecting a size of a memory access may be provided. The method comprises accessing blocks having a variable number of consecutive cache lines, maintaining a vector with entries of past utilizations for each block size, and adapting said block size before a next access to the blocks.
Type: Grant
Filed: October 11, 2017
Date of Patent: August 25, 2020
Assignee: International Business Machines Corporation
Inventors: Andreea Anghel, Cedric Lichtenau, Gero Dittmann, Peter Altevogt, Thomas Pflueger
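The selection step, picking a block size from a vector of past utilizations before the next access, could be as simple as the following. The scoring scheme is an assumption for illustration only.

```python
# Illustrative sketch (scoring is assumed): choose the access block size, in
# consecutive cache lines, whose past utilization score is highest.
def select_block_size(utilization, sizes=(1, 2, 4, 8)):
    """utilization: dict mapping block size to a past-utilization score."""
    return max(sizes, key=lambda s: utilization.get(s, 0.0))
```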
-
Patent number: 10747669
Abstract: A method of prefetching attribute data from storage for a graphics processing pipeline comprising a cache, and at least one buffer to which data is prefetched from the storage and from which data is made available for storing in the cache. The method comprises retrieving first attribute data from the storage, the first attribute data representative of a first attribute of a first vertex of a plurality of vertices of at least one graphics primitive, identifying the first vertex, and, in response to the identifying, performing a prefetch process. The prefetch process comprises prefetching second attribute data from the storage, the second attribute data representative of a second attribute of the first vertex, the second attribute being different from the first attribute, and storing the second attribute data in a buffer of the at least one buffer.
Type: Grant
Filed: August 31, 2018
Date of Patent: August 18, 2020
Assignee: ARM Limited
Inventor: Simon Alex Charles
-
Patent number: 10747475
Abstract: Techniques are described for storing a virtual disk in an object store comprising a plurality of physical storage devices housed in a plurality of host computers. A profile is received for creation of the virtual disk wherein the profile specifies storage properties desired for an intended use of the virtual disk. A virtual disk blueprint is generated based on the profile such that the virtual disk blueprint describes a storage organization for the virtual disk that addresses redundancy or performance requirements corresponding to the profile. A set of the physical storage devices that can store components of the virtual disk in a manner that satisfies the storage organization is then determined.
Type: Grant
Filed: August 26, 2013
Date of Patent: August 18, 2020
Assignee: VMware, Inc.
Inventors: Christos Karamanolis, Mansi Shah, Nathan Burnett
-
Patent number: 10740018
Abstract: A data migration method is disclosed, including: determining a first address of target data that is to be migrated from an internal storage device to an external storage device, where the first address is a logical address of the internal storage device; calculating a physical address of the target data in the internal storage device based on the first address; constructing a scatter gather list, where the scatter gather list includes the physical address of the target data in the internal storage device; and sending a migration instruction to a direct memory access engine.
Type: Grant
Filed: May 15, 2018
Date of Patent: August 11, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Qingchao Luo
-
Patent number: 10732848
Abstract: Systems and methods for predicting read commands and pre-fetching data when a memory device is receiving random read commands to non-sequentially addressed data locations are disclosed. A limited-length sequence of prior read commands is generated and compared to a read command history datastore. When a prior pattern of read commands is found corresponding to the search sequence, a next read command that previously followed that search sequence may be used as a predicted next read command and data pre-fetched based on the read command data location information associated with that prior read command that is being used as the predicted read command.
Type: Grant
Filed: June 29, 2018
Date of Patent: August 4, 2020
Assignee: Western Digital Technologies, Inc.
Inventors: Ariel Navon, Eran Sharon, Idan Alrod
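The search-sequence matching this abstract describes can be sketched as a backward scan of the command history for the most recent prior occurrence of the last-k pattern. This is a conceptual model; the patent's datastore and matching details are not reproduced here.

```python
# Hedged sketch of search-sequence read prediction: take the last k read
# commands as the search pattern, find where that pattern occurred before in
# the history, and predict the command that followed it last time.
def predict_next_read(history, k=3):
    """history: list of read-command addresses, oldest first.

    Returns the predicted next address, or None if no prior match exists.
    """
    if len(history) <= k:
        return None
    pattern = history[-k:]
    # Scan backwards so the most recent prior occurrence wins.
    for i in range(len(history) - k - 1, -1, -1):
        if history[i:i + k] == pattern:
            return history[i + k]   # the command that followed last time
    return None
```

A device would then pre-fetch the data at the returned address; a production version would use a hashed index rather than a linear scan of the full history.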
-
Patent number: 10733104
Abstract: Techniques are disclosed herein for providing accelerated recovery techniques of a memory device. Such techniques can allow for recovery of the memory device, such as, but not limited to, a flash memory device, following an unexpected reset event.
Type: Grant
Filed: August 3, 2018
Date of Patent: August 4, 2020
Assignee: Micron Technology, Inc.
Inventor: David Aaron Palmer
-
Patent number: 10725910
Abstract: A controller may include a memory suitable for caching write data and map data corresponding to the write data; and a processor suitable for flushing the cached map data in a memory device, and then storing the write data in the memory device, wherein the map data includes location information of the write data.
Type: Grant
Filed: June 7, 2018
Date of Patent: July 28, 2020
Assignee: SK hynix Inc.
Inventors: Duck-Hoi Koo, Yong-Tae Kim, Soong-Sun Shin, Cheon-Ok Jeong
-
Patent number: 10721212
Abstract: A network security system monitors data traffic being transmitted between a first device and a second device in a network to identify a plurality of commands being transmitted between the first device and the second device. The network security system then generates a whitelisting policy based on the plurality of commands being transmitted between the first device and the second device. After generating the whitelisting policy, the network security system receives subsequent data traffic being transmitted between the first device and the second device, and determines, based on the subsequent data traffic, a first command being transmitted between the first device and the second device. In response to determining that the first command is not included in the whitelisting policy, the network security system generates an alert in relation to the first command.
Type: Grant
Filed: October 31, 2017
Date of Patent: July 21, 2020
Assignee: General Electric Company
Inventors: Armel Chao, Roderick Locke