Caching Patents (Class 711/118)
  • Patent number: 10721117
    Abstract: A resolution resiliency application performs robust domain name system (DNS) resolution. In operation, the resolution resiliency application determines that an authoritative name server that is responsible for a domain name specified in a DNS query is unavailable. In response to determining that the authoritative name server is unavailable, the resolution resiliency application performs operation(s) that modify one or more DNS records stored in a cache based on one or more resiliency policies associated with the authoritative name server. The resolution resiliency application then generates a DNS response to the DNS query based on a DNS record stored in the modified cache. Notably, the disclosed techniques increase the likelihood of providing clients with DNS responses that accurately provide requested information.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: July 21, 2020
    Assignee: VERISIGN, INC.
    Inventors: Burton S. Kaliski, Jr., Shumon Huque, Eric Osterweil, Frank Scalzo, Duane Wessels, Glen Wiley
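The cache-modification flow this abstract describes can be sketched as a serve-stale policy: when the authoritative server is unreachable, extend the lifetime of an expired cached record instead of failing the query. The class name, the TTL-extension policy, and all parameters below are illustrative assumptions, not VeriSign's implementation.

```python
import time

class ResilientResolver:
    """Hypothetical sketch: answer from cache, applying a resiliency
    policy (here: TTL extension) when the authoritative server is down."""

    def __init__(self, serve_stale_extension=3600):
        self.cache = {}  # name -> (record, expiry timestamp)
        self.serve_stale_extension = serve_stale_extension

    def store(self, name, record, ttl):
        self.cache[name] = (record, time.time() + ttl)

    def resolve(self, name, authoritative_up):
        record, expiry = self.cache.get(name, (None, 0))
        if record is None:
            return None
        if time.time() <= expiry:
            return record                       # normal cache hit
        if not authoritative_up:
            # Resiliency policy: modify the cached record's expiry
            # rather than failing the query.
            self.cache[name] = (record, time.time() + self.serve_stale_extension)
            return record
        return None                             # expired; re-resolve upstream
```

A comparable behavior was later standardized for DNS at large as "serve stale" semantics; the sketch only illustrates the modify-cache-then-answer ordering the abstract claims.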
  • Patent number: 10713161
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory, and a controller electrically connected to the nonvolatile memory. The controller receives, from a host, a write command including a logical block address. The controller obtains a total amount of data written to the nonvolatile memory by the host during a time ranging from a last write to the logical block address to a current write to the logical block address, or time data associated with a time elapsing from the last write to the logical block address to the current write to the logical block address. The controller notifies the host of the total amount of data or the time data as a response to the received write command.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: July 14, 2020
    Assignee: Toshiba Memory Corporation
    Inventor: Shinichi Kanno
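The bookkeeping the controller performs can be modeled with a running total of host writes plus a per-LBA snapshot of that total: the difference between snapshots is exactly "the total amount of data written between the last write and the current write to this LBA". Class and method names are assumptions, not the patent's terminology.

```python
class WriteTracker:
    """Sketch of the per-LBA write-interval accounting."""

    def __init__(self):
        self.total_written = 0          # running total of all host writes
        self.last_seen = {}             # lba -> total_written at last write

    def write(self, lba, length):
        """Returns the amount written since the last write to `lba`,
        or None on the first write to that address."""
        response = None
        if lba in self.last_seen:
            response = self.total_written - self.last_seen[lba]
        self.total_written += length
        self.last_seen[lba] = self.total_written
        return response
```

A host can use the returned value as an update-frequency hint, e.g. to steer hot data toward streams with similar lifetimes.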
  • Patent number: 10713168
    Abstract: Disclosed herein is a method for operating access to a cache memory via an effective address comprising a tag field and a cache line index field. The method comprises: splitting the tag field into a first group of bits and a second group of bits. The line index bits and the first group of bits are searched in the set directory. A set identifier is generated indicating the set containing the respective cache line of the effective address. The set identifier, the line index bits and the second group of bits are searched in the validation directory. In response to determining the presence of the cache line in the set based on the second searching, a hit signal is generated.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Anthony Saporito, Aaron Tsai
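The two-step lookup can be sketched with dictionaries standing in for the set directory and the validation directory: the high tag bits plus the line index pick a candidate set, and the low tag bits then confirm the hit. The split width and all names are illustrative assumptions.

```python
TAG_SPLIT = 8  # low-order tag bits checked in the validation directory (assumed)

class SplitTagCache:
    """Sketch of the set-directory / validation-directory lookup."""

    def __init__(self):
        self.set_dir = {}   # (line_index, tag_hi) -> set_id
        self.val_dir = {}   # (set_id, line_index, tag_lo) -> present

    def split(self, tag):
        return tag >> TAG_SPLIT, tag & ((1 << TAG_SPLIT) - 1)

    def install(self, index, tag, set_id):
        tag_hi, tag_lo = self.split(tag)
        self.set_dir[(index, tag_hi)] = set_id
        self.val_dir[(set_id, index, tag_lo)] = True

    def lookup(self, index, tag):
        tag_hi, tag_lo = self.split(tag)
        set_id = self.set_dir.get((index, tag_hi))     # first search
        if set_id is None:
            return False, None
        hit = (set_id, index, tag_lo) in self.val_dir  # second search
        return (True, set_id) if hit else (False, None)
```

The payoff in hardware is that the wide tag compare is split into two narrower, cheaper compares; the dictionary model only shows the control flow.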
  • Patent number: 10713211
    Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes pre-registering, by a processing device at a client device, multiple input/output (IO) buffers at the client device with a remote direct memory access (RDMA) interface at the client device. The client device accesses multiple server devices of a distributed file system using the RDMA interface. The method further includes receiving a request to access a file in the distributed file system from an application at the client device. The method further includes designating a first IO buffer among the IO buffers as a cache for data from the file. The method further includes receiving the data for the file in the first IO buffer from the distributed file system using the RDMA interface.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: July 14, 2020
    Assignee: Red Hat, Inc.
    Inventors: Mohammed Rafi Kavungal Chundattu Parambil, Raghavendra Talur
  • Patent number: 10713169
    Abstract: In response to receipt by a first coherency domain of a memory access request originating from a master in a second coherency domain and excluding from its scope a third coherency domain, coherence participants in the first coherency domain provide partial responses, and one of the coherence participants speculatively provides, to the master, data from a target memory block. The data includes a memory domain indicator indicating whether the memory block is cached, if at all, only within the first coherency domain. Based on the partial responses a combined response is generated representing a systemwide coherence response to the memory access request. In response to the combined response indicating success and the memory domain indicator indicating that a valid copy of the memory block may be cached outside the first coherence domain, the master discards the speculatively provided data and reissues the memory access request with a larger broadcast scope.
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Eric E. Retter, Michael S. Siegel, Jeffrey A. Stuecheli, Derek E. Williams
  • Patent number: 10713170
    Abstract: A high performance, low power, and cost effective multiple channel cache-system memory system is disclosed.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: July 14, 2020
    Assignee: TSVLINK CORP.
    Inventor: Sheau-Jiung Lee
  • Patent number: 10706167
    Abstract: A computer-implemented method for enforcing privacy in cloud security may include (i) identifying, by a computing device, a set of files in a backup process for a cloud service, (ii) determining, by the computing device, that at least one file in the set of files is a private file, (iii) modifying, by the computing device encrypting the private file, the set of files in the backup process, (iv) completing the backup process for the cloud service with the modified set of files, and (v) enforcing a security policy of the cloud service based on a scan of file hashes. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: July 7, 2020
    Assignee: NortonLifeLock Inc.
    Inventors: Ilya Sokolov, Lei Gu, Jason Holler, Tim van der Horst
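Steps (i)-(v) can be sketched as: classify each file, encrypt only the private ones, then back up and record hashes of the payloads actually stored, so the later policy scan never sees a private file's plaintext hash. XOR stands in for a real cipher and the `is_private` callback is a stand-in classifier; both are assumptions.

```python
import hashlib

def xor_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """Toy cipher standing in for real encryption (assumption)."""
    return bytes(b ^ key for b in data)

def backup(files: dict, is_private) -> dict:
    """files: name -> bytes. Encrypts private files before they enter the
    backup set, then returns name -> hash of the backed-up payload."""
    backed_up = {}
    for name, data in files.items():
        payload = xor_encrypt(data) if is_private(name) else data
        backed_up[name] = hashlib.sha256(payload).hexdigest()
    return backed_up
```

Because the private file is hashed post-encryption, identical plaintexts no longer produce identical hashes, which is what keeps the hash-based policy scan from leaking the private file's contents.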
  • Patent number: 10705965
    Abstract: During a restart process in which metadata is loaded from at least one of a plurality of storage devices into a cache, a storage controller is configured to generate an IO thread in response to the receipt of an IO request, identify at least one metadata page of the metadata that is used to fulfill the IO request, and generate a loading thread in association with the received IO thread that is configured to cause the storage controller to perform prioritized loading of the identified at least one page of the metadata into the cache. The loading thread is detachable from the IO thread such that, in response to an expiration of the IO thread, the loading thread continues to cause the storage controller to perform the prioritized loading until the loading of the at least one page of the metadata into the cache is complete.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: July 7, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Dror Zalstein, Dafna Levi-Yadgar
  • Patent number: 10698829
    Abstract: A request is received to access at least one data unit of a larger data object by an entity within a local host, which is then queried to determine if the requested data unit is present. If the requested data unit is present in the local cache, it is fetched from the local cache. If the requested data unit is not present in the local cache, however, a respective cache within at least one target host, which is different from the local host, is queried to determine if the requested data unit is present remotely and, if so, the data unit is fetched from there instead. If the requested data unit is not present in the local cache or the cache of the target host, the data unit is fetched from a common data storage pool.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: June 30, 2020
    Assignee: Datrium, Inc.
    Inventors: Mike Chen, Boris Weissman
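The three-level fetch path the abstract describes, local cache, then peer hosts' caches, then the common storage pool, reduces to a short cascade. Dict-backed caches stand in for the real structures; the tier label in the return value is added for illustration.

```python
def fetch(unit_id, local_cache, peer_caches, storage_pool):
    """Sketch of the local -> peer -> pool lookup cascade."""
    if unit_id in local_cache:
        return local_cache[unit_id], "local"       # fastest path
    for peer in peer_caches:
        if unit_id in peer:
            return peer[unit_id], "peer"           # remote host's cache
    return storage_pool[unit_id], "pool"           # authoritative copy
```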
  • Patent number: 10691729
    Abstract: Systems and methods are provided for providing an object platform for datasets. A definition of an object may be obtained. The object may be associated with information stored in one or more datasets. The information may be determined based at least in part on the definition of the object. The object may be stored in a cache such that the information associated with the object is also stored in the cache. One or more interfaces through which requests to perform one or more operations on the object are able to be submitted may be provided.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: June 23, 2020
    Assignee: Palantir Technologies Inc.
    Inventors: Rick Ducott, Aakash Goenka, Bianca Rahill-Marier, Tao Wei, Diogo Bonfim Moraes Morant De Holanda, Jack Grossman, Francis Screene, Subbanarasimhiah Harish, Jim Inoue, Jeremy Kong, Mark Elliot, Myles Scolnick, Quentin Spencer-Harper, Richard Niemi, Ragnar Vorel, Thomas Mcintyre, Thomas Powell, Andy Chen
  • Patent number: 10691613
    Abstract: One embodiment is related to a method for implementing a cache hierarchy, comprising: implementing a plurality of cache layers in the cache hierarchy; and determining a cache algorithm for each cache layer of the plurality of cache layers.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: June 23, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Hao Tong, Philip Shilane
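A minimal instance of "a cache algorithm per layer" is a two-layer hierarchy with an LRU front layer and a FIFO back layer. The specific pairing is an assumption for illustration; the claim only requires that each layer's algorithm be chosen independently.

```python
from collections import OrderedDict

class Layer:
    """One cache layer with its own replacement policy ("lru" or "fifo")."""

    def __init__(self, capacity, policy):
        self.capacity, self.policy = capacity, policy
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        if self.policy == "lru":
            self.entries.move_to_end(key)   # refresh recency; FIFO skips this
        return self.entries[key]

    def put(self, key, value):
        evicted = None
        if key not in self.entries and len(self.entries) >= self.capacity:
            evicted = self.entries.popitem(last=False)  # oldest out
        self.entries[key] = value
        return evicted

class Hierarchy:
    def __init__(self):
        self.l1 = Layer(2, "lru")
        self.l2 = Layer(4, "fifo")

    def get(self, key):
        v = self.l1.get(key)
        if v is not None:
            return v
        v = self.l2.get(key)
        if v is not None:
            demoted = self.l1.put(key, v)   # promote hit into L1
            if demoted:
                self.l2.put(*demoted)       # demote L1's victim to L2
        return v
```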
  • Patent number: 10685013
    Abstract: A system and method for global data de-duplication in a cloud storage environment utilizing a plurality of data centers is provided. Each cloud storage gateway appliance divides a data stream into a plurality of data objects and generates a content-based hash value as a key for each data object. An IMMUTABLE PUT operation is utilized to store the data object at the associated key within the cloud.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: June 16, 2020
    Assignee: NetApp Inc.
    Inventors: Kiran Nenmeli Srinivasan, Kishore Kasi Udayashankar, Swetha Krishnan
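The gateway's dedup path can be sketched in a few lines: split the stream into objects, key each object by its content hash, and store with put semantics that never overwrite an existing key. The chunk size and function names are assumptions; real gateways use much larger, often variable-size, objects.

```python
import hashlib

CHUNK = 4  # toy object size (assumed; real systems use KB-MB objects)

def immutable_put(store: dict, key: str, obj: bytes) -> bool:
    """IMMUTABLE PUT semantics: returns True if a new object was stored,
    False if the key already existed (duplicate, nothing written)."""
    if key in store:
        return False
    store[key] = obj
    return True

def upload(stream: bytes, store: dict) -> list:
    """Divide the stream into objects keyed by content hash."""
    keys = []
    for i in range(0, len(stream), CHUNK):
        obj = stream[i:i + CHUNK]
        key = hashlib.sha256(obj).hexdigest()
        immutable_put(store, key, obj)
        keys.append(key)
    return keys
```

Because every gateway derives the same key from the same content, deduplication is global across data centers without any coordination beyond the shared object namespace.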
  • Patent number: 10678644
    Abstract: A method for execution by one or more processing modules of a dispersed storage network (DSN), the method begins by monitoring an encoded data slice access rate to produce an encoded data slice access rate for an associated rebuilding rate of a set of rebuilding rates. The method continues by applying a learning function to the encoded data slice access rate based on a previous encoded data slice access rate associated with the rebuilding rate to produce an updated previous encoded data slice access rate of a set of previous encoded data slice access rates. The method continues by updating a score value associated with the updated previous encoded data slice access rate and the rebuilding rate and selecting a slice access scheme based on the updated score value where a rebuild rate selection will maximize a score value associated with an expected slice access rate.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: June 9, 2020
    Assignee: PURE STORAGE, INC.
    Inventors: Ravi V. Khadiwala, Jason K. Resch
  • Patent number: 10671288
    Abstract: A hierarchical sparse tensor compression method for artificial intelligence devices not only saves neuron-surface storage space in DRAM but also adds a meta-surface for the mask block. When reading data, the mask is read first, the size of the non-zero data is calculated, and only the non-zero data is read, saving DRAM bandwidth. In the cache, only non-zero data is stored, so the required storage space is reduced. When processing data, only non-zero data is used. The method uses a bit mask to determine whether data is zero. The hierarchical compression scheme has three levels: tiles, lines, and points; bit masks and non-zero data are read from DRAM, and bandwidth is saved by not reading zero data. When processing data, tiles whose bit mask is zero may be easily removed.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: June 2, 2020
    Assignee: Nanjing Iluvatar CoreX Technology Co., Ltd.
    Inventors: Pingping Shao, Jiejun Chen, Yongliu Wang
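Bit-mask compression at one level of the hierarchy can be sketched as: store one mask bit per element plus only the non-zero values. A reader fetches the mask first, then exactly as many values as the mask has set bits, which is where the DRAM bandwidth saving comes from. The tile/line/point levels are collapsed into a single flat level here for brevity.

```python
def compress(values):
    """Returns (bit mask, list of non-zero values); zeros are not stored."""
    mask = 0
    nonzero = []
    for i, v in enumerate(values):
        if v != 0:
            mask |= 1 << i
            nonzero.append(v)
    return mask, nonzero

def decompress(mask, nonzero, length):
    """Reads the mask first, then consumes only the stored non-zero data."""
    out, it = [], iter(nonzero)
    for i in range(length):
        out.append(next(it) if mask & (1 << i) else 0)
    return out
```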
  • Patent number: 10673756
    Abstract: A method for handling packets in a network by means of forwarding tables includes providing a software switching layer for implementing a software forwarding table; providing a hardware switching layer for implementing at least one of exact matching forwarding tables and wildcard matching forwarding tables; and redistributing, by using a switch management component for controlling the software switching layer and the hardware switching layer, installed forwarding table entries (FTEs) matching a particular flow between the software switching layer and the hardware switching layer based on traffic characteristics of said flow.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: June 2, 2020
    Assignee: NEC CORPORATION
    Inventor: Roberto Bifulco
  • Patent number: 10664401
    Abstract: A method and system for managing a buffer device in a storage system. The method comprising determining a first priority for a first queue included in the buffer device, the first queue comprising at least one data page associated with a first storage device in the storage system; in at least one round, in response to the first priority not satisfying a first predetermined condition, updating the first priority according to a first updating rule, the first updating rule making the updated first priority much closer to the first predetermined condition than the first priority; and in response to the first priority satisfying the first predetermined condition, flushing data in a data page in the first queue to the first storage device.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: May 26, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Yousheng Liu, Changyu Feng, Geng Han
  • Patent number: 10664279
    Abstract: Instruction prefetching in a computer processor includes, upon a miss in an instruction cache for an instruction cache line: retrieving, for the instruction cache line, a prefetch prediction vector, the prefetch prediction vector representing one or more cache lines of a set of contiguous instruction cache lines following the instruction cache line to prefetch from backing memory; and prefetching, from backing memory into the instruction cache, the instruction cache lines indicated by the prefetch prediction vector.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 26, 2020
    Assignee: International Business Machines Corporation
    Inventors: Richard J. Eickemeyer, Sheldon Levenstein, David S. Levitan, Mauricio J. Serrano
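The miss path can be sketched with a per-line bit vector whose bit *i* means "also prefetch line + i + 1". How the vectors are trained is omitted, and the vector width and names are illustrative assumptions.

```python
VECTOR_BITS = 4  # how many following lines a prediction vector covers (assumed)

def lines_to_fetch(miss_line, prediction_vectors):
    """On an instruction-cache miss, return the missed line plus any
    contiguous follow-on lines indicated by its prefetch prediction vector."""
    fetch = [miss_line]
    vector = prediction_vectors.get(miss_line, 0)
    for i in range(VECTOR_BITS):
        if vector & (1 << i):
            fetch.append(miss_line + i + 1)
    return fetch
```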
  • Patent number: 10664201
    Abstract: Provided are a computer program product, system, and method for considering input/output workload and space usage at a plurality of logical devices to select one of the logical devices to use to store an object. A determination is made of a logical device to store the object based on workload scores for each of the logical devices indicating a level of read and write access of objects in the logical device and space usage of the logical devices. The object is written to the determined logical device.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: May 26, 2020
    Assignee: International Business Machines Corporation
    Inventors: Matthew J. Anglin, Arthur John Colvig, Michael G. Sisco
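The placement decision can be sketched as combining each logical device's workload score with its space usage and writing the object to the best-scoring candidate. The linear combination below is an assumed scoring rule, not the patented one.

```python
def pick_device(devices):
    """devices: list of dicts with 'name', 'workload' (0..1 read/write
    activity score), 'used', and 'capacity'. Lower combined score wins;
    devices without free space are skipped."""
    best = None
    for d in devices:
        if d["used"] >= d["capacity"]:
            continue
        score = d["workload"] + d["used"] / d["capacity"]
        if best is None or score < best[0]:
            best = (score, d["name"])
    return best[1] if best else None
```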
  • Patent number: 10657054
    Abstract: Renames can be handled in an overlay optimizer to ensure that the rename operations do not fail due to source and target volumes being different. The overlay optimizer can implement a process for linking the two IRP_MJ_CREATE operations that the operating system sends as part of a rename operation. Due to this linking, the overlay optimizer can determine when the second IRP of a rename operation is being processed and can determine the source volume for the operation. When the source volume is the volume of the overlay cache, the overlay optimizer can redirect the second IRP. This will ensure that the rename operation will complete successfully even in cases where the rename operation was initiated without specifying the MOVEFILE_COPY_ALLOWED flag.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: May 19, 2020
    Assignee: Dell Products L.P.
    Inventors: Gokul Thiruchengode Vajravel, Ankit Kumar, Puneet Kaushik
  • Patent number: 10649692
    Abstract: A method of operating a storage device includes receiving a write task from a host device. The method also includes storing the write task in a task queue included in the storage device. A write execution command is received from the host device. The method includes executing the write task in response to the write execution command and performing an internal management operation of the storage device after the write task is stored in the task queue and before the write execution command is received. The response time of the storage device to the write execution command is reduced and performance of the system is enhanced by performing the internal management operation such as the data backup operation during the queuing stage and the ready stage in advance before receiving the write execution command.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: May 12, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyun-Chul Park, Jun-Ho Ahn, Bong-Gwan Seol
  • Patent number: 10649968
    Abstract: A data management system accesses a set of vectors containing binary values and generates a corresponding set of sequentially ordered vector blocks. Each vector contains a set of two or more binary values and a numerical vector identifier. The data management system generates a block index based on each corresponding set of sequentially ordered vector blocks. The block index includes a set of vector block arrays, each corresponding to a respective sequential position and including one vector block from each of the sets of sequentially ordered vector blocks that are in the respective sequential position. The vector blocks in each vector block array are ordered sequentially based on two or more sequential binary values in each respective vector block. For each vector block array, the data management system combines pairs of sequentially ordered vector blocks containing matching sets of two or more binary values into combined vector blocks.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: May 12, 2020
    Assignee: eBay Inc.
    Inventors: Roberto Daniel Konow Krause, Seema Jethani, Mohnish Kodnani, Vishnusaran Ramaswamy, Jonathan Baggott, Harish Kumar Vittal Murthy
  • Patent number: 10650158
    Abstract: An access manager that detects a derivative work and automatically transfers digital access rights associated with an original work to the derivative work executes on a computing device. The access manager detects data to be written to a storage device and generates a new file signature for the data. The access manager compares the new file signature to existing file signatures, where the file signatures include piecewise signatures. When at least one of the piecewise signatures from the new file signature matches one of the piecewise signatures in the existing file signatures, the access manager determines that the new data to be written to the storage device is a derivative work generated from the existing file. The access rights associated with the existing file signature are copied to the new file such that the file access rights associated with the original work are passed on to the derivative work.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: May 12, 2020
    Assignee: SecureCircle, LLC
    Inventors: Jeffrey Capone, Davin Oishi, Artsiom Tsai, Joshua Jones, Ruslan Kazinets
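The detection step can be sketched with piecewise hashing: a file's signature is the set of hashes of its fixed-size pieces, and a new file sharing any piece hash with an existing file is treated as a derivative and inherits that file's rights. The piece size and names are assumptions; the first match wins here, whereas a real system would need a policy for multiple matches.

```python
import hashlib

PIECE = 8  # bytes per piece (assumed; real systems use larger pieces)

def piecewise_signature(data: bytes) -> set:
    """Set of hashes of fixed-size pieces of the file."""
    return {hashlib.sha256(data[i:i + PIECE]).hexdigest()
            for i in range(0, len(data), PIECE)}

def inherit_rights(new_data, existing):
    """existing: list of (signature, rights). Any shared piece hash marks
    the new data as a derivative work; returns the inherited rights."""
    new_sig = piecewise_signature(new_data)
    for sig, rights in existing:
        if new_sig & sig:
            return rights
    return None
```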
  • Patent number: 10649691
    Abstract: An example of storage system obtains a reference request of a reference request data block that is included in the content and is stored in the medium area. The storage system determines a number of gaps among addresses, in the medium area, of a plurality of data blocks continuous in the content including the reference request data block. The storage system determines, based on the number of gaps, whether or not defrag based on the plurality of data blocks is valid. The storage system writes, when the defrag is determined to be valid, the plurality of data blocks read from the medium area to the memory area, into continuous address areas of the medium area.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: May 12, 2020
    Assignee: HITACHI, LTD.
    Inventors: Mitsuo Hayasaka, Ken Nomura, Keiichi Matsuzawa, Hitoshi Kamei
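The validity check reduces to counting how many logically adjacent blocks are not physically adjacent on the medium and comparing against a threshold. The block size and threshold below are assumptions.

```python
BLOCK = 1            # medium-address units per block (assumed)
GAP_THRESHOLD = 2    # minimum gaps for defrag to be worthwhile (assumed)

def count_gaps(addresses):
    """Gaps among medium addresses of blocks that are contiguous in the
    content: adjacent pairs whose addresses are not physically adjacent."""
    return sum(1 for a, b in zip(addresses, addresses[1:])
               if b != a + BLOCK)

def defrag_is_valid(addresses):
    """Defrag is worthwhile only when enough gaps exist; rewriting an
    already-contiguous run would waste I/O."""
    return count_gaps(addresses) >= GAP_THRESHOLD
```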
  • Patent number: 10649897
    Abstract: An access request processing method and apparatus, and a computer device are disclosed. The computer device includes a processor, a dynamic random-access memory (DRAM), and a non-volatile memory (NVM). When receiving a write request, the processor may identify an object cache page according to the write request. The processor obtains the to-be-written data from a buffer according to a buffer pointer in the write request, the to-be-written data including a new data chunk to be written into the object cache page. The processor then inserts a new data node into a log chain of the object cache page, where the NVM stores data representing the log chain of the object cache page. The new data node includes information regarding the new data chunk of the object cache page. The computer device provided in this application can reduce system overheads while protecting data consistency.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: May 12, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Xu, Qun Yu, Yuangang Wang
  • Patent number: 10649903
    Abstract: Modifications to throughput capacity provisioned at a data store for servicing access requests to the data store may be performed according to cache performance metrics. A cache that services access requests to the data store may be monitored to collected and evaluate cache performance metrics. The cache performance metrics may be evaluated with respect to criteria for triggering different throughput modifications. In response to triggering a throughput modification, the throughput capacity for the data store may be modified according to the triggered throughput modification. In some embodiments, the criteria for detecting throughput modifications may be determined and modified based on cache performance metrics.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: May 12, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Muhammad Wasiq, Nima Sharifi Mehr
  • Patent number: 10642496
    Abstract: A storage device may utilize a host memory buffer for re-ordering commands in a submission queue. Out of order commands in a submission queue that uses host virtual buffers that are not the same size may be difficult to search. Accordingly, commands in a submission queue may be correctly ordered in a host memory buffer before being put into the host virtual buffers. When the commands are in order, the search operation for specific data is improved.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: May 5, 2020
    Assignee: SanDisk Technologies Inc.
    Inventors: Shay Benisty, Tal Sharifie
  • Patent number: 10635340
    Abstract: Described is a system that allows for the efficient management of reallocating data between tiers of an automated storage tiering system. In certain configurations, protected data that is stored within the storage system may include a user data portion and a redundant data portion. Accordingly, to conserve space on higher storage tiers, the system may separate user data from the redundant data when reallocating data between tiers. For example, the system may only allocate the user data portion to higher storage tiers thereby conserving the space that would otherwise be taken by the redundant data, which remains, or is demoted to a lower tier. Moreover, the reallocation may occur during scheduled reallocation cycles, and accordingly, the reallocation of the separated protected data may occur without any additional tiering overhead.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: April 28, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Konstantin Buinov, Andrey Fomin, Mikhail Malygin, Vladimir Prikhodko
  • Patent number: 10635443
    Abstract: Instruction-execution processors each execute a first instruction. A control processor converts a second instruction to be emulated into the first instruction, and enters the converted first instruction into the instruction-execution processors. In a parallel-execution period, each instruction-execution processor executes a writing-access instruction or a reading-access instruction to a memory, suspends writing of data into the memory caused by the writing-access instruction, and retains an execution history of the writing-access instruction and the reading-access instruction.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: April 28, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Yuta Toyoda, Shigeki Itou
  • Patent number: 10628322
    Abstract: An operating method of a memory system may include: transmitting, by a descriptor generation unit, cache descriptors to a memory interface unit, and suspending the ordered cache output descriptors by ordering cache output descriptors in a response order; generating, by the memory interface unit, cache commands based on the cache descriptors, and transmitting the cache commands to memory devices; transmitting, by the descriptor generation unit, the cache output descriptors to the memory interface unit according to the response order, when the suspensions of the cache output descriptors are released; and generating, by the memory interface unit, cache output commands based on the cache output descriptors, and transmitting the cache output commands to the memory devices.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: April 21, 2020
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
  • Patent number: 10628318
    Abstract: A system cache and method of operating a system cache are provided. The system cache provides data caching in response to data access requests from plural system components. The system cache has data caching storage with plural entries, each entry storing a block of data items and each block of data items comprising plural sectors of data items. Sector use prediction circuitry is provided which stores a set of sector use pattern entries. In response to a data access request received from a system component specifying one or more data items, a pattern entry is selected and a sector use prediction is generated in dependence on a sector use pattern in the selected pattern entry. Further data items may then be retrieved which are not specified in the data access request but are indicated by the sector use prediction.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 21, 2020
    Assignee: ARM LIMITED
    Inventors: Nikos Nikoleris, Andreas Lars Sandberg, Jonas Švedas, Stephan Diestelhorst
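The predictor can be sketched as a table of per-block bit masks recording which sectors were touched in the past; on a new access the matching mask predicts which additional sectors to retrieve alongside the one requested. Indexing patterns by block address is a simplification, and the sector count is an assumption.

```python
SECTORS = 4  # sectors per block of data items (assumed)

class SectorPredictor:
    """Sketch of sector-use prediction for a system cache."""

    def __init__(self):
        self.patterns = {}   # block -> bit mask of sectors used previously

    def record(self, block, sector):
        self.patterns[block] = self.patterns.get(block, 0) | (1 << sector)

    def predict(self, block, sector):
        """Sectors to fetch: the requested one plus any the stored
        use pattern predicts will be needed."""
        mask = self.patterns.get(block, 0) | (1 << sector)
        return [s for s in range(SECTORS) if mask & (1 << s)]
```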
  • Patent number: 10621273
    Abstract: Server and client methods and systems for improving efficiency, accuracy and speed for inputting data from a variety of networked resources into an electronic form in a continuously streaming manner by multiple operators. More specifically, the present disclosure relates to client/server systems and methods for continuously streaming re-organized forms to a series of networked input devices to allow for multiple-operator input, improving the speed, accuracy and efficiency of electronic form population.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: April 14, 2020
    Assignee: Massachusetts Mutual Life Insurance Company
    Inventors: Michal Knas, Jiby John
  • Patent number: 10613792
    Abstract: In a data processing system implementing a weak memory model, a lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed. The lower level cache also receives, from the processor core, a barrier request that requests enforcement of ordering of memory access requests prior to the barrier request with respect to memory access requests after the barrier request. In response to the barrier request, the lower level cache enforces a barrier indicated by the barrier request with respect to a final paste-type request ending the memory move but not with respect to other copy-type requests and paste-type requests in the memory move.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: April 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, Guy L. Guthrie, Cathy May, William J Starke, Derek E. Williams
  • Patent number: 10606756
    Abstract: The present disclosure is directed to systems and methods for preventing or mitigating the effects of a cache-timing based side channel attack, such as a Meltdown type attack. In response to a speculatively executed data access by an unretired or incomplete instruction, rather than transferring data to the CPU cache, the data is instead transferred to data transfer buffer circuitry where the data is held in the form of a record until the instruction requesting the data is successfully completed or retired. Upon retirement of the instruction requesting the data access, the data included in the record may be transferred to the CPU cache. Each record held in the data transfer buffer circuitry may include: a data source identifier; a physical/virtual address of the data; a cache line that includes the data; and an instruction identifier associated with the instruction initiating the data access.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: March 31, 2020
    Assignee: Intel Corporation
    Inventor: Vadim Sukhomlinov
  • Patent number: 10608894
    Abstract: A computer-implemented method includes receiving, at a service, invalidation information relating to at least one resource. Based on the invalidation information, a staleness trigger of the at least one resource is set as a function of an invalidation period. The at least one resource is considered to be not useable based on the function of the invalidation period and the staleness trigger.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: March 31, 2020
    Assignee: LEVEL 3 COMMUNICATIONS, LLC
    Inventors: Christopher Newton, Lewis Robert Varney, Laurence R. Lipstone, William Crowder, Andrew Swart
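One concrete instance of the claimed staleness rule is "trigger = time of invalidation + invalidation period": past the trigger, the cached resource is considered not usable. The linear function and all names below are assumptions chosen for illustration.

```python
import time

class ResourceCache:
    """Sketch of staleness triggers set from invalidation information."""

    def __init__(self):
        self.resources = {}   # name -> (data, staleness trigger or None)

    def store(self, name, data):
        self.resources[name] = (data, None)

    def invalidate(self, name, invalidation_period, now=None):
        """Set the staleness trigger as a function of the invalidation
        period (here: simple addition)."""
        now = time.time() if now is None else now
        data, _ = self.resources[name]
        self.resources[name] = (data, now + invalidation_period)

    def usable(self, name, now=None):
        now = time.time() if now is None else now
        _, trigger = self.resources[name]
        return trigger is None or now < trigger
```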
  • Patent number: 10606750
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: April 11, 2017
    Date of Patent: March 31, 2020
    Assignee: Mellanox Technologies Ltd.
    Inventors: Matthew Mattina, Chyi-Chang Miao
  • Patent number: 10606596
    Abstract: A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: March 31, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Raymond Michael Zbiciak, Timothy David Anderson, Jonathan (Son) Hung Tran, Kai Chirca, Daniel Wu, Abhijeet Ashok Chachad, David M. Thompson
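The two modes of address generation can be sketched as follows (function names, strides, and the 64-byte line size are illustrative assumptions): the first stream instruction produces element addresses for data access, while the second produces one address per cache line over a block, for preloading.

```python
# Sketch of two streaming-engine address-generation modes (illustrative).
def stream_addresses(base, stride, count):
    """Mode 1: addresses for fetching a stream of data elements."""
    return [base + i * stride for i in range(count)]

def block_preload_addresses(base, block_bytes, line_bytes=64):
    """Mode 2: one address per cache line covering a block, for preload."""
    return list(range(base, base + block_bytes, line_bytes))
```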
  • Patent number: 10599574
    Abstract: The present disclosure relates to a memory control device, and an operation method thereof, that distributes read requests for cache-hit data so that the hard disk as well as the cache memory can process them, thereby maximizing the throughput of the entire storage device.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: March 24, 2020
    Assignee: SK TELECOM CO., LTD.
    Inventors: Hong Chan Roh, Sang Hyun Park, Jong Chan Lee, Hong Kyu Park, Jae Hyung Kim
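The distribution idea can be sketched as follows (the round-robin-style routing and the `disk_share` parameter are illustrative assumptions, not the patent's actual policy): even on a cache hit, a fraction of reads is routed to the hard disk so both devices contribute throughput instead of the disk idling.

```python
# Sketch of distributing cache-hit reads between cache and disk
# (routing policy and names are illustrative).
class DistributingController:
    def __init__(self, disk_share=0.25):
        self.disk_share = disk_share   # fraction of hits sent to disk
        self._count = 0

    def route(self, cache_hit: bool) -> str:
        if not cache_hit:
            return "disk"              # misses must go to disk anyway
        self._count += 1
        # Send roughly disk_share of the hits to the disk as well.
        period = round(1 / self.disk_share)
        return "disk" if self._count % period == 0 else "cache"
```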
  • Patent number: 10599567
    Abstract: A technique relates to enabling a multiprocessor computer system to make a non-coherent request for a cache line. A first processor core sends a non-coherent fetch to a cache. In response to a second processor core having exclusive ownership of the cache line in the cache, the first processor core receives a stale copy of the cache line in the cache based on the non-coherent fetch. The non-coherent fetch is configured to obtain the stale copy for a predefined use. Cache coherency is maintained for the cache, such that the second processor core continues to have exclusive ownership of the cache line while the first processor core receives the stale copy of the cache line.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: March 24, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jane H. Bartik, Nicholas C. Matsakis, Chung-Lung K. Shum, Craig R. Walters
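The non-coherent fetch can be sketched as follows (class and method names are illustrative): one core holds a line exclusively, and another core's non-coherent fetch returns a stale copy without disturbing that ownership, for uses that tolerate staleness.

```python
# Sketch of a non-coherent fetch that preserves exclusive ownership
# (all names are illustrative).
class CacheLine:
    def __init__(self, data, owner):
        self.data = data          # last globally visible value
        self.owner = owner        # core with exclusive ownership (or None)

class Cache:
    def __init__(self):
        self.lines = {}

    def coherent_fetch(self, addr, requester):
        # A normal fetch takes ownership away from the current owner.
        line = self.lines[addr]
        line.owner = requester
        return line.data

    def noncoherent_fetch(self, addr, requester):
        # Returns whatever value is visible, WITHOUT changing ownership:
        # the exclusive owner keeps the line; requester gets a stale copy.
        return self.lines[addr].data
```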
  • Patent number: 10599363
    Abstract: A read method is executed by a computing system that includes a processor, at least one nonvolatile memory, and at least one cache memory performing a cache function of the at least one nonvolatile memory. The method includes receiving a read request regarding a critical word from the processor. A determination is made whether a cache miss is generated, through a tag determination operation corresponding to the read request. Page data corresponding to the read request is received from the at least one nonvolatile memory in a wraparound scheme when a result of the tag determination operation indicates that the cache miss is generated. The critical word is output to the processor when the critical word of the page data is received.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: March 24, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo Kim, Jaegeun Park, Youngjin Cho
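The wraparound scheme can be sketched as follows (function names are illustrative): the page is delivered starting at the requested (critical) word and wraps around, so the critical word can be forwarded to the processor immediately rather than after the whole page arrives.

```python
# Sketch of wraparound (critical-word-first) delivery order (illustrative).
def wraparound_order(page_words, critical_index):
    """Return the words of a page in wraparound delivery order,
    beginning with the critical word."""
    n = len(page_words)
    return [page_words[(critical_index + i) % n] for i in range(n)]
```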
  • Patent number: 10592112
    Abstract: In some examples, a system may include a computing device in communication with at least one storage device. Initially, the computing device may execute a first type of storage software which stores a first volume in a first storage format on the storage device. The computing device may thereafter execute a second type of storage software which configures a second volume in a second storage format on the storage device. Subsequently, the data of the first volume is migrated to the second volume where the data is stored in the second storage format. In some cases, the second storage software may further define a virtual external device on the storage device and define a logical path from the virtual external device to the first volume. The logical path may be used to migrate the data from the first volume to the second volume.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: March 17, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Sakashita, Akira Yamamoto
  • Patent number: 10592253
    Abstract: Technologies for pre-memory phase initialization include a computing device having a processor with a cache memory. The computing device may determine whether a temporary memory different from the cache memory of the processor is present for temporary memory access prior to initialization of a main memory of the computing device. In response to determining that temporary memory is present, a portion of the basic input/output instructions may be copied from a non-volatile memory of the computing device to the temporary memory for execution prior to initialization of the main memory. The computing device may also initialize a portion of the cache memory of the processor as Cache as RAM for temporary memory access prior to initialization of the main memory in response to determining that temporary memory is not present. After initialization, the main memory may be configured for subsequent memory access. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: March 17, 2020
    Assignee: Intel Corporation
    Inventors: Giri P. Mudusuru, Rangasai V. Chaganty, Chasel Chiu, Satya P. Yarlagadda, Nivedita Aggarwal, Nuo Zhang
  • Patent number: 10592365
    Abstract: The present invention discloses a method and device for managing a storage system. Specifically, in one embodiment of the present invention, there is proposed a method for managing a storage system, the storage system comprising a buffer device and a plurality of storage devices. The method comprises: receiving an access request with respect to the storage system; determining that a storage device among the plurality of storage devices has failed; and in response to the access request being an access request with respect to the failed storage device, serving the access request with data in the buffer device so as to reduce internal data access in the storage system. In one embodiment of the present invention, there is proposed a device for managing a storage system.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: March 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Yousheng Liu, Changyu Feng, Geng Han
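The degraded-mode serving idea can be sketched as follows (class and field names are illustrative): when the target device has failed, a request whose data is present in the buffer device is answered from the buffer, avoiding internal rebuild reads from the surviving devices.

```python
# Sketch of serving requests to a failed device from the buffer device
# (all names are illustrative).
class StorageSystem:
    def __init__(self, devices):
        self.devices = devices    # device_id -> {addr: data}
        self.failed = set()       # ids of failed devices
        self.buffer = {}          # (device_id, addr) -> buffered data

    def read(self, device_id, addr):
        if device_id in self.failed:
            key = (device_id, addr)
            if key in self.buffer:
                # Served from the buffer device: no internal data access.
                return self.buffer[key]
            raise IOError("would require internal rebuild from peer devices")
        return self.devices[device_id][addr]
```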
  • Patent number: 10593093
    Abstract: A programmable execution unit (42) of a graphics processor includes a functional unit (50) that is operable to execute instructions (51). The output of the functional unit (50) can both be written to a register file (46) and fed back directly as an input to the functional unit by means of a feedback circuit (52). Correspondingly, an instruction that is to be executed by the functional unit (50) can select as its inputs either the fed-back output (52) from the execution of the previous instruction, or inputs from the registers (46). A register access descriptor (54) between each instruction in a group of instructions (53) specifies the registers whose values will be available on the register ports that the functional unit will read when executing the instruction, and the register address where the result of the execution of the instruction will be written to. The programmable execution unit (42) executes groups of instructions (53) that are to be executed atomically.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: March 17, 2020
    Assignee: Arm Limited
    Inventor: Jorn Nystad
  • Patent number: 10585804
    Abstract: Systems and methods for non-blocking implementation of cache flush instructions are disclosed. As a part of a method, data is accessed that is received in a write-back data holding buffer from a cache flushing operation, the data is flagged with a processor identifier and a serialization flag, and responsive to the flagging, the cache is notified that the cache flush is completed. Subsequent to the notifying, access is provided to data then present in the write-back data holding buffer to determine if data then present in the write-back data holding buffer is flagged.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
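The non-blocking flush can be sketched as follows (names are illustrative): flushed lines are tagged in the write-back holding buffer with a processor identifier and a serialization flag, the flush is acknowledged immediately, and completion can later be verified by checking which entries still carry the flag.

```python
# Sketch of non-blocking cache-flush completion via flagged write-back
# buffer entries (all names are illustrative).
class WriteBackBuffer:
    def __init__(self):
        self.entries = []

    def accept_flush(self, data, cpu_id):
        # Flag each flushed line with the processor id and a
        # serialization flag, then report the flush complete at once.
        self.entries.append({"data": data, "cpu_id": cpu_id, "serialized": True})
        return "flush_complete"   # cache is notified without blocking

    def pending_flagged(self, cpu_id):
        # Later inspection: entries still present and flagged for cpu_id.
        return [e for e in self.entries if e["cpu_id"] == cpu_id and e["serialized"]]
```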
  • Patent number: 10585798
    Abstract: Systems and methods for tracking cache line consumption. An example system may comprise: a cache comprising a plurality of cache entries for storing a plurality of cache lines; a processing core, operatively coupled to the cache; and a cache control logic, to: responsive to detecting an update operation with respect to a cache line of the plurality of cache lines, set a cache line access tracking flag associated with the cache line to a first state indicating that the cache line has been produced; and responsive to detecting a read operation with respect to the cache line, set the cache line access tracking flag associated with the cache line to a second state indicating that the cache line has been consumed.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventors: Mark Gray, Tomasz Kantecki
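The two-state tracking flag can be sketched as follows (state names and methods are illustrative): an update marks the line as produced, a read marks it as consumed, and software can then spot lines that were produced but never consumed.

```python
# Sketch of per-cache-line consumption tracking (all names illustrative).
PRODUCED, CONSUMED = "produced", "consumed"

class TrackedCache:
    def __init__(self):
        self.flags = {}   # line address -> tracking flag state

    def on_update(self, addr):
        # First state: the line has been produced.
        self.flags[addr] = PRODUCED

    def on_read(self, addr):
        # Second state: the line has been consumed.
        self.flags[addr] = CONSUMED

    def unconsumed_lines(self):
        return [a for a, f in self.flags.items() if f == PRODUCED]
```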
  • Patent number: 10585802
    Abstract: According to some embodiment, a storage system selects one or more directories within a file system as candidates for caching based on directory statistics associated with the directories, where each of the directories includes one or more file objects stored in the storage system. For each of the selected directories, the system determines whether the directory is to be cached based on a directory cache policy. The system caches the directory in a cache memory device in response to determining that the directory is to be cached.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: March 10, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Pranay Singh, Murthy Mamidi, Pengju Shang
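The selection-then-policy flow can be sketched as follows (the statistics fields, thresholds, and function names are illustrative assumptions, not the patent's actual policy): directories are first picked as candidates from their access statistics, then each candidate is checked against a cache policy before being cached.

```python
# Sketch of statistics-driven directory caching (illustrative).
def select_candidates(dir_stats, min_accesses=100):
    """Pick directories whose access count makes them caching candidates."""
    return [d for d, s in dir_stats.items() if s["accesses"] >= min_accesses]

def should_cache(stats, max_entries=10_000):
    """Toy directory cache policy: hot, but small enough to hold in cache."""
    return stats["accesses"] >= 100 and stats["num_files"] <= max_entries
```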
  • Patent number: 10579539
    Abstract: A system, method and program product for exploiting in-storage transparent compression. A storage infrastructure is disclosed that includes: a storage device having physical block address (PBA) storage of a defined capacity, a transparent compression system that compresses data written to the PBA storage, and a logical block address-to-physical block address mapping table; and a host having a memory management system that includes: an initialization system that allocates an amount of logical block address (LBA) storage for the host having a capacity greater than the defined capacity of the PBA storage, and that creates a dummy file that consumes LBA storage without consuming any PBA storage; a system that gathers current PBA and LBA usage information.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: March 3, 2020
    Assignee: SCALEFLUX, INC.
    Inventors: Tong Zhang, Yang Liu, Fei Sun, Hao Zhong
  • Patent number: 10579514
    Abstract: Embodiments relate to accessing data in a memory. A method for accessing data in a memory coupled to a processor is provided. The method receives a memory reference instruction for accessing data of a first size at an address in the memory. The method determines an alignment size of the address in the memory. The method accesses the data of the first size in one or more groups of data by accessing each group of data block concurrently. The groups of data have sizes that are multiples of the alignment size.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: March 3, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Timothy J. Slegel
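The alignment-driven decomposition can be sketched as follows (function names are illustrative): the access is split into naturally aligned blocks whose sizes are multiples of the address's alignment size, and each block is accessed as one group.

```python
# Sketch of splitting an access into groups sized by address alignment
# (all names are illustrative).
def alignment_of(address):
    """Largest power of two dividing the address (the alignment size)."""
    return address & -address if address else 1 << 62

def access_groups(address, size):
    """Split (address, size) into naturally aligned block accesses."""
    groups = []
    while size:
        # Each group is limited both by the current alignment and by
        # the largest power of two not exceeding the remaining size.
        block = min(alignment_of(address), 1 << (size.bit_length() - 1))
        groups.append((address, block))
        address += block
        size -= block
    return groups
```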
  • Patent number: 10580973
    Abstract: Techniques are disclosed for forming integrated circuit structures including a magnetic tunnel junction (MTJ), such as spin-transfer torque memory (STTM) devices, having magnetic contacts. The techniques include incorporating an additional magnetic layer (e.g., a layer that is similar or identical to that of the magnetic contact layer) such that the additional magnetic layer is coupled antiferromagnetically (or in a substantially antiparallel manner). The additional magnetic layer can help balance the magnetic field of the magnetic contact layer to limit parasitic fringing fields that would otherwise be caused by the magnetic contact layer. The additional magnetic layer may be antiferromagnetically coupled to the magnetic contact layer by, for example, including a nonmagnetic spacer layer between the two magnetic layers, thereby creating a synthetic antiferromagnet (SAF).
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: March 3, 2020
    Assignee: INTEL CORPORATION
    Inventors: Brian S. Doyle, Kaan Oguz, Charles C. Kuo, Mark L. Doczy, Satyarth Suri, David L. Kencke, Robert S. Chau, Roksana Golizadeh Mojarad
  • Patent number: 10579303
    Abstract: Aspects of the present disclosure involve an apparatus including a port interface coupled with a data bus to receive memory transaction commands, and a command queue coupled with the port interface. Additional aspects include methods of operating such an apparatus, and electronic design automation (EDA) devices to generate design files associated with such an apparatus. The command queue includes a plurality of memory entries to store memory transaction commands, a placement logic module to combine a received memory transaction command with a memory transaction command previously stored in one of the plurality of memory entries of the command queue, and a selection logic module to determine an order to transmit memory transaction commands stored in the plurality of memory entries and transmit the stored memory transaction commands according to the determined order to a memory interface.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: March 3, 2020
    Assignee: Cadence Design Systems, Inc.
    Inventors: Xiaofei Li, Ying Li, Zhehong Qian, Buying Du
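The placement and selection logic can be sketched as follows (the merge rule and FIFO ordering are illustrative assumptions): a newly received write to an address already queued is combined with the stored command instead of occupying a new entry, and queued commands are then issued to the memory interface in a determined order.

```python
# Sketch of a command queue with placement (merge) and selection (order)
# logic (all names and policies are illustrative).
class CommandQueue:
    def __init__(self):
        self.entries = []   # memory entries: {"op", "addr", "data"}

    def place(self, cmd):
        # Placement logic: combine a write with a queued write to the
        # same address; otherwise store it in a new entry.
        for e in self.entries:
            if e["op"] == "write" and cmd["op"] == "write" and e["addr"] == cmd["addr"]:
                e["data"] = cmd["data"]   # later write supersedes earlier
                return
        self.entries.append(dict(cmd))

    def drain(self):
        # Selection logic: here simply FIFO order to the memory interface.
        issued, self.entries = self.entries, []
        return issued
```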