Abstract: An exemplary content engine includes a content gateway configured to analyze and route content requests to a content server. The content server can be a cache server or a mobile content server. The cache server can be configured to receive and store cacheable web content from a controller that is configured to receive the cacheable web content from at least one cacheable web content provider, such as a web server, and route the content to the cache server. The mobile content server can be configured to receive digital media content from the controller and store it. The controller can be further configured to receive the digital media content from at least one external content server and route the content to the mobile content server. The content gateway can be further configured to receive non-cacheable web content from at least one non-cacheable web content provider.
Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
Type:
Grant
Filed:
December 8, 2017
Date of Patent:
February 25, 2020
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Timothy C. Bronson, Garrett M. Drapala, Pak-Kin Mak, Vesselina K. Papazova, Hanno Ulrich
Abstract: A data collection system includes one or more input sensing devices and a data collection device. The data collection device includes data collection circuitry that is continuously activated to capture measurement data samples from the one or more input sensing devices and locally store the measurement data samples. The data collection device also includes a digital processor that is coupled to the data collection circuitry and is activated to locally perform a sample analysis of the measurement data samples, wherein the sample analysis is a regular analysis of routine measurement data samples when the measurement data samples do not include a triggering event, and wherein the sample analysis is an event analysis when the measurement data samples include a triggering event. A data collection integrated circuit and a measurement data sample collection method are also included.
Abstract: Data stored in a hard disk drive (HDD) is processed to generate cache data to be stored in a random access memory (RAM). If a data access request is received from an application and valid cache data corresponding to the access request is present in the RAM, response data is acquired from the RAM, without accessing the HDD, and the response data is transmitted to the source of the access request. If the valid cache data corresponding to the access request is not present in the RAM, response data is acquired from the HDD and the response data is transmitted to the source of the access request. Consequently, the number of accesses to the HDD is reduced.
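The read path described above can be sketched as a read-through cache; the dict-backed stores and all names below are illustrative stand-ins, not the patented implementation:

```python
class ReadThroughCache:
    """Serve reads from a RAM cache, falling back to the HDD on a miss."""

    def __init__(self, hdd):
        self.hdd = hdd          # backing store: maps key -> data
        self.ram = {}           # valid cache data generated from the HDD
        self.hdd_accesses = 0   # how often the HDD was actually touched

    def read(self, key):
        if key in self.ram:        # valid cache data present: respond
            return self.ram[key]   # from RAM without accessing the HDD
        self.hdd_accesses += 1     # otherwise acquire the data from the HDD
        data = self.hdd[key]
        self.ram[key] = data       # cache it so later reads skip the HDD
        return data

cache = ReadThroughCache(hdd={"a": b"payload"})
assert cache.read("a") == cache.read("a") == b"payload"
assert cache.hdd_accesses == 1   # the second read never touched the HDD
```

Repeated reads of the same key touch the HDD only once, which is exactly the reduction in HDD accesses the abstract targets.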
Abstract: An apparatus and method are provided for storing source operands for operations. The apparatus comprises execution circuitry for performing operations on data values, and a register file comprising a plurality of registers to store the data values operated on by the execution circuitry. Issue circuitry is also provided that has a pending operations storage identifying pending operations awaiting performance by the execution circuitry and selection circuitry to select pending operations from the pending operation storage to issue to the execution circuitry. The pending operations storage comprises an entry for each pending operation, each entry storing attribute information identifying the operation to be performed, where that attribute information includes a source identifier field for each source operand of the pending operation.
Type:
Grant
Filed:
May 23, 2018
Date of Patent:
February 11, 2020
Assignee:
Arm Limited
Inventors:
Luca Nassi, Cédric Denis Robert Airaud, Rémi Marius Teyssier, Albin Pierrick Tonnerre
Abstract: In one embodiment, a method includes assigning a number of threads for user plane functions to a corresponding number of transmit queues for transmission of packets on a network interface, assigning additional threads exceeding the number of transmit queues to software transmission queues associated with the threads assigned to the transmit queues, identifying a load at each of the threads, dynamically updating assignment of the additional threads to the software transmission queues based on the load at the threads, and transmitting packets from the transmit queues for transmission on a network from a physical interface at a network device. An apparatus and logic are also disclosed herein.
Type:
Grant
Filed:
September 22, 2017
Date of Patent:
February 11, 2020
Assignee:
CISCO TECHNOLOGY, INC.
Inventors:
Prasannakumar Murugesan, Ajeet Pal Singh Gill, David A. Johnson, Ian McDowell Campbell, Ravinandan Arakali
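A minimal sketch of the overflow-thread placement this abstract describes, assuming per-thread load is a single number; the greedy least-loaded policy and all names are illustrative assumptions rather than the patented algorithm:

```python
def assign_threads(thread_loads, num_tx_queues):
    """Map the first threads one-to-one onto hardware TX queues, then attach
    each additional thread to the software queue of the least-loaded TX thread."""
    threads = list(thread_loads)                  # insertion order
    tx_threads = threads[:num_tx_queues]          # one thread per transmit queue
    extra = threads[num_tx_queues:]               # threads exceeding the queues
    load = {t: thread_loads[t] for t in tx_threads}
    assignment = {}
    # place the heaviest extra threads first so load spreads evenly
    for t in sorted(extra, key=thread_loads.get, reverse=True):
        target = min(load, key=load.get)          # least-loaded TX thread
        assignment[t] = target                    # software queue attachment
        load[target] += thread_loads[t]
    return assignment

loads = {"t0": 10, "t1": 30, "t2": 20, "t3": 5}
# t2, the heaviest extra thread, lands on t0, the least-loaded TX thread
assert assign_threads(loads, 2)["t2"] == "t0"
```

Re-running the function as measured loads change gives the dynamic reassignment of additional threads the abstract describes.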
Abstract: Aspects of the instant disclosure relate to methods for facilitating intercloud resource migration. In some embodiments, a method of the subject technology can include steps for instantiating a first intercloud fabric provider platform (ICFPP) at a first cloud datacenter, instantiating a second ICFPP at a second cloud datacenter, and receiving a migration request at the first ICFPP, the migration request including a request to migrate a virtual machine (VM) workload from the first cloud datacenter to the second cloud datacenter. In some aspects, the method may further include steps for initiating, by the first ICFPP, a migration of the VM workload via the second ICFPP in response to the migration request. Systems and machine readable media are also provided.
Type:
Grant
Filed:
January 26, 2017
Date of Patent:
February 4, 2020
Assignee:
CISCO TECHNOLOGY, INC.
Inventors:
David Wei-Shen Chang, Abhijit Patra, Nagaraj A. Bagepalli, Dileep Kumar Devireddy, Murali Anantha
Abstract: A transaction manager for use with memory is described. The transaction manager can include a write data buffer to store outstanding write requests, a read data multiplexer to select between data read from the memory and the write data buffer, a command queue and a priority queue to store requests for the memory, and a transaction table to track outstanding write requests, each write request associated with a state that is Invalid, Modified, or Forwarded.
Abstract: A method for adjusting over provisioning space and a flash device are provided. The flash device includes user storage space for storing user data and over provisioning space for garbage collection within the flash device. The flash device receives an operation instruction, and then performs an operation on user data stored in the user storage space according to the operation instruction. Further, the flash device identifies the change in the size of the user data after performing the operation. Based on this change in data size, a target adjustment parameter is identified. Further, the flash device adjusts the capacity of the over provisioning space according to the target adjustment parameter. According to the method, the over provisioning ratio can be dynamically adjusted.
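One way to picture the adjustment: derive the over-provisioning (OP) size from the user-data footprint after each operation. The half-of-free-space heuristic and the 7% to 28% clamp below are illustrative assumptions, not the patented parameters:

```python
def adjust_op_space(total_capacity, user_data_size,
                    min_op_ratio=0.07, max_op_ratio=0.28):
    """Return a new over-provisioning size after the user data changed size:
    shrinking user data frees capacity for garbage collection, growing user
    data reclaims it, always within the device's supported OP range."""
    free = total_capacity - user_data_size
    target = free // 2                        # target adjustment parameter
    return min(max(target, int(total_capacity * min_op_ratio)),
               int(total_capacity * max_op_ratio))

# Deleting user data enlarges OP space; writing more data shrinks it.
assert adjust_op_space(1000, 200) == 280   # clamped at the 28% ceiling
assert adjust_op_space(1000, 900) == 70    # clamped at the 7% floor
```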
Abstract: Provided are a computer program product, system, and method for processing cache miss rates to determine memory space to add to an active cache to reduce a cache miss rate for the active cache. During caching operations to the active cache, information is gathered on an active cache miss rate based on a rate of access to tracks that are not indicated in the active cache list and a cache demote rate. A determination is made as to whether adding additional memory space to the active cache would result in the active cache miss rate being less than the cache demote rate when the active cache miss rate exceeds the cache demote rate. A message is generated indicating to add the additional memory space when adding the additional memory space would result in the active cache miss rate being less than the cache demote rate.
Type:
Grant
Filed:
June 21, 2017
Date of Patent:
January 21, 2020
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
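The recommendation logic above can be sketched as follows; the linear miss-reduction model (`miss_reduction_per_gib`) is an illustrative assumption standing in for whatever projection the gathered cache statistics support:

```python
def should_recommend_memory(miss_rate, demote_rate, extra_gib,
                            miss_reduction_per_gib):
    """Generate the 'add memory' message only when the active cache miss
    rate exceeds the demote rate AND the projected miss rate with the
    additional space would drop below the demote rate."""
    if miss_rate <= demote_rate:
        return False                          # the cache is keeping up
    projected = miss_rate - extra_gib * miss_reduction_per_gib
    return projected < demote_rate

# 500 misses/s vs. 400 demotes/s: 8 GiB projecting 340 misses/s warrants
# the message, while 2 GiB projecting 460 misses/s does not.
assert should_recommend_memory(500, 400, extra_gib=8, miss_reduction_per_gib=20)
assert not should_recommend_memory(500, 400, extra_gib=2, miss_reduction_per_gib=20)
```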
Abstract: An integrated circuit (IC) package apparatus is disclosed. The IC package includes one or more processing units and a bridge, mounted below the one or more processing units, including one or more arithmetic logic units (ALUs) to perform atomic operations.
Type:
Grant
Filed:
April 9, 2017
Date of Patent:
January 21, 2020
Assignee:
INTEL CORPORATION
Inventors:
Altug Koker, Farshad Akhbari, Feng Chen, Dukhwan Kim, Narayan Srinivasa, Nadathur Rajagopalan Satish, Liwei Ma, Jeremy Bottleson, Eriko Nurvitadhi, Joydeep Ray, Ping T. Tang, Michael Strickland, Xiaoming Chen, Tatiana Shpeisman, Abhishek R. Appu
Abstract: A method and apparatus receive packets, wherein the packets comprise headers, and the headers comprise session parameter values, route the packets in response to the session parameter values matching an active traffic session entry of the active traffic session entries in an active traffic session cache memory, match the session parameter values against historical session entries in an historical session cache memory in response to the session parameter values not matching any active traffic session entry of the active traffic session entries in the active traffic session cache memory, wherein the historical session entries for traffic sessions in the historical session cache memory persist after the traffic sessions are no longer active, and, in response to the session parameter values not matching any historical session entry of the historical session entries in the historical session cache memory, perform a packet security check on the packets.
Type:
Grant
Filed:
December 15, 2015
Date of Patent:
January 14, 2020
Assignee:
NXP USA, Inc.
Inventors:
Subhashini A. Venkataramanan, Srinivasa R. Addepalli
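The two-level session lookup above can be sketched as follows. The 4-tuple key, the promotion of a historical match back into the active cache, and all names are illustrative assumptions; the abstract only specifies which matches skip the security check:

```python
def handle_packet(header, active_cache, historical_cache, security_check, route):
    """Fast-path packets from known flows; only packets whose session
    parameters match neither an active nor a historical session entry
    receive the full packet security check."""
    key = (header["src"], header["dst"], header["sport"], header["dport"])
    if key in active_cache:              # active traffic session: route
        return route(header)
    if key in historical_cache:          # historical entries persist after
        active_cache[key] = "session"    # sessions end; reactivate and route
        return route(header)
    if not security_check(header):       # unknown flow: full security check
        return "dropped"
    active_cache[key] = historical_cache[key] = "session"
    return route(header)

active, hist = {}, {}
ok = lambda h: h["dport"] != 23          # toy check: reject telnet
routed = lambda h: "routed"
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80}
assert handle_packet(pkt, active, hist, ok, routed) == "routed"   # checked once
assert handle_packet(pkt, active, hist, ok, routed) == "routed"   # cache hit
assert handle_packet(dict(pkt, dport=23), active, hist, ok, routed) == "dropped"
```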
Abstract: Embodiments relate to efficiently replicating data from a source storage space to a target storage space. The storage spaces share a common namespace of paths where content units are stored. A shallow cache is maintained for the target storage space. Each entry in the cache includes a hash of a content unit in the target storage space and associated hierarchy paths in the target storage space where the corresponding content unit is stored. When a set of content units in the source storage space is to be replicated at the target storage space, any content unit with a hash in the cache is replicated from one of the associated paths in the cache, thus avoiding having to replicate content from the source storage space.
Abstract: Methods and systems are provided that increment one or more counters (including read command total, write command total, total blocks written and read, and low read or write queue depth) when a read or write command is received. When a request for a total device busy time is received, the total device busy time is determined and provided using one or more of the counters and one or more corresponding timing factors.
Type:
Grant
Filed:
June 1, 2018
Date of Patent:
January 14, 2020
Assignee:
MICRON TECHNOLOGY, INC.
Inventors:
Steven Gaskill, Kihoon Park, Yin Feng Zhang
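A sketch of the counters-times-timing-factors estimate described above. The counter set and the microsecond timing factors are illustrative assumptions; a real device would calibrate them:

```python
class BusyTimeCounters:
    """Approximate total device busy time from cheap per-command counters
    multiplied by per-counter timing factors (values are illustrative)."""

    # assumed average service cost in microseconds per counted unit
    TIMING_FACTORS_US = {"reads": 90.0, "writes": 250.0, "blocks": 1.5}

    def __init__(self):
        self.counters = {"reads": 0, "writes": 0, "blocks": 0}

    def record(self, op, blocks):
        self.counters[op] += 1             # read/write command totals
        self.counters["blocks"] += blocks  # total blocks written and read

    def busy_time_us(self):
        # busy time = sum over counters of (count * timing factor)
        return sum(self.counters[k] * f
                   for k, f in self.TIMING_FACTORS_US.items())

c = BusyTimeCounters()
c.record("reads", 8)
c.record("writes", 8)
assert c.busy_time_us() == 1 * 90.0 + 1 * 250.0 + 16 * 1.5   # 364.0
```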
Abstract: A system, a method, and a computer program product for performing buffering operations are provided. A data update is received at a buffering location. The buffering location includes a first buffer portion and a second buffer portion. The data update includes an address tag. The buffering location is communicatively coupled to a memory location configured to receive the data update. A target address of the data update in the memory location is determined using the first buffer portion and compared to the address tag. The data update is applied using the first buffer portion to update data in the first buffer portion upon determination that the target address matches the address tag. The target address of the data update is pre-fetched from the memory location upon determination that the target address does not match the address tag. The first and second buffer portions buffer the data update using the pre-fetched target address.
Abstract: Provided is an analysis apparatus including a first storage device configured to store data, and a processing circuitry that is configured to control the apparatus to function as: a dispatcher that is communicably connected to an analysis target device that performs operational processing by use of a processor and a memory unit, and generates collection target data for reproducing at least part of a state of the operational processing in the analysis target device, in accordance with data being transmitted and received between the processor and the memory unit; a data mapper that assigns, to one or more areas included in the collection target data, tag information for identifying the area; and a data writer that saves the one or more areas into the first storage device in accordance with a first policy defining a procedure of saving the collection target data into the first storage device.
Abstract: A distributed storage system may store data object instances in persistent storage and may cache keymap information for those data object instances. The system may cache a latest symbolic key entry for some user keys of the data object instances. When a request is made for the latest version of stored data object instances having a specified user key, the latest version may be determined dependent on whether a latest symbolic key entry exists for the specified user key, and keymap information for the latest version may be returned. When storing keymap information, a flag may be set to indicate that a corresponding latest symbolic key entry should be updated. The system may delete a latest symbolic key entry for a particular user key from the cache in response to determining that no other requests involving the keymap information for data object instances having the particular user key are pending.
Type:
Grant
Filed:
January 22, 2018
Date of Patent:
January 7, 2020
Assignee:
Amazon Technologies, Inc.
Inventors:
Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
Abstract: In an example, there is disclosed a computing apparatus, including a user notification interface; a context interface; and one or more logic elements forming a contextual privacy engine operable to: receive a notification; receive a context via the context interface; apply the context to the notification via a notification rule; and take an action via the user notification interface based at least in part on the applying. The contextual privacy engine may also be operable to mathematically incorporate user feedback into the notification rule. There is also described a method of providing a contextual privacy engine, and one or more computer-readable storage mediums having stored thereon executable instructions for providing a contextual privacy engine.
Type:
Grant
Filed:
August 27, 2015
Date of Patent:
January 7, 2020
Assignee:
McAfee, LLC
Inventors:
Raj Vardhan, Igor Tatourian, Dattatraya Kulkarni, Jeremy Bennett, Samrat Chitta, Reji Gopalakrishnan, Muralitharan Chithanathan
Abstract: A storage device includes a first memory which stores data including activation data necessary to activate a host device, a second memory, and a controller which performs writing and reading operations of data stored in the first memory based on a request from the host device; acquires address information including an address and data amount of data in the first memory, for which a read request was previously issued from the host device at activation of the host device; at activation of the storage device, reads data including at least the activation data from the first memory based on the address information and stores the data in the second memory; and in response to a read request issued from the host device, transmits the data stored in the second memory to the host device.
Abstract: Methods and systems for self-invalidating cachelines in a computer system having a plurality of cores are described. A first one of the plurality of cores requests to load a memory block from a cache memory local to the first one of the plurality of cores, which request results in a cache miss. This results in checking a read-after-write detection structure to determine if a race condition exists for the memory block. If a race condition exists for the memory block, the first one of the plurality of cores enforces program order at least between any older loads and any younger loads with respect to the load that detected the prior store, and one or more cache lines in the local cache memory are caused to be self-invalidated.
Abstract: A dynamic premigration protocol is implemented in response to a secondary tier returning to an operational state and an amount of data associated with a premigration queue of a primary tier exceeding a first threshold. The dynamic premigration protocol can comprise at least a temporary premigration throttling level. An original premigration protocol is implemented in response to an amount of data associated with the premigration queue decreasing below the first threshold.
Type:
Grant
Filed:
September 21, 2017
Date of Patent:
January 7, 2020
Assignee:
International Business Machines Corporation
Inventors:
Koichi Masuda, Katja I. Denefleh, Joseph M. Swingler
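The protocol switch described above reduces to a threshold test; the byte-based queue measure and the protocol names below are illustrative assumptions:

```python
def select_premigration_protocol(queue_bytes, threshold_bytes,
                                 secondary_tier_operational):
    """Apply a dynamic premigration protocol (with a temporary throttling
    level) when the secondary tier has returned to an operational state and
    the premigration queue exceeds the first threshold; revert to the
    original protocol once the queue drains below the threshold."""
    if secondary_tier_operational and queue_bytes > threshold_bytes:
        return "dynamic"   # includes the temporary premigration throttling level
    return "original"

assert select_premigration_protocol(12 << 30, 10 << 30, True) == "dynamic"
assert select_premigration_protocol(8 << 30, 10 << 30, True) == "original"
assert select_premigration_protocol(12 << 30, 10 << 30, False) == "original"
```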
Abstract: Provided are techniques for handling cache and Non-Volatile Storage (NVS) out of sync writes. At an end of a write for a cache track of a cache node, a cache node uses cache write statistics for the cache track of the cache node and Non-Volatile Storage (NVS) write statistics for a corresponding NVS track of an NVS node to determine that writes to the cache track and to the corresponding NVS track are out of sync. The cache node sets an out of sync indicator in a cache data control block for the cache track. The cache node sends a message to the NVS node to set an out of sync indicator in an NVS data control block for the corresponding NVS track. The cache node sets the cache track as pinned non-retryable due to the write being out of sync and reports possible data loss to error logs.
Type:
Grant
Filed:
September 5, 2017
Date of Patent:
December 31, 2019
Assignee:
International Business Machines Corporation
Inventors:
Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Beth A. Peterson
Abstract: In one implementation, relationship based cache resource naming and evaluation includes a generate engine to generate a name for a resource being added to a cache, including a plurality of resources, based on a plurality of parameters of a query, including an input resource from which the resource is derived, a workflow of the operations to perform on the input resource to generate the resource, and a context associated with the query. In addition, the system includes an evaluate engine to evaluate, in response to an event, each of the plurality of resources and the named resource related to the event.
Type:
Grant
Filed:
September 3, 2014
Date of Patent:
December 24, 2019
Assignee:
HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors:
Eric Deliot, Rycharde J. Hawkes, Luis Miguel Vaquero Gonzalez, Lawrence Wilcock
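One way to realize relationship-based naming is to hash the three query parameters into a deterministic cache name, so equal queries always map to the same resource. The SHA-256 over a canonical JSON form is an illustrative choice, not the patented scheme:

```python
import hashlib
import json

def resource_name(input_resource, workflow, context):
    """Derive a cache name for a resource from the query that produced it:
    the input resource it derives from, the workflow of operations applied
    to that input, and the context associated with the query."""
    payload = json.dumps(
        {"input": input_resource, "workflow": workflow, "context": context},
        sort_keys=True)                    # canonical, key-order-insensitive
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

name = resource_name("image-42", ["decode", "resize:256"], {"user": "alice"})
# the same query always yields the same name...
assert name == resource_name("image-42", ["decode", "resize:256"], {"user": "alice"})
# ...while changing the workflow yields a different one
assert name != resource_name("image-42", ["decode"], {"user": "alice"})
```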
Abstract: A method and a processing device are provided for sequentially aggregating data to a write log included in a volume of a random-access medium. When data of a received write request is determined to be suitable for sequentially aggregating to a write log, the data may be written to the write log and a remapping tree, for mapping originally intended destinations on the random-access medium to one or more corresponding entries in the write log, may be maintained and updated. At intervals, a checkpoint may be written to the write log. The checkpoint may include information describing entries of the write log. One or more of the checkpoints may be used to recover the write log, at least partially, after a dirty shutdown. Entries of the write log may be drained to respective originally intended destinations upon an occurrence of one of a number of conditions.
Type:
Grant
Filed:
August 4, 2016
Date of Patent:
December 17, 2019
Assignee:
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors:
Shi Cong, Scott Brender, Karan Mehra, Darren G. Moss, William R. Tipton, Surendra Verma
Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
Type:
Grant
Filed:
April 1, 2017
Date of Patent:
December 10, 2019
Assignee:
INTEL CORPORATION
Inventors:
Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition, the temporal cache partition may store data based on a topic associated with the data, and may be further partitioned into a plurality of topic portions, each portion may store data relating to an associated topic, wherein the associated topic may be selected from among determined topics of queries received by the computer system, and the data cache may retrieve data for the queries from the computer system.
Type:
Grant
Filed:
May 10, 2019
Date of Patent:
December 10, 2019
Assignee:
Georgetown University
Inventors:
Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
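The three-partition layout above can be sketched as follows. The per-topic slot count, the FIFO eviction within a topic portion, and the lookup order are illustrative assumptions; the abstract fixes only the partition structure:

```python
class TopicCache:
    """Query-result cache with static, dynamic, and temporal partitions;
    the temporal partition is sub-divided into one portion per topic."""

    def __init__(self, topics, per_topic_slots=2):
        self.static = {}                         # pinned entries, never evicted
        self.dynamic = {}                        # general recency-based pool
        self.temporal = {t: {} for t in topics}  # one portion per topic
        self.per_topic_slots = per_topic_slots

    def put_temporal(self, topic, query, result):
        portion = self.temporal[topic]
        if len(portion) >= self.per_topic_slots:   # evict oldest in-topic entry
            portion.pop(next(iter(portion)))
        portion[query] = result

    def get(self, topic, query):
        # check each partition, ending with the topic's temporal portion
        for partition in (self.static, self.dynamic,
                          self.temporal.get(topic, {})):
            if query in partition:
                return partition[query]
        return None

c = TopicCache(["sports"], per_topic_slots=2)
c.put_temporal("sports", "q1", "r1")
c.put_temporal("sports", "q2", "r2")
c.put_temporal("sports", "q3", "r3")    # portion full: q1 is evicted
assert c.get("sports", "q3") == "r3"
assert c.get("sports", "q1") is None
```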
Abstract: The present invention is directed to the technical field of storage and discloses an on-chip data partitioning read-write method, the method comprising: a data partitioning step for storing on-chip data in different areas, storing the on-chip data in an on-chip storage medium and an off-chip storage medium respectively, based on a data partitioning strategy; a pre-operation step for processing an on-chip address index of the on-chip storage data in advance when implementing data splicing; and a data splicing step for splicing the on-chip storage data and the off-chip input data to obtain a representation of the original data, based on a data splicing strategy. Also provided are a corresponding on-chip data partitioning read-write system and device. Reads and writes of repeated data can thus be realized efficiently, reducing memory-access bandwidth requirements while providing good flexibility and reducing on-chip storage overhead.
Type:
Grant
Filed:
August 9, 2016
Date of Patent:
December 3, 2019
Assignee:
INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
Inventors:
Tianshi Chen, Zidong Du, Qi Guo, Yunji Chen
Abstract: Embodiments of the present disclosure disclose a method and a device for webpage preloading. The method includes: conducting webpage preloading according to a current preloading policy, in which the preloading policy includes: a preloading time range, a preloading region, a preloading page depth, and an available caching space for preloading; compiling historical data within a pre-set time period, in which the historical data includes: information about an accessed webpage, information about a preloaded webpage, and state information of a local cache; and updating the preloading policy based on the historical data. In the present disclosure, by compiling the preloading historical data within a pre-set time period and updating the preloading policy automatically based on changes in the historical data, the preloading policy can adapt to network and user access conditions in real time, thereby improving the hit accuracy of webpage preloading.
Abstract: An electronic apparatus and an operating method thereof are provided. The electronic apparatus includes a memory which stores one or more instructions and a processor which executes the one or more instructions stored in the memory. The processor executes the instructions to obtain one or more contents to be pre-fetched, to obtain one or more resources available in the electronic apparatus, to determine a priority of the one or more resources, and to allocate one or more of the obtained resources, based on the determined priority, forming a pipeline in which the obtained one or more contents are processed.
Type:
Grant
Filed:
March 23, 2018
Date of Patent:
November 19, 2019
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Kil-soo Choi, Se-hyun Kim, Seung-bok Kim, Jae-im Park, Da-hee Jeong
Abstract: Methods, systems, and programs are presented for replicating data across scale-out storage systems. One method includes replicating, from an upstream to a downstream system, a volume snapshot having one or more bins. Locations for the bins of the snapshot are identified, the location for each bin including the upstream array storing the bin and the downstream array storing a replicated version of the bin. Each bin is validated by comparing an upstream bin checksum of the bin with a downstream bin checksum of the replicated version of the bin. When the checksums are different, a plurality of chunks are defined in the bin, and for each chunk in the bin an upstream chunk checksum calculated by the upstream array is compared with a downstream chunk checksum calculated by the downstream array. The chunk is sent from the upstream to the downstream array when the chunk checksums are different.
Type:
Grant
Filed:
November 24, 2015
Date of Patent:
November 5, 2019
Assignee:
Hewlett Packard Enterprise Development LP
Inventors:
Nimesh Bhagat, Tomasz Barszczak, Gurunatha Karaje
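The two-level checksum validation above can be sketched as follows. SHA-1 stands in for whatever checksum the arrays compute, and the sketch assumes equal-length upstream and downstream bins with fixed-size chunks; both are illustrative assumptions:

```python
import hashlib

def _checksum(data):
    """Stand-in for the bin/chunk checksum computed by each array."""
    return hashlib.sha1(data).hexdigest()

def replicate_bin(upstream_bin, downstream_bin, chunk_size=4):
    """Validate a replicated bin: if the whole-bin checksums differ, compare
    per-chunk checksums and send only the mismatched chunks downstream.
    Returns the repaired downstream bin and the number of chunks sent."""
    if _checksum(upstream_bin) == _checksum(downstream_bin):
        return downstream_bin, 0                 # bin already in sync
    out, sent = bytearray(downstream_bin), 0
    for off in range(0, len(upstream_bin), chunk_size):
        up = upstream_bin[off:off + chunk_size]
        down = bytes(out[off:off + chunk_size])
        if _checksum(up) != _checksum(down):     # only differing chunks travel
            out[off:off + chunk_size] = up
            sent += 1
    return bytes(out), sent

repaired, sent = replicate_bin(b"ABCDEFGH", b"ABCDxxGH", chunk_size=4)
assert repaired == b"ABCDEFGH" and sent == 1     # one 4-byte chunk resent
```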
Abstract: File measurements are computed and stored in persistent memory of a deduplicated storage system as files are written or on demand, where the file measurements are used to estimate storage requirements for storing a subset of files. The file measurements are accumulated into an initial measurement at a first point in time and a final measurement at a second point in time to obtain an estimate of any change in a quantity of unique segments required to store the subset of files in the deduplicated storage system between the first and second points in time. Future storage requirements can be estimated based on a computed rate of change in the amount of storage required to store the subset of files between the first and second points in time.
Abstract: Techniques and mechanisms described herein facilitate the transmission of a data stream from a client device to a networked storage system. According to various embodiments, a fingerprint for a data chunk may be identified by applying a hash function to the data chunk via a processor. The data chunk may be determined by parsing a data stream at the client device. A determination may be made as to whether the data chunk is stored in a chunk file repository at the client device. A block map update request message including information for updating a block map may be transmitted to a networked storage system via a network. The block map may identify a designated memory location at which the chunk is stored at the networked storage system.
Type:
Grant
Filed:
August 6, 2014
Date of Patent:
October 29, 2019
Assignee:
QUEST SOFTWARE INC.
Inventors:
Tarun K. Tripathy, Brian R. Smith, Abhijit S. Dinkar
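The client-side fingerprinting flow above can be sketched as follows. Fixed-size chunking and SHA-256 are illustrative choices (the abstract only says the stream is parsed into chunks and hashed), and the dict plays the role of the local chunk file repository:

```python
import hashlib

def process_stream(stream, repository, chunk_size=4):
    """Parse a data stream into chunks, fingerprint each chunk with a hash
    function, and build block-map update requests; chunks already present
    in the local chunk repository are not stored again."""
    updates, new_chunks = [], 0
    for off in range(0, len(stream), chunk_size):
        chunk = stream[off:off + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()   # fingerprint of the chunk
        if fp not in repository:                 # unseen chunk: store locally
            repository[fp] = chunk
            new_chunks += 1
        updates.append({"offset": off, "fingerprint": fp})  # block-map update
    return updates, new_chunks

repo = {}
updates, fresh = process_stream(b"aaaabbbbaaaa", repo, chunk_size=4)
assert fresh == 2                 # "aaaa" is deduplicated on its second occurrence
assert updates[0]["fingerprint"] == updates[2]["fingerprint"]
```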
Abstract: Systems, apparatuses and methods of adaptively controlling a cache operating voltage are provided that comprise receiving indications of a plurality of cache usage amounts. Each cache usage amount corresponds to an amount of data to be accessed in a cache by one of a plurality of portions of a data processing application. The plurality of cache usage amounts are determined based on the received indications of the plurality of cache usage amounts. A voltage level applied to the cache is adaptively controlled based on one or more of the plurality of determined cache usage amounts. Memory access to the cache is controlled to be directed to a non-failing portion of the cache at the applied voltage level.
Type:
Grant
Filed:
April 8, 2016
Date of Patent:
October 22, 2019
Assignees:
ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
Inventors:
Ihab Amer, Khaled Mammou, Haibo Liu, Edward Harold, Fabio Gulino, Samuel Naffziger, Gabor Sines, Lawrence A. Bair, Andy Sung, Lei Zhang
Abstract: A storage medium stores a program for a process including: converting, when a first instruction in an innermost loop of loop nests of a source code is executed, the source code such that a second instruction is executed which writes data in cache lines written by execution of the first instruction to be executed a count later in the innermost loop; calculating, when a first conversion code including the second instruction based on a first current iteration count is executed, a first value indicating a first rate; calculating, when a second conversion code including the first instruction based on a second current iteration count is executed, a second value indicating a second rate; comparing the first and second values; and converting a loop nest having the first value larger than the second value and a loop nest having the second value larger than the first value into the first and second conversion codes, respectively.
Abstract: Provided are an apparatus, system and method to determine whether to use a low or high read voltage. First level indications of write addresses, for locations in the non-volatile memory to which write requests have been directed, are included in a first level data structure. For a write address of the write addresses having a first level indication in the first level data structure, the first level indication of the write address is removed from the first level data structure and a second level indication for the write address is added to a second level data structure to free space in the first level data structure to indicate a further write address. A first voltage level is used to read data from read addresses mapping to one of the first and second level indications in the first and the second level data structures, respectively.
Type:
Grant
Filed:
December 30, 2016
Date of Patent:
October 22, 2019
Assignee:
INTEL CORPORATION
Inventors:
Zhe Wang, Zeshan A. Chishti, Muthukumar P. Swaminathan, Alaa R. Alameldeen, Kunal A. Khochare, Jason A. Gayman
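A sketch of the two-level write-address tracking above. The exact list for the first level, the per-block flags for the second level, and the mapping of recently written addresses to the low read voltage are all illustrative assumptions about this design:

```python
class WriteTracker:
    """Track recently written addresses in two levels: an exact, bounded
    first-level structure, and a coarser second level (per-block flags)
    that receives indications demoted to free first-level space."""

    def __init__(self, first_level_capacity=4, block_size=16):
        self.first = []                  # exact recent write addresses
        self.second = set()              # coarse per-block indications
        self.cap = first_level_capacity
        self.block = block_size

    def record_write(self, addr):
        if len(self.first) >= self.cap:  # first level full: demote the oldest
            old = self.first.pop(0)      # indication to the second level,
            self.second.add(old // self.block)   # freeing space for addr
        self.first.append(addr)

    def read_voltage(self, addr):
        # reads mapping to either level use the first (here: low) voltage
        recent = addr in self.first or (addr // self.block) in self.second
        return "low" if recent else "high"

t = WriteTracker(first_level_capacity=2, block_size=16)
for a in (0, 1, 2):
    t.record_write(a)
assert t.read_voltage(2) == "low"     # still indicated in the first level
assert t.read_voltage(0) == "low"     # demoted: covered by a block flag
assert t.read_voltage(100) == "high"  # never written
```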
Abstract: The disclosed computer-implemented method for distributing cache space may include (i) identifying workloads that make input/output requests to a storage system that comprises a cache that stores a copy of data recently written to the storage system, (ii) calculating a proportion of the cache that is occupied by data written to the cache by a workload, (iii) determining that the proportion of the cache that is occupied by the data written to the cache by the workload is disproportionate, and (iv) limiting the volume of input/output requests from the workload that will be accepted by the storage system in response to determining that the proportion of the cache that is occupied by the data written to the cache by the workload is disproportionate. Various other methods, systems, and computer-readable media are also disclosed.
Abstract: The present disclosure includes apparatuses and methods for an operating system cache in a solid state device (SSD). An example apparatus includes the SSD, which includes an In-SSD volatile memory, a non-volatile memory, and an interconnect that couples the non-volatile memory to the In-SSD volatile memory. The SSD also includes a controller configured to receive a request for performance of an operation and to direct that a result of the performance of the operation is accessible in the In-SSD volatile memory as an In-SSD main memory operating system cache.
Abstract: Example implementations relate to determining lengths of acknowledgment delays for input/output (I/O) commands. In example implementations, the length of the acknowledgment delay for a respective I/O command, applied after that command has been executed, may be based on cache availability and the activity level of the drive at which the command is directed. Acknowledgments for respective I/O commands may be transmitted after respective periods of time equal to the respective lengths of the acknowledgment delays have elapsed.
Type:
Grant
Filed:
April 30, 2014
Date of Patent:
October 22, 2019
Assignee:
Hewlett Packard Enterprise Development LP
Inventors:
Siamak Nazari, Srinivasa D Murthy, Jin Wang, Ming Ma
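The delay policy in the acknowledgment-delay abstract above can be sketched as a function of the two inputs it names. The linear combination and the constants are assumptions for illustration, not the patent's formula.

```python
def ack_delay_ms(cache_free_fraction, drive_activity, base_ms=1.0, max_ms=50.0):
    """Length of the acknowledgment delay for an executed I/O command, derived
    from cache availability and the activity level of the target drive: scarce
    cache plus a busy drive produces back-pressure via a longer delay."""
    pressure = (1.0 - cache_free_fraction) * drive_activity   # 0.0 .. 1.0
    return min(max_ms, base_ms + (max_ms - base_ms) * pressure)
```

The host then transmits each command's acknowledgment only after its computed delay has elapsed.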
Abstract: A technique for atomically moving a linked data element may include providing an atomic-move wrapper around the data element, along with an existence header whose status may be permanent, outgoing or incoming to indicate whether the data element is not in transition, or if in transition is either outgoing or incoming. The existence header may reference an existence group having a state field that changes state using a single store operation. A first state may indicate that the data element exists if its existence header is outgoing, and does not exist if its existence header is incoming. A second state may indicate that the data element exists if its existence header is incoming, and does not exist if its existence header is outgoing. Following the state change, the existence group and any atomic-move wrapper containing an outgoing existence header and data element may be freed following an RCU grace period.
Type:
Grant
Filed:
December 14, 2016
Date of Patent:
October 22, 2019
Assignee:
International Business Machines Corporation
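The atomic-move technique above hinges on one single-store state flip that simultaneously publishes the incoming copy and retires the outgoing one. The sketch below models the wrapper, the existence header, and the group state; the RCU grace period and the freeing of retired wrappers are omitted, and all names are illustrative.

```python
PERMANENT, OUTGOING, INCOMING = "permanent", "outgoing", "incoming"

class ExistenceGroup:
    """In state 'A', outgoing elements exist and incoming ones do not;
    after the single-store flip to state 'B', the roles reverse."""
    def __init__(self):
        self.state = "A"
    def commit_move(self):
        self.state = "B"   # the single store that publishes the whole move

class Wrapped:
    """Atomic-move wrapper: a data element plus an existence header that is
    permanent, outgoing, or incoming."""
    def __init__(self, data, status=PERMANENT, group=None):
        self.data, self.status, self.group = data, status, group
    def exists(self):
        if self.status == PERMANENT:
            return True        # not in transition
        if self.group.state == "A":
            return self.status == OUTGOING
        return self.status == INCOMING
```

A reader observing the pair before the flip sees only the old element; after the flip, only the new one; at no point does it see both or neither.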
Abstract: Technologies for dynamically allocating tiers of disaggregated memory resources include a compute device. The compute device is to obtain target performance data, determine, as a function of the target performance data, memory tier allocation data indicative of an allocation of disaggregated memory sleds to tiers of performance, in which one memory sled of one tier is to act as a cache for another memory sled of a subsequent tier, send the memory tier allocation data and the target performance data to the corresponding memory sleds through a network, receive performance notification data from one of the memory sleds in the tiers, and determine, in response to receipt of the performance notification data, an adjustment to the memory tier allocation data.
Type:
Grant
Filed:
June 30, 2017
Date of Patent:
October 15, 2019
Assignee:
Intel Corporation
Inventors:
Ginger H. Gilsdorf, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Mark A. Schmisseur
Abstract: A method, article of manufacture, and apparatus for providing a site cache in a distributed file system is discussed. Data objects may be read from a site cache rather than an authoritative object store. This provides performance benefits when a client reading the data has a better connection to the site cache than to the authoritative object store.
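The read path this site-cache abstract describes is a classic read-through pattern, sketched below with dicts standing in for the site cache and the authoritative object store.

```python
def read_object(key, site_cache, authoritative_store):
    """Serve a data object from the site cache when present; otherwise fall
    back to the authoritative object store and populate the site cache so
    later readers with a better connection to the site cache benefit."""
    if key in site_cache:
        return site_cache[key], "site-cache"
    value = authoritative_store[key]
    site_cache[key] = value   # warm the cache for the next reader
    return value, "authoritative"
```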
Abstract: A technique for operating a data processing system includes transitioning, by a cache, to a highest point of coherency (HPC) for a cache line in a required state without receiving data for one or more segments of the cache line that are needed. The cache issues a command to a lowest point of coherency (LPC) that requests data for the one or more segments of the cache line that were not received and are needed. The cache receives the data for the one or more segments of the cache line from the LPC that were not previously received and were needed.
Type:
Grant
Filed:
August 2, 2017
Date of Patent:
October 8, 2019
Assignee:
International Business Machines Corporation
Inventors:
Guy L. Guthrie, Michael S. Siegel, William J. Starke, Jeffrey A. Stuecheli
Abstract: A cache controller with a pattern recognition mechanism can identify patterns in cache lines. Instead of transmitting the entire data of the cache line to a destination device, the cache controller can generate a meta signal to represent the identified bit pattern. The cache controller transmits the meta signal to the destination in place of at least part of the cache line.
Type:
Grant
Filed:
December 22, 2016
Date of Patent:
October 1, 2019
Assignee:
Intel Corporation
Inventors:
Saher Abu Rahme, Christopher E. Cox, Joydeep Ray
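The meta-signal idea in the pattern-recognition abstract above can be sketched for the simplest recognizable pattern, a cache line filled with one repeated byte. The tuple wire format below is an illustrative assumption.

```python
def encode_line(line):
    """Cache-controller side: if the line is a single repeated byte, transmit
    a short meta signal (pattern id, value, length) in place of the data;
    otherwise transmit the full line."""
    if len(set(line)) == 1:
        return ("meta", line[0], len(line))
    return ("data", bytes(line))

def decode_line(msg):
    """Destination side: expand a meta signal back into the full line."""
    if msg[0] == "meta":
        _, value, length = msg
        return bytes([value]) * length
    return msg[1]
```

A zero-filled 64-byte line thus crosses the interconnect as a 3-field message rather than 64 bytes of payload.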
Abstract: A memory device includes a memory cell region including a plurality of memory cells; a memory cell controller configured to control read and write operation for the memory cell region; one or more NDP engines configured to perform a near data processing (NDP) operation for the memory cell region; a command buffer configured to store an NDP command transmitted from a host; and an engine scheduler configured to schedule the NDP operation for the one or more NDP engines according to the NDP command.
Type:
Grant
Filed:
July 20, 2017
Date of Patent:
October 1, 2019
Assignees:
SK hynix Inc., KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors:
Byungchul Hong, John Dongjun Kim, Jungho Ahn, Yongkee Kwon, Hongsik Kim
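The command buffer and engine scheduler named in the NDP abstract above can be sketched as a queue drained round-robin across the engines. The round-robin policy is an illustrative assumption; the patent's scheduler may be more elaborate.

```python
from collections import deque

class NdpScheduler:
    """NDP commands from the host land in a command buffer; the scheduler
    assigns each buffered command to the next NDP engine in rotation."""
    def __init__(self, num_engines):
        self.command_buffer = deque()
        self.engines = list(range(num_engines))
        self.next_engine = 0

    def submit(self, command):
        self.command_buffer.append(command)

    def schedule(self):
        """Drain the buffer, returning (engine_id, command) assignments."""
        plan = []
        while self.command_buffer:
            cmd = self.command_buffer.popleft()
            plan.append((self.next_engine, cmd))
            self.next_engine = (self.next_engine + 1) % len(self.engines)
        return plan
```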
Abstract: An electronic device includes a semiconductor memory. The semiconductor memory may include: a memory circuit comprising a plurality of memory cells; a read circuit configured to generate a first read data signal by reading data from a read target memory cell according to a first read control signal, the read target memory cell being among the plurality of memory cells; and a control circuit configured to control the read circuit to reread the data from the read target memory cell by generating a second read control signal, the second read control signal being based on a data value of the first read data signal.
Abstract: An apparatus comprises a plurality of memory units organized as a hierarchical memory system, wherein each of at least some of the memory units is associated with a processor element; predictor circuitry to perform a prediction process to determine a predicted redundancy period of result data of a data processing operation to be performed, indicating a predicted point when said result data will be next accessed; and an operation controller to cause a selected processor element to perform said data processing operation, wherein said selected processor element is selected based on said predicted redundancy period.
Type:
Grant
Filed:
October 4, 2017
Date of Patent:
September 24, 2019
Assignee:
ARM Limited
Inventors:
Prakash S. Ramrakhyani, Jonathan Curtis Beard
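The operation controller's selection step above can be sketched as matching the predicted redundancy period against the reuse window each level of the hierarchical memory system serves well: short-lived results run at a processor element near the top, long-lived ones further down. The window values are illustrative assumptions.

```python
def select_level(predicted_redundancy, level_capacities):
    """Pick the memory level (nearest first) whose reuse window covers the
    predicted redundancy period, i.e. the predicted time until the result
    data of the operation will be next accessed."""
    for level, window in enumerate(level_capacities):
        if predicted_redundancy <= window:
            return level
    return len(level_capacities) - 1   # farthest level as a fallback
```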
Abstract: A compressed motion compensated video sequence is decoded using reference pictures (R) and motion vectors for deriving intermediate pictures (I,B) from reference pictures. The maximum vertical extent of the motion vector corresponds to a number of lines in the image data. A picture derived from the reference picture and motion vectors is decoded once the vertical extent of the reference picture received exceeds the maximum vertical extent of a motion vector from a starting position. Further set(s) of motion vectors for deriving further picture(s) can be received and for each picture to be derived, the image data is decoded using a respective further set of motion vectors after an area of a respective reference picture has been decoded to a maximum vertical extent of a motion vector from a starting position.
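The decoding condition in the abstract above is mechanical: a row of a derived picture is safe to decode once the reference picture has been received past that row by the maximum vertical extent of a motion vector, because no vector can reach further down. A minimal sketch:

```python
def can_decode_row(derived_row, reference_rows_received, max_mv_vertical):
    """True once every reference row any motion vector starting at
    `derived_row` could point to has been received."""
    return reference_rows_received >= derived_row + max_mv_vertical

def first_decodable_rows(total_rows, reference_rows_received, max_mv_vertical):
    """Rows of the derived picture that may be decoded so far."""
    return [r for r in range(total_rows)
            if can_decode_row(r, reference_rows_received, max_mv_vertical)]
```

So with 6 reference rows in hand and a maximum vertical extent of 4 lines, rows 0 through 2 of the derived picture can already be decoded before the reference finishes arriving.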
Abstract: A technique relates to enabling a multiprocessor computer system to make a non-coherent request for a cache line. A first processor core sends a non-coherent fetch to a cache. In response to a second processor core having exclusive ownership of the cache line in the cache, the first processor core receives a stale copy of the cache line in the cache based on the non-coherent fetch. The non-coherent fetch is configured to obtain the stale copy for a predefined use. Cache coherency is maintained for the cache, such that the second processor core continues to have exclusive ownership of the cache line while the first processor core receives the stale copy of the cache line.
Type:
Grant
Filed:
November 8, 2017
Date of Patent:
September 17, 2019
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Jane H. Bartik, Nicholas C. Matsakis, Chung-Lung K. Shum, Craig R. Walters
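The key property of the non-coherent fetch above is that it returns a possibly stale snapshot without disturbing the owner's exclusive ownership, where a coherent fetch would first have to demote the owner. A minimal directory sketch, with illustrative names:

```python
class CoherentCache:
    """A directory records which core holds each line exclusively.  A
    non-coherent fetch returns the cache's current (possibly stale) copy for
    a predefined use and leaves exclusive ownership untouched, so coherency
    is maintained for the owning core."""
    def __init__(self):
        self.lines = {}   # addr -> (value, exclusive_owner)

    def store_exclusive(self, core, addr, value):
        self.lines[addr] = (value, core)

    def noncoherent_fetch(self, core, addr):
        value, _owner = self.lines[addr]
        return value      # stale copy; no ownership transfer, no invalidation

    def owner(self, addr):
        return self.lines[addr][1]
```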
Abstract: A first request is received to access a first set of data in a first cache. A likelihood that a second request to a second cache for the first set of data will be canceled is determined. Access to the first set of data is completed based on the determining the likelihood that the second request to the second cache for the first set of data will be canceled.
Type:
Grant
Filed:
July 13, 2017
Date of Patent:
September 17, 2019
Assignee:
International Business Machines Corporation
Inventors:
Willm Hinrichs, Markus Kaltenbach, Eyal Naor, Martin Recktenwald
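The likelihood determination in the cancellation abstract above can be sketched with a simple history-based estimator: track how often the speculative second request to the second cache ended up canceled, and complete the access from the first cache once the estimate crosses a threshold. The counter scheme and the 0.5 threshold are illustrative assumptions.

```python
class CancelPredictor:
    """Estimate the likelihood that a second request to the second cache for
    a set of data will be canceled, based on past outcomes."""
    def __init__(self, threshold=0.5):
        self.sent = 0
        self.canceled = 0
        self.threshold = threshold

    def record(self, was_canceled):
        self.sent += 1
        self.canceled += was_canceled

    def likely_canceled(self):
        return self.sent > 0 and self.canceled / self.sent > self.threshold

    def complete_from_first_cache(self):
        # Complete access to the first set of data based on the determined
        # likelihood that the second request will be canceled.
        return self.likely_canceled()
```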