Multiple Caches Patents (Class 711/119)
  • Patent number: 10503639
    Abstract: Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: December 10, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: David Carl Salyers, Pradeep Vincent, Ankur Khetrapal, Kestutis Patiejunas
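The cached-volume behavior described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not Amazon's implementation; the class and method names are assumptions. Writes land in local storage and are queued for asynchronous upload, while reads of uncached data are first downloaded from the remote service:

```python
# Hypothetical sketch of a write-back storage-gateway cache:
# writes are stored locally and queued for asynchronous upload;
# reads of uncached portions are downloaded from the remote service first.
class GatewayCache:
    def __init__(self, remote):
        self.remote = remote          # backing store: offset -> bytes
        self.local = {}               # locally cached chunk portions
        self.upload_queue = []        # modifications awaiting async upload

    def write(self, offset, data):
        self.local[offset] = data
        self.upload_queue.append((offset, data))  # uploaded later, not inline

    def read(self, offset):
        if offset not in self.local:              # cache miss: download first
            self.local[offset] = self.remote[offset]
        return self.local[offset]

    def flush_uploads(self):
        # Stands in for the asynchronous upload path.
        while self.upload_queue:
            offset, data = self.upload_queue.pop(0)
            self.remote[offset] = data
```

The key property is visible in usage: a write is acknowledged before the remote copy changes, and the remote only catches up when the upload path runs.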
  • Patent number: 10491865
    Abstract: Camera control and image streaming are described, including at least one camera or an apparatus associated with at least one camera. The camera or apparatus is configured to receive or detect a tag; determine that the tag is associated with the camera, which may be controlled by, managed by, or otherwise associated with the apparatus; determine that the tag is associated with a device or user; and establish a communication with the device or user based on the tag. The communication may include streaming a view through a lens of the camera to the device or user. The tag may allow the device or user to request capturing one or more images or a video using the camera. The tag may automatically trigger capturing of one or more images or a video using the camera.
    Type: Grant
    Filed: July 6, 2015
    Date of Patent: November 26, 2019
    Inventor: Louis Diep
  • Patent number: 10474576
    Abstract: Enabling a prefetch request to be controlled in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. One or more processors identify, based on a prefetch tag, a prefetch request that is associated with a prefetch instruction that is executed by a remote processor. The one or more processors generate the prefetch request in the remote processor according to a prefetch protocol. The prefetch request includes i) a description of at least one prefetch request operation and ii) prefetch request information. A local processor, of the one or more processors, receives the prefetch request from the remote processor.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: November 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 10467136
    Abstract: An in-memory cluster computing framework node is described. The node includes storage devices having various priorities, a resource monitor to monitor the operation of the storage devices, and a resource scheduler. When the resource monitor indicates that a storage device is at or approaching saturation, the resource scheduler can migrate data from that storage device to another storage device of lower priority.
    Type: Grant
    Filed: October 13, 2018
    Date of Patent: November 5, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Inseok Stephen Choi, Yang Seok Ki
  • Patent number: 10459889
    Abstract: Technologies are provided for using a multi-user execution plan cache to process database queries. A database query processor can be configured to store execution plans in a multi-user execution plan cache. The query processor can determine whether an execution plan is shareable by multiple database users. If the execution plan is shareable, it can be stored in the cache in association with a sharing user identifier. When a database query is received, the query processor can determine that the query can be performed using the cached execution plan. If the cached execution plan is shareable, the query processor can determine whether the cached execution plan is valid for the database user associated with the received database query. If the cached execution plan is valid for that database user, the query processor uses the cached execution plan to perform the query for that user.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: October 29, 2019
    Assignee: SAP SE
    Inventors: Jaeyun Noh, Taesik Yoon, Eun Kyung Chi
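The shareable-plan lookup described above can be sketched as below. This is not SAP's API; the class, field names, and the validation callback are assumptions. A plan's owner can always reuse it; other users can reuse it only if it was stored as shareable and passes a per-user validity check:

```python
# Minimal sketch (assumed names) of a multi-user execution plan cache:
# plans marked shareable are stored once and validated per user on reuse.
class PlanCache:
    def __init__(self):
        self.cache = {}   # query text -> (plan, shareable, sharing user)

    def put(self, query, plan, user, shareable):
        self.cache[query] = (plan, shareable, user)

    def get(self, query, user, is_valid_for):
        entry = self.cache.get(query)
        if entry is None:
            return None
        plan, shareable, owner = entry
        if user == owner:
            return plan                     # the owner can always reuse
        if shareable and is_valid_for(plan, user):
            return plan                     # shared reuse after validation
        return None
```

The `is_valid_for` callback stands in for whatever per-user checks (privileges, schema visibility) a real query processor would apply before sharing a plan.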
  • Patent number: 10445261
    Abstract: An apparatus is described. The apparatus includes a main memory controller having a point-to-point link interface to couple to a point-to-point link. The point-to-point link is to transport system memory traffic between the main memory controller and a main memory. The main memory controller includes at least one of: compression logic circuitry to compress write information prior to transmission over the link, or decompression logic circuitry to decompress read information after it is received from the link.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Kirk S. Yap, Daniel F. Cutter, Vinodh Gopal
  • Patent number: 10445237
    Abstract: In a data processing system, a store request is provided having corresponding store data and a corresponding access address, and a memory coherency required attribute corresponding to the access address of the store request is provided by a memory management unit (MMU). When the store request results in a write-through store due to a cache hit, or results in a cache miss, the corresponding access address and store data are stored in a selected entry of the store buffer, and a merge allowed indicator is stored in the selected entry which indicates whether or not the selected entry is a candidate for merging. The merge allowed indicator is determined based on the memory coherency required attribute from the MMU and a store buffer coherency enable control bit of the cache. Entries of the store buffer which include an asserted merge allowed indicator and share a memory line in the memory are merged.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: October 15, 2019
    Assignee: NXP USA, Inc.
    Inventor: Jeffrey William Scott
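The merge rule from this abstract can be sketched in a few lines. The field names, the 32-byte line size, and the representation of store data are illustrative assumptions, not NXP's design: entries whose merge-allowed indicator is asserted and that target the same memory line are combined into one buffer entry.

```python
# Hedged sketch of store-buffer merging: merge-allowed entries targeting the
# same memory line are coalesced. Field names and line size are illustrative.
LINE_SIZE = 32

def merge_store_buffer(entries):
    """entries: list of dicts with 'addr', 'data', 'merge_allowed'."""
    merged = []
    by_line = {}                              # line number -> mergeable entry
    for e in entries:
        line = e['addr'] // LINE_SIZE
        if e['merge_allowed'] and line in by_line:
            by_line[line]['data'][e['addr']] = e['data']   # coalesce
        else:
            entry = {'line': line,
                     'data': {e['addr']: e['data']},
                     'merge_allowed': e['merge_allowed']}
            merged.append(entry)
            if e['merge_allowed']:
                by_line[line] = entry
    return merged
```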
  • Patent number: 10416893
    Abstract: A method of operating a mobile device including an embedded storage having a first capacity includes recognizing, in an application processor of the mobile device, that an external storage is connected to the mobile device, and measuring a performance of the external storage in the application processor of the mobile device. When the performance is equal to or greater than a reference value, the embedded storage and the external storage are constituted into one parallel processing storage, and workload is allocated by a memory control module of the application processor to the embedded storage and the external storage based on a first performance of the embedded storage and a second performance of the external storage.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: September 17, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-Bum Lee, Seung-Yong Shin, Seok-Heon Lee
  • Patent number: 10409729
    Abstract: Control over the overall data cache hit rate is obtained by partitioning caching responsibility by address space. Data caches determine whether to cache data by hashing the data address. Each data cache is assigned a range of hash values to serve. By choosing hash value ranges that do not overlap, data duplication can be eliminated if desired, or degrees of overlap can be allowed. Hit rates of the data caches having the best hit response times are maximized by maintaining separate dedicated and undedicated partitions within each cache. The dedicated partition is used only for the assigned range of hash values.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Amnon Naamad, Sean Dolan
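The non-overlapping hash-range assignment described above is easy to sketch. The hash function, bucket count, and function names here are assumptions for illustration; the point is that disjoint ranges covering the hash space give each address exactly one owning cache, eliminating duplication:

```python
# Sketch (assumed names) of address-hash cache partitioning: each cache
# serves a non-overlapping hash range, so no data item is duplicated.
import hashlib

def hash_addr(addr, buckets=1024):
    # Any stable hash works; SHA-256 is used here only for determinism.
    h = hashlib.sha256(str(addr).encode()).digest()
    return int.from_bytes(h[:4], 'big') % buckets

def owner_cache(addr, ranges, buckets=1024):
    """ranges: list of (lo, hi) half-open hash ranges, one per cache."""
    hv = hash_addr(addr, buckets)
    for idx, (lo, hi) in enumerate(ranges):
        if lo <= hv < hi:
            return idx
    return None
```

Allowing the ranges to overlap, as the abstract notes, would reintroduce a controlled degree of duplication.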
  • Patent number: 10410693
    Abstract: A system includes a plurality of processors, each being coupled to each of remaining processors via a cluster of processor interconnects. The cluster of processor interconnects form a data distribution network. The system further includes a plurality of roots coupled to the processors, each root corresponding to one of the processors. Each root comprises a memory controller, one or more branches coupled to the memory controller, and a plurality of memory leaves coupled to the branches, each memory leaf having one or more solid state memory devices. Each of the branches is associated with one or more of the memory leaves and provides access to the associated memory leaves. Each of the processors can access any one of the memory leaves via a corresponding branch of any one of the roots over the data distribution network.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Frederic Roy Carlson, Jr., Mark Himelstein, Bruce Wilford, Dan Arai, David R. Emberson
  • Patent number: 10409614
    Abstract: One embodiment provides for a compute apparatus to perform machine learning operations, the compute apparatus comprising instruction decode logic to decode a single instruction including multiple operands into a single decoded instruction, the multiple operands having differing precisions, and a general-purpose graphics compute unit including a first logic unit and a second logic unit, the general-purpose graphics compute unit to execute the single decoded instruction, wherein to execute the single decoded instruction includes to perform a first instruction operation on a first set of operands of the multiple operands at a first precision and to simultaneously perform a second instruction operation on a second set of operands of the multiple operands at a second precision.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: September 10, 2019
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Barath Lakshmanan, Tatiana Shpeisman, Joydeep Ray, Ping T. Tang, Michael Strickland, Xiaoming Chen, Anbang Yao, Ben J. Ashbaugh, Linda L. Hurd, Liwei Ma
  • Patent number: 10404823
    Abstract: The described technology is directed towards a cache framework that accesses a tier of ordered caches, in tier order, to satisfy requests for data. The cache framework may be implemented at a front-end service level server, and/or a back end service level server, or both. The cache framework handles read-through and write-through operations, including handling batch requests for multiple data items. The cache framework also facilitates dynamically changing the tier structure, e.g., for adding, removing, replacing and/or reordering caches in the tier, e.g., by re-declaring a data structure such as an array that identifies the tiered cache configuration.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: September 3, 2019
    Assignee: HOME BOX OFFICE, INC.
    Inventors: Sata Busayarat, Jonathan David Lutz, Allen Arthur Gay, Mei Qi
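The tier-ordered read-through described above can be sketched in a few lines. This is a rough illustration, not HBO's framework; the function name and the use of plain dicts as caches are assumptions. Caches are probed in tier order; on a miss everywhere, the origin is consulted and every tier that missed is populated on the way back:

```python
# Minimal read-through sketch for a tiered cache: probe caches in tier
# order, fall back to the origin, then populate the tiers that missed.
def tiered_get(key, tiers, origin):
    """tiers: ordered list of dict-like caches; origin: key -> value."""
    missed = []
    for cache in tiers:
        if key in cache:
            value = cache[key]
            break
        missed.append(cache)
    else:
        value = origin(key)           # miss in every tier
    for cache in missed:              # write back into every tier that missed
        cache[key] = value
    return value
```

Because the tier structure is just an ordered list here, reordering, adding, or removing caches amounts to re-declaring the list, which mirrors the dynamic tier reconfiguration the abstract mentions.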
  • Patent number: 10402315
    Abstract: In an embodiment of the invention, a method comprises: requesting an update or modification on a control data in at least one flash block in a storage memory; requesting a cache memory; replicating, from the storage memory to the cache memory, the control data to be updated or to be modified; moving a clean cache link list to a dirty cache link list so that the dirty cache link list is changed to reflect the update or modification on the control data; and moving the dirty cache link list to a for flush link list and writing an updated control data from the for flush link list to a free flash page in the storage memory.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: September 3, 2019
    Assignee: BiTMICRO Networks, Inc.
    Inventors: Marvin Dela Cruz Fenol, Jik-Jik Oyong Abad, Precious Nezaiah Umali Pestano
  • Patent number: 10339455
    Abstract: Described are techniques that determine cumulative skew curves. A first model is determined that generates a predicted destination cumulative skew curve for a specified data set in a destination data storage system having a destination data movement granularity. The predicted destination cumulative skew curve is predicted by the first model in accordance with one or more inputs including a source cumulative skew curve for the specified data set in a source data storage system that uses a source data movement granularity. The source cumulative skew curve for the specified data set is determined based on observed data. First processing is performed using the first model. The first model generates as an output the predicted destination cumulative skew curve. The first processing includes providing the one or more inputs to the first model. Also described is how to generate the first model.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: July 2, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Anat Parush-Tzur, Nir Goldschmidt, Otniel van Handel, Arik Sapojnik, Oshry Ben-Harush, Assaf Natanzon
  • Patent number: 10339059
    Abstract: A flexible, scalable server is described. The server includes plural server nodes, each including processor cores and switching circuitry configured to couple the cores to a network among them. The cores implement networking functions within the compute nodes; these networking capabilities allow the cores to connect to each other and to offer a single interface to a network coupled to the server.
    Type: Grant
    Filed: April 7, 2014
    Date of Patent: July 2, 2019
    Assignee: Mellanox Technologies, Ltd.
    Inventor: Matthew Mattina
  • Patent number: 10332235
    Abstract: Devices for coordinating or establishing a direct memory access for a network interface card to a graphics processing unit, and for a network interface card to access a graphics processing unit via a direct memory access are disclosed. For example, a central processing unit may request a graphics processing unit to allocate a memory buffer of the graphics processing unit for a direct memory access by a network interface card and receive from the graphics processing unit a first confirmation of an allocation of the memory buffer. The central processing unit may further transmit to the network interface card a first notification of the allocation of the memory buffer of the graphics processing unit, poll the network interface card to determine when a packet is received by the network interface card, and transmit a second notification to the graphics processing unit that the packet is written to the memory buffer.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: June 25, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Brian S. Amento, Kermit Hal Purdy, Minsung Jang
  • Patent number: 10324646
    Abstract: A node controller-based request responding method and node controller are provided. The method includes: receiving, by a first node controller, a first packet; acquiring an information directory; querying, in the information directory, whether a memory address requested by the first packet is occupied by a second node controller; when the memory address requested by the first packet is occupied by the second node controller, querying node presence information to determine whether the second node controller exists; and, when it is determined that the second node controller does not exist, generating and sending an invalid response packet.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: June 18, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Gongyi Wang, Ben Chen, Yafei Zhao
  • Patent number: 10324520
    Abstract: Technologies for discontinuous execution include a compiler computing device and one or more target computing devices. The compiler computing device converts a computer program into a sequence of atomic transactions and coalesces the transactions to generate additional sequences of transactions. The compiler computing device generates an executable program including two or more sequences of transactions having different granularity. A target computing device selects an active sequence of transactions from the executable program based on the granularity of the sequence and a confidence level. The confidence level is indicative of available energy produced by an energy harvesting unit of the target computing device. The target computing device increases the confidence level in response to successfully committing transactions from the active sequence of transactions into non-volatile memory.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: June 18, 2019
    Assignee: Intel Corporation
    Inventor: Sara S. Baghsorkhi
  • Patent number: 10318419
    Abstract: Flush avoidance in a load store unit including launching a load instruction targeting an effective address; encountering a set predict hit and an effective-to-real address translator (ERAT) miss for the effective address, wherein the set predict hit comprises a cache address of a cache entry; sending a data valid message for the load instruction to an instruction sequencing unit; and verifying the data valid message, wherein verifying the data valid message comprises: tracking the cache entry during an ERAT update; and upon completion of the ERAT update, encountering an ERAT hit for the effective address in response to relaunching the load instruction.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Sundeep Chadha, David A. Hrusecky, Elizabeth A. McGlone, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
  • Patent number: 10289578
    Abstract: In an example, a method includes monitoring a memory bus for one or more commands sent by a memory controller to a memory device and determining whether the one or more commands have a value indicating an operation mode of the memory device. Information associated with the one or more commands may be assessed based on the operation mode, and the information may be stored to one or more registers of the memory controller. The operation mode may be a per dynamic random access memory (DRAM) addressability (PDA) mode, a per buffer addressability (PBA) mode, or a per rank mode. The information may include a first set of configuration values when the value indicates the PDA mode or the PBA mode, and a second set of configuration values when the value indicates the per rank mode.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: John S. Bialas, Jr., Stephen P. Glancy
  • Patent number: 10282263
    Abstract: A system and method are provided for processing to create a distributed volume in a distributed storage system during a failure that has partitioned the distributed volume (e.g. an array failure, a site failure and/or an inter-site network failure). In an embodiment, the system described herein may provide for continuing distributed storage processing in response to I/O requests from a source by creating the local parts of the distributed storage during the failure; when the remote site or inter-site network returns to availability, the remaining part of the distributed volume is automatically created. The system may include an automatic rebuild to make sure that all parts of the distributed volume are consistent again. The processing may be transparent to the source of the I/O requests.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: May 7, 2019
    Assignee: EMC IP Holding Company LLC
    Inventor: Roel van der Goot
  • Patent number: 10268503
    Abstract: Techniques disclosed herein generally describe providing fault tolerance in a virtual machine cluster using hardware transactional memory. According to one embodiment, a micro-checkpointing tool suspends execution of a virtual machine instance on a primary server. The micro-checkpointing tool identifies one or more memory pages associated with the virtual machine instance that were modified since a previous synchronization. The micro-checkpointing tool maps a first task to an operation to be performed on a memory of the primary server, where the first task is to resume the virtual machine instance. The micro-checkpointing tool also maps a second task to an operation to be performed on the memory of the primary server, where the second task is to copy the identified memory pages associated with the virtual machine instance to a secondary server. The first and second tasks are then performed on the memory.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: April 23, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Makoto Ono
  • Patent number: 10242075
    Abstract: A database apparatus may include a database unit configured to store first and second data groups classified based on a data attribute, a first caching unit associated with the first data group and including a first cache architecture, and a second caching unit associated with the second data group and including a second cache architecture.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: March 26, 2019
    Assignee: LG CNS CO., LTD.
    Inventors: Ui Jin Lim, Sung Jun Jung, Jeong Min Ju
  • Patent number: 10241916
    Abstract: Provided are an apparatus, system, and method for sparse superline removal. In response to occupancy of a replacement tracker (RT) exceeding an RT eviction watermark, an eviction process is triggered for evicting a superline from a sectored cache storing at least one superline. An eviction candidate is selected from superlines that have: 1) a sector usage below or equal to a superline low watermark and 2) an RT timestamp that is greater than a superline age watermark.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 26, 2019
    Assignee: INTEL CORPORATION
    Inventors: Zvika Greenfield, Zeshan A. Chishti, Israel Diamand
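The two-watermark candidate test from this abstract can be sketched directly. The field names are assumptions, and the RT timestamp comparison is interpreted here as an age check (current time minus timestamp exceeding the age watermark); the patent's exact timestamp semantics may differ:

```python
# Illustrative sketch of sparse-superline eviction candidacy: a superline
# qualifies when its sector usage is at or below the low watermark AND it
# is older than the age watermark. Field names are assumptions.
def eviction_candidates(superlines, now, low_watermark, age_watermark):
    """superlines: list of dicts with 'sectors_used' and 'rt_timestamp'."""
    return [s for s in superlines
            if s['sectors_used'] <= low_watermark
            and now - s['rt_timestamp'] > age_watermark]
```

Requiring both conditions keeps densely used superlines and recently allocated sparse ones in the cache, evicting only lines that are both sparse and stale.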
  • Patent number: 10223309
    Abstract: The embodiments described herein describe technologies of dynamic random access memory (DRAM) components for high-performance, high-capacity registered memory modules, such as registered dual in-line memory modules (RDIMMs). One DRAM component may include a set of memory cells and steering logic. The steering logic may include a first data interface and a second data interface. The first and second data interfaces are selectively coupled to a controller component in a first mode and the first data interface is selectively coupled to the controller component in a second mode and the second data interface is selectively coupled to a second DRAM component in the second mode.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: March 5, 2019
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, Ely Tsern, John Eric Linstadt, Thomas J. Giovannini, Kenneth L. Wright
  • Patent number: 10216417
    Abstract: A Solid State Drive (SSD) is disclosed. The SSD may include a flash memory to store data and support for a number of device streams. The SSD may also include an SSD controller to manage reading data from and writing data to the flash memory. The SSD may also include a host interface logic, which may include a receiver to receive the commands associated with software streams from a host, a timer to time a window, a statistics collector to determine values for at least one criterion for the software streams from the commands, a ranker to rank the software streams according to the values, and a mapper to establish a mapping between the software streams and device streams.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: February 26, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hingkwan Huen, Changho Choi
  • Patent number: 10204060
    Abstract: Provided are a computer program product, system, and method for determining memory access categories to use to assign tasks to processor cores to execute. A computer system has a plurality of cores; each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory. A task is processed to determine one of the cores on which to dispatch the task. A memory access category of a plurality of memory access categories is determined to which the processed task is assigned. The processed task is dispatched to the core assigned the determined memory access category.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: February 12, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M Gupta, Matthew J. Kalos, Trung N. Nguyen
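The category-based dispatch above reduces to a simple matching step, sketched below. The category names and fallback policy are assumptions; a real dispatcher would also balance load within a category:

```python
# Sketch (assumed names) of dispatching a task to the core whose assigned
# memory access category matches the task's category.
def dispatch(task_category, cores):
    """cores: list of dicts with 'id' and 'category'. Returns a core id,
    falling back to the first core when no category matches."""
    for core in cores:
        if core['category'] == task_category:
            return core['id']
    return cores[0]['id']
```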
  • Patent number: 10191959
    Abstract: Methods and apparatus for versioned read-only snapshots of shared state for distributed applications are disclosed. A distributed system includes a state manager implementing programmatic interfaces defining caching operations. In response to a cache setup request from a process of a distributed client application, the state manager designates elements of a registry as a cache data set, and provides the client process a reference to an asynchronously updated cache object. The state manager initiates a sequence of asynchronous update notifications to the cache object, wherein each notification includes (a) updated contents of an element of the cache data set, and (b) a cache version identifier based at least in part on a registry logical timestamp value indicative of a time at which the element was updated.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: January 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Long X. Nguyen, Allan H. Vermeulen
  • Patent number: 10185560
    Abstract: An apparatus is described that includes an execution unit having a multiply add computation unit, a first ALU logic unit and a second ALU logic unit. The execution unit is to perform first, second, third and fourth instructions. The first instruction is a multiply add instruction. The second instruction is to perform parallel ALU operations with the first and second ALU logic units operating simultaneously to produce different respective output resultants of the second instruction. The third instruction is to perform sequential ALU operations with one of the ALU logic units operating from an output of the other of the ALU logic units to determine an output resultant of the third instruction. The fourth instruction is to perform an iterative divide operation in which the first ALU logic unit and the second ALU logic unit operate to determine first and second division resultant digit values.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: January 22, 2019
    Assignee: Google LLC
    Inventors: Artem Vasilyev, Jason Rupert Redgrave, Albert Meixner, Ofer Shacham
  • Patent number: 10181171
    Abstract: A technique to share execution resources. In one embodiment, a CPU and a GPU share resources according to workload, power considerations, or available resources by scheduling or transferring instructions and information between the CPU and GPU.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: January 15, 2019
    Assignee: Intel Corporation
    Inventors: Eric Sprangle, Matt Craighead, Chris Goodman, Belliappa Kuttanna
  • Patent number: 10175910
    Abstract: Implementations of the present disclosure involve a system and/or method for gracelessly rebooting a storage appliance. In association with an event that will result in the loss of a state table from volatile memory, the storage appliance halts changes to at least one of its state tables. The state tables describe a plurality of file system states of one or more clients connected to the storage appliance. The state information is written to a persistent memory of the storage appliance, and the state table may then be repopulated using the state table information stored in persistent memory.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: January 8, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Jeffrey Anderson Smith, Piyush Shivam, William Eugene Baker
  • Patent number: 10169240
    Abstract: Systems and methods for managing memory access bandwidth include a spatial locality predictor. The spatial locality predictor includes a memory region table with prediction counters associated with memory regions of a memory. When cache lines are evicted from a cache, the sizes of the cache lines which were accessed by a processor are used for updating the prediction counters. Depending on the values of the prediction counters, the sizes of cache lines which are likely to be used by the processor are predicted for the corresponding memory regions. Correspondingly, the memory access bandwidth between the processor and the memory may be reduced by fetching smaller data (e.g., a half cache line) rather than a full cache line when the size of the cache line likely to be used is predicted to be less than that of the full cache line. Prediction counters may be incremented or decremented by different amounts depending on access bits corresponding to portions of a cache line.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: January 1, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Brandon Harley Anthony Dwiel, Harold Wade Cain, III, Shivam Priyadarshi
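The per-region prediction counters above can be sketched as a small class. The region size, counter width, and asymmetric increment amounts are illustrative assumptions, not QUALCOMM's values; the mechanism is what matters: eviction-time feedback trains a per-region counter that then selects the fetch size.

```python
# Hedged sketch of a spatial locality predictor: per-region saturating
# counters trained on eviction-time usage select half- vs full-line fetches.
class SpatialPredictor:
    def __init__(self, region_bits=12, max_count=7):
        self.counters = {}              # region id -> saturating counter
        self.region_bits = region_bits
        self.max_count = max_count

    def _region(self, addr):
        return addr >> self.region_bits

    def on_evict(self, addr, used_half_only):
        # Asymmetric updates (amounts illustrative): decrement toward
        # "half line", increment faster toward "full line".
        r = self._region(addr)
        c = self.counters.get(r, self.max_count)
        self.counters[r] = (max(0, c - 1) if used_half_only
                            else min(self.max_count, c + 2))

    def predict_fetch_bytes(self, addr, line_size=64):
        # Unseen regions conservatively default to fetching the full line.
        c = self.counters.get(self._region(addr), self.max_count)
        return line_size if c > self.max_count // 2 else line_size // 2
```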
  • Patent number: 10162757
    Abstract: A distributed shared-memory system includes several nodes that each have one or more processor cores, caches, local main memory, and a directory. Each node further includes predictors that use historical memory access information to predict future coherence permission requirements and speculatively initiate coherence operations. In one embodiment, predictors are included at processor cores for monitoring a memory access stream (e.g., historical sequence of memory addresses referenced by a processor core) and predicting addresses of future accesses. In another embodiment, predictors are included at the directory of each node for monitoring memory access traffic and coherence-related activities for individual cache lines to predict future demands for particular cache lines. In other embodiments, predictors are included at both the processor cores and directory of each node.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: December 25, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nuwan Jayasena, Yasuko Eckert
  • Patent number: 10146688
    Abstract: An embodiment of a cache apparatus may include a first cache memory, a second cache memory, and a cache controller communicatively coupled to the first cache memory and the second cache memory to allocate cache storage for clean data from one of either the first cache memory or the second cache memory, and allocate cache storage for dirty data from both the first cache memory and the second cache memory. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: December 4, 2018
    Assignee: Intel Corporation
    Inventors: Maciej Kaminski, Andrzej Jakowski, Piotr Wysocki
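The allocation policy in this abstract can be illustrated in a few lines. The function name, dict-based caches, and the "pick the emptier cache" rule for clean data are assumptions; the abstract only specifies that clean data goes to one of the two caches while dirty data goes to both:

```python
# Minimal sketch of the clean/dirty allocation policy: clean data lives in
# either cache, dirty data is allocated in both. Names are illustrative.
def allocate(key, data, dirty, cache_a, cache_b):
    if dirty:
        cache_a[key] = data          # dirty: both caches hold a copy
        cache_b[key] = data
    else:
        # clean: one cache is enough; pick the emptier one (assumed policy)
        target = cache_a if len(cache_a) <= len(cache_b) else cache_b
        target[key] = data
```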
  • Patent number: 10148593
    Abstract: A first example provides a circuit configured to operate in four modes. A first mode includes propagating data from a first terminal of the circuit to a second terminal of the circuit. A second mode includes propagating data from the second terminal of the circuit to the first terminal of the circuit. A third mode includes storing data received by the first terminal. A fourth mode includes storing data received by the second terminal. A second example provides a circuit configured to cause one or more communication links to operate in one of two modes based on data traffic detected on the one or more communication links. The first mode includes propagating data from a first router to a second router. The second mode includes propagating data to the first router from the second router.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: December 4, 2018
    Assignees: Ohio University, Arizona Board of Regents on Behalf of the University of Arizona
    Inventors: Avinash Karanth Kodi, Dominic Ditomaso, Ahmed Louri
  • Patent number: 10114749
    Abstract: A cache memory system is provided. The cache memory system includes multiple upper level caches and a current level cache. Each upper level cache includes multiple cache lines. The current level cache includes an exclusive tag random access memory (Exclusive Tag RAM) and an inclusive tag random access memory (Inclusive Tag RAM). The Exclusive Tag RAM is configured to preferentially store an index address of a cache line that is in each upper level cache and whose status is unique dirty (UD). The Inclusive Tag RAM is configured to store an index address of a cache line that is in each upper level cache and whose status is unique clean (UC), shared clean (SC), or shared dirty (SD).
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: October 30, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zhenxi Tu, Jing Xia
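The routing rule in this abstract is simple to state in code: index addresses for unique-dirty (UD) lines preferentially go to the Exclusive Tag RAM, while UC/SC/SD lines go to the Inclusive Tag RAM. A sketch of that dispatch (function name assumed):

```python
def tag_ram_for(state):
    """Pick the tag RAM for an upper-level cache line by coherence state."""
    if state == "UD":
        return "exclusive"          # unique dirty: Exclusive Tag RAM
    if state in ("UC", "SC", "SD"):
        return "inclusive"          # unique/shared clean, shared dirty
    return None                     # state not covered by the abstract
```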
  • Patent number: 10114577
    Abstract: A data reading method, a data processing device, and a data processing system are provided. The method, executed by a first control node, includes receiving a reading message forwarded by a data switching device, where the reading message is used to instruct the first control node to read first data, and the reading message is sent by a second control node to the data switching device; if a data status identifier of the first data in a first storage node is a valid identifier, reading the first data from the first storage node, and sending the read first data to the data switching device, so that the data switching device forwards the read first data to the second control node, where the valid identifier indicates that the first data on the first storage node is available. The present application ensures that the latest first data in the node group is read.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: October 30, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Hongkuan Liu, Laijun Zhong, Jiang Tan
  • Patent number: 10102179
    Abstract: A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: October 16, 2018
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: John Shalf, David Donofrio, Leonid Oliker
  • Patent number: 10083126
    Abstract: An apparatus and method are provided for avoiding conflicting entries in a storage structure. The apparatus comprises a storage structure having a plurality of entries for storing data, and allocation circuitry, responsive to a trigger event for allocating new data into the storage structure, to determine a victim entry into which the new data is to be stored, and to allocate the new data into the victim entry upon determining that the new data is available. Conflict detection circuitry is used to detect when the new data will conflict with data stored in one or more entries of the storage structure, and to cause the data in said one or more entries to be invalidated. The conflict detection circuitry is arranged to perform, prior to a portion of the new data required for conflict detection being available, at least one initial stage detection operation to determine, based on an available portion of the new data, candidate entries whose data may conflict with the new data.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: September 25, 2018
    Assignee: ARM Limited
    Inventors: Richard F Bryant, Max John Batley, Lilian Atieno Hutchins, Sujat Jamil
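The initial-stage detection operation compares only the portion of the new data that is already available against stored entries, narrowing down candidate conflicts before the full data arrives. A sketch using a bitmask over the available tag bits (the mask-based comparison is an assumption for illustration):

```python
def candidate_conflicts(entries, partial_tag, available_mask):
    """Initial-stage detection: match only the available bits of the new tag.

    Returns indices of entries that may conflict; a later stage would
    re-check them once the full tag is available.
    """
    return [i for i, tag in enumerate(entries)
            if (tag & available_mask) == (partial_tag & available_mask)]
```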
  • Patent number: 10084854
    Abstract: The present disclosure is directed to reducing response latency in fixed allocation content selection infrastructure. An allocator engine selects a content campaign for offline selection based on an allocation metric for the content campaign. A load balancer component identifies, in a distributed computing environment and based on resource utilization information, a computation resource and a time window during which to launch the offline selection. A content selector component launches, during the time window, the offline selection and generates candidate impression criteria. The content selector component receives a request for content via a computer network. Responsive to the request matching the candidate impression criteria, the content selector component disables a real-time selection for the request. The content selector component transmits instructions to render a content item object corresponding to the matching candidate impression criteria generated during the offline selection.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: September 25, 2018
    Assignee: Google LLC
    Inventors: Justin Lewis, Gavin James
  • Patent number: 10061719
    Abstract: A plurality of completed writes to memory are identified corresponding to a plurality of write requests from a host device received over a buffered memory interface. A completion packet is sent to the host device that includes a plurality of write completions to correspond to the plurality of completed writes.
    Type: Grant
    Filed: December 25, 2014
    Date of Patent: August 28, 2018
    Assignee: Intel Corporation
    Inventors: Brian S. Morris, Jeffrey C. Swanson, Bill Nale, Robert G. Blankenship, Jeff Willey, Eric L. Hendrickson
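The key idea is batching: one completion packet carries write completions for every completed write, rather than one packet per write. A minimal sketch of gathering finished writes into such a packet (data shapes assumed):

```python
def collect_completions(pending_writes):
    """Drain all finished writes into a single completion packet.

    pending_writes maps a write-request id to a bool (finished or not).
    """
    done = [wid for wid, finished in pending_writes.items() if finished]
    for wid in done:
        del pending_writes[wid]
    # One packet covers every completed write observed so far.
    return {"type": "completion", "completions": done}
```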
  • Patent number: 10057447
    Abstract: A content processing apparatus includes a content processing device, a controller and an apparatus memory storing therein a program including an analysis module, a first obtaining module and a second obtaining module. The analysis module causes the apparatus to perform an extraction processing, a first determination processing, a first obtaining processing, a second obtaining processing, a second determination processing, a display processing, a reception processing and an operation instruction processing. The first obtaining module causes the content processing apparatus to perform a transmission processing, a reception processing and a first transfer processing. The second obtaining module causes the content processing apparatus to perform an obtaining processing and a second transfer processing.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: August 21, 2018
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventor: Tetsuya Okuno
  • Patent number: 10055150
    Abstract: In an embodiment of the invention, a method comprises: requesting an update on a control data in at least one flash block in a storage memory; replicating, from the storage memory to a cache memory, the control data to be updated; moving a clean cache link list to a dirty cache link list so that the dirty cache link list is changed to reflect the update on the control data; and moving the dirty cache link list to a for-flush link list and writing an updated control data from the for-flush link list to a free flash page in the storage memory.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: August 21, 2018
    Assignee: BiTMICRO Networks, Inc.
    Inventors: Marvin Dela Cruz Fenol, Jik-Jik Oyong Abad, Precious Nezaiah Umali Pestano
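The claimed update flow moves control data through three link lists: replicated data starts on a clean list, moves to a dirty list when updated, then to a for-flush list from which it is written to a free flash page. A sketch of that flow (class and method names assumed):

```python
class ControlDataCache:
    """Sketch of the clean -> dirty -> for-flush link-list flow."""

    def __init__(self):
        self.clean, self.dirty, self.for_flush = [], [], []

    def replicate(self, block):
        # Copy the control data to be updated from flash into the cache.
        self.clean.append(block)

    def update(self, block, new_value):
        # Move the entry from the clean list to the dirty list so the
        # dirty list reflects the update on the control data.
        self.clean.remove(block)
        self.dirty.append((block, new_value))

    def flush(self, free_pages):
        # Move dirty entries to the for-flush list, then write each
        # updated entry to a free flash page in the storage memory.
        self.for_flush, self.dirty = self.dirty, []
        written = [(free_pages.pop(0), value) for _, value in self.for_flush]
        self.for_flush = []
        return written
```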
  • Patent number: 10051079
    Abstract: A method and apparatus for utilizing a session service cache to provide a session to a client device are provided. In the method and apparatus, a cache is populated with a plurality of aspects of data pertaining to a communication session between a session service and the client device. A request to retrieve an aspect of the data is received from a backend service and the backend service is identified based at least in part on the request. The aspect of the plurality of aspects corresponding to the backend service is retrieved and provided to the backend service.
    Type: Grant
    Filed: November 13, 2014
    Date of Patent: August 14, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Christopher Vincent Kaukl, Geoffrey Scott Pare, Mohanish Hemant Kulkarni
  • Patent number: 10037281
    Abstract: An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: July 31, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pradeep Bisht, Jiurong Cheng
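The two-sided policy during defragmentation is: writes bypass the cache and go straight to the target disk; reads that hit the cache return the data but mark the cached copy for discard, since it will be stale once defragmentation moves blocks. A sketch (dict-based stores and names assumed):

```python
def handle_request_during_defrag(req, target, cache, discard_marks):
    """Handle a target-disk request while defragmentation is in progress."""
    if req["op"] == "write":
        # Write to the target only; do not write the caching device.
        target[req["addr"]] = req["data"]
        return None
    if req["addr"] in cache:
        # Serve the read from cache, but mark the copy for later discard.
        discard_marks.add(req["addr"])
        return cache[req["addr"]]
    return target.get(req["addr"])
```

Marked entries would then be discarded "when time permits", e.g. after defragmentation completes.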
  • Patent number: 10037211
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices, where each load/store slice includes a load miss queue and a load reorder queue, includes: receiving, at a load reorder queue, a load instruction requesting data; responsive to the data not being stored in a data cache, determining whether a previous load instruction is pending a fetch of a cache line comprising the data; if the cache line does not comprise the data, allocating an entry for the load instruction in the load miss queue; and if the cache line does comprise the data: merging, in the load reorder queue, the load instruction with an entry for the previous load instruction.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: July 31, 2018
    Assignee: International Business Machines Corporation
    Inventors: Kimberly M. Fernsler, David A. Hrusecky, Hung Q. Le, Elizabeth A. McGlone, Brian W. Thompto
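The merge decision above can be sketched compactly: on a data-cache miss, if a previous load is already fetching the cache line covering this address, the new load merges with that entry instead of allocating a fresh load-miss-queue entry. The 64-byte line size and all names are assumptions:

```python
LINE_BYTES = 64  # assumed cache-line size

def handle_load_miss(addr, pending_lines, load_miss_queue, reorder_merges):
    """Merge a missing load with an in-flight fetch of the same line, or
    allocate a new load-miss-queue entry."""
    line = addr // LINE_BYTES
    if line in pending_lines:
        # A previous load is already fetching this cache line: merge in
        # the load reorder queue with that entry.
        reorder_merges.setdefault(line, []).append(addr)
    else:
        # No in-flight fetch covers this address: allocate an LMQ entry.
        pending_lines.add(line)
        load_miss_queue.append(line)
```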
  • Patent number: 10019367
    Abstract: A method includes outputting, at a processor, a command and an address to the memory module, receiving, from the memory module, match/unmatch bits indicating results of comparing a tag corresponding to the address with tags stored in the memory module, determining, at the processor, a cache hit/miss from the match/unmatch bits by using majority voting, and outputting, at the processor, the determined cache hit/miss to the memory module.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: July 10, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seongil O, Chankyung Kim, Jongpil Son
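The hit/miss decision itself is a majority vote over the match/unmatch bits returned by the memory module, which tolerates individual bit errors on the interface. A minimal sketch:

```python
def cache_hit_by_majority(match_bits):
    """Decide hit vs. miss by majority vote over match/unmatch bits.

    Each element is 1 (match) or 0 (unmatch); the majority wins.
    """
    ones = sum(match_bits)
    return ones > len(match_bits) - ones
```

With three redundant bits, any single flipped bit still yields the correct decision.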
  • Patent number: 10019371
    Abstract: A system and method for retrieving cached data are disclosed herein. The system includes a cache server including a local memory and a table residing on the local memory, wherein the table is used to identify data objects corresponding to cached data. The system also includes the data objects residing on the local memory, wherein the data objects contain pointers to the cached data. The system further includes a remote memory communicatively coupled to the cache server through an Input-Output (I/O) connection, wherein the cached data resides on the remote memory.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: July 10, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kevin T. Lim, Alvin AuYoung
  • Patent number: 10019370
    Abstract: A computer cache memory organization called Probabilistic Set Associative Cache (PAC) has the hardware complexity and latency of a direct-mapped cache but functions as a set-associative cache for a fraction of the time, thus yielding better than direct-mapped cache hit rates. The organization is considered a (1+P)-way set-associative cache, where the chosen parameter called Override Probability P determines the average associativity; for example, with P=0.1 it effectively operates as a 1.1-way set-associative cache.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: July 10, 2018
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, John S. Dodson, Moinuddin K. Qureshi, Balaram Sinharoy
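One way to read the (1+P)-way scheme: a lookup normally probes only the direct-mapped primary way, and with Override Probability P it also probes an alternate way, giving average associativity 1+P. The sketch below is an interpretation for illustration, not the patented mechanism, and takes the random source as a parameter so behavior is testable:

```python
def pac_lookup(set_ways, tag, p_override, rng):
    """Probabilistic set-associative lookup.

    set_ways[0] is the direct-mapped way; set_ways[1] is the alternate
    way, probed only with probability p_override. rng() returns a float
    in [0, 1).
    """
    if set_ways[0] == tag:
        return 0                            # direct-mapped hit
    if rng() < p_override and len(set_ways) > 1 and set_ways[1] == tag:
        return 1                            # override: alternate-way hit
    return None                             # miss
```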
  • Patent number: 10007523
    Abstract: In a decode stage of hardware processor pipeline, one particular instruction of a plurality of instructions is decoded. It is determined that the particular instruction requires a memory access. Responsive to such determination, it is predicted whether the memory access will result in a cache miss. The predicting in turn includes accessing one of a plurality of entries in a pattern history table stored as a hardware table in the decode stage. The accessing is based, at least in part, upon at least a most recent entry in a global history buffer. The pattern history table stores a plurality of predictions. The global history buffer stores actual results of previous memory accesses as one of cache hits and cache misses.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: June 26, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vijayalakshmi Srinivasan, Brian R. Prasky
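The prediction structure above mirrors a branch predictor: a global history buffer of recent hit/miss outcomes indexes a pattern history table of saturating counters. A sketch under assumed parameters (4-bit history, 2-bit counters; the abstract does not give sizes):

```python
class MissPredictor:
    """Predict cache miss at decode using a PHT indexed by a global
    history of recent hit/miss outcomes."""

    def __init__(self, history_bits=4):
        self.ghb = 0          # global history: 1 bit per recent outcome
        self.mask = (1 << history_bits) - 1
        self.pht = {}         # pattern history table: history -> 2-bit counter

    def predict(self):
        # Access the PHT entry selected by the most recent history;
        # counter >= 2 predicts a miss.
        return self.pht.get(self.ghb, 1) >= 2

    def update(self, was_miss):
        # Train the counter, then shift the actual outcome into the GHB.
        ctr = self.pht.get(self.ghb, 1)
        self.pht[self.ghb] = min(3, ctr + 1) if was_miss else max(0, ctr - 1)
        self.ghb = ((self.ghb << 1) | int(was_miss)) & self.mask
```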