Cache Bypassing Patents (Class 711/138)
  • Patent number: 11934317
    Abstract: Systems, apparatuses, and methods for memory management are described. For example, these may include a first memory level including memory pages in a memory array, a second memory level including a cache, a pre-fetch buffer, or both, and a memory controller that determines state information associated with a memory page in the memory array targeted by a memory access request. The state information may include a first parameter indicative of a current activation state of the memory page and a second parameter indicative of statistical likelihood (e.g., confidence) that a subsequent memory access request will target the memory page. The memory controller may disable storage of data associated with the memory page in the second memory level when the first parameter associated with the memory page indicates that the memory page is activated and the second parameter associated with the memory page is greater than or equal to a threshold.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: March 19, 2024
    Inventor: David Andrew Roberts
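    Illustrative sketch (not taken from the patent text): a minimal C rendering of the described bypass decision; the struct fields, the 0-255 confidence scale, and the threshold value are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical per-page state tracked by the memory controller. */
      struct page_state {
          bool    activated;   /* first parameter: the memory page is currently open */
          uint8_t reuse_conf;  /* second parameter: confidence that a subsequent
                                  request targets the same page (0..255)             */
      };

      #define REUSE_CONF_THRESHOLD 192   /* assumed tuning value */

      /* True when data for this page should NOT be stored in the second
       * memory level (cache / pre-fetch buffer), i.e. it is bypassed. */
      static bool bypass_second_level(const struct page_state *ps)
      {
          return ps->activated && ps->reuse_conf >= REUSE_CONF_THRESHOLD;
      }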
  • Patent number: 11914740
    Abstract: A data generalization apparatus that can perform generalization processing on large-scale data at high speed using only a primary storage device of a small capacity. Included is a rearrangement unit that rearranges the attribute values in a secondary storage device in accordance with an order of arrangement of the attribute values in a generalization hierarchy in the secondary storage device, an attribute value retrieval unit that retrieves some of the rearranged attribute values from the secondary storage device into a primary storage device, and a generalization hierarchy retrieval unit that retrieves a portion of the generalization hierarchy from the secondary storage device into the primary storage device. Further, there is a generalization processing unit that executes generalization processing based on the attribute values retrieved into the primary storage device and the generalization hierarchy retrieved into the primary storage device, and a re-rearrangement unit.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: February 27, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Satoshi Hasegawa
  • Patent number: 11886354
    Abstract: Techniques are disclosed relating to cache thrash detection. In some embodiments, cache controller circuitry is configured to monitor and track performance metrics across multiple levels of a cache hierarchy, detect cache thrashing based on one or more performance metrics, and modify a cache insertion policy to mitigate cache thrashing. Disclosed techniques may advantageously detect and reduce or avoid cache thrashing, which may increase processor performance, decrease power consumption for a given workload, or both, relative to traditional techniques.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Anwar Q. Rohillah, Tyler J. Huberty
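    Illustrative sketch: one way such thrash detection and insertion-policy demotion could look in C; the metrics, the 50%/80% cut-offs, and the policy names are assumptions, not the patented mechanism.

      #include <stdint.h>

      /* Hypothetical counters sampled per cache level each interval. */
      struct level_metrics {
          uint64_t misses;
          uint64_t evicted_unreused;   /* lines evicted without ever being hit */
      };

      enum insert_policy { INSERT_MRU, INSERT_LRU, INSERT_BYPASS_SOME };

      /* Assumed heuristic: treat the level as thrashing when most inserted
       * lines are evicted dead, and demote the insertion policy. */
      static enum insert_policy pick_policy(const struct level_metrics *m)
      {
          if (m->misses == 0)
              return INSERT_MRU;
          if (m->evicted_unreused * 10 >= m->misses * 8)   /* >= 80% */
              return INSERT_BYPASS_SOME;
          if (m->evicted_unreused * 10 >= m->misses * 5)   /* >= 50% */
              return INSERT_LRU;
          return INSERT_MRU;
      }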
  • Patent number: 11847055
    Abstract: A technical solution to the technical problem of how to reduce the undesirable side effects of offloading computations to memory uses read hints to preload results of memory-side processing into a processor-side cache. A cache controller, in response to identifying a read hint in a memory-side processing instruction, causes results of the memory-side processing to be preloaded into a processor-side cache. Implementations include, without limitation, enabling or disabling the preloading based upon cache thrashing levels, preloading results, or portions of results, of memory-side processing to particular destination caches, preloading results based upon priority and/or degree of confidence, and/or during periods of low data bus and/or command bus utilization, last stores considerations, and enforcing an ordering constraint to ensure that preloading occurs after memory-side processing results are complete.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: December 19, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Shaizeen Aga, Nuwan Jayasena
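    Illustrative sketch: a hedged C outline of acting on such a read hint; the flag encoding, struct layout, 64-byte line size, and callback are assumptions.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      #define PIM_FLAG_READ_HINT 0x1u   /* assumed encoding of the read hint */

      struct pim_cmd {
          uint32_t  flags;
          uintptr_t result_addr;   /* where memory-side results will land */
          size_t    result_len;
      };

      /* Assumed cache-controller hook, called only after the memory-side
       * processing has completed (the ordering constraint): preload the
       * results unless the hint is absent or the cache is thrashing. */
      void on_pim_complete(const struct pim_cmd *cmd, bool cache_thrashing,
                           void (*preload_line)(uintptr_t))
      {
          if (!(cmd->flags & PIM_FLAG_READ_HINT) || cache_thrashing)
              return;
          for (uintptr_t a = cmd->result_addr;
               a < cmd->result_addr + cmd->result_len; a += 64)
              preload_line(a);     /* 64-byte cache lines assumed */
      }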
  • Patent number: 11829636
    Abstract: A method comprising directing, via a memory manager, an address associated with data to be written to a cold memory map, receiving the data at a memory device, and writing the data to the memory device in response to the memory manager identifying the data as cold data in response to writing the address associated with the data to the cold memory map.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: November 28, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
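    Illustrative sketch: one plausible shape for a cold memory map in C; the fixed-size range table and helper names are assumptions rather than the patented design.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Assumed shape of a "cold memory map": a small table of address
       * ranges the memory manager fills in before issuing the write. */
      struct cold_map { uintptr_t base[64]; size_t len[64]; int n; };

      static void mark_cold(struct cold_map *m, uintptr_t addr, size_t len)
      {
          if (m->n < 64) { m->base[m->n] = addr; m->len[m->n] = len; m->n++; }
      }

      /* The memory device consults the map on a write and can then steer
       * cold data past any faster media straight to its backing array. */
      static bool is_cold(const struct cold_map *m, uintptr_t addr)
      {
          for (int i = 0; i < m->n; i++)
              if (addr >= m->base[i] && addr - m->base[i] < m->len[i])
                  return true;
          return false;
      }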
  • Patent number: 11809323
    Abstract: Apparatus and method for maintaining real-time coherency between a local cache of a target device and a client cache of a source device during execution of a distributed computational function. In some embodiments, a source device, such as a host computer, is coupled via a network interface to a target device, such as a data storage device. A storage compute function (SCF) command is transferred from the source device to the target device. A local cache of the target device accumulates output data during the execution of an associated SCF over an execution time interval. Real-time coherency is maintained between the contents of the local cache and a client cache of the source device, so that the client cache retains continuously updated copies of the contents of the local cache during execution of the SCF. The coherency can be carried out on a time-based granularity or an operational granularity.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: November 7, 2023
    Assignee: Seagate Technology LLC
    Inventors: Marc Timothy Jones, David Jerome Allen, Steven Williams, Jason Matthew Feist
  • Patent number: 11789512
    Abstract: A processor may identify that an external power source has begun powering a computing device. The processor may identify computational data in a volatile memory of the computing device. The processor may determine that the external power source does not have sufficient energy capacity to provide the computing device enough power to process the computational data at a first I/O throttling rate. The processor may increase the first I/O throttling rate to a second I/O throttling rate. The second I/O throttling rate may allow the computational data to be processed by the computing device with the energy capacity of the external power source.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: October 17, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kushal Patel, Sandeep R. Patil, Sarvesh Patel
  • Patent number: 11782838
    Abstract: Techniques for prefetching are provided. The techniques include receiving a first prefetch command; in response to determining that a history buffer indicates that first information associated with the first prefetch command has not already been prefetched, prefetching the first information into a memory; receiving a second prefetch command; and in response to determining that the history buffer indicates that second information associated with the second prefetch command has already been prefetched, avoiding prefetching the second information into the memory.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anirudh R. Acharya, Alexander Fuad Ashkar
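    Illustrative sketch: a minimal C history buffer that suppresses duplicate prefetches; the direct-mapped table and its size are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      /* Assumed history buffer: a small direct-mapped table of recently
       * prefetched block addresses (zero means "empty slot"). */
      #define HIST_SLOTS 256
      static uintptr_t hist[HIST_SLOTS];

      static bool already_prefetched(uintptr_t block)
      {
          return hist[block % HIST_SLOTS] == block;
      }

      void maybe_prefetch(uintptr_t block, void (*do_prefetch)(uintptr_t))
      {
          if (already_prefetched(block))
              return;                        /* skip the redundant prefetch */
          do_prefetch(block);
          hist[block % HIST_SLOTS] = block;  /* remember it for next time   */
      }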
  • Patent number: 11722382
    Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: August 8, 2023
    Assignee: Intel Corporation
    Inventors: Mrittika Ganguli, Muthuvel M. I, Ananth S. Narayan, Jaideep Moses, Andrew J. Herdrich, Rahul Khanna
  • Patent number: 11709822
    Abstract: A technique for managing a datapath of a data storage system includes receiving a request to access target data and creating a transaction that includes multiple datapath elements in a cache, where the datapath elements are used for accessing the target data. In response to detecting that one of the datapath elements is invalid, the technique further includes processing the transaction in a rescue mode. The rescue mode attempts to replace each invalid datapath element of the transaction with a valid version thereof obtained from elsewhere in the data storage system. The technique further includes committing the transaction as processed in the rescue mode.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: July 25, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Vamsi K. Vankamamidi, Geng Han, Xinlei Xu, Philippe Armangau, Vikram Prabhakar
  • Patent number: 11704063
    Abstract: An embodiment may involve a network interface module; volatile memory configured to temporarily store data packets received from the network interface module; high-speed non-volatile memory; an interface connecting to low-speed non-volatile memory; a first set of processors configured to perform a first set of operations that involve: (i) reading the data packets from the volatile memory, (ii) arranging the data packets into chunks, each chunk containing a respective plurality of the data packets, and (iii) writing the chunks to the high-speed non-volatile memory; and a second set of processors configured to perform a second set of operations in parallel to the first set of operations, where the second set of operations involve: (i) reading the chunks from the high-speed non-volatile memory, (ii) compressing the chunks, (iii) arranging the chunks into blocks, each block containing a respective plurality of the chunks, and (iv) writing the blocks to the low-speed non-volatile memory.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: July 18, 2023
    Assignee: fmad engineering kabushiki gaisha
    Inventor: Aaron Foo
  • Patent number: 11689559
    Abstract: A method includes: receiving, by a computer, a user input corresponding to selection of a link associated with an address; determining, by the computer, that the address would not fit in an address bar of a browser displayed on a screen of the computer; and based on the determination that the address would not fit in the address bar of the browser, displaying, by the computer, in the address bar of the browser, a first element of the address and at least part of a second element of the address, including displaying a first portion of the second element of the address and an ellipsis indication representing a second portion of the second element of the address. The display of the first element of the address is visually distinguished from the display of the first portion of the second element of the address.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: June 27, 2023
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Aaron T. Emigh, James A. Roskind
  • Patent number: 11663135
    Abstract: A fabric controller to provide a coherent accelerator fabric, including: a host interconnect to communicatively couple to a host device; a memory interconnect to communicatively couple to an accelerator memory; an accelerator interconnect to communicatively couple to an accelerator having a last-level cache (LLC); and an LLC controller configured to provide a bias check for memory access operations.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Ritu Gupta, Aravindh V. Anantaraman, Stephen R. Van Doren, Ashok Jagannathan
  • Patent number: 11526449
    Abstract: A processing system limits the propagation of unnecessary memory updates by bypassing writing back dirty cache lines to other levels of a memory hierarchy in response to receiving an indication from software executing at a processor of the processing system that the value of the dirty cache line is dead (i.e., will not be read again or will not be read until after it has been overwritten). In response to receiving an indication from software that data is dead, a cache controller prevents propagation of the dead data to other levels of memory in response to eviction of the dead data or flushing of the cache at which the dead data is stored.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: December 13, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Johnathan Alsop, Pouya Fotouhi, Bradford Beckmann, Sergey Blagodurov
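    Illustrative sketch: a C rendering of skipping write-back for software-declared dead data; the line structure and write-back callback are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      struct cache_line {
          uintptr_t tag;
          bool dirty;
          bool dead;   /* set when software signals the value will not be
                          read again before it is overwritten */
      };

      /* Assumed eviction/flush path: a dirty line is written back to the
       * next memory level only if software has not declared it dead. */
      void evict_line(struct cache_line *line, void (*writeback)(uintptr_t))
      {
          if (line->dirty && !line->dead)
              writeback(line->tag);
          line->dirty = false;
          line->dead  = false;
      }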
  • Patent number: 11514015
    Abstract: Techniques are disclosed relating to providing and using probabilistic data structures to at least reduce requests between database nodes. In various embodiments, a first database node processes a database transaction that involves writing a set of database records to an in-memory cache of the first database node. As part of processing the database transaction, the first database node may insert, in a set of probabilistic data structures, a set of database keys that correspond to the set of database records. The first database node may send, to a second database node, the set of probabilistic data structures to enable the second database node to determine whether to request, from the first database node, a database record associated with a database key.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: November 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Atish Agrawal, Jameison Bear Martin
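    Illustrative sketch: a small Bloom filter in C standing in for the set of probabilistic data structures of database keys; the filter size and hash mixing are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      #define BLOOM_BITS 4096
      static uint8_t bloom[BLOOM_BITS / 8];

      static uint32_t hash_key(const char *key, uint32_t seed)
      {
          uint32_t h = seed;
          while (*key) h = h * 31u + (uint8_t)*key++;
          return h % BLOOM_BITS;
      }

      /* The first node inserts each written database key before shipping
       * the filter to the second node. */
      void bloom_add(const char *key)
      {
          for (uint32_t s = 1; s <= 3; s++) {
              uint32_t b = hash_key(key, s);
              bloom[b / 8] |= (uint8_t)(1u << (b % 8));
          }
      }

      /* The second node asks the first node for a record only when the
       * filter says the key might be present; "false" means definitely not. */
      bool bloom_maybe_contains(const char *key)
      {
          for (uint32_t s = 1; s <= 3; s++) {
              uint32_t b = hash_key(key, s);
              if (!(bloom[b / 8] & (1u << (b % 8))))
                  return false;
          }
          return true;
      }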
  • Patent number: 11449444
    Abstract: An address space field is used in conjunction with a normal address field to allow indication of an address space for the particular address value. In one instance, one address space value is used to indicate the bypassing of the address translation used between address spaces. A different address space value is designated for conventional operation, where address translations are performed. Other address space values are used to designate different transformations of the address values or the data. This technique provides a simplified format for handling address values and the like between different devices having different address spaces, simplifying overall computer system design and operation.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: September 20, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Brian Karguth, Chuck Fuoco, Chunhua Hu, Todd Christopher Hiers
  • Patent number: 11409643
    Abstract: Techniques are disclosed that use memory thrashing to determine worst-case execution time for at least one application under test. Memory thrashing simulates shared resource interference. Memory that is thrashed includes mapped memory, and optionally shared cache memory.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: August 9, 2022
    Assignee: Honeywell International Inc
    Inventors: Pavel Zaykov, Larry James Miller, Srivatsan Varadarajan
  • Patent number: 11397691
    Abstract: A technique for accessing a memory having a high latency portion and a low latency portion is provided. The technique includes detecting a promotion trigger to promote data from the high latency portion to the low latency portion, in response to the promotion trigger, copying cache lines associated with the promotion trigger from the high latency portion to the low latency portion, and in response to a read request, providing data from either or both of the high latency portion or the low latency portion, based on a state associated with data in the high latency portion and the low latency portion.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: July 26, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Apostolos Kokolis, Shrikanth Ganapathy
  • Patent number: 11380376
    Abstract: An exemplary memory is configurable to operate in a low latency mode through use of a low latency register circuit to execute a read or write command, rather than performing a memory array access to execute the read or write command. A control circuit determines whether an access command should be performed using the low latency mode of operation (e.g., first mode of operation) or a normal mode of operation (e.g., second mode of operation). In some examples, a processor unit directs the memory to execute an access command using the low latency mode of operation via one or more bits (e.g., a low latency enable bit) included in the command and address information.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: July 5, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Yuan He, Daigo Toyama
  • Patent number: 11354127
    Abstract: A computing system includes a memory controller having a plurality of bypass parameters set by a software program, a thresholds matrix to store threshold values selectable by the plurality of bypass parameters, and a bypass function to determine whether a first cache line is to be displaced with a second cache line in a first memory or the first cache line remains in the first memory and the second cache line is to be accessed by at least one of a processor core and the cache from a second memory.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: June 7, 2022
    Assignee: INTEL CORPORATION
    Inventors: Harshad S. Sane, Anup Mohan, Kshitij A. Doshi, Mark A. Schmisseur
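    Illustrative sketch: a C reading of the bypass parameters, thresholds matrix, and bypass function; the 4x4 matrix, its values, and the reuse-count criterion are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical software-set bypass parameters that select a row and
       * column of a small thresholds matrix. */
      struct bypass_params { unsigned row; unsigned col; };

      static const uint32_t thresholds[4][4] = {
          {  4,  8,  16,  32 },
          {  8, 16,  32,  64 },
          { 16, 32,  64, 128 },
          { 32, 64, 128, 256 },
      };

      /* Assumed bypass function: leave the resident cache line in the first
       * (near) memory and serve the new line from the second (far) memory
       * when the resident line's recent reuse count beats the threshold. */
      bool bypass_new_line(const struct bypass_params *p, uint32_t resident_reuse)
      {
          return resident_reuse >= thresholds[p->row & 3][p->col & 3];
      }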
  • Patent number: 11354454
    Abstract: An apparatus and method of providing direct access to a non-volatile memory of a non-volatile memory device and detecting potential security violations are provided. A method for providing access to a non-volatile memory of a non-volatile memory device may include tracking a parameter related to a plurality of direct access transactions of the non-volatile memory. A threshold behavior pattern of the host activity may be determined based upon the tracked parameters. The direct access transactions may be reviewed to determine whether the threshold behavior pattern is exceeded.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: June 7, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Alon Marcu, Ariel Navon, Shay Benisty
  • Patent number: 11347748
    Abstract: Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising receiving an event, such as a streaming event, from a client. The method determines that the event comprises a primary event and, if so, writes the primary event to a cache and returning the primary event to the client. The method later receives a second event from the client, the second event associated with the first event, annotates the second event based on the primary event, and returns the annotated second event to the client.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 31, 2022
    Assignee: YAHOO ASSETS LLC
    Inventors: David Willcox, Maulik Shah, Allie K. Watfa, George Aleksandrovich
  • Patent number: 11321245
    Abstract: A cache controller applies an aging policy to a portion of a cache based on access metrics for different test regions of the cache, whereby each test region implements a different aging policy. The aging policy for each region establishes an initial age value for each entry of the cache, and a particular aging policy can set the age for a given entry based on whether the entry was placed in the cache in response to a demand request from a processor core or in response to a prefetch request. The cache controller can use the age value of each entry as a criterion in its cache replacement policy.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: May 3, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul Moyer
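    Illustrative sketch: one possible initial-age assignment in C for demand versus prefetch fills; the age values and the two-policy split are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      /* Assumed per-entry age field used by the replacement policy
       * (a larger age makes the entry a more likely victim). */
      struct cache_entry { uint8_t age; };

      enum fill_source { FILL_DEMAND, FILL_PREFETCH };

      /* One possible aging policy; each test region of the cache would run
       * its own variant, and the controller adopts whichever performs best
       * for the remainder of the cache. */
      void set_initial_age(struct cache_entry *e, enum fill_source src,
                           bool aggressive_policy)
      {
          if (src == FILL_PREFETCH)
              e->age = aggressive_policy ? 3 : 2;  /* prefetches start "old"   */
          else
              e->age = 0;                          /* demand fills start "young" */
      }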
  • Patent number: 11262942
    Abstract: The present disclosure relates to the field of solid-state data storage, and particularly to improving the speed performance and reducing the cost of solid-state data storage devices. A host-managed data storage system according to embodiments includes a set of storage devices, each storage device including a write buffer and memory; and a host coupled to the set of storage devices, the host including: a storage device management module for managing data storage functions for each storage device; memory including: a front-end write buffer; a first mapping table for data stored in the front-end write buffer; and a second mapping table for data stored in the memory of each storage device.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: March 1, 2022
    Assignee: SCALEFLUX, INC.
    Inventors: Qi Wu, Wentao Wu, Thad Omura, Yang Liu, Tong Zhang
  • Patent number: 11157303
    Abstract: A processor may include a register to store a bus-lock-disable bit and an execution unit to execute instructions. The execution unit may receive an instruction that includes a memory access request. The execution unit may further determine that the memory access request requires acquiring a bus lock, and, responsive to detecting that the bus-lock-disable bit indicates that bus locks are disabled, signal a fault to an operating system.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Gilbert Neiger, Arumugam Thiyagarajah
  • Patent number: 11132143
    Abstract: A storage device includes a nonvolatile memory device that includes a plurality of memory blocks, and a controller that uses some memory blocks of the plurality of memory blocks as a buffer area. Memory blocks storing invalid data from among the some memory blocks are invalid memory blocks, and the controller identifies memory blocks, of which an elapsed time after erase is greater than a reuse time, from among the invalid memory blocks as an available buffer size, and provides the available buffer size to an external host device.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: September 28, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young Joon Jang, Chun-Um Kong, Ohchul Kwon, Junki Kim, Hyung-Kyun Byun
  • Patent number: 11068538
    Abstract: Techniques herein are for navigation data structures for graph traversal. In an embodiment, navigation data structures that a computer stores include: a source vertex array of vertices; a neighbor array of dense identifiers of target vertices terminating edges; a bidirectional map associating, for each vertex, a sparse identifier of the vertex with a dense identifier of the vertex; and a vertex array containing, when a dense identifier of a source vertex is used as an offset, a pair of offsets defining an offset range, for use with the neighbor array. The source vertex array, using the dense identifier of a particular vertex as an offset, contains an offset, into a neighbor array, of a target vertex terminating an edge originating at the particular vertex. The neighbor array contiguously stores dense identifiers of target vertices terminating edges originating from a same source vertex.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: July 20, 2021
    Assignee: Oracle International Corporation
    Inventors: Michael Haubenschild, Sungpack Hong, Hassan Chafi, Korbinian Schmid, Martin Sevenich, Alexander Weld
  • Patent number: 10976935
    Abstract: A method and apparatus for assigning an allocated workload in a data center having multiple storage systems includes selecting one or more storage systems to be assigned the allocated workload based on a combination of performance impact scores and deployment scores. By considering both performance impact and deployment effort, the allocated workload is able to be assigned with a view not only toward storage system performance, but also with a view toward how deployment on a particular storage system would comply with data center policies and the amount of configuration effort it would take to enable the workload to be implemented on the target storage system. This enables workloads to be allocated within the data center while minimizing the required amount of configuration or reconfiguration required to implement the workload allocation within the data center.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: April 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Jason McCarthy, Girish Warrier, Rongnong Zhou
  • Patent number: 10924448
    Abstract: A method for retrieving content on a network comprising a first device and a second device is described. The method includes receiving in the network a request for content from the first device, the request identifying the content using an IPv6 address for the content, and determining whether the content is stored in a cache of the second device. Upon determining the content is stored in the cache of the second device, a request is sent to the second device for the content using the IPv6 address of the content. The content is forwarded to the first device from the second device, wherein the first and second devices are part of the same layer 2 domain. Methods of injecting content to a home network and packaging content are also described.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 16, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: David Ward, William Mark Townsley, Andre Surcouf
  • Patent number: 10895597
    Abstract: Systems, apparatuses, and methods for implementing debug features on a secure coprocessor to handle communication and computation between a debug tool and a debug target are disclosed. A debug tool generates a graphical user interface (GUI) to display debug information to a user for help in debugging a debug target such as a system on chip (SoC). A secure coprocessor is embedded on the debug target, and the secure coprocessor receives debug requests generated by the debug tool. The secure coprocessor performs various computation tasks and/or other operations to prevent multiple round-trip messages being sent back and forth between the debug tool and the debug target. The secure coprocessor is able to access system memory and determine a status of a processor being tested even when the processor becomes unresponsive.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: January 19, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Tan Peng, Dong Zhu
  • Patent number: 10884924
    Abstract: A storage system receives a write request which specifies a logical volume address associated with a RAID group, and makes a first determination whether write target data in accordance with the write request exists in a cache memory. When the first determination result is negative, the storage system makes a second determination whether at least one of one or more conditions is met, the condition being that random write throughput performance is expected to increase by asynchronous de-staging processing of storing the write target data in the RAID group asynchronously to write processing performed in response to the write request. When the second determination result is negative, the storage system selects, for the write request, synchronous storage processing, which is processing of storing the write target data in the RAID group in the write processing and for which a load on a processor is lower than the asynchronous de-staging processing.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: January 5, 2021
    Assignee: HITACHI, LTD.
    Inventors: Shintaro Ito, Akira Yamamoto, Ryosuke Tatsumi, Takanobu Suzuki
  • Patent number: 10853522
    Abstract: A communications device has a first communications port via which secure messages are received, and a second communications port via which non-secure messages are received. In response to detecting that a secure message has been received, the device determines whether the second communications port is in a state that enables non-secure messages to be received. If the second communications port is in the enabled state, the device autonomously disables the second communications port to preclude non-secure messages received at that port from being processed.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: December 1, 2020
    Assignee: ITRON NETWORKED SOLUTIONS, INC.
    Inventors: Thomas Luecke, Nelson Bolyard, Winston Lew
  • Patent number: 10747675
    Abstract: Embodiments of the present disclosure generally relate to a method and device for managing caches. In particular, the method may include in response to receiving a request to write data to the cache, determining the amount of data to be written. The method may further include in response to the amount of the data exceeding a threshold amount, skipping writing data to the cache and writing the data to a lower level storage of the cache. Corresponding systems, apparatus and computer program products are also provided.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: August 18, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Lester Zhang, Denny Dengyu Wang, Chen Gong, Geng Han, Joe Liu, Leon Zhang
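    Illustrative sketch: the size-threshold cache skip in C; the threshold value and callback signatures are assumptions.

      #include <stddef.h>

      #define CACHE_WRITE_LIMIT (256u * 1024u)   /* assumed threshold */

      /* Assumed write path: writes larger than the threshold skip the cache
       * and go straight to the lower-level storage, so a single large write
       * cannot displace the cached working set. */
      void handle_write(const void *buf, size_t len,
                        void (*write_cache)(const void *, size_t),
                        void (*write_lower)(const void *, size_t))
      {
          if (len > CACHE_WRITE_LIMIT)
              write_lower(buf, len);
          else
              write_cache(buf, len);
      }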
  • Patent number: 10725688
    Abstract: A memory system includes a memory controller, a first memory module including first and second groups of first memory chips, a second memory module including first and second groups of second memory chips, and a channel including a first group of signal lines suitable for coupling the memory controller with the first memory module, and a second group of signal lines suitable for coupling the memory controller with the second memory module.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: July 28, 2020
    Assignee: SK hynix Inc.
    Inventors: Jae-Han Park, Hyun-Woo Kwack
  • Patent number: 10509587
    Abstract: A coordination point for assigning clients to remote backup storages includes a persistent storage and a processor. The persistent storage stores gateway pool cache capacities of the remote backup storages. The processor obtains a data storage request for data from a client of the clients; obtains an indirect cache estimate for servicing the data storage request; selects a remote backup storage of the remote backup storages based on the obtained indirect cache estimate using the gateway pool cache capacities; and assigns the selected remote backup storage to service the data storage request. The selected remote backup storage has a higher client load at the time of selection than a second client load of a second remote backup storage of the remote backup storages.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: December 17, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Shelesh Chopra, Gururaj Kulkarni
  • Patent number: 10474375
    Abstract: An integrated circuit includes a processor to execute instructions and to interact with memory, and acceleration hardware, to execute a sub-program corresponding to instructions. A set of input queues includes a store address queue to receive, from the acceleration hardware, a first address of the memory, the first address associated with a store operation and a store data queue to receive, from the acceleration hardware, first data to be stored at the first address of the memory. The set of input queues also includes a completion queue to buffer response data for a load operation. A disambiguator circuit, coupled to the set of input queues and the memory, is to, responsive to determining the load operation, which succeeds the store operation, has an address conflict with the first address, copy the first data from the store data queue into the completion queue for the load operation.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Kermin Elliott Fleming, Jr., Simon C. Steely, Jr., Kent D. Glossop
  • Patent number: 10474615
    Abstract: A hub including a first connection interface, a second connection interface, and a signal bypass circuit is provided. The first connection interface has a first pin to receive a first connection message. The second connection interface has a second pin to transmit the first connection message. The signal bypass circuit is coupled to the first pin and the second pin to decide whether to bypass the first pin and the second pin based on the first connection message.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: November 12, 2019
    Assignee: Nuvoton Technology Corporation
    Inventors: Shih-Hsuan Yen, Chao-Chiuan Hsu
  • Patent number: 10467138
    Abstract: A processing system includes a first socket, a second socket, and an interface between the first socket and the second socket. A first memory is associated with the first socket and a second memory is associated with the second socket. The processing system also includes a controller for the first memory. The controller is to receive a first request for a first memory transaction with the second memory and perform the first memory transaction along a path that includes the interface and bypasses at least one second cache associated with the second memory.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 5, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Paul Blinzer, Ali Ibrahim, Benjamin T. Sander, Vydhyanathan Kalyanasundharam
  • Patent number: 10402327
    Abstract: A non-uniform memory access system includes several nodes that each have one or more processors, caches, local main memory, and a local bus that connects a node's processor(s) to its memory. The nodes are coupled to one another over a collection of point-to-point interconnects, thereby permitting processors in one node to access data stored in another node. Memory access time for remote memory takes longer than local memory because remote memory accesses have to travel across a communications network to arrive at the requesting processor. In some embodiments, inter-cache and main-memory-to-cache latencies are measured to determine whether it would be more efficient to satisfy memory access requests using cached copies stored in caches of owning nodes or from main memory of home nodes.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: September 3, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Roberts, Ehsan Fatehi
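    Illustrative sketch: a C outline of recording and comparing the two latencies; the EWMA weighting and struct fields are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical measured latencies for a remote line, in nanoseconds. */
      struct remote_latency {
          uint32_t owner_cache;   /* cache-to-cache transfer from the owning node */
          uint32_t home_memory;   /* read from the home node's main memory        */
      };

      /* Exponentially weighted update from a fresh timing sample. */
      void record_sample(uint32_t *avg, uint32_t sample)
      {
          *avg = (*avg * 7 + sample) / 8;
      }

      /* Choose the cheaper source for satisfying a remote access request. */
      bool prefer_owner_cache(const struct remote_latency *lat)
      {
          return lat->owner_cache < lat->home_memory;
      }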
  • Patent number: 10402218
    Abstract: A processor may include a register to store a bus-lock-disable bit and an execution unit to execute instructions. The execution unit may receive an instruction that includes a memory access request. The execution unit may further determine that the memory access request requires acquiring a bus lock, and, responsive to detecting that the bus-lock-disable bit indicates that bus locks are disabled, signal a fault to an operating system.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Gilbert Neiger, Arumugam Thiyagarajah
  • Patent number: 10318445
    Abstract: A processing system in a dispersed storage network is configured to access write sequence information corresponding to a write sequence; determine whether to elevate a priority level of the write sequence; when the processing system determines to elevate the priority level of the write sequence, elevate the priority level of the write sequence; determine whether to lower the priority level of the write sequence; and when the processing system determines to lower the priority level of the write sequence, the processing system lowers the priority level of the write sequence.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: June 11, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Greg R. Dhuse
  • Patent number: 10261793
    Abstract: A particular method includes receiving, at a processor, an instruction and an address of the instruction. The method also includes preventing execution of the instruction based at least in part on determining that the address is within a range of addresses.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: April 16, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mark J. Hickey, Adam J. Muff, Matthew R. Tubbs, Charles D. Wait
  • Patent number: 10248567
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to maintain cache coherency. Examples disclosed herein involve, in response to receiving, from a direct memory access controller, an interrupt associated with a direct memory access operation, handling the interrupt based on a parameter of the direct memory access operation, wherein the direct memory access controller is to execute the direct memory access operation.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: April 2, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Senthil Kumar Ramakrishnan, Eugene Cohen
  • Patent number: 10120872
    Abstract: Several embodiments include a data cache system that implements a data cache and processes content requests for data items that may be in the data cache. The data cache system can receive a content request for at least one data item. The data cache system can update a karma score associated with an originator entity of the data item. The originator entity can be a user account that uploaded the data item. When wiping the data cache for more storage space, the data cache system can determine whether to discard the data items based on a cache priority that is computed based, at least partially, on the karma score.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 6, 2018
    Assignee: Facebook, Inc.
    Inventors: Neeraj Choubey, Fraidun Akhi, Georgiy Yakovlev, Ray Joseph Tong
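    Illustrative sketch: a C stand-in for the karma-based cache priority; the fields and weighting are illustrative only.

      #include <stdint.h>

      /* Assumed inputs to the wipe decision described in the abstract. */
      struct cached_item {
          uint32_t karma;         /* score of the account that uploaded the item */
          uint32_t recent_hits;   /* how often the item was requested recently   */
      };

      /* Hypothetical cache priority: items with the lowest priority are
       * discarded first when the data cache needs space. */
      uint64_t cache_priority(const struct cached_item *it)
      {
          return (uint64_t)it->recent_hits * 4u + it->karma;
      }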
  • Patent number: 10089239
    Abstract: Provided are methods, systems, and apparatus for managing and controlling memory caches, in particular, system level caches outside of those closest to the CPU. The processes and representative hardware structures that implement the processes are designed to allow for detailed control over the behavior of such system level caches. Caching policies are developed based on policy identifiers, where a policy identifier corresponds to a collection of parameters that control the behavior of a set of cache management structures. For a given cache, one policy identifier is stored in each line of the cache.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: October 2, 2018
    Assignee: Google LLC
    Inventors: Allan D. Knies, Shinye Shiu, Chih-Chung Chang, Vyacheslav Vladimirovich Malyugin, Santhosh Rao
  • Patent number: 10090028
    Abstract: Provided is a memory control technique for avoiding the issuance of a refresh command and a calibration command in immediate succession. The memory control circuit issues a refresh command to request a refresh operation based on a set refresh cycle, and issues a calibration command to request a calibrating operation based on a set calibration cycle, adopting a control function that suppresses issue of the calibration command for a given time after issue of the refresh command, and suppresses issue of the refresh command for a given time after issue of the calibration command.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: October 2, 2018
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Junkei Sato, Nobuhiko Honda
  • Patent number: 10084847
    Abstract: Methods and systems for generating and reusing dynamic web content involve, for example, automatically generating client-side code on a server at run time, and automatically downloading the client-side code to the client side at run time. The client-side code is executed on the client side to become a widget with dynamic behavior attributes displayed as a component of a web page on a display screen of a client-side computing device. Dynamic behavior of the client-side code may be triggered via an event handler mechanism wherein properties of the client-side code are dynamically changed without affecting any other content on the web page. The widget may be redisplayed on a subsequent occasion with a change in the widget without regenerating the client-side code.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: September 25, 2018
    Assignee: CITICORP CREDIT SERVICES, INC. (USA)
    Inventors: France Law-How-Hung, Ramadurai V. Ram
  • Patent number: 9996401
    Abstract: A task processing method and virtual machine are disclosed. The method includes selecting an idle resource for a task; creating a global variable snapshot for a global variable; executing the task, in private memory space in the selected idle resource; after the execution of the task is complete, acquiring a new global variable snapshot corresponding to the global variable, and acquiring an updated global variable according to a local global variable snapshot and the new global variable snapshot; and determining whether a synchronization variable of a to-be-executed task in a task synchronization waiting queue includes the current updated global variable, and if the synchronization variable of the to-be-executed task in the task synchronization waiting queue includes the current updated global variable, putting the task into a task execution waiting queue.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: June 12, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lin Gu, Zhiqiang Ma, Zhonghua Sheng, Liufei Wen
  • Patent number: 9996457
    Abstract: Systems and methods are disclosed for efficient buffering for a system having non-volatile memory (“NVM”). In some embodiments, a control circuitry of a system can use heuristics to determine whether to perform buffering of one or more write commands received from a file system. In other embodiments, the control circuitry can minimize read energy and buffering overhead by efficiently re-ordering write commands in a queue along page-aligned boundaries of a buffer. In further embodiments, the control circuitry can optimally combine write commands from a buffer with write commands from a queue. After combining the commands, the control circuitry can dispatch the commands in a single transaction.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: June 12, 2018
    Assignee: APPLE INC.
    Inventors: Daniel J. Post, Nir Jacob Wakrat
  • Patent number: 9965395
    Abstract: The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: May 8, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Raguram Damodaran, Joseph Raymond Michael Zbiciak, Naveen Bhoria