Cache Bypassing Patents (Class 711/138)
-
Patent number: 12174698
Abstract: An apparatus for on demand access and cache encoding of repair data. In one embodiment the apparatus includes an integrated circuit having a data cache in data communication with a non-volatile memory, a controller of a built-in self-test-and-repair (BISTR) circuit, and a plurality of registers. The controller is configured to read data from the data cache and store it into a first of the plurality of registers.
Type: Grant
Filed: March 24, 2022
Date of Patent: December 24, 2024
Assignee: Cypress Semiconductor Corporation
Inventor: Senwen Kan
-
Patent number: 12174746
Abstract: A data processing method, device, and storage medium that reads a parallel control code; reads, according to the parallel control code, first data that has been cached in a data cache space, processes the read first data, and outputs the processed first data to the data cache space; and simultaneously moves second data from a data storage space to the data cache space according to the parallel control code, the second data being the next data of the first data. Data processing and data moving are performed simultaneously according to the parallel control code, to reduce a duration of the data processing waiting for the data moving, thereby improving a processing speed and processing efficiency.
Type: Grant
Filed: October 15, 2021
Date of Patent: December 24, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Yu Meng
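The overlap of data processing and data moving described in this abstract is essentially double buffering: while chunk i is processed, chunk i+1 is loaded in the background. A minimal sketch under stated assumptions (the thread-based loader and the `process`/`load` callbacks are illustrative, not from the patent):

```python
import threading

def process_stream(chunks, process, load):
    """Process chunk i while concurrently loading chunk i+1,
    overlapping compute with data movement (double buffering)."""
    results = []
    if not chunks:
        return results
    current = load(chunks[0])
    for i in range(len(chunks)):
        next_holder = {}
        loader = None
        if i + 1 < len(chunks):
            # Kick off the load of the next chunk in the background.
            loader = threading.Thread(
                target=lambda: next_holder.setdefault("data", load(chunks[i + 1])))
            loader.start()
        results.append(process(current))  # compute overlaps the load
        if loader:
            loader.join()
            current = next_holder["data"]
    return results
```

When `load` is I/O-bound (e.g. a DMA transfer or disk read), the wait for data movement is hidden behind the computation, which is the speedup the abstract claims.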
-
Patent number: 12164955
Abstract: Aspects of the disclosure provide for mechanisms for scheduling computing tasks in a computer system. A method of the disclosure includes determining one or more attributes associated with a computing task, determining an ordered list of the attributes in view of priorities associated with the attributes, generating a first numerical representation of the attributes in view of the ordered list of the attributes, determining a second numerical representation of a priority of the computing task, and determining a third numerical representation of a total priority of the computing task in view of the first numerical representation and the second numerical representation.
Type: Grant
Filed: March 8, 2021
Date of Patent: December 10, 2024
Assignee: Red Hat, Inc.
Inventors: Nathaniel McCallum, Monis Khan, Benjamin Petersen, Jonathan Toppins
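One plausible reading of the three numerical representations is bit-packing: encode the attribute priorities in priority order, then append the task's own priority as the low-order field, yielding a single sortable integer. This is a hypothetical encoding, not the patent's actual one:

```python
def total_priority(attrs, attr_priority, task_priority, bits_per_field=8):
    """Combine per-attribute priorities and a task priority into one
    sortable integer (higher = more urgent). Hypothetical encoding."""
    # Order attributes by their configured priority (highest first) --
    # the "ordered list of the attributes" from the abstract.
    ordered = sorted(attrs, key=lambda a: attr_priority[a], reverse=True)
    first = 0
    for a in ordered:
        first = (first << bits_per_field) | attr_priority[a]
    # Reserve the low-order field for the task's own priority, so
    # attribute priorities dominate and task priority breaks ties.
    return (first << bits_per_field) | task_priority
```

Because the result is a plain integer, a scheduler can sort or heap tasks by it directly.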
-
Patent number: 12164926
Abstract: Disclosed herein is a highly energy-efficient architecture targeting the ultra-low-power sensor domain. The architecture achieves high energy-efficiency while maintaining programmability and generality. The invention introduces vector-dataflow execution, allowing the exploitation of the dataflows in a sequence of vector instructions and to amortize instruction fetch and decode over a whole vector of operations. The vector-dataflow architecture allows the invention to avoid costly vector register file accesses, thereby saving energy.
Type: Grant
Filed: October 13, 2021
Date of Patent: December 10, 2024
Assignee: Carnegie Mellon University
Inventors: Brandon Lucia, Nathan Beckmann, Graham Gobieski
-
Patent number: 12164807
Abstract: Systems and methods are disclosed for providing speculative command processing. In certain embodiments, a data storage device includes a non-volatile memory, a buffer, and a controller configured to: receive one or more actual requests for data from one or more hosts, wherein an actual request is associated with data confirmed to be required by an application on a host; receive one or more speculative requests for data from the one or more hosts, wherein a speculative request is associated with data that has not been confirmed to be required by an application on a host; process the one or more actual requests prior to the one or more speculative requests; and in response to determining that resources are available after processing the one or more actual requests, perform preprocessing for the one or more speculative requests.
Type: Grant
Filed: April 11, 2022
Date of Patent: December 10, 2024
Assignee: Sandisk Technologies, Inc.
Inventor: Ramanathan Muthiah
-
Patent number: 12159057
Abstract: Implementing data flows of an application across a memory hierarchy of a data processing array includes receiving a data flow graph specifying an application for execution on the data processing array. A plurality of buffer objects corresponding to a plurality of different levels of the memory hierarchy of the data processing array and an external memory are identified. The plurality of buffer objects specify data flows. Buffer object parameters are determined. The buffer object parameters define properties of the data flows. Data that configures the data processing array to implement the data flows among the plurality of different levels of the memory hierarchy and the external memory is generated based on the plurality of buffer objects and the buffer object parameters.
Type: Grant
Filed: September 21, 2022
Date of Patent: December 3, 2024
Assignee: Xilinx, Inc.
Inventors: Chia-Jui Hsu, Mukund Sivaraman, Vinod K. Kathail
-
Patent number: 12141078
Abstract: A caching system including a first sub-cache, and a second sub-cache coupled in parallel with the first sub-cache; wherein the second sub-cache includes line type bits configured to store an indication that a corresponding line of the second sub-cache is configured to store write-miss data.
Type: Grant
Filed: May 22, 2020
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
-
Patent number: 12141474
Abstract: A queue circuit that manages access to a memory circuit in a computer system includes multiple sets of entries for storing access requests. The entries in one set of entries are assigned to corresponding sources that generate access requests to the memory circuit. The entries in the other set of entries are floating entries that can be used to store requests from any of the sources. Upon receiving a request from a particular source, the queue circuit checks the entry assigned to the particular source and, if the entry is unoccupied, the queue circuit stores the request in the entry. If, however, the entry assigned to the particular source is occupied, the queue circuit stores the request in one of the floating entries.
Type: Grant
Filed: April 29, 2022
Date of Patent: November 12, 2024
Assignee: Cadence Design Systems, Inc.
Inventors: Robert T. Golla, Matthew B. Smittle
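The dedicated-plus-floating allocation described in this abstract can be sketched in software. This is a minimal model, assuming one dedicated slot per source and a caller that retries on a full queue (both assumptions, not details from the patent):

```python
class RequestQueue:
    """Per-source dedicated entries plus a shared pool of floating
    entries: the dedicated slot guarantees each source forward
    progress, the floating pool absorbs bursts from any source."""
    def __init__(self, sources, num_floating):
        self.dedicated = {s: None for s in sources}   # one slot per source
        self.floating = [None] * num_floating         # shared by all sources

    def enqueue(self, source, request):
        # Try the source's dedicated entry first.
        if self.dedicated[source] is None:
            self.dedicated[source] = request
            return True
        # Dedicated entry occupied: fall back to any free floating entry.
        for i, slot in enumerate(self.floating):
            if slot is None:
                self.floating[i] = (source, request)
                return True
        return False  # queue full: caller must retry later
```

The key property is that a bursty source can only exhaust the floating pool; it can never consume another source's dedicated entry.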
-
Patent number: 12132809
Abstract: Various embodiments improve the operation of computers by providing methods of transmitting data with low latency and high bandwidth. Data may be transmitted in a packet composed of data flits, the data flits having at least two different formats configured to implement different communication protocols. In some embodiments, a given flit may be transmitted using two different modulation methods, with a first part of the flit transmitted using a first modulation method, such as a binary method, and a second part of the flit using a higher-order modulation method.
Type: Grant
Filed: June 7, 2021
Date of Patent: October 29, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Mark Ronald Sikkink, Randal Steven Passint, Joseph Martin Placek, Russell Leonard Nicol
-
Patent number: 11983115
Abstract: A device connected to a host processor via a bus includes: an accelerator circuit configured to operate based on a message received from the host processor; and a controller configured to control an access to a memory connected to the device, wherein the controller is further configured to, in response to a read request received from the accelerator circuit, provide a first message requesting resolution of coherence to the host processor and prefetch first data from the memory.
Type: Grant
Filed: February 8, 2023
Date of Patent: May 14, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jeongho Lee, Heehyun Nam, Jaeho Shin, Hyodeok Shin, Younggeon Yoo, Younho Jeon, Wonseb Jeong, Ipoom Jeong, Hyeokjun Choe
-
Patent number: 11934317
Abstract: Systems, apparatuses, and methods for memory management are described. For example, these may include a first memory level including memory pages in a memory array, a second memory level including a cache, a pre-fetch buffer, or both, and a memory controller that determines state information associated with a memory page in the memory array targeted by a memory access request. The state information may include a first parameter indicative of a current activation state of the memory page and a second parameter indicative of statistical likelihood (e.g., confidence) that a subsequent memory access request will target the memory page. The memory controller may disable storage of data associated with the memory page in the second memory level when the first parameter associated with the memory page indicates that the memory page is activated and the second parameter associated with the memory page is greater than or equal to a threshold.
Type: Grant
Filed: December 6, 2021
Date of Patent: March 19, 2024
Inventor: David Andrew Roberts
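The bypass rule in this abstract reduces to a two-condition check: skip the second memory level when the DRAM page is already open and a repeat access to it is likely, since the open row can serve the access directly. A sketch under stated assumptions (the confidence scale and threshold value are illustrative):

```python
def should_bypass_cache(page_active, hit_confidence, threshold=0.75):
    """Disable second-level (cache/prefetch-buffer) storage when the
    memory page is activated AND the confidence that the next access
    targets the same page meets the threshold. Values are assumptions."""
    return page_active and hit_confidence >= threshold
```

Either condition failing means the second level still earns its keep: a closed page makes the cache the cheaper path, and low confidence means the open page may close before the next access.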
-
Patent number: 11914740
Abstract: A data generalization apparatus that can perform generalization processing on large-scale data at high speed using only a primary storage device of a small capacity. Included is a rearrangement unit that rearranges the attribute values in a secondary storage device in accordance with an order of arrangement of the attribute values in a generalization hierarchy in the secondary storage device, an attribute value retrieval unit that retrieves some of the rearranged attribute values from the secondary storage device into a primary storage device, and a generalization hierarchy retrieval unit that retrieves a portion of the generalization hierarchy from the secondary storage device into the primary storage device. Further, there is a generalization processing unit that executes generalization processing based on the attribute values retrieved into the primary storage device and the generalization hierarchy retrieved into the primary storage device, and a re-rearrangement unit.
Type: Grant
Filed: February 20, 2020
Date of Patent: February 27, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Satoshi Hasegawa
-
Patent number: 11886354
Abstract: Techniques are disclosed relating to cache thrash detection. In some embodiments, cache controller circuitry is configured to monitor and track performance metrics across multiple levels of a cache hierarchy, detect cache thrashing based on one or more performance metrics, and modify a cache insertion policy to mitigate cache thrashing. Disclosed techniques may advantageously detect and reduce or avoid cache thrashing, which may increase processor performance, decrease power consumption for a given workload, or both, relative to traditional techniques.
Type: Grant
Filed: May 20, 2022
Date of Patent: January 30, 2024
Assignee: Apple Inc.
Inventors: Anwar Q. Rohillah, Tyler J. Huberty
-
Patent number: 11847055
Abstract: A technical solution to the technical problem of how to reduce the undesirable side effects of offloading computations to memory uses read hints to preload results of memory-side processing into a processor-side cache. A cache controller, in response to identifying a read hint in a memory-side processing instruction, causes results of the memory-side processing to be preloaded into a processor-side cache. Implementations include, without limitation, enabling or disabling the preloading based upon cache thrashing levels, preloading results, or portions of results, of memory-side processing to particular destination caches, preloading results based upon priority and/or degree of confidence, and/or during periods of low data bus and/or command bus utilization, last stores considerations, and enforcing an ordering constraint to ensure that preloading occurs after memory-side processing results are complete.
Type: Grant
Filed: June 30, 2021
Date of Patent: December 19, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Shaizeen Aga, Nuwan Jayasena
-
Patent number: 11829636
Abstract: A method comprising directing, via a memory manager, an address associated with data to be written to a cold memory map, receiving the data at a memory device, and writing the data to the memory device in response to the memory manager identifying the data as cold data in response to writing the address associated with the data to the cold memory map.
Type: Grant
Filed: September 1, 2021
Date of Patent: November 28, 2023
Assignee: Micron Technology, Inc.
Inventor: Robert M. Walker
-
Patent number: 11809323
Abstract: Apparatus and method for maintaining real-time coherency between a local cache of a target device and a client cache of a source device during execution of a distributed computational function. In some embodiments, a source device, such as a host computer, is coupled via a network interface to a target device, such as a data storage device. A storage compute function (SCF) command is transferred from the source device to the target device. A local cache of the target device accumulates output data during the execution of an associated SCF over an execution time interval. Real-time coherency is maintained between the contents of the local cache and a client cache of the source device, so that the client cache retains continuously updated copies of the contents of the local cache during execution of the SCF. The coherency can be carried out on a time-based granularity or an operational granularity.
Type: Grant
Filed: June 22, 2022
Date of Patent: November 7, 2023
Assignee: Seagate Technology LLC
Inventors: Marc Timothy Jones, David Jerome Allen, Steven Williams, Jason Matthew Feist
-
Patent number: 11789512
Abstract: A processor may identify that an external power source has begun powering a computing device. The processor may identify computational data in a volatile memory of the computing device. The processor may determine that the external power source does not have sufficient energy capacity to provide the computing device enough power to process the computational data at a first I/O throttling rate. The processor may increase the first I/O throttling rate to a second I/O throttling rate. The second I/O throttling rate may allow the computational data to be processed by the computing device with the energy capacity of the external power source.
Type: Grant
Filed: January 8, 2019
Date of Patent: October 17, 2023
Assignee: International Business Machines Corporation
Inventors: Kushal Patel, Sandeep R. Patil, Sarvesh Patel
-
Patent number: 11782838
Abstract: Techniques for prefetching are provided. The techniques include receiving a first prefetch command; in response to determining that a history buffer indicates that first information associated with the first prefetch command has not already been prefetched, prefetching the first information into a memory; receiving a second prefetch command; and in response to determining that the history buffer indicates that second information associated with the second prefetch command has already been prefetched, avoiding prefetching the second information into the memory.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 10, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Anirudh R. Acharya, Alexander Fuad Ashkar
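The history buffer described here is a deduplication filter in front of the prefetcher. A minimal sketch, assuming a fixed-capacity buffer with least-recently-used eviction (the capacity and eviction policy are assumptions, not details from the patent):

```python
from collections import OrderedDict

class PrefetchHistory:
    """Fixed-size history of recently prefetched addresses; a hit in
    the history means the prefetch would be redundant and is skipped."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()  # address -> True, ordered by recency

    def should_prefetch(self, address):
        if address in self.entries:
            self.entries.move_to_end(address)  # refresh recency
            return False                       # already prefetched: skip
        self.entries[address] = True
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently seen
        return True
```

Skipping the redundant second prefetch saves memory bandwidth without losing any coverage, since the data is already on its way or resident.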
-
Patent number: 11722382
Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
Type: Grant
Filed: October 5, 2021
Date of Patent: August 8, 2023
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Muthuvel M. I, Ananth S. Narayan, Jaideep Moses, Andrew J. Herdrich, Rahul Khanna
-
Patent number: 11709822
Abstract: A technique for managing a datapath of a data storage system includes receiving a request to access target data and creating a transaction that includes multiple datapath elements in a cache, where the datapath elements are used for accessing the target data. In response to detecting that one of the datapath elements is invalid, the technique further includes processing the transaction in a rescue mode. The rescue mode attempts to replace each invalid datapath element of the transaction with a valid version thereof obtained from elsewhere in the data storage system. The technique further includes committing the transaction as processed in the rescue mode.
Type: Grant
Filed: May 29, 2020
Date of Patent: July 25, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Vamsi K. Vankamamidi, Geng Han, Xinlei Xu, Philippe Armangau, Vikram Prabhakar
-
Patent number: 11704063
Abstract: An embodiment may involve a network interface module; volatile memory configured to temporarily store data packets received from the network interface module; high-speed non-volatile memory; an interface connecting to low-speed non-volatile memory; a first set of processors configured to perform a first set of operations that involve: (i) reading the data packets from the volatile memory, (ii) arranging the data packets into chunks, each chunk containing a respective plurality of the data packets, and (iii) writing the chunks to the high-speed non-volatile memory; and a second set of processors configured to perform a second set of operations in parallel to the first set of operations, where the second set of operations involve: (i) reading the chunks from the high-speed non-volatile memory, (ii) compressing the chunks, (iii) arranging the chunks into blocks, each block containing a respective plurality of the chunks, and (iv) writing the blocks to the low-speed non-volatile memory.
Type: Grant
Filed: May 14, 2021
Date of Patent: July 18, 2023
Assignee: fmad engineering kabushiki gaisha
Inventor: Aaron Foo
-
Patent number: 11689559
Abstract: A method includes: receiving, by a computer, a user input corresponding to selection of a link associated with an address; determining, by the computer, that the address would not fit in an address bar of a browser displayed on a screen of the computer; and based on the determination that the address would not fit in the address bar of the browser, displaying, by the computer, in the address bar of the browser, a first element of the address and at least part of a second element of the address, including displaying a first portion of the second element of the address and an ellipsis indication representing a second portion of the second element of the address. The display of the first element of the address is visually distinguished from the display of the first portion of the second element of the address.
Type: Grant
Filed: April 22, 2021
Date of Patent: June 27, 2023
Assignee: Huawei Technologies Co., Ltd.
Inventors: Aaron T. Emigh, James A. Roskind
-
Patent number: 11663135
Abstract: A fabric controller to provide a coherent accelerator fabric, including: a host interconnect to communicatively couple to a host device; a memory interconnect to communicatively couple to an accelerator memory; an accelerator interconnect to communicatively couple to an accelerator having a last-level cache (LLC); and an LLC controller configured to provide a bias check for memory access operations.
Type: Grant
Filed: December 20, 2021
Date of Patent: May 30, 2023
Assignee: Intel Corporation
Inventors: Ritu Gupta, Aravindh V. Anantaraman, Stephen R. Van Doren, Ashok Jagannathan
-
Patent number: 11526449
Abstract: A processing system limits the propagation of unnecessary memory updates by bypassing writing back dirty cache lines to other levels of a memory hierarchy in response to receiving an indication from software executing at a processor of the processing system that the value of the dirty cache line is dead (i.e., will not be read again or will not be read until after it has been overwritten). In response to receiving an indication from software that data is dead, a cache controller prevents propagation of the dead data to other levels of memory in response to eviction of the dead data or flushing of the cache at which the dead data is stored.
Type: Grant
Filed: August 31, 2020
Date of Patent: December 13, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: Johnathan Alsop, Pouya Fotouhi, Bradford Beckmann, Sergey Blagodurov
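The dead-data mechanism can be modeled as a write-back cache with an extra per-line flag: a dirty line marked dead is simply dropped on eviction instead of written back. A minimal sketch (the dictionary-based cache and its interface are illustrative assumptions):

```python
class WriteBackCache:
    """Tiny write-back cache model where software can mark a line
    dead, so eviction skips the write-back to the next memory level."""
    def __init__(self, backing):
        self.backing = backing   # next level of the memory hierarchy
        self.lines = {}          # addr -> (value, dirty, dead)

    def write(self, addr, value):
        self.lines[addr] = (value, True, False)   # dirty, not dead

    def mark_dead(self, addr):
        """Software hint: this value will never be read again."""
        if addr in self.lines:
            value, dirty, _ = self.lines[addr]
            self.lines[addr] = (value, dirty, True)

    def evict(self, addr):
        value, dirty, dead = self.lines.pop(addr)
        if dirty and not dead:
            self.backing[addr] = value   # normal write-back
        # dead lines are dropped: no propagation to lower levels
```

Scratch buffers and spilled temporaries are the typical beneficiaries: they are written once, consumed, and never read again, so their write-back traffic is pure waste.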
-
Patent number: 11514015
Abstract: Techniques are disclosed relating to providing and using probabilistic data structures to at least reduce requests between database nodes. In various embodiments, a first database node processes a database transaction that involves writing a set of database records to an in-memory cache of the first database node. As part of processing the database transaction, the first database node may insert, in a set of probabilistic data structures, a set of database keys that correspond to the set of database records. The first database node may send, to a second database node, the set of probabilistic data structures to enable the second database node to determine whether to request, from the first database node, a database record associated with a database key.
Type: Grant
Filed: January 30, 2020
Date of Patent: November 29, 2022
Assignee: salesforce.com, inc.
Inventors: Atish Agrawal, Jameison Bear Martin
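The classic probabilistic structure for this job is a Bloom filter: it never reports a false negative, so a remote node that gets "not present" can safely skip the network round trip. A self-contained sketch (the bit count and hash count are illustrative; the patent does not specify a particular structure's parameters):

```python
import hashlib

class BloomFilter:
    """Compact probabilistic set membership: no false negatives,
    tunable false-positive rate. A second node can test keys locally
    before deciding whether to request the record over the network."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bitset stored as one big integer

    def _positions(self, key):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all(self.bits & (1 << pos) for pos in self._positions(key))
```

Shipping a kilobit filter instead of the full key set is what makes the request-reduction economical: the second node pays a tiny false-positive rate in exchange for skipping most remote lookups.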
-
Patent number: 11449444
Abstract: An address space field is used in conjunction with a normal address field to allow indication of an address space for the particular address value. In one instance, one address space value is used to indicate the bypassing of the address translation used between address spaces. A different address space value is designated for conventional operation, where address translations are performed. Other address space values are used to designate different transformations of the address values or the data. This technique provides a simplified format for handling address values and the like between different devices having different address spaces, simplifying overall computer system design and operation.
Type: Grant
Filed: September 3, 2019
Date of Patent: September 20, 2022
Assignee: Texas Instruments Incorporated
Inventors: Brian Karguth, Chuck Fuoco, Chunhua Hu, Todd Christopher Hiers
-
Patent number: 11409643
Abstract: Techniques for determining worst-case execution time for at least one application under test are disclosed using memory thrashing. Memory thrashing simulates shared resource interference. Memory that is thrashed includes mapped memory, and optionally shared cache memory.
Type: Grant
Filed: February 19, 2020
Date of Patent: August 9, 2022
Assignee: Honeywell International Inc
Inventors: Pavel Zaykov, Larry James Miller, Srivatsan Varadarajan
-
Patent number: 11397691
Abstract: A technique for accessing a memory having a high latency portion and a low latency portion is provided. The technique includes detecting a promotion trigger to promote data from the high latency portion to the low latency portion, in response to the promotion trigger, copying cache lines associated with the promotion trigger from the high latency portion to the low latency portion, and in response to a read request, providing data from either or both of the high latency portion or the low latency portion, based on a state associated with data in the high latency portion and the low latency portion.
Type: Grant
Filed: November 13, 2019
Date of Patent: July 26, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Apostolos Kokolis, Shrikanth Ganapathy
-
Patent number: 11380376
Abstract: An exemplary memory is configurable to operate in a low latency mode through use of a low latency register circuit to execute a read or write command, rather than performing a memory array access to execute the read or write command. A control circuit determines whether an access command should be performed using the low latency mode of operation (e.g., first mode of operation) or a normal mode of operation (e.g., second mode of operation). In some examples, a processor unit directs the memory to execute an access command using the low latency mode of operation via one or more bits (e.g., a low latency enable bit) included in the command and address information.
Type: Grant
Filed: August 26, 2020
Date of Patent: July 5, 2022
Assignee: Micron Technology, Inc.
Inventors: Yuan He, Daigo Toyama
-
Patent number: 11354454
Abstract: An apparatus and method of providing direct access to a non-volatile memory of a non-volatile memory device and detecting potential security violations are provided. A method for providing access to a non-volatile memory of a non-volatile memory device may include tracking a parameter related to a plurality of direct access transactions of the non-volatile memory. A threshold behavior pattern of the host activity may be determined based upon the tracked parameters. The direct access transactions may be reviewed to determine whether the threshold behavior pattern is exceeded.
Type: Grant
Filed: June 26, 2020
Date of Patent: June 7, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Alon Marcu, Ariel Navon, Shay Benisty
-
Patent number: 11354127
Abstract: A computing system includes a memory controller having a plurality of bypass parameters set by a software program, a thresholds matrix to store threshold values selectable by the plurality of bypass parameters, and a bypass function to determine whether a first cache line is to be displaced with a second cache line in a first memory or the first cache line remains in the first memory and the second cache line is to be accessed by at least one of a processor core and the cache from a second memory.
Type: Grant
Filed: July 13, 2020
Date of Patent: June 7, 2022
Assignee: INTEL CORPORATION
Inventors: Harshad S. Sane, Anup Mohan, Kshitij A. Doshi, Mark A. Schmisseur
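The displace-or-bypass decision described here can be sketched as a threshold lookup: software-set bypass parameters select a cell in the thresholds matrix, and the incoming line must clear that threshold to displace the resident line. The parameter/score semantics below are assumptions for illustration, not the patent's definitions:

```python
def bypass_decision(bypass_params, thresholds_matrix, candidate_score):
    """Select a threshold via software-set bypass parameters; return
    True to displace the resident line with the incoming one, False to
    bypass (serve the incoming line from the second memory instead)."""
    row, col = bypass_params                 # set by a software program
    threshold = thresholds_matrix[row][col]  # selectable threshold value
    return candidate_score >= threshold
```

Keeping the thresholds in a software-writable matrix lets the same hardware bypass function be tuned per workload without a redesign.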
-
Patent number: 11347748
Abstract: Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising receiving an event, such as a streaming event, from a client. The method determines that the event comprises a primary event and, if so, writes the primary event to a cache and returning the primary event to the client. The method later receives a second event from the client, the second event associated with the first event, annotates the second event based on the primary event, and returns the annotated second event to the client.
Type: Grant
Filed: May 22, 2020
Date of Patent: May 31, 2022
Assignee: YAHOO ASSETS LLC
Inventors: David Willcox, Maulik Shah, Allie K. Watfa, George Aleksandrovich
-
Patent number: 11321245
Abstract: A cache controller applies an aging policy to a portion of a cache based on access metrics for different test regions of the cache, whereby each test region implements a different aging policy. The aging policy for each region establishes an initial age value for each entry of the cache, and a particular aging policy can set the age for a given entry based on whether the entry was placed in the cache in response to a demand request from a processor core or in response to a prefetch request. The cache controller can use the age value of each entry as a criterion in its cache replacement policy.
Type: Grant
Filed: November 12, 2019
Date of Patent: May 3, 2022
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul Moyer
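This is the set-dueling pattern: two small test regions each run a different insertion-age policy, and whichever collects more hits dictates the policy for the rest of the cache. A minimal sketch (the two-region setup, the age values, and the prefetch penalty are illustrative assumptions):

```python
class DuelingCacheAger:
    """Two test regions ('young' and 'old' insertion age) compete on
    hit counts; the winner sets the insertion age for the main region.
    Prefetched lines start older so unused prefetches age out first."""
    def __init__(self):
        self.hits = {"young": 0, "old": 0}

    def record_hit(self, region):
        """Count a hit observed in one of the test regions."""
        if region in self.hits:
            self.hits[region] += 1

    def insertion_age(self, is_prefetch):
        """Initial age for a new entry in the main region
        (lower age = survives longer under age-based replacement)."""
        winner = max(self.hits, key=self.hits.get)  # ties favor 'young'
        base = 0 if winner == "young" else 2
        return base + (1 if is_prefetch else 0)
```

Because the test regions keep measuring continuously, the main region's policy tracks phase changes in the workload instead of being fixed at design time.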
-
Patent number: 11262942
Abstract: The present disclosure relates to the field of solid-state data storage, and particularly to improving the speed performance and reducing the cost of solid-state data storage devices. A host-managed data storage system according to embodiments includes a set of storage devices, each storage device including a write buffer and memory; and a host coupled to the set of storage devices, the host including: a storage device management module for managing data storage functions for each storage device; memory including: a front-end write buffer; a first mapping table for data stored in the front-end write buffer; and a second mapping table for data stored in the memory of each storage device.
Type: Grant
Filed: July 12, 2019
Date of Patent: March 1, 2022
Assignee: SCALEFLUX, INC.
Inventors: Qi Wu, Wentao Wu, Thad Omura, Yang Liu, Tong Zhang
-
Patent number: 11157303
Abstract: A processor may include a register to store a bus-lock-disable bit and an execution unit to execute instructions. The execution unit may receive an instruction that includes a memory access request. The execution unit may further determine that the memory access request requires acquiring a bus lock, and, responsive to detecting that the bus-lock-disable bit indicates that bus locks are disabled, signal a fault to an operating system.
Type: Grant
Filed: August 29, 2019
Date of Patent: October 26, 2021
Assignee: Intel Corporation
Inventors: Vedvyas Shanbhogue, Gilbert Neiger, Arumugam Thiyagarajah
-
Patent number: 11132143
Abstract: A storage device includes a nonvolatile memory device that includes a plurality of memory blocks, and a controller that uses some memory blocks of the plurality of memory blocks as a buffer area. Memory blocks storing invalid data from among the some memory blocks are invalid memory blocks, and the controller identifies memory blocks, of which an elapsed time after erase is greater than a reuse time, from among the invalid memory blocks as an available buffer size, and provides the available buffer size to an external host device.
Type: Grant
Filed: September 20, 2019
Date of Patent: September 28, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Young Joon Jang, Chun-Um Kong, Ohchul Kwon, Junki Kim, Hyung-Kyun Byun
-
Patent number: 11068538
Abstract: Techniques herein are for navigation data structures for graph traversal. In an embodiment, navigation data structures that a computer stores include: a source vertex array of vertices; a neighbor array of dense identifiers of target vertices terminating edges; a bidirectional map associating, for each vertex, a sparse identifier of the vertex with a dense identifier of the vertex; and a vertex array containing, when a dense identifier of a source vertex is used as an offset, a pair of offsets defining an offset range, for use with the neighbor array. The source vertex array, using the dense identifier of a particular vertex as an offset, contains an offset, into a neighbor array, of a target vertex terminating an edge originating at the particular vertex. The neighbor array contiguously stores dense identifiers of target vertices terminating edges originating from a same source vertex.
Type: Grant
Filed: February 1, 2019
Date of Patent: July 20, 2021
Assignee: Oracle International Corporation
Inventors: Michael Haubenschild, Sungpack Hong, Hassan Chafi, Korbinian Schmid, Martin Sevenich, Alexander Weld
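The structures described here amount to a compressed sparse row (CSR) graph layout with a sparse-to-dense identifier map. A minimal construction sketch (the list-based builder is illustrative; a real implementation would use flat integer arrays):

```python
def build_csr(edges):
    """Build CSR navigation structures from (source, target) edge pairs:
    a bidirectional sparse<->dense id map, an offsets (source vertex)
    array, and a contiguous neighbor array of dense target ids."""
    sparse_ids = sorted({v for e in edges for v in e})
    to_dense = {s: d for d, s in enumerate(sparse_ids)}  # sparse -> dense
    # sparse_ids doubles as the dense -> sparse direction of the map.
    buckets = [[] for _ in sparse_ids]
    for src, dst in edges:
        buckets[to_dense[src]].append(to_dense[dst])
    offsets, neighbors = [0], []
    for b in buckets:
        neighbors.extend(b)          # edges of one source stay contiguous
        offsets.append(len(neighbors))
    return to_dense, sparse_ids, offsets, neighbors

def out_neighbors(vertex, to_dense, offsets, neighbors):
    """Dense ids of targets of edges originating at a sparse vertex id:
    the dense id indexes a pair of offsets bounding its neighbor range."""
    d = to_dense[vertex]
    return neighbors[offsets[d]:offsets[d + 1]]
```

Traversal then touches two contiguous arrays instead of chasing pointers, which is why this layout is the standard choice for in-memory graph engines.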
-
Patent number: 10976935
Abstract: A method and apparatus for assigning an allocated workload in a data center having multiple storage systems includes selecting one or more storage systems to be assigned the allocated workload based on a combination of performance impact scores and deployment scores. By considering both performance impact and deployment effort, the allocated workload is able to be assigned with a view not only toward storage system performance, but also with a view toward how deployment on a particular storage system would comply with data center policies and the amount of configuration effort it would take to enable the workload to be implemented on the target storage system. This enables workloads to be allocated within the data center while minimizing the required amount of configuration or reconfiguration required to implement the workload allocation within the data center.
Type: Grant
Filed: February 11, 2020
Date of Patent: April 13, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Jason McCarthy, Girish Warrier, Rongnong Zhou
-
Patent number: 10924448
Abstract: A method for retrieving content on a network comprising a first device and a second device is described. The method includes receiving in the network a request for content from the first device, the request identifying the content using an IPv6 address for the content, and determining whether the content is stored in a cache of the second device. Upon determining the content is stored in the cache of the second device, a request is sent to the second device for the content using the IPv6 address of the content. The content is forwarded to the first device from the second device, wherein the first and second devices are part of the same layer 2 domain. Methods of injecting content to a home network and packaging content are also described.
Type: Grant
Filed: April 17, 2017
Date of Patent: February 16, 2021
Assignee: CISCO TECHNOLOGY, INC.
Inventors: David Ward, William Mark Townsley, Andre Surcouf
-
Patent number: 10895597
Abstract: Systems, apparatuses, and methods for implementing debug features on a secure coprocessor to handle communication and computation between a debug tool and a debug target are disclosed. A debug tool generates a graphical user interface (GUI) to display debug information to a user for help in debugging a debug target such as a system on chip (SoC). A secure coprocessor is embedded on the debug target, and the secure coprocessor receives debug requests generated by the debug tool. The secure coprocessor performs various computation tasks and/or other operations to prevent multiple round-trip messages being sent back and forth between the debug tool and the debug target. The secure coprocessor is able to access system memory and determine a status of a processor being tested even when the processor becomes unresponsive.
Type: Grant
Filed: November 21, 2018
Date of Patent: January 19, 2021
Assignee: Advanced Micro Devices, Inc.
Inventors: Tan Peng, Dong Zhu
-
Patent number: 10884924
Abstract: A storage system receives a write request which specifies a logical volume address associated with a RAID group, and makes a first determination whether write target data in accordance with the write request exists in a cache memory. When the first determination result is negative, the storage system makes a second determination whether at least one of one or more conditions is met, the condition being that random write throughput performance is expected to increase by asynchronous de-staging processing of storing the write target data in the RAID group asynchronously to write processing performed in response to the write request. When the second determination result is negative, the storage system selects, for the write request, synchronous storage processing, which is processing of storing the write target data in the RAID group in the write processing and for which a load on a processor is lower than the asynchronous de-staging processing.
Type: Grant
Filed: March 4, 2015
Date of Patent: January 5, 2021
Assignee: HITACHI, LTD.
Inventors: Shintaro Ito, Akira Yamamoto, Ryosuke Tatsumi, Takanobu Suzuki
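The two-determination flow above can be summarized as a small decision sketch; the function and return-value names below are illustrative assumptions, not the patent's terminology.

```python
def choose_write_path(hit_in_cache, async_throughput_condition_met):
    """Pick the write-handling path for one write request.

    hit_in_cache: first determination -- the write target data is
        already in the cache memory.
    async_throughput_condition_met: second determination -- at least
        one condition holds under which asynchronous de-staging would
        raise random-write throughput.
    """
    if hit_in_cache:
        return "cache-hit write processing"
    if async_throughput_condition_met:
        # Buffer in cache now; de-stage to the RAID group later.
        return "asynchronous de-staging"
    # Store to the RAID group within the write processing itself,
    # which loads the processor less than asynchronous de-staging.
    return "synchronous storage processing"
```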
-
Patent number: 10853522
Abstract: A communications device has a first communications port via which secure messages are received, and a second communications port via which non-secure messages are received. In response to detecting that a secure message has been received, the device determines whether the second communications port is in a state that enables non-secure messages to be received. If the second communications port is in the enabled state, the device autonomously disables the second communications port to preclude non-secure messages received at that port from being processed.
Type: Grant
Filed: June 6, 2018
Date of Patent: December 1, 2020
Assignee: ITRON NETWORKED SOLUTIONS, INC.
Inventors: Thomas Luecke, Nelson Bolyard, Winston Lew
-
Patent number: 10747675
Abstract: Embodiments of the present disclosure generally relate to a method and device for managing caches. In particular, the method may include in response to receiving a request to write data to the cache, determining the amount of data to be written. The method may further include in response to the amount of the data exceeding a threshold amount, skipping writing data to the cache and writing the data to a lower level storage of the cache. Corresponding systems, apparatus and computer program products are also provided.
Type: Grant
Filed: September 20, 2017
Date of Patent: August 18, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Lester Zhang, Denny Dengyu Wang, Chen Gong, Geng Han, Joe Liu, Leon Zhang
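This size-triggered bypass is the core pattern of the class: large writes skip the cache and go straight to lower-level storage. A minimal sketch, assuming a write-through cache and an arbitrary threshold; the class and attribute names are illustrative, not from the patent.

```python
class BypassingCache:
    def __init__(self, backing, threshold):
        self.backing = backing      # lower-level storage: dict of key -> bytes
        self.cache = {}             # upper-level cache
        self.threshold = threshold  # writes larger than this skip the cache

    def write(self, key, data):
        if len(data) > self.threshold:
            # Large write: bypass the cache and write to lower-level
            # storage directly, so bulk data cannot evict hot entries.
            self.backing[key] = data
            self.cache.pop(key, None)  # drop any stale cached copy
        else:
            self.cache[key] = data
            self.backing[key] = data   # write-through for simplicity

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        data = self.backing[key]
        self.cache[key] = data  # populate cache on read miss
        return data
```

A read of bypassed data still works: it misses the cache, is fetched from the backing store, and is cached from then on.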
-
Patent number: 10725688
Abstract: A memory system includes a memory controller, a first memory module including first and second groups of first memory chips, a second memory module including first and second groups of second memory chips, and a channel including a first group of signal lines suitable for coupling the memory controller with the first memory module, and a second group of signal lines suitable for coupling the memory controller with the second memory module.
Type: Grant
Filed: April 3, 2018
Date of Patent: July 28, 2020
Assignee: SK hynix Inc.
Inventors: Jae-Han Park, Hyun-Woo Kwack
-
Patent number: 10509587
Abstract: A coordination point for assigning clients to remote backup storages includes a persistent storage and a processor. The persistent storage stores gateway pool cache capacities of the remote backup storages. The processor obtains a data storage request for data from a client of the clients; obtains an indirect cache estimate for servicing the data storage request; selects a remote backup storage of the remote backup storages based on the obtained indirect cache estimate using the gateway pool cache capacities; and assigns the selected remote backup storage to service the data storage request. The selected remote backup storage has a higher client load at the time of selection than a second client load of a second remote backup storage of the remote backup storages.
Type: Grant
Filed: April 24, 2018
Date of Patent: December 17, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Shelesh Chopra, Gururaj Kulkarni
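Note that the selection is driven by the cache estimate and capacities rather than by client load (the chosen storage may be the more loaded one). A hypothetical sketch of one such rule, tightest capacity fit, which is an assumption about how "based on the obtained indirect cache estimate" could work:

```python
def select_backup_storage(cache_capacities, indirect_cache_estimate):
    """Pick a remote backup storage for a data storage request.

    cache_capacities: dict mapping storage name -> free gateway pool
        cache capacity (same units as the estimate).
    indirect_cache_estimate: cache needed to service the request.

    Returns the storage that can hold the estimate with the least
    spare capacity left over, regardless of current client load.
    """
    candidates = {name: cap for name, cap in cache_capacities.items()
                  if cap >= indirect_cache_estimate}
    if not candidates:
        raise RuntimeError("no remote backup storage can service the request")
    return min(candidates, key=candidates.get)
```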
-
Patent number: 10474375
Abstract: An integrated circuit includes a processor to execute instructions and to interact with memory, and acceleration hardware, to execute a sub-program corresponding to instructions. A set of input queues includes a store address queue to receive, from the acceleration hardware, a first address of the memory, the first address associated with a store operation and a store data queue to receive, from the acceleration hardware, first data to be stored at the first address of the memory. The set of input queues also includes a completion queue to buffer response data for a load operation. A disambiguator circuit, coupled to the set of input queues and the memory, is to, responsive to determining the load operation, which succeeds the store operation, has an address conflict with the first address, copy the first data from the store data queue into the completion queue for the load operation.
Type: Grant
Filed: December 30, 2016
Date of Patent: November 12, 2019
Assignee: Intel Corporation
Inventors: Kermin Elliott Fleming, Jr., Simon C. Steely, Jr., Kent D. Glossop
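The disambiguation step is essentially store-to-load forwarding: a younger load that conflicts with a pending store takes its data from the store data queue instead of memory. A rough Python sketch of that behavior; the queue names mirror the abstract, but the logic is an illustrative simplification, not the patented circuit.

```python
from collections import deque

class InputQueues:
    def __init__(self):
        self.store_addr_q = deque()   # addresses from acceleration hardware
        self.store_data_q = deque()   # data to be stored at those addresses
        self.completion_q = deque()   # buffered response data for loads

    def enqueue_store(self, addr, data):
        self.store_addr_q.append(addr)
        self.store_data_q.append(data)

    def issue_load(self, addr, memory):
        # Disambiguation: if a pending (older) store targets the same
        # address, copy its data from the store data queue into the
        # completion queue rather than reading stale memory. Scan
        # youngest-first so the most recent store wins.
        for pending_addr, data in zip(reversed(self.store_addr_q),
                                      reversed(self.store_data_q)):
            if pending_addr == addr:
                self.completion_q.append(data)
                return
        self.completion_q.append(memory.get(addr))
```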
-
Patent number: 10474615
Abstract: A hub including a first connection interface, a second connection interface, and a signal bypass circuit is provided. The first connection interface has a first pin to receive a first connection message. The second connection interface has a second pin to transmit the first connection message. The signal bypass circuit is coupled to the first pin and the second pin to decide whether to bypass the first pin and the second pin based on the first connection message.
Type: Grant
Filed: January 10, 2018
Date of Patent: November 12, 2019
Assignee: Nuvoton Technology Corporation
Inventors: Shih-Hsuan Yen, Chao-Chiuan Hsu
-
Patent number: 10467138
Abstract: A processing system includes a first socket, a second socket, and an interface between the first socket and the second socket. A first memory is associated with the first socket and a second memory is associated with the second socket. The processing system also includes a controller for the first memory. The controller is to receive a first request for a first memory transaction with the second memory and perform the first memory transaction along a path that includes the interface and bypasses at least one second cache associated with the second memory.
Type: Grant
Filed: December 28, 2015
Date of Patent: November 5, 2019
Assignee: Advanced Micro Devices, Inc.
Inventors: Paul Blinzer, Ali Ibrahim, Benjamin T. Sander, Vydhyanathan Kalyanasundharam
-
Patent number: 10402327
Abstract: A non-uniform memory access system includes several nodes that each have one or more processors, caches, local main memory, and a local bus that connects a node's processor(s) to its memory. The nodes are coupled to one another over a collection of point-to-point interconnects, thereby permitting processors in one node to access data stored in another node. Memory access time for remote memory takes longer than local memory because remote memory accesses have to travel across a communications network to arrive at the requesting processor. In some embodiments, inter-cache and main-memory-to-cache latencies are measured to determine whether it would be more efficient to satisfy memory access requests using cached copies stored in caches of owning nodes or from main memory of home nodes.
Type: Grant
Filed: November 22, 2016
Date of Patent: September 3, 2019
Assignee: Advanced Micro Devices, Inc.
Inventors: David A. Roberts, Ehsan Fatehi
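The last sentence reduces to a latency comparison per remote request. A minimal sketch of that decision, assuming the latencies have already been measured elsewhere; all names are illustrative, not the patent's terms.

```python
def pick_data_source(inter_cache_latency_ns, main_memory_latency_ns):
    """Decide where to satisfy a remote memory access request.

    inter_cache_latency_ns: measured latency to fetch the cached copy
        from the owning node's cache.
    main_memory_latency_ns: measured latency to fetch the line from
        the home node's main memory.
    """
    if inter_cache_latency_ns < main_memory_latency_ns:
        return "owning-node cache"
    return "home-node main memory"
```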
-
Patent number: 10402218
Abstract: A processor may include a register to store a bus-lock-disable bit and an execution unit to execute instructions. The execution unit may receive an instruction that includes a memory access request. The execution unit may further determine that the memory access request requires acquiring a bus lock, and, responsive to detecting that the bus-lock-disable bit indicates that bus locks are disabled, signal a fault to an operating system.
Type: Grant
Filed: August 30, 2016
Date of Patent: September 3, 2019
Assignee: Intel Corporation
Inventors: Vedvyas Shanbhogue, Gilbert Neiger, Arumugam Thiyagarajah
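A tiny model of the described fault condition; the bit position, register name, and return values are illustrative assumptions, not Intel's architectural definitions.

```python
BUS_LOCK_DISABLE = 1 << 0  # assumed bit position in the control register

def execute_locked_access(control_reg, needs_bus_lock):
    """Model one memory-access instruction.

    Returns "fault" when the access would acquire a bus lock while the
    bus-lock-disable bit is set, letting the operating system intervene;
    otherwise the access proceeds.
    """
    if needs_bus_lock and (control_reg & BUS_LOCK_DISABLE):
        return "fault"  # signaled to the operating system
    return "executed"
```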