Parallel Caches Patents (Class 711/120)
-
Patent number: 12242856
Abstract: A data processor comprising an execution engine 51 for executing programs for execution threads and one or more caches 48, 49 operable to store data values for use when executing program instructions to perform processing operations for execution threads. The data processor further comprises a thread throttling control unit 54 configured to monitor the operation of the caches 48, 49 during execution of programs for execution threads, and to control the issuing of instructions for execution threads to the execution engine for executing a program based on the monitoring of the operation of the caches during execution of the program.
Type: Grant
Filed: March 24, 2022
Date of Patent: March 4, 2025
Assignee: Arm Limited
Inventors: Tord Kvestad Øygard, Olof Henrik Uhrenholt, Andreas Due Engh-Halstvedt
-
Patent number: 12242891
Abstract: One example method includes determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site, invoking a client agent, in response to invoking the client agent, receiving an execution manifest, determining, by the client agent, where to execute the function, wherein the determining comprises identifying a target execution environment for the function and the determining is based in part on information contained in the execution manifest, and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment, and the execution manifest facilitates execution of the function in the target execution environment.
Type: Grant
Filed: July 22, 2021
Date of Patent: March 4, 2025
Assignee: EMC IP Holding Company LLC
Inventors: Amy N. Seibel, Victor Fong, Eric Bruno
-
Patent number: 12093717
Abstract: A system and method include classifying and assigning virtual disks accessed from compute only nodes. The method determines, by a management processor of a virtual computing system, characteristics for a plurality of virtual disks hosted on a plurality of hyper converged nodes in a cluster of nodes in the virtual computing system. The method further classifies, by the management processor, each of the plurality of virtual disks based on the determined characteristics and identifies, by the management processor, one of the plurality of virtual disks to host data for a virtual machine on a compute only node based on the classification to spread out input-output demand in the cluster, reducing probability of input-output bottlenecks and increasing cluster-wide storage throughput. The method also assigns, by the management processor, the identified virtual disk to host data for the virtual machine located on the compute only node.
Type: Grant
Filed: October 13, 2021
Date of Patent: September 17, 2024
Assignee: Nutanix, Inc.
Inventors: Aditya Ramesh, Ashwin Thennaram Vakkayil, Gaurav Poothia, Gokul Kannan, Hemanth Kumar Mantri, Kamalneet Singh, Robert Schwenz
-
Patent number: 12072807
Abstract: Disclosed is a dynamic random access memory that has columns, data rows, tag rows and comparators. Each comparator compares address bits and tag information bits from the tag rows to determine a cache hit and generate address bits to access data information in the DRAM as a multiway set associative cache.
Type: Grant
Filed: May 31, 2019
Date of Patent: August 27, 2024
Assignee: RAMBUS INC.
Inventors: Thomas Vogelsang, Frederick A. Ware, Michael Raymond Miller, Collins Williams
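As a rough software model of the tag-row/comparator arrangement described above, the sketch below splits an address into tag, set index, and offset bits and compares the tag against every way of the selected set in turn. The field widths and the `DramCache` class are illustrative assumptions, not details from the patent.

```python
# Minimal model of a multiway set-associative lookup: tag bits derived from
# the address are compared against the stored tags to detect a cache hit.
class DramCache:
    def __init__(self, num_sets=1024, ways=4, line_bytes=64):
        self.num_sets = num_sets
        self.ways = ways
        self.line_bytes = line_bytes
        # Per-set list of (valid, tag) pairs standing in for the tag rows.
        self.tags = [[(False, 0)] * ways for _ in range(num_sets)]

    def split(self, addr):
        offset = addr % self.line_bytes
        set_idx = (addr // self.line_bytes) % self.num_sets
        tag = addr // (self.line_bytes * self.num_sets)
        return tag, set_idx, offset

    def lookup(self, addr):
        tag, set_idx, _ = self.split(addr)
        # Hardware comparators would check every way of the set at once.
        for way, (valid, stored_tag) in enumerate(self.tags[set_idx]):
            if valid and stored_tag == tag:
                return ('hit', way)
        return ('miss', None)

cache = DramCache()
cache.tags[0][1] = (True, 5)      # preload way 1 of set 0 with tag 5
addr = 5 * (64 * 1024)            # tag 5, set 0, offset 0
print(cache.lookup(addr))         # ('hit', 1)
```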
-
Patent number: 12019547
Abstract: A technical solution to the technical problem of how to improve dispatch throughput for memory-centric commands bypasses address checking for certain memory-centric commands. Implementations include using an Address Check Bypass (ACB) bit to specify whether address checking should be performed for a memory-centric command. ACB bit values are specified in memory-centric instructions, automatically specified by a process, such as a compiler, or by host hardware, such as dispatch hardware, based upon whether a memory-centric command explicitly references memory. Implementations include bypassing, i.e., not performing, address checking for memory-centric commands that do not access memory and also for memory-centric commands that do access memory, but that have the same physical address as a prior memory-centric command that explicitly accessed memory to ensure that any data in caches was flushed to memory and/or invalidated.
Type: Grant
Filed: July 27, 2021
Date of Patent: June 25, 2024
Inventors: Jagadish B. Kotra, John Kalamatianos, Gagandeep Panwar
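A minimal sketch of the dispatch-side decision the abstract describes: skip the address check when a command's ACB bit is set, when the command touches no memory, or when it targets the same physical address as the last checked command. The command dictionary layout and function names here are hypothetical.

```python
# Dispatch logic sketch: address checking is bypassed according to the ACB
# bit or when the physical address matches the previously checked command.
last_checked_addr = None

def address_check(addr):
    # Placeholder for flushing/invalidating cached copies of addr.
    print(f"address check performed for {addr:#x}")

def dispatch(command):
    global last_checked_addr
    acb = command.get('acb', False)   # Address Check Bypass bit
    addr = command.get('addr')        # None if the command touches no memory
    if acb or addr is None or addr == last_checked_addr:
        pass                          # bypass: no address check performed
    else:
        address_check(addr)
        last_checked_addr = addr
    print("dispatched", command['op'])

dispatch({'op': 'pim_add', 'addr': 0x1000})                # checked
dispatch({'op': 'pim_add', 'addr': 0x1000, 'acb': True})   # bypassed
```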
-
Patent number: 12007894
Abstract: A battery management apparatus according to an embodiment of the present disclosure includes a processor including a plurality of cores respectively provided with a cache memory and configured to set a core storing a record-target data in the cache memory thereof among the plurality of cores as a main core and set a core other than the main core among the plurality of cores as a sub core; and a main memory configured to store the record-target data by the main core, wherein the main core is configured to block an authority of the sub core to access the main memory while the record-target data is being recorded in the main memory, and endow an authority to access the main memory to the sub core after the record-target data is recorded in the main memory.
Type: Grant
Filed: September 17, 2020
Date of Patent: June 11, 2024
Assignee: LG ENERGY SOLUTION, LTD.
Inventors: Jae-Yeon Choi, Jong-Shik Baek
-
Patent number: 11995463
Abstract: A system to support a machine learning (ML) operation comprises an array-based inference engine comprising a plurality of processing tiles each comprising at least one or more of an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform one or more computation tasks on the data in the OCM by executing a set of task instructions. The system also comprises a data streaming engine configured to stream data between a memory and the OCMs and an instruction streaming engine configured to distribute said set of task instructions to the corresponding processing tiles to control their operations and to synchronize said set of task instructions to be executed by each processing tile, respectively, waiting for the current task at each processing tile to finish before starting a new one.
Type: Grant
Filed: April 22, 2021
Date of Patent: May 28, 2024
Assignee: Marvell Asia Pte Ltd
Inventors: Avinash Sodani, Senad Durakovic, Gopal Nalamalapu
-
Patent number: 11966590
Abstract: A persistent memory device is disclosed. The persistent memory device may include a cache coherent interconnect interface. The persistent memory device may include a volatile storage and a non-volatile storage. The volatile storage may include at least a first area and a second area. A backup power source may be configured to provide backup power selectively to the second area of the volatile storage. A controller may control the volatile storage and the non-volatile storage. The persistent memory device may use the backup power source while transferring a data from the second area of the volatile storage to the non-volatile storage based at least in part on a loss of a primary power for the persistent memory device.
Type: Grant
Filed: July 5, 2022
Date of Patent: April 23, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yang Seok Ki, Chanik Park, Sungwook Ryu
-
Patent number: 11960749
Abstract: A host of a storage system is coupled to multiple SSDs. Each SSD is configured with a migration cache, and each SSD corresponds to one piece of access information. The host obtains migration data information of to-be-migrated data in a source SSD, determines a target SSD, and sends a read instruction carrying information about the to-be-migrated data and the target SSD to the source SSD. The source SSD reads a data block according to the read instruction from a flash memory of the source SSD into a migration cache of the target SSD. After the read instruction is completed by the source SSD, the host sends a write instruction to the target SSD to instruct the target SSD to write the data block in the cache of the target SSD to a flash memory of the target SSD.
Type: Grant
Filed: May 8, 2023
Date of Patent: April 16, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Ge Du, Yu Hu, Jiancen Hou
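The host-orchestrated flow reduces to two steps, read-into-migration-cache then write-to-flash. The sketch below models it with an `SSD` class whose shape is an illustrative stand-in, not the patent's actual command interface.

```python
# Host-side sketch: stage the block from the source SSD's flash into the
# target SSD's migration cache, then commit it into the target's flash.
class SSD:
    def __init__(self, name):
        self.name = name
        self.flash = {}            # logical block address -> data
        self.migration_cache = {}  # staging area for inbound migration

def migrate(host_map, source, target, lba):
    # Step 1: the read instruction names the target; the source moves the
    # block straight into the target SSD's migration cache.
    target.migration_cache[lba] = source.flash[lba]
    # Step 2: after the read completes, the host's write instruction tells
    # the target to persist the staged block into its own flash.
    target.flash[lba] = target.migration_cache.pop(lba)
    host_map[lba] = target.name

src, dst = SSD('ssd0'), SSD('ssd1')
src.flash[42] = b'block-data'
placement = {42: 'ssd0'}
migrate(placement, src, dst, 42)
print(placement, dst.flash)   # {42: 'ssd1'} {42: b'block-data'}
```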
-
Patent number: 11934863
Abstract: A system to support a machine learning (ML) operation comprises an array-based inference engine comprising a plurality of processing tiles each comprising at least one or more of an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform one or more computation tasks on the data in the OCM by executing a set of task instructions. The system also comprises a data streaming engine configured to stream data between a memory and the OCMs and an instruction streaming engine configured to distribute said set of task instructions to the corresponding processing tiles to control their operations and to synchronize said set of task instructions to be executed by each processing tile, respectively, waiting for the current task at each processing tile to finish before starting a new one.
Type: Grant
Filed: April 22, 2021
Date of Patent: March 19, 2024
Assignee: Marvell Asia Pte Ltd
Inventors: Avinash Sodani, Senad Durakovic, Gopal Nalamalapu
-
Patent number: 11868627
Abstract: A method for operating a processing unit. The processing unit addresses virtual memory areas in order to access a RAM memory unit, these individual virtual memory areas respectively being mapped onto a physical memory area of the RAM memory unit. A check of the RAM memory unit for errors is performed. If, in the course of this check, a physical memory area of the RAM memory unit is determined to be faulty, this physical memory area is designated as faulty. A check is performed to determine whether a free physical memory area exists in the RAM memory unit onto which no virtual memory area is mapped and which is not designated as faulty. If such a free physical memory area exists, the virtual memory area that is currently mapped onto the physical memory area recognized as faulty is henceforth mapped onto this free physical memory area.
Type: Grant
Filed: March 30, 2021
Date of Patent: January 9, 2024
Assignee: ROBERT BOSCH GMBH
Inventors: Jens Breitbart, Sebastian Hoffmann
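A compact sketch of the remapping step: mark the failed physical area, find a free non-faulty area, and retarget any virtual area that pointed at the failed one. The set/dict representation is an assumption made for illustration.

```python
# Remap a virtual memory area away from a physical area found to be faulty.
def remap_faulty(page_table, faulty_set, free_set, failed_phys):
    faulty_set.add(failed_phys)        # designate the area as faulty
    free_set.discard(failed_phys)
    candidates = free_set - faulty_set # free areas with no mapping, not faulty
    if not candidates:
        return False                   # no spare physical area available
    replacement = candidates.pop()
    free_set.discard(replacement)
    for virt, phys in page_table.items():
        if phys == failed_phys:
            page_table[virt] = replacement  # henceforth map onto the spare
    return True

table = {'v0': 'p0', 'v1': 'p1'}
ok = remap_faulty(table, faulty_set=set(), free_set={'p2'}, failed_phys='p1')
print(ok, table)   # True {'v0': 'p0', 'v1': 'p2'}
```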
-
Patent number: 11848817
Abstract: Techniques discussed herein relate to updating an edge device (e.g., a computing device distinct from and operating remotely with respect to a data center). The edge device can execute a first operating system (OS). A manifest specifying files of a second OS to be provisioned to the edge device may be obtained. The manifest may further specify a set of services to be provisioned at the edge device. One or more data files corresponding to a difference between a first set of data files associated with the first OS and a second set of data files associated with the second OS may be identified. A snapshot of the first OS may be generated, modified, and stored in memory of the edge device to configure the edge device with the second OS. The booting order of the edge device may be modified to boot utilizing the second OS.
Type: Grant
Filed: January 31, 2022
Date of Patent: December 19, 2023
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Jonathon David Nelson, David Dale Becker
-
Patent number: 11783032
Abstract: Disclosed herein are systems and methods for identifying and mitigating Flush-based cache attacks. The systems and methods can include adding a zombie bit to a cache line. The zombie bit can be used to track the status of cache hits and misses to the flushed line. A line that is invalidated due to a Flush-Caused Invalidation can be marked as a zombie line by marking the zombie bit as valid. If another hit, or access request, is made to the cache line, data retrieved from memory can be analyzed to determine if the hit is benign or is a potential attack. If the retrieved data is the same as the cache data, then the line can be marked as a valid zombie line. Any subsequent hit to the valid zombie line can be marked as a potential attack. Hardware- and software-based mitigation protocols are also described.
Type: Grant
Filed: September 17, 2019
Date of Patent: October 10, 2023
Assignee: Georgia Tech Research Corporation
Inventor: Moinuddin Qureshi
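The zombie-bit state machine can be modeled in a few lines: a flush-caused invalidation sets the zombie bit, a later access that returns memory data matching the line's old contents revalidates it as a valid zombie line, and any further hit on a valid zombie line is flagged. The `Line` class below is a toy model, not hardware-accurate.

```python
# Toy model of the zombie-bit scheme for detecting Flush-based cache attacks.
class Line:
    def __init__(self):
        self.valid = False
        self.zombie = False
        self.data = None

def flush(line):
    line.valid = False
    line.zombie = True      # Flush-Caused Invalidation: mark as zombie

def access(line, memory_data):
    if line.zombie:
        if line.valid:
            return 'potential attack'   # hit on a valid zombie line
        if memory_data == line.data:
            line.valid = True           # benign: mark as valid zombie line
            return 'benign refill'
    line.data = memory_data
    line.valid = True
    return 'normal fill'

ln = Line()
access(ln, b'secret')           # normal fill
flush(ln)
print(access(ln, b'secret'))    # benign refill
print(access(ln, b'secret'))    # potential attack
```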
-
Patent number: 11782848
Abstract: Systems, apparatuses, and methods for implementing a speculative probe mechanism are disclosed. A system includes at least multiple processing nodes, a probe filter, and a coherent slave. The coherent slave includes an early probe cache to cache recent lookups to the probe filter. The early probe cache includes entries for regions of memory, wherein a region includes a plurality of cache lines. The coherent slave performs parallel lookups to the probe filter and the early probe cache responsive to receiving a memory request. An early probe is sent to a first processing node responsive to determining that a lookup to the early probe cache hits on a first entry identifying the first processing node as an owner of a first region targeted by the memory request and responsive to determining that a confidence indicator of the first entry is greater than a threshold.
Type: Grant
Filed: September 14, 2020
Date of Patent: October 10, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Amit P. Apte, Ganesh Balakrishnan, Vydhyanathan Kalyanasundharam, Kevin M. Lepak
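One way to picture the parallel lookup: consult a region-granular early probe cache alongside the precise probe filter, and fire an early probe only when the entry's confidence exceeds a threshold. The region shift, threshold value, and dictionary structures below are assumptions, not the patent's parameters.

```python
# Sketch of the speculative early-probe path next to the precise probe filter.
REGION_SHIFT = 12        # a region spans many cache lines
CONF_THRESHOLD = 2

def send_early_probe(owner, region):
    print(f"early probe -> node {owner} for region {region:#x}")

def handle_request(addr, early_probe_cache, probe_filter):
    region = addr >> REGION_SHIFT
    entry = early_probe_cache.get(region)   # fast, speculative lookup
    precise = probe_filter.get(addr)        # precise lookup, in parallel
    if entry and entry['confidence'] > CONF_THRESHOLD:
        send_early_probe(entry['owner'], region)
    return precise

epc = {0x1: {'owner': 0, 'confidence': 3}}
pf = {0x1234: {'owner': 0}}
handle_request(0x1234, epc, pf)   # confidence 3 > 2, so an early probe fires
```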
-
Patent number: 11768773
Abstract: Provided are I/O request type specific cache directories in accordance with the present description. In one embodiment, by limiting track entries of a cache directory to a specific I/O request type, the size of the cache directory may be reduced as compared to general cache directories for I/O requests of all types, for example. As a result, look-up operations directed to such smaller size I/O request type specific cache directories may be completed in each directory more quickly. In addition, look-ups may frequently be successfully completed after a look-up of a single I/O request type specific cache directory, improving the speed of cache look-ups and providing a significant improvement in system performance. Other aspects and advantages are provided, depending upon the particular application.
Type: Grant
Filed: March 3, 2020
Date of Patent: September 26, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Gail Spear, Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson
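A sketch of routing look-ups to a per-request-type directory first, falling back to the other directories only on a miss. The `TypedDirectories` class and the request-type names are hypothetical.

```python
# Per-I/O-request-type cache directories: each directory is smaller than a
# single general directory, so a typical look-up searches one small structure.
class TypedDirectories:
    def __init__(self, request_types):
        self.dirs = {t: {} for t in request_types}  # one directory per type

    def insert(self, req_type, track, entry):
        self.dirs[req_type][track] = entry

    def lookup(self, req_type, track):
        hit = self.dirs[req_type].get(track)
        if hit is not None:
            return hit                   # completed after a single directory
        for t, d in self.dirs.items():   # fall back to the other directories
            if t != req_type and track in d:
                return d[track]
        return None

cd = TypedDirectories(['read', 'write', 'prestage'])
cd.insert('write', track=7, entry={'modified': True})
print(cd.lookup('write', 7))   # {'modified': True}
```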
-
Patent number: 11601376
Abstract: Systems, methods, apparatuses, and computer readable media may be configured for transferring state data of a network connection established by a first device. In an example, a front end device of a cache cluster may establish a network connection with a client device and generate state data associated with the network connection. The front end device may receive a content request from the client device via the network connection and select one of a plurality of back end devices to provide the content item.
Type: Grant
Filed: March 14, 2013
Date of Patent: March 7, 2023
Assignee: Comcast Cable Communications, LLC
Inventors: Kevin Johns, Allen Broome, Eric Rosenfeld, Richard Fliam
-
Patent number: 11520524
Abstract: Devices and techniques for host adaptive memory device optimization are provided. A memory device can maintain a host model of interactions with a host. A set of commands from the host can be evaluated to create a profile of the set of commands. The profile can be compared to the host model to determine an inconsistency between the profile and the host model. An operation of the memory device can then be modified based on the inconsistency.
Type: Grant
Filed: January 25, 2021
Date of Patent: December 6, 2022
Assignee: Micron Technology, Inc.
Inventors: Nadav Grosz, David Aaron Palmer
-
Patent number: 11494078
Abstract: Examples of the present disclosure provide apparatuses and methods related to a translation lookaside buffer in memory. An example method comprises receiving a command including a virtual address from a host, and translating the virtual address to a physical address on volatile memory of a memory device using a translation lookaside buffer (TLB).
Type: Grant
Filed: December 11, 2020
Date of Patent: November 8, 2022
Assignee: Micron Technology, Inc.
Inventors: John D. Leidel, Richard C. Murphy
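The described flow reduces to a page-table-style lookup. Here is a minimal sketch with a dictionary standing in for the in-memory TLB; the page size and structure are assumptions.

```python
# Translate a host-supplied virtual address via a TLB held in the memory
# device: split off the virtual page number, map it to a physical page.
PAGE_SIZE = 4096

def translate(tlb, vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    ppn = tlb.get(vpn)
    if ppn is None:
        raise LookupError(f"TLB miss for virtual page {vpn:#x}")
    return ppn * PAGE_SIZE + offset

tlb = {0x10: 0x2A}                    # virtual page 0x10 -> physical 0x2A
print(hex(translate(tlb, 0x10123)))   # 0x2a123
```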
-
Patent number: 11436106
Abstract: An efficient method for a long-term retention backup policy within recovery point objectives (RPO). Specifically, the disclosed method proposes a dynamic promotion scheme through which short-term retention backup copies, in compliance with specified long-term retention RPOs, may be promoted to render long-term retention backup copies. Further, the disclosed method not only looks to past and/or presently dated short-term retention backup copies, but also looks to prospective (or future) dated short-term retention backup copies, which are expected or predicted to be produced, for promotion. Moreover, in circumstances where there are no appropriate past, present, or future dated short-term retention backup copies to promote, the disclosed method triggers new backup operations to acquire the long-term retention backup copies necessary to maintain the specified long-term retention RPOs.
Type: Grant
Filed: March 5, 2021
Date of Patent: September 6, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Mengze Liao, Scott Randolph Quesnelle, Jinru Yan, Xiaoliang Zhu, Xiaolei Hu
-
Patent number: 11379236
Abstract: An apparatus and method for hybrid software-hardware coherency.
Type: Grant
Filed: December 27, 2019
Date of Patent: July 5, 2022
Assignee: Intel Corporation
Inventors: Pratik Marolia, Rajesh Sankaran
-
Patent number: 11354208
Abstract: A first non-volatile dual in-line memory module (NVDIMM) of a first server and a second NVDIMM of a second server are armed during initial program load in a dual-server based storage system to configure the first NVDIMM and the second NVDIMM to retain data on power loss. Prior to initiating a safe data commit scan to destage modified data from the first server to a secondary storage, a determination is made as to whether the first NVDIMM is armed. In response to determining that the first NVDIMM is not armed, a failover is initiated to the second server.
Type: Grant
Filed: September 11, 2019
Date of Patent: June 7, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew G. Borlick, Sean Patrick Riley, Brian Anthony Rinaldi, Trung N. Nguyen, Lokesh M. Gupta
-
Patent number: 11327767
Abstract: Embodiments of dynamically increasing the resources for a partition to compensate for an input/output (I/O) recovery event are provided. An aspect includes allocating a first set of resources to a partition that is hosted on a data processing system. Another aspect includes operating the partition on the data processing system using the first set of resources. Another aspect includes, based on detection of an input/output (I/O) recovery event associated with operation of the partition, determining a compensation for the I/O recovery event. Another aspect includes allocating a second set of resources in addition to the first set of resources to the partition, the second set of resources corresponding to the compensation for the I/O recovery event. Another aspect includes operating the partition on the data processing system using the first set of resources and the second set of resources.
Type: Grant
Filed: April 5, 2019
Date of Patent: May 10, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Scott B. Compton, Peter Sutton, Harry M Yudenfriend, Dale F Riedy
-
Patent number: 11290565
Abstract: Requests for data can be distributed among servers based on indicators of intent to access the data. For example, a kernel of a client device can receive a message from a software application. The message can indicate that the software application intends to access data at a future point in time. The kernel can transmit an electronic communication associated with the message to multiple servers. The kernel can receive a response to the electronic communication from a server of the multiple servers. Based on the response and prior to receiving a future request for the data from the software application, the kernel can select the server from among the multiple servers as a destination for the future request for the data.
Type: Grant
Filed: August 12, 2020
Date of Patent: March 29, 2022
Assignee: Red Hat, Inc.
Inventors: Jay Vyas, Huamin Chen
-
Patent number: 11288134
Abstract: An apparatus comprises a processing device configured to identify a snapshot lineage comprising (i) a local snapshot lineage stored on a storage system and (ii) a cloud snapshot lineage stored on cloud storage of at least one cloud external to the storage system. The processing device is also configured to select a snapshot to be copied from the local snapshot lineage to the cloud snapshot lineage, and to copy the selected snapshot by copying data stored in the local snapshot lineage to a checkpointing cache and, responsive to determining that the copied data in the checkpointing cache has reached a specified checkpoint size, moving the copied data from the checkpointing cache to the cloud storage. The processing device is further configured to maintain, in the checkpointing cache, checkpointing information utilizable for pausing and resuming copying of the selected snapshot from the local snapshot lineage to the cloud snapshot lineage.
Type: Grant
Filed: March 10, 2020
Date of Patent: March 29, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Mithun Mahendra Varma, Shanmuga Anand Gunasekaran
-
Patent number: 11281382
Abstract: According to one embodiment, a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Each memory object can be created natively within the memory module, accessed using a single memory reference instruction without Input/Output (I/O) instructions, and managed by the memory module at a single memory layer. The object memory fabric can utilize a memory fabric protocol between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes to distribute and track the memory objects across the object memory fabric. The memory fabric protocol can be utilized across a dedicated link or across a shared link between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes.
Type: Grant
Filed: August 18, 2020
Date of Patent: March 22, 2022
Assignee: Ultrata, LLC
Inventors: Steven J. Frank, Larry Reback
-
Patent number: 11271860
Abstract: An example cache-coherent packetized network system includes: a home agent; a snooped agent; and a request agent configured to send, to the home agent, a request message for a first address, the request message having a first transaction identifier of the request agent; where the home agent is configured to send, to the snooped agent, a snoop request message for the first address, the snoop request message having a second transaction identifier of the home agent; and where the snooped agent is configured to send a data message to the request agent, the data message including a first compressed tag generated using a function based on the first address.
Type: Grant
Filed: November 15, 2019
Date of Patent: March 8, 2022
Assignee: XILINX, INC.
Inventors: Millind Mittal, Jaideep Dastidar
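The data message carries a short tag that both agents can derive from the address with the same function, so the full address need not travel with the data. The XOR-fold below is only one possible function; the patent does not specify it, so treat the choice as an assumption.

```python
# Derive a small compressed tag from an address; the request agent computes
# the same tag from its outstanding request's address to match the returning
# data message to that request.
def compressed_tag(addr, bits=8):
    tag = 0
    while addr:
        tag ^= addr & ((1 << bits) - 1)   # fold the address, bits at a time
        addr >>= bits
    return tag

addr = 0xDEADBEEF000
print(hex(compressed_tag(addr)))   # same value on both sides of the link
```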
-
Patent number: 11182373
Abstract: Provided are a computer program product, system, and method for updating change information for current copy relationships when establishing a new copy relationship having overlapping data with the current copy relationships. A first copy relationship indicates changed first source data to copy to first target data. An establish request is processed to create a second copy relationship to copy second source data into second target data. A second copy relationship is generated, in response to the establish request, indicating data in the second source data to copy to the second target data. A determination is made of overlapping data units in the first source data also in the second target data. Indication is made in the first copy relationship to copy the overlapping data units. The first source data indicated in the first copy relationship is copied to the first target data, including data for the overlapping data units.
Type: Grant
Filed: September 24, 2019
Date of Patent: November 23, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Suguang Li, Mark L. Lipets, Gregory E. McBride, Carol S. Mellgren, Raul E. Saba
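The overlap determination amounts to intersecting the first relationship's source units with the second relationship's target units and flagging the result for copying. A sketch with integer track numbers standing in for data units (an illustrative assumption):

```python
# Find data units in the first source that are also in the second target,
# then indicate them in the first copy relationship.
def overlapping_units(first_source, second_target):
    return set(first_source) & set(second_target)

first = {'source': range(0, 100), 'to_copy': set()}
second_target = range(50, 150)
overlap = overlapping_units(first['source'], second_target)
first['to_copy'] |= overlap   # mark overlapping units for copying
print(sorted(first['to_copy'])[:5], '...', len(first['to_copy']), 'units')
```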
-
Storage devices, storage systems including storage devices, and methods of accessing storage devices
Patent number: 11182105
Abstract: A storage device may include a first storage area, a second storage area, and a controller. The controller may be configured to provide device information containing information on the first and second storage areas to an external host device, to allow a first access type of the external host device to the first storage area, and to allow a second access type of the external host device to the second storage area.
Type: Grant
Filed: March 21, 2019
Date of Patent: November 23, 2021
Inventors: SeokHeon Lee, Won-Gi Hong, Youngmin Lee
-
Patent number: 11151033
Abstract: A processor includes a plurality of cache memories, and a plurality of processor cores, each associated with one of the cache memories. Each of at least some of the cache memories is associated with information indicating whether data stored in the cache memory is shared among multiple processor cores.
Type: Grant
Filed: March 13, 2014
Date of Patent: October 19, 2021
Assignee: Tilera Corporation
Inventors: David M. Wentzlaff, Matthew Mattina, Anant Agarwal
-
Patent number: 11138125
Abstract: A method for controlling a cache comprising receiving a request for data and determining whether the requested data is present in a first portion of the cache, a second portion of the cache, or not in the cache. If the requested data is not located in the first (most recently used, or MRU) portion of the cache, moving the data into the first portion of the cache.
Type: Grant
Filed: September 12, 2017
Date of Patent: October 5, 2021
Assignee: Taiwan Semiconductor Manufacturing Company Limited
Inventor: Shih-Lien Linus Lu
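A toy version of the two-portion policy: hits in the MRU portion stay put, other hits and misses move the data into the first portion, and the line squeezed out of the first portion drops into the second. The portion sizes and `OrderedDict` representation are assumptions.

```python
# Two-portion cache: promote accessed lines into the MRU (first) portion.
from collections import OrderedDict

class TwoPortionCache:
    def __init__(self, mru_size=2, second_size=4):
        self.mru = OrderedDict()      # first portion, most recently used
        self.second = OrderedDict()   # second portion
        self.mru_size, self.second_size = mru_size, second_size

    def get(self, key, fetch):
        if key in self.mru:
            self.mru.move_to_end(key)
            return self.mru[key]
        # Not in the MRU portion: take from the second portion or memory,
        # then move the data into the first portion of the cache.
        value = self.second.pop(key, None)
        if value is None:
            value = fetch(key)        # not in the cache at all
        self.mru[key] = value
        if len(self.mru) > self.mru_size:
            k, v = self.mru.popitem(last=False)
            self.second[k] = v        # demote into the second portion
            if len(self.second) > self.second_size:
                self.second.popitem(last=False)   # evict from the cache
        return value

c = TwoPortionCache()
for key in 'abc':
    c.get(key, str.upper)             # third fill demotes 'a'
print(list(c.mru), list(c.second))    # ['b', 'c'] ['a']
```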
-
Patent number: 11138178
Abstract: A device such as a data storage system comprises a non-transitory memory storage comprising instructions, and one or more processors in communication with the memory. The one or more processors execute the instructions to: map a different portion of data in a storage device to each of different caches, wherein each cache is in a computing node with a processor; change a number of the computing nodes; provide a modified mapping in response to the change; and pass queries to the computing nodes. The computing nodes can continue to operate uninterrupted while the number of computing nodes is changed. Data transfer between the nodes can also be avoided.
Type: Grant
Filed: November 10, 2016
Date of Patent: October 5, 2021
Assignee: Futurewei Technologies, Inc.
Inventor: Kamini Jagtiani
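The abstract does not name its mapping scheme, but a consistent-hash ring is one standard way to remap only a small fraction of data portions when the number of computing nodes changes; the sketch below is that assumption, not the patent's method.

```python
# Consistent-hash mapping of data portions to node caches: adding a node
# remaps only a fraction of the portions, avoiding bulk data transfer.
import bisect, hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Mapping:
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted((h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, portion):
        i = bisect.bisect(self.keys, h(portion)) % len(self.ring)
        return self.ring[i][1]

before = Mapping(['node0', 'node1', 'node2'])
after = Mapping(['node0', 'node1', 'node2', 'node3'])  # node count changed
moved = sum(before.node_for(f"p{i}") != after.node_for(f"p{i}")
            for i in range(1000))
print(f"{moved} of 1000 portions remapped")   # only a fraction move
```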
-
Patent number: 11126564
Abstract: Some examples described herein provide for a partially coherent memory transfer. An example method includes moving data directly from a coherence domain of an originating symmetric multiprocessor (SMP) node across a memory fabric to a target location for the data within a coherence domain of a receiving SMP node.
Type: Grant
Filed: January 12, 2016
Date of Patent: September 21, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Mike Schlansker, Jean Tourrilhes
-
Patent number: 11068399
Abstract: Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein.
Type: Grant
Filed: September 29, 2017
Date of Patent: July 20, 2021
Assignee: Intel Corporation
Inventors: Bin Li, Chunhui Zhang, Ren Wang, Ram Huggahalli
-
Patent number: 11068172
Abstract: Accessing data using a first storage device and a second storage device that is a synchronous mirror of the first storage device includes determining if the first and second storage devices support alternative mirroring that bypasses having the first storage device write data to the second storage device and choosing to write data to the first storage device only or to both the first and second storage devices based on criteria that includes metrics relating to timing, an identity of a calling process or application, a size of an I/O operation, an identity of a destination volume, a time of day, a particular host id, a particular application or set of applications, and/or particular datasets, extents, tracks, records/blocks. A single I/O operation may be bifurcated to provide a portion of the I/O operation to only the first storage device or to both the first storage device and the second storage device.
Type: Grant
Filed: September 30, 2015
Date of Patent: July 20, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Douglas E. LeCrone, Paul A. Linstead, Brett A. Quinn
-
Patent number: 11003582
Abstract: An embodiment of a semiconductor apparatus may include technology to determine workload-related information for a persistent storage media and a cache memory, and aggregate a bandwidth of the persistent storage media and the cache memory based on the determined workload information. Other embodiments are disclosed and claimed.
Type: Grant
Filed: September 27, 2018
Date of Patent: May 11, 2021
Assignee: Intel Corporation
Inventors: Chace Clark, Francis Corrado
-
Patent number: 10979279
Abstract: Method of clock synchronization in cloud computing. A plurality of physical computer assets are provided. The plurality of physical computer assets are linked together to form a virtualized computing cloud, the virtualized computing cloud having a centralized clock for coordinating the operation of the virtualized computing cloud; the virtualized computing cloud is logically partitioned into a plurality of virtualized logical server clouds, each of the virtualized logical server clouds having a local clock synchronized to the same centralized clock; a clock type from a clock palette is selected for at least one of the virtualized logical server clouds; the clock type is implemented in the at least one of the virtualized logical server clouds such that the clock type is synchronized to the at least one of the virtualized logical server clouds; and the centralized clock is disabled. The method may be performed on one or more computing devices.
Type: Grant
Filed: July 3, 2014
Date of Patent: April 13, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chandrashekhar G. Deshpande, Shankar S. Kalyana, Jigneshkumar K. Karia, Gandhi Sivakumar
-
Patent number: 10972142
Abstract: Wireless networking transceiver circuitry for an integrated circuit device includes a plurality of wireless networking transceiver subsystems, each subsystem including respective processing circuitry configurable for coupling to radio circuitry to implement a respective set of protocol features selected from at least one overall set of protocol features. Memory circuitry is provided, sufficient to support a respective set of protocol features in each subsystem when at least one respective set of protocol features is smaller than the overall set of protocol features. Memory-sharing circuitry is provided, configurable to couple respective portions of the memory circuitry to the processing circuitry of respective subsystems. The memory circuitry and the memory-sharing circuitry may be outside the subsystems, or distributed within the subsystems. The memory may be 60% of an amount of memory sufficient to support the overall set of protocol features in all subsystems.
Type: Grant
Filed: December 5, 2019
Date of Patent: April 6, 2021
Assignee: NXP USA, Inc.
Inventors: Timothy J. Donovan, Yui Lin, Lite Lo, Zhengqiang Huang
-
Patent number: 10949235
Abstract: Disclosed are mechanisms to support integrating network semantics into communications between processor cores operating on the same server hardware. A network communications unit is implemented in a coherent domain with the processor cores. The network communications unit may be implemented on the CPU package, in one or more of the processor cores, and/or coupled via the coherent fabric. The processor cores and/or associated virtual entities communicate by transmitting packet headers via the network communications unit. When communicating locally, compressed headers may be employed. The headers may omit specified fields and employ simplified addressing schemes for increased communication speed. When communicating locally, data can be moved between memory locations and/or pointers can be communicated to reduce bandwidth needed to transfer data. The network communications unit may maintain/access a local policy table containing rules governing communications between entities and enforce such rules accordingly.
Type: Grant
Filed: December 12, 2016
Date of Patent: March 16, 2021
Assignee: Intel Corporation
Inventor: Uri Elzur
-
Patent number: 10942874
Abstract: A method and system for managing command fetches by a Non-Volatile Memory express (NVMe) controller from a plurality of queues in a host maintains a predefined ratio of data throughput, based on the command fetches, between the plurality of queues. Each of the plurality of queues is assigned a particular priority and weight.
Type: Grant
Filed: January 17, 2019
Date of Patent: March 9, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Rajesh Kumar Sahoo, Aishwarya Ravichandran, Manoj Thapliyal
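Weight-proportional fetching can be sketched as a credit loop per arbitration round. The quantum/credit scheme below is an illustrative assumption about how the predefined ratio might be enforced, not the patent's mechanism.

```python
# Fetch commands from host queues in proportion to each queue's weight,
# so the per-round fetch ratio tracks the predefined throughput ratio.
def fetch_round(queues, weights, quantum=1):
    """queues: name -> list of pending commands; weights: name -> int."""
    fetched = []
    for name, q in queues.items():
        budget = weights[name] * quantum   # credit proportional to weight
        while budget > 0 and q:
            fetched.append(q.pop(0))
            budget -= 1
    return fetched

queues = {'high': list('AAAAAA'), 'low': list('bbbbbb')}
weights = {'high': 2, 'low': 1}
print(fetch_round(queues, weights))   # ['A', 'A', 'b'] -> a 2:1 fetch ratio
```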
-
Patent number: 10929174
Abstract: A distributed memory system including a plurality of chips, a plurality of nodes that are distributed across the plurality of chips such that each node is comprised within a chip, each node includes a dedicated local memory and a processor core, and each local memory is configured to be accessible over network communication, a network interface for each node, the network interface configured such that a corresponding network interface of each node is integrated in a coherence domain of the chip of the corresponding node, wherein each of the network interfaces are configured to support a one-sided operation, the network interface directly reading or writing in the dedicated local memory of the corresponding node without involving a processor core, and the one-sided operation is configured such that the processor core of a corresponding node uses a protocol to directly inject a remote memory access for read or write request to the network interface of the node, the remote memory access request allowing to read
Type: Grant
Filed: December 12, 2017
Date of Patent: February 23, 2021
Assignee: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL)
Inventors: Alexandros Daglis, Boris Robert Grot, Babak Falsafi
-
Patent number: 10922236
Abstract: The present application discloses a cascade cache refreshing method, system, and device. The method in an embodiment of the present specification includes: determining a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
Type: Grant
Filed: March 6, 2020
Date of Patent: February 16, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventor: Yangyang Zhao
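Determining the refresh sequence from a dependency relationship is essentially a topological-sort problem. A sketch using Python's graphlib, with an assumed three-level cascade; the cache names and dependency shape are illustrative.

```python
# Derive the cache refreshing sequence from inter-cache dependencies, then
# walk the cascade in that order, re-checking each cache after the caches
# it depends on have been handled.
from graphlib import TopologicalSorter

def refresh_cascade(depends_on, needs_refresh):
    # depends_on: cache -> set of caches it is built from.
    order = list(TopologicalSorter(depends_on).static_order())
    for cache in order:
        # Refreshing an upstream cache may make this one stale, so the
        # check happens in sequence, after its dependencies.
        if needs_refresh(cache):
            print("refreshing", cache)

deps = {'page_cache': {'fragment_cache'},
        'fragment_cache': {'data_cache'},
        'data_cache': set()}
refresh_cascade(deps, needs_refresh=lambda c: True)
# data_cache, then fragment_cache, then page_cache
```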
-
Patent number: 10901868
Abstract: Embodiments described herein provide a mechanism to use an on-chip buffer memory in conjunction with an off-chip buffer memory for interim NAND write data storage. Specifically, the program data flows through the on-chip buffer memory to the NAND memory, while simultaneously a copy of the NAND program data is buffered in one or more circular buffer structures within the off-chip buffer memory.
Type: Grant
Filed: October 2, 2018
Date of Patent: January 26, 2021
Assignee: Marvell Asia Pte, Ltd.
Inventors: William W. Dennin, III, Chengkuo Huang
-
Patent number: 10860482
Abstract: Apparatuses and methods for providing data to a configurable storage area are described herein. An example apparatus may include an extended address register including a plurality of configuration bits indicative of an offset and a size, an array having a storage area, a size and offset of the storage area based, at least in part, on the plurality of configuration bits, and a buffer configured to store data, the data including data intended to be stored in the storage area. A memory control unit may be coupled to the buffer and configured to cause the buffer to store the data intended to be stored in the storage area in the storage area of the array responsive, at least in part, to a flush command.
Type: Grant
Filed: February 11, 2019
Date of Patent: December 8, 2020
Assignee: Micron Technology, Inc.
Inventors: Graziano Mirichigni, Luca Porzio, Erminio Di Martino, Giacomo Bernardi, Domenico Monteleone, Stefano Zanardi, Chee Weng Tan, Sebastien LeMarie, Andre Klindworth
-
Patent number: 10860480
Abstract: Embodiments of the present disclosure relate to a method and a device for cache management. The method includes: in response to receiving a write request for a cache logic unit, determining whether a first cache space of a plurality of cache spaces associated with the cache logic unit is locked; in response to the first cache space being locked, obtaining a second cache space from the plurality of cache spaces, the second cache space being different from the first cache space and being in an unlocked state; and performing, in the second cache space, the write request for the cache logic unit.
Type: Grant
Filed: June 28, 2018
Date of Patent: December 8, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Lifeng Yang, Ruiyong Jia, Liam Xiongcheng Li, Hongpo Gao, Xinlei Xu
-
Patent number: 10846231
Abstract: To prevent an excessive increase of a dirty data amount in a cache memory. A processor acquires storage device information from each of storage devices. When receiving a write request to a first storage device group from a higher-level apparatus, the processor determines whether a write destination cache area corresponding to a write destination address indicated by the write request is reserved. When determining that the write destination cache area is not reserved, the processor performs, on the basis of the storage device information and cache information, reservation determination for determining whether to reserve the write destination cache area. When determining to reserve the write destination cache area, the processor reserves the write destination cache area. When determining not to reserve the write destination cache area, the processor stands by for the reservation of the write destination cache area.
Type: Grant
Filed: November 13, 2015
Date of Patent: November 24, 2020
Assignee: HITACHI, LTD.
Inventors: Natsuki Kusuno, Toshiya Seki, Tomohiro Nishimoto, Takaki Matsushita
-
Patent number: 10833704
Abstract: Low-density parity check (LDPC) decoder circuitry is configured to decode an input codeword using a plurality of circulant matrices of a parity check matrix for an LDPC code. Multiple memory banks are configured to store elements of the input codeword. A memory circuit is configured for storage of an instruction sequence. Each instruction describes for one of the circulant matrices, a corresponding layer and column of the parity check matrix and a rotation. Each instruction includes packing factor bits having a value indicative of a number of instructions of the instruction sequence to be assembled in a bundle of instructions. A bundler circuit is configured to assemble the number of instructions from the memory circuit in a bundle. The bundler circuit specifies one or more no-operation codes (NOPs) in the bundle in response to the value of the packing factor bits and provides the bundle to the decoder circuitry.
Type: Grant
Filed: December 12, 2018
Date of Patent: November 10, 2020
Assignee: Xilinx, Inc.
Inventors: Richard L. Walke, Andrew Dow, Zahid Khan
-
Patent number: 10809923
Abstract: According to one embodiment, a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Each memory object can be created natively within the memory module, accessed using a single memory reference instruction without Input/Output (I/O) instructions, and managed by the memory module at a single memory layer. The object memory fabric can utilize a memory fabric protocol between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes to distribute and track the memory objects across the object memory fabric. The memory fabric protocol can be utilized across a dedicated link or across a shared link between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes.
Type: Grant
Filed: February 4, 2019
Date of Patent: October 20, 2020
Assignee: Ultrata, LLC
Inventors: Steven J. Frank, Larry Reback
-
Patent number: 10795820
Abstract: Apparatus and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device, are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device whilst completion of the coherency protocol transactions is pending and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request and in response to reception of a read acknowledgement signal from the data-storing slave device a record created by reception of the read transaction is updated corresponding to completion of the direct memory transfer request.
Type: Grant
Filed: February 8, 2017
Date of Patent: October 6, 2020
Assignee: ARM Limited
Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P Ringe
-
Patent number: 10776266
Abstract: Aspects of the present disclosure relate to an apparatus comprising a requester master processing device having an associated private cache storage to store data for access by the requester master processing device. The requester master processing device is arranged to issue a request to modify data that is associated with a given memory address and stored in a private cache storage associated with a recipient master processing device. The private cache storage associated with the recipient master processing device is arranged to store data for access by the recipient master processing device. The apparatus further comprises the recipient master processing device having its private cache storage. One of the recipient master processing device and its associated private cache storage is arranged to perform the requested modification of the data while the data is stored in the cache storage associated with the recipient master processing device.
Type: Grant
Filed: November 7, 2018
Date of Patent: September 15, 2020
Assignee: Arm Limited
Inventors: Joshua Randall, Alejandro Rico Carro, Jose Alberto Joao, Richard William Earnshaw, Alasdair Grant
-
Patent number: 10771601
Abstract: Requests for data can be distributed among servers based on indicators of intent to access the data. For example, a kernel of a client device can receive a message from a software application. The message can indicate that the software application intends to access data at a future point in time. The kernel can transmit an electronic communication associated with the message to multiple servers. The kernel can receive a response to the electronic communication from a server of the multiple servers. Based on the response and prior to receiving a future request for the data from the software application, the kernel can select the server from among the multiple servers as a destination for the future request for the data.
Type: Grant
Filed: May 15, 2017
Date of Patent: September 8, 2020
Assignee: Red Hat, Inc.
Inventors: Jay Vyas, Huamin Chen