Shared Cache Patents (Class 711/130)
-
Patent number: 12182026
Abstract: Techniques are disclosed relating to smashing atomic operations. In some embodiments, cache control circuitry caches data values in cache storage circuitry and receives multiple requests to atomically update a cached data value according to one or more arithmetic operations. The control circuitry may perform updates to a cached data value based on the multiple requests, in response to determining that the one or more arithmetic operations meet one or more criteria, and store operation information that indicates the most-recent requested atomic arithmetic operation for the updated data value. The control circuitry may, in response to an event, flush, to a higher level in a memory hierarchy that includes the cache storage circuitry, both the updated data value and the operation information. This may advantageously smash atomic operations at the cache and reduce operations to the higher-level cache or memory (which may be the actual coherence point for atomic requests).
Type: Grant
Filed: June 27, 2023
Date of Patent: December 31, 2024
Assignee: Apple Inc.
Inventors: Jedd O. Haberstro, Mladen Wilder
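A minimal C sketch of the mechanism this abstract describes: compatible atomic updates are applied to the cached value in place, the most-recent requested operation is recorded, and both are later flushed together. The names, the single add-only criterion, and the flush hook are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch: coalesce compatible atomic adds on a cached value, remembering
 * the most-recent requested operation so the value and the operation
 * information can later be flushed together toward the coherence point. */
typedef enum { OP_NONE, OP_ADD } atomic_op_t;

typedef struct {
    uint64_t    value;    /* locally coalesced data value           */
    atomic_op_t last_op;  /* most-recent requested atomic operation */
    bool        dirty;    /* needs flushing to the higher level     */
} cache_line_state;

/* Apply an atomic request locally if it meets the coalescing criteria
 * (here, a single assumed criterion: the operation is an add). */
bool try_coalesce(cache_line_state *line, atomic_op_t op, uint64_t operand)
{
    if (op != OP_ADD)
        return false;          /* forward to the coherence point instead */
    line->value  += operand;   /* update the cached value in place       */
    line->last_op = op;        /* store the operation information        */
    line->dirty   = true;
    return true;
}
```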
-
Patent number: 12056514
Abstract: Methods, systems, and computer storage media for providing virtualization operations—including an activate operation, suspend operation, and resume operation—for virtualization in a virtualization system. In operation, a unique identifier and file metadata associated with a first file are stored in a cache engine. The cache engine manages the first file of an application running on the virtual machine to circumvent writing file data of the first file to an OS disk during a suspend operation of the virtual machine and to circumvent reading file data of the first file from the OS disk during a resume operation of the virtual machine. Based on a resume operation associated with the virtual machine and the file metadata, file data of the first file that is stored in the cache engine is accessed. The file data is communicated to the virtual machine, which is associated with the suspend and resume operations.
Type: Grant
Filed: June 29, 2021
Date of Patent: August 6, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bijayalaxmi Nanda, Somesh Goel
-
Patent number: 12056083
Abstract: The present disclosure relates to a mechanism for issuing instructions in a processor (e.g., a vector processor) implemented as an overlay on programmable hardware (e.g., a field programmable gate array (FPGA) device). Implementations described herein include features for optimizing resource availability on programmable hardware units and enabling superscalar execution when coupled with a temporal single-instruction multiple data (SIMD). Systems described herein involve an issue component of a processor controller (e.g., a vector processor controller) that enables fast and efficient instruction issue while verifying that structural and data hazards between instructions have been resolved.
Type: Grant
Filed: August 4, 2023
Date of Patent: August 6, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aaron Michael Landy, Skand Hurkat
-
Patent number: 12038841
Abstract: Embodiments are for using a decentralized hot cache line tracking fairness mechanism. In response to receiving an incoming request to access a cache line, a determination is made to grant access to the cache line based on a requested state and a serviced state used for maintaining the cache line, a structure comprising the requested and serviced states. In response to the determination to grant access to the cache line, the requested state and the serviced state are transferred along with data of the cache line.
Type: Grant
Filed: April 5, 2022
Date of Patent: July 16, 2024
Assignee: International Business Machines Corporation
Inventors: Tu-An T. Nguyen, Matthias Klein, Gregory William Alexander, Jason D. Kohl, Winston Herring, Timothy Bronson, Christian Jacobi
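A rough C sketch of how a requested/serviced state pair could drive the grant decision. The bit-vector encoding and the reset rule are my assumptions; the abstract only says the two states are kept in a structure and transfer along with the line data.

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch: fairness state that travels with a hot cache line. Each bit
 * position stands for one requester; "requested" records who has asked
 * for the line, "serviced" records who has already been granted it. */
typedef struct {
    uint32_t requested;
    uint32_t serviced;
} hotline_state;

/* Grant the line only if this requester has not been serviced in the
 * current pass; once every current requester has been serviced, reset
 * both states so a new pass begins. */
bool grant_access(hotline_state *s, unsigned requester_id)
{
    uint32_t bit = 1u << requester_id;
    s->requested |= bit;
    if (s->serviced & bit)
        return false;                      /* defer: be fair to the rest */
    s->serviced |= bit;
    if ((s->serviced & s->requested) == s->requested) {
        s->requested = 0;
        s->serviced  = 0;
    }
    return true;  /* on a grant, this state transfers with the line data */
}
```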
-
Patent number: 12019610
Abstract: Techniques are disclosed relating to truncating a tenant's data from a table. A database node may maintain a multi-tenant table having records for tenants. Maintaining the table may include writing a record for a tenant into an in-memory cache and performing a flush operation to flush the record to a shared storage. The database node may write a truncate record into the in-memory cache that truncates a tenant from the table such that records of the tenant having a timestamp indicating a time before the truncate record cannot be accessed as part of a record query. While the truncate record remains in the in-memory cache, the database node may receive a request to perform a record query for a key of the tenant, make a determination on whether a record was committed for the key after the truncate record was committed, and return a response based on the determination.
Type: Grant
Filed: August 27, 2021
Date of Patent: June 25, 2024
Assignee: Salesforce, Inc.
Inventors: Vaibhav Arora, Terry Chong, Thomas Fanghaenel
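A small C sketch of the visibility rule the abstract implies: a key's newest record is only returned if it was committed after the tenant's truncate record. Field names and the pointer-based interface are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: visibility check for a record query in the presence of a
 * tenant's truncate record. */
typedef struct {
    uint64_t tenant_id;
    uint64_t commit_ts;   /* commit timestamp */
} record_t;

/* The newest committed record for a key is only visible if it was
 * committed after the tenant's truncate record (if one exists). */
const record_t *visible(const record_t *newest, const record_t *truncate)
{
    if (newest == NULL)
        return NULL;
    if (truncate != NULL && newest->commit_ts <= truncate->commit_ts)
        return NULL;       /* truncated away: behave as if no record */
    return newest;
}
```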
-
Patent number: 12013780
Abstract: Components on an IC chip may operate faster or provide higher performance relative to power consumption if allowed access to sufficient memory resources. If every component is provided its own memory, however, the chip becomes expensive. In described implementations, memory is shared between two or more components. For example, a processing component can include computational circuitry and a memory coupled thereto. A multi-component cache controller is coupled to the memory. Logic circuitry is coupled to the cache controller and the memory. The logic circuitry selectively separates the memory into multiple memory partitions. A first memory partition can be allocated to the computational circuitry and provide storage to the computational circuitry. A second memory partition can be allocated to the cache controller and provide storage to multiple components.
Type: Grant
Filed: August 19, 2020
Date of Patent: June 18, 2024
Assignee: Google LLC
Inventors: Suyog Gupta, Ravi Narayanaswami, Uday Kumar Dasari, Ali Iranli, Pavan Thirunagari, Vinu Vijay Kumar, Sunitha R. Kosireddy
-
Patent number: 11940915
Abstract: A cache allocation method is provided. A core accesses an L3 cache when detecting a miss response from each of an L1 and an L2 cache accessed by the core through sending instruction fetching instructions configured to request the L1 and L2 caches to return an instruction and data. The L1 cache is a private cache of the core, the L2 cache is a common cache corresponding to a core set including the core, the L3 cache is a common cache shared by core sets, and the miss response from the L2 cache carries network slice information. A planning unit in the L3 cache allocates the core sets to network slices, configures caches for the network slices according to the network slice information, and sends a hit response to the core. The hit response is configured to return data in a cache of a network slice corresponding to the core set.
Type: Grant
Filed: April 30, 2020
Date of Patent: March 26, 2024
Assignee: SANECHIPS TECHNOLOGY CO., LTD.
Inventor: Xinwei Niu
-
Patent number: 11880306
Abstract: Aspects disclosed in the detailed description include configuring a configurable combined private and shared cache in a processor. Related processor-based systems and methods are also disclosed. A combined private and shared cache structure is configurable to select a private cache portion and a shared cache portion.
Type: Grant
Filed: June 7, 2022
Date of Patent: January 23, 2024
Assignee: Ampere Computing LLC
Inventors: Richard James Shannon, Stephan Jean Jourdan, Matthew Robert Erler, Jared Eric Bendt
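One plausible reading of a configurable combined private and shared cache, sketched in C: a single configuration value partitions the ways into a private portion and a shared portion. The way-based split is an assumption; the abstract does not say how the two portions are selected.

```c
#include <stdint.h>

#define NUM_WAYS 16u

/* Sketch: one configuration value splits the ways of a combined cache
 * into a private portion and a shared portion. */
typedef struct {
    unsigned private_ways;   /* ways [0, private_ways) are private;      */
                             /* ways [private_ways, NUM_WAYS) are shared */
} cache_config;

/* Report the way range a request class may search and allocate into. */
void way_range(const cache_config *cfg, int is_private_request,
               unsigned *first, unsigned *count)
{
    if (is_private_request) {
        *first = 0;
        *count = cfg->private_ways;
    } else {
        *first = cfg->private_ways;
        *count = NUM_WAYS - cfg->private_ways;
    }
}
```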
-
Patent number: 11875427
Abstract: An electronic device may include an electronic display to display an image based on processed image data. The electronic device may also include image processing circuitry to generate the processed image data based on input image data and previously determined data stored in memory. The image processing circuitry may also operate according to real-time computing constraints. Cache memory may store the previously determined data in a provisioned section of the cache memory allotted to the image processing circuitry. Additionally, a controller may manage reading and writing of the previously determined data to the provisioned section of the cache memory.
Type: Grant
Filed: September 13, 2021
Date of Patent: January 16, 2024
Assignee: Apple Inc.
Inventors: Rohit Natarajan, Christopher P. Tann, Rohit K. Gupta
-
Patent number: 11875187
Abstract: A computer-implemented method at a data management system comprises: generating, with one or more processors, a containerized runtime in a memory in communication with the one or more processors; instantiating, with the one or more processors, an app in the runtime; receiving, with the one or more processors, a request from the app for data; retrieving, with the one or more processors, a copy of the requested data from a data source; and transmitting, with the one or more processors, the data to the containerized runtime for the app to operate on.
Type: Grant
Filed: March 6, 2020
Date of Patent: January 16, 2024
Assignee: Rubrik, Inc.
Inventors: Abhay Mitra, Vijay Karthik, Vivek Sanjay Jain, Avishek Ganguli, Arohi Kumar, Kushaagra Goyal, Christopher Wong
-
Patent number: 11853216
Abstract: Disclosed in some examples are methods, systems, and machine readable mediums that provide increased bandwidth caches to process requests more efficiently for more than a single address at a time. This increased bandwidth allows for multiple cache operations to be performed in parallel. In some examples, to achieve this bandwidth increase, multiple copies of the hit logic are used in conjunction with dividing the cache into two or more segments with each segment storing values from different addresses. In some examples, the hit logic may detect hits for each segment. That is, the hit logic does not correspond to a particular cache segment. Each address value may be serviced by any of the plurality of hit logic units.
Type: Grant
Filed: August 16, 2021
Date of Patent: December 26, 2023
Assignee: Micron Technology, Inc.
Inventor: Bryan Hornung
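A toy C model of the segmented arrangement: values from different (interleaved) addresses live in different segments, and the replicated hit logic (the probe function) can detect hits for any segment, so two lookups can proceed in parallel. Sizes and the interleaving function are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SEGMENTS      2u
#define LINES_PER_SEGMENT 256u

typedef struct { uint64_t tag; bool valid; } line_t;
static line_t segments[NUM_SEGMENTS][LINES_PER_SEGMENT];

/* Addresses are interleaved across segments at cache-line granularity,
 * so two requests to different segments never contend. */
static unsigned segment_of(uint64_t addr)
{
    return (addr >> 6) & (NUM_SEGMENTS - 1);
}

/* One copy of the hit logic; any copy can detect hits for any segment. */
static bool probe(uint64_t addr)
{
    line_t *l = &segments[segment_of(addr)][(addr >> 7) % LINES_PER_SEGMENT];
    return l->valid && l->tag == (addr >> 6);
}

/* With replicated hit logic, two lookups proceed in parallel (here
 * sequentially in software, concurrently in the hardware it models). */
void dual_lookup(uint64_t a, uint64_t b, bool *hit_a, bool *hit_b)
{
    *hit_a = probe(a);
    *hit_b = probe(b);
}
```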
-
Patent number: 11836087
Abstract: The disclosed embodiments relate to per-process configuration caches in storage devices. A method is disclosed comprising initiating a new process, the new process associated with a process context; configuring a region in a memory device, the region associated with the process context, wherein the configuring comprises setting one or more cache parameters that modify operation of the memory device; and mapping the process context to the region of the memory device.
Type: Grant
Filed: December 23, 2020
Date of Patent: December 5, 2023
Assignee: Micron Technology, Inc.
Inventor: Dmitri Yudanov
-
Patent number: 11829309
Abstract: A data forwarding chip and a server are disclosed. The server includes a data forwarding chip, a network interface card, and a processor. The data forwarding chip is separately connected to the network interface card and the processor through a bus. After receiving a data forwarding request sent by the processor or the network interface card, the data forwarding chip forwards, based on a destination address of the data forwarding request and through an endpoint port that is on the forwarding chip and that is directly connected to a memory space corresponding to the destination address of the data forwarding request, to-be-forwarded data specified in the data forwarding request, such that when the server sends or receives data, no cross-chip transmission of data between processors occurs, thereby reducing a data transmission delay.
Type: Grant
Filed: March 18, 2022
Date of Patent: November 28, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Mingjian Que, Junjie Wang, Tao Li
-
Patent number: 11803472
Abstract: Integrated circuits (ICs) employ subsystem shared cache memory for facilitating extension of low-power island (LPI) memory. An LPI subsystem and primary subsystems access a memory subsystem on a first access interface in a first power mode, and the LPI subsystem accesses the memory subsystem by a second access interface in the low-power mode. In the first power mode, the primary subsystems and the LPI subsystem may send a subsystem memory access request including a virtual memory address to a subsystem memory interface of the memory subsystem to access either data stored in an external memory or a version of the data stored in a shared memory circuit. In the low-power mode, the LPI subsystem sends an LPI memory access request including a direct memory address to an LPI memory interface of the memory subsystem to access the shared memory circuit to extend the LPI memory.
Type: Grant
Filed: July 30, 2021
Date of Patent: October 31, 2023
Assignee: QUALCOMM Incorporated
Inventors: Girish Bhat, Subbarao Palacharla, Jeffrey Shabel, Isaac Berk, Kedar Bhole, Vipul Gandhi, George Patsilaras, Sparsh Singhai
-
Patent number: 11775445
Abstract: Disclosed herein are a virtual cache and method in a processor for supporting multiple threads on the same cache line. The processor is configured to support virtual memory and multiple threads. The virtual cache directory includes a plurality of directory entries, each of which is associated with a cache line. Each cache line has a corresponding tag. The tag includes a logical address, an address space identifier, a real address bit indicator, and a per-thread validity bit for each thread that accesses the cache line. When a subsequent thread determines that the cache line is valid for that thread, the validity bit for that thread is set, while not invalidating any validity bits for other threads.
Type: Grant
Filed: October 13, 2020
Date of Patent: October 3, 2023
Assignee: International Business Machines Corporation
Inventors: Markus Helms, Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Johannes C. Reichart, Anthony Saporito, Aaron Tsai
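A direct C rendering of the tag described in the abstract, with per-thread validity bits that are set without invalidating the bits of other threads. The field widths are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of a virtual-cache directory tag: logical address, address
 * space identifier, real-address indicator, and one validity bit per
 * hardware thread (up to 8 threads fit in the uint8_t used here). */
typedef struct {
    uint64_t logical_addr;
    uint16_t asid;        /* address space identifier    */
    bool     real_addr;   /* real address bit indicator  */
    uint8_t  valid;       /* per-thread validity bits    */
} vcache_tag;

/* A subsequent thread that finds the line valid for itself sets only
 * its own bit; validity bits of other threads are never invalidated. */
void mark_valid_for_thread(vcache_tag *t, unsigned tid)
{
    t->valid |= (uint8_t)(1u << tid);
}

bool is_valid_for_thread(const vcache_tag *t, unsigned tid)
{
    return (t->valid >> tid) & 1u;
}
```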
-
Patent number: 11762774
Abstract: An arithmetic processor includes a plurality of core groups, each including a plurality of cores and a cache unit, and a plurality of home agents, each including a tag directory and a store command queue. The store command queue enters each received store request into the entry queue in order of reception, and the cache unit stores the data of the store request in a data RAM. The store command queue sets a data ownership acquisition flag of the store request to valid when obtaining a data ownership of the store request and issues a top-of-queue notification to the cache control unit when the flag of the top-of-queue entry is valid. In response to the top-of-queue notification, the cache unit updates the cache tag to the modified state and issues a store request completion notification.
Type: Grant
Filed: April 21, 2022
Date of Patent: September 19, 2023
Assignee: FUJITSU LIMITED
Inventors: Junlu Chen, Toru Hikichi
-
Patent number: 11762770
Abstract: One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on an anticipated activity of one or more input/output (IO) service level (SL) workload volumes.
Type: Grant
Filed: October 22, 2020
Date of Patent: September 19, 2023
Assignee: EMC IP Holding Company LLC
Inventors: John Creed, John Krasner
-
Patent number: 11747981
Abstract: A data access system has host computers having front-end controllers nFE_SAN connected via a bus or network interconnect to back-end storage controllers nBE_SAN, and physical disk drives connected via network interconnect to the nBE_SANs, to provide a distributed, high-performance, policy-based or dynamically reconfigurable, centrally managed data storage acceleration system. The hardware and software architectural solutions eliminate BE_SAN controller bottlenecks and improve performance and scalability. In an embodiment, the nBE_SAN (BE_SAN) firmware recognizes controller overload conditions, informs the Distributed Resource Manager (DRM), and, based on the DRM-provided optimal topology information, delegates part of its workload to additional controllers. The nFE_SAN firmware and additional hardware use functionally independent and redundant CPUs and memory to mitigate single points of failure and accelerate write performance.
Type: Grant
Filed: March 15, 2021
Date of Patent: September 5, 2023
Inventor: Branislav Radovanovic
-
Patent number: 11729113
Abstract: Embodiments of the disclosure provide techniques for partitioning a resource object into multiple resource components of a cluster of host computer nodes in a distributed resources system. The distributed resources system translates high-level policy requirements into a resource configuration that the system accommodates. The system determines an allocation based on the policy requirements and identifies resource configurations that are available. Upon selecting a resource configuration, the distributed resources system assigns the allocation and associated values to the selected configuration and publishes the new configuration to other host computer nodes in the cluster.
Type: Grant
Filed: May 13, 2021
Date of Patent: August 15, 2023
Assignee: VMware, Inc.
Inventors: Christos Karamanolis, William Earl, Eric Knauft, Pascal Renauld
-
Patent number: 11687358
Abstract: A virtualization platform for Network Functions Virtualization (NFV) is provided. The virtualization platform may include a host processor coupled to an acceleration coprocessor. The acceleration coprocessor may be a reconfigurable integrated circuit to help provide improved flexibility and agility for the NFV. The coprocessor may include multiple virtual function hardware acceleration modules, each of which is configured to perform a respective accelerator function. A virtual machine running on the host processor may wish to perform multiple accelerator functions in succession at the coprocessor on given data. In one suitable arrangement, intermediate data output by each of the accelerator functions may be fed back to the host processor. In another suitable arrangement, the successive function calls may be chained together so that only the final resulting data is fed back to the host processor.
Type: Grant
Filed: March 26, 2021
Date of Patent: June 27, 2023
Assignee: Altera Corporation
Inventors: Abdel Hafiz Rabi, Allen Chen, Mark Jonathan Lewis, Jiefan Zhang
-
Patent number: 11513988
Abstract: A method, computer program product, and computing system for receiving, at a local node, a request to buffer data on a remote persistent cache memory system of a remote node. A target memory address within the remote persistent cache memory system may be sent from the local node via a remote procedure call (RPC). The data may be sent from the local node to the target memory address within the remote persistent cache memory system via a remote direct memory access (RDMA) command.
Type: Grant
Filed: July 21, 2021
Date of Patent: November 29, 2022
Assignee: EMC IP Holding Company, LLC
Inventors: Oran Baruch, Ronen Gazit, Jenny Derzhavetz, Yuri Chernyavsky
-
Patent number: 11397697
Abstract: Apparatus, methods, and computer-readable storage media are disclosed for core-to-core communication between physical and/or virtual processor cores. In some examples of the disclosed technology, application cores write notification data (e.g., to doorbell or PCI configuration memory space accesses via a memory interface), without synchronizing with the other application cores or the service cores. In one example of the disclosed technology, a message selection circuit is configured to serialize data from the plurality of user cores by: receiving data from a user core, selecting one of the service cores to send the data to, based on a memory location addressed by the sending user core, and sending the received data to a respective message buffer dedicated to the selected service core.
Type: Grant
Filed: September 18, 2019
Date of Patent: July 26, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Leah Shalev, Adi Habusha, Georgy Machulsky, Nafea Bshara, Eric Jason Brandwine
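A hypothetical C sketch of the message selection step: the service core is chosen from the memory location the user core addressed, and the data goes into that core's dedicated buffer. The page-granularity selection function and buffer layout are invented for illustration.

```c
#include <stdint.h>

#define NUM_SERVICE_CORES 4u
#define BUF_SLOTS         64u

typedef struct {
    uint64_t payload[BUF_SLOTS];
    unsigned tail;
} msg_buffer;

/* One message buffer dedicated to each service core. */
static msg_buffer buffers[NUM_SERVICE_CORES];

/* The service core is chosen from the memory location the sending user
 * core addressed (here, a page-granularity hash of the doorbell address). */
static unsigned select_service_core(uint64_t doorbell_addr)
{
    return (unsigned)((doorbell_addr >> 12) % NUM_SERVICE_CORES);
}

/* User cores call this without synchronizing with each other or with
 * the service cores; the selection step serializes their data. */
void notify(uint64_t doorbell_addr, uint64_t data)
{
    msg_buffer *b = &buffers[select_service_core(doorbell_addr)];
    b->payload[b->tail % BUF_SLOTS] = data;
    b->tail++;
}
```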
-
Patent number: 11392555
Abstract: A system for cloud-based file services, comprising: a plurality of single-tenant file system nodes configured to provide file system access to an object store via a plurality of multitenant storage nodes; the plurality of multitenant storage nodes sharing access to the object store; and one or more management nodes configured to provision resources for the plurality of single-tenant file system nodes and the plurality of multitenant storage nodes.
Type: Grant
Filed: April 29, 2020
Date of Patent: July 19, 2022
Assignee: Pure Storage, Inc.
Inventors: Robert Lee, Igor Ostrovsky, Mark Emberson, Boris Feigin, Ronald Karr
-
Patent number: 11386005
Abstract: Disclosed are a memory system, a memory controller, and a method of operating a memory system. The memory system may control the memory device to store data into zones of memory blocks in the memory device by assigning each data to be written with an address subsequent to a most recently written address in a zone, store journal information including mapping information between a logical address and a physical address for one of the one or more zones in a journal cache, search for journal information corresponding to a target zone targeted to write data when mapping information for the target zone among the one or more zones is updated, and replace the journal information corresponding to the target zone with journal information including the updated mapping information.
Type: Grant
Filed: January 22, 2021
Date of Patent: July 12, 2022
Assignee: SK HYNIX INC.
Inventor: Chan Ho Ha
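A simplified C sketch of the journal-cache behavior: on a mapping update for a target zone, that zone's cached journal entry is searched for and replaced with the updated mapping. The slot management here is a placeholder; the abstract does not describe eviction.

```c
#include <stdint.h>

#define JOURNAL_SLOTS 8u

/* Sketch: one cached journal entry per zone, holding logical-to-physical
 * mapping information. */
typedef struct {
    uint32_t zone_id;
    uint64_t logical_addr;
    uint64_t physical_addr;
    int      valid;
} journal_entry;

static journal_entry journal_cache[JOURNAL_SLOTS];

/* When mapping information for a target zone is updated, search the
 * journal cache for that zone's entry and replace it with the updated
 * mapping; otherwise fill a free slot (eviction policy omitted). */
void update_zone_mapping(uint32_t zone, uint64_t la, uint64_t pa)
{
    for (unsigned i = 0; i < JOURNAL_SLOTS; i++) {
        if (journal_cache[i].valid && journal_cache[i].zone_id == zone) {
            journal_cache[i].logical_addr  = la;
            journal_cache[i].physical_addr = pa;
            return;
        }
    }
    for (unsigned i = 0; i < JOURNAL_SLOTS; i++) {
        if (!journal_cache[i].valid) {
            journal_cache[i] = (journal_entry){ zone, la, pa, 1 };
            return;
        }
    }
}
```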
-
Patent number: 11379377
Abstract: First and second-level caches are provided. Cache control circuitry performs a first-level cache lookup of the first-level cache based on a lookup address, to determine whether the first-level cache stores valid cached data corresponding to the lookup address. When lookup hint information associated with the lookup address is available, the cache control circuitry determines based on the lookup hint information whether to activate or deactivate a second-level cache lookup of the second-level cache. The lookup hint information is indicative of whether the second-level cache is predicted to store valid cached data associated with the lookup address. When the second-level cache lookup is activated, the second-level cache lookup of the second-level cache is performed based on the lookup address to determine whether the second-level cache stores valid cached data corresponding to the lookup address.
Type: Grant
Filed: October 6, 2020
Date of Patent: July 5, 2022
Assignee: Arm Limited
Inventors: Yasuo Ishii, James David Dundas, Chang Joo Lee, Muhammed Umar Farooq
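The gating logic, sketched in C with toy probe functions standing in for the real caches: on a first-level miss, the hint decides whether the second-level lookup is activated at all.

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { HINT_NONE, HINT_L2_HIT, HINT_L2_MISS } lookup_hint;

/* Toy stand-ins for the actual cache probes. */
static bool l1_probe(uint64_t addr) { return (addr & 0xFFu) == 0; }
static bool l2_probe(uint64_t addr) { return (addr & 0x0Fu) == 0; }

/* On a first-level miss, the second-level lookup is activated or
 * deactivated based on the hint associated with the lookup address. */
bool cached_lookup(uint64_t addr, lookup_hint hint)
{
    if (l1_probe(addr))
        return true;              /* first-level cache holds valid data */
    if (hint == HINT_L2_MISS)
        return false;             /* predicted L2 miss: skip the probe  */
    return l2_probe(addr);        /* no hint, or predicted L2 hit       */
}
```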
-
Patent number: 11379368
Abstract: An apparatus includes a plurality of processor cores; a shared cache connected to the plurality of processor cores; a cache control unit connected to the shared cache; and a way allocation circuitry connected to at least one of the plurality of processor cores. The way allocation circuitry is external to the plurality of processor cores. The cache control unit and the way allocation circuitry are cooperatively configured to process an intercepted memory request with respect to designated ways in the shared cache, the designated ways being based on a partition identifier and a partition table.
Type: Grant
Filed: October 27, 2020
Date of Patent: July 5, 2022
Assignee: Marvell Asia Pte, Ltd.
Inventors: Shubhendu Sekhar Mukherjee, Thomas F. Hummel
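A C sketch of one way a partition table could constrain way allocation: each partition identifier maps to a bitmask of designated ways, and the victim way for an intercepted request is chosen from that mask. The mask values and the victim preference are illustrative.

```c
#include <stdint.h>

#define NUM_WAYS       16u
#define NUM_PARTITIONS 8u

/* Partition table: partition identifier -> bitmask of designated ways.
 * Unlisted partitions default to 0 (no ways) in this sketch. */
static uint16_t partition_table[NUM_PARTITIONS] = {
    0x000F,   /* partition 0: ways 0-3  */
    0x00F0,   /* partition 1: ways 4-7  */
    0xFF00,   /* partition 2: ways 8-15 */
};

/* Pick an allocation way for an intercepted memory request, restricted
 * to the partition's designated ways; prefer currently invalid ways. */
int choose_way(unsigned partition_id, uint16_t invalid_ways)
{
    uint16_t allowed = partition_table[partition_id % NUM_PARTITIONS];
    uint16_t pick    = (allowed & invalid_ways) ? (allowed & invalid_ways)
                                                : allowed;
    for (unsigned w = 0; w < NUM_WAYS; w++)
        if (pick & (1u << w))
            return (int)w;
    return -1;   /* partition has no designated ways */
}
```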
-
Patent number: 11372762
Abstract: Various embodiments described herein provide for using a prefetch buffer with a cache of a memory sub-system to store prefetched data (e.g., data prefetched from the cache), which can increase read access or sequential read access of the memory sub-system over that of traditional memory sub-systems.
Type: Grant
Filed: July 14, 2020
Date of Patent: June 28, 2022
Assignee: Micron Technology, Inc.
Inventor: Ashay Narsale
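A minimal C sketch of a prefetch buffer in front of the cache, tuned for sequential reads: hits are served from the buffer, and each access prefetches the next sequential block from the cache. The block size and the toy cache_read stand-in are assumptions.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define BLOCK 4096u

static uint8_t  pf_data[BLOCK];   /* prefetch buffer contents            */
static uint64_t pf_addr;          /* address the buffer currently holds  */
static bool     pf_valid;

/* Toy stand-in for reading a block out of the cache. */
static void cache_read(uint64_t addr, uint8_t *out)
{
    memset(out, (int)(addr & 0xFF), BLOCK);
}

/* Serve a read from the prefetch buffer when possible, then prefetch
 * the next sequential block from the cache into the buffer, which is
 * what makes back-to-back sequential reads fast. */
void read_block(uint64_t addr, uint8_t *out)
{
    if (pf_valid && pf_addr == addr)
        memcpy(out, pf_data, BLOCK);      /* prefetch-buffer hit   */
    else
        cache_read(addr, out);            /* miss: go to the cache */

    cache_read(addr + BLOCK, pf_data);    /* prefetch next block   */
    pf_addr  = addr + BLOCK;
    pf_valid = true;
}
```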
-
Patent number: 11341051
Abstract: Techniques for consolidating shared state for translation lookaside buffer (TLB) shootdowns are provided. In one set of embodiments, an operating system (OS) kernel of a computer system can co-locate, in a system memory of the computer system, a plurality of shared data accessed by first and second processing cores of the computer system for performing a translation lookaside buffer (TLB) shootdown of the first processing core by the second processing core, where the co-locating allows the plurality of shared data to occupy a single cache line when brought from the system memory into a CPU (central processing unit) cache of the first or second processing core. This can include, e.g.
Type: Grant
Filed: September 15, 2020
Date of Patent: May 24, 2022
Assignee: VMWARE, INC.
Inventors: Michael Wei, Nadav Amit, Amy Tai
-
Patent number: 11327765
Abstract: Embodiments of the present disclosure provide an apparatus comprising: one or more instruction executing circuitries, wherein each instruction executing circuitry of the one or more instruction executing circuitries is configured to execute an instruction of a corresponding instruction type; and an instruction scheduling circuitry that is communicatively coupled to the one or more instruction executing circuitries. The instruction scheduling circuitry is configured to: determine, according to an instruction type of the instruction and a number of instructions that have been allocated to the one or more instruction executing circuitries, an instruction executing circuitry from the one or more instruction executing circuitries to schedule the instruction for execution, and allocate the instruction to the determined instruction executing circuitry.
Type: Grant
Filed: August 6, 2020
Date of Patent: May 10, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Chang Liu, Tao Jiang
-
Patent number: 11314865
Abstract: A pluggable trust architecture addresses the problem of establishing trust in hardware. The architecture has low impact on system performance and comprises a simple, user-supplied, and pluggable hardware element. The hardware element physically separates the untrusted components of a system from peripheral components that communicate with the external world. The invention only allows results of correct execution of software to be communicated externally.
Type: Grant
Filed: July 31, 2018
Date of Patent: April 26, 2022
Assignee: THE TRUSTEES OF PRINCETON UNIVERSITY
Inventors: David I. August, Stephen Beard, Soumyadeep Ghosh
-
Patent number: 11262991
Abstract: A method for thread-safe development of a computer program configured for parallel thread execution comprises maintaining a digital record of read or write access to a data object from each of a plurality of sibling threads executing on a computer system. Pursuant to each instance of read or write access from a given sibling thread, an entry comprising an indicator of the access type is added to the digital record. The method further comprises assessing the thread safety of the read or write access corresponding to each entry in the digital record and identifying one or more thread-unsafe instances of read or write access based on the assessment of thread safety.
Type: Grant
Filed: August 27, 2020
Date of Patent: March 1, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Joel Stephen Pritchett
-
Patent number: 11250006
Abstract: A streaming ingest platform can improve latency and expense issues related to uploading data into a cloud data system. The streaming ingest platform can organize the data to be ingested into per-table chunks and per-account blobs. This data may be committed and may be made available for query processing before it is ingested into the target source tables. This significantly improves latency issues. The streaming ingest platform can also accommodate uploading data from various sources with different processing and communication capabilities, such as Internet of Things (IOT) devices.
Type: Grant
Filed: July 27, 2021
Date of Patent: February 15, 2022
Assignee: Snowflake Inc.
Inventors: Tyler Arthur Akidau, Istvan Cseri, Tyler Jones, Daniel E. Sotolongo, Zhuo Zhang
-
Patent number: 11226822
Abstract: A digital data processor includes an instruction memory storing instructions specifying a data processing operation and a data operand field, an instruction decoder coupled to the instruction memory for recalling instructions from the instruction memory and determining the operation and the data operand, and an operational unit coupled to a data register file and to an instruction decoder to perform a data processing operation upon an operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The operational unit is configured to perform a table write in response to a look up table initialization instruction by duplicating at least one data element from a source data register to create duplicated data elements, and writing the duplicated data elements to a specified location in a specified number of at least one table and a corresponding location in at least one other table.
Type: Grant
Filed: September 13, 2019
Date of Patent: January 18, 2022
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Dheera Balasubramanian Samudrala, Duc Bui, Rama Venkatasubramanian
-
Patent number: 11216377
Abstract: A mechanism is provided by which a hardware accelerator detects migration of a software process among processors and uses this information to write operation results to an appropriate cache memory for faster access by the current processor. This mechanism is provided, in part, by incorporation within the hardware accelerator of a mapping table having entries including a cache memory identifier associated with a processor identifier. The hardware accelerator further includes circuitry configured to receive a processor identifier from a calling processor, and to perform a look-up in the mapping table to determine the cache memory identifier associated with the processor identifier. The hardware accelerator uses the associated cache memory identifier to write results of called operations to the cache memory associated with the calling processor, thereby accelerating subsequent operations by the calling processor that rely upon the hardware accelerator results.
Type: Grant
Filed: December 18, 2019
Date of Patent: January 4, 2022
Assignee: NXP USA, Inc.
Inventors: Allen Lengacher, David Philip Lapp, Roy Jonathan Pledge
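A C sketch of the mapping-table idea: the accelerator resolves the calling processor's identifier to a cache identifier and writes results there, so results stay near wherever the process currently runs. Table contents and sizes are illustrative.

```c
#include <stdint.h>

#define NUM_MAP_ENTRIES 4u

/* Mapping table inside the accelerator: processor identifier to the
 * identifier of the cache closest to that processor. */
typedef struct { uint16_t proc_id; uint16_t cache_id; } map_entry;

static map_entry mapping[NUM_MAP_ENTRIES] = {
    { 0, 0 }, { 1, 0 },   /* cores 0-1 write through cache 0 */
    { 2, 1 }, { 3, 1 },   /* cores 2-3 write through cache 1 */
};

/* The calling processor sends its identifier with the request; the
 * accelerator looks up which cache to write its results into. */
uint16_t cache_for(uint16_t proc_id)
{
    for (unsigned i = 0; i < NUM_MAP_ENTRIES; i++)
        if (mapping[i].proc_id == proc_id)
            return mapping[i].cache_id;
    return 0;   /* fallback cache */
}
```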
-
Patent number: 11200169
Abstract: A processing node of a storage system may determine that a host system is implementing a cache-slot aware, round-robin IO distribution algorithm (CA-RR). The processing node may be configured to determine when a sufficient number of sequential IOs will be received to consume a cache slot of the processing node. If the processing node knows that the host system is implementing CA-RR, then, in response to determining the sufficient number, the processing node may send a communication informing the next processing node about the sequential cache slot hit. If the sequential IO operation(s) are read operation(s), the next processing node may prefetch at least a cache-slot worth of next consecutive data portions. If the sequential IO operation(s) are write operation(s), then the next processing node may request allocation of one or more local cache slots for the forthcoming sequential write operations.
Type: Grant
Filed: January 30, 2020
Date of Patent: December 14, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Jack Fu, Jaeyoo Jung, Arieh Don
-
Patent number: 11194617
Abstract: A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 7, 2021
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson
-
Patent number: 11194721
Abstract: To deliver up-to-date, coherent user data to applications upon request, the disclosed technology includes systems and methods for caching data and metadata after it has been synchronously loaded—for future retrieval with a page load time close to zero milliseconds. To provide this experience, data needs to be stored as locally to a user as possible, in the cache on the local device or in an edge cache located geographically nearby, for use in responding to requests. Applications which maintain caches of API results can be notified of their invalidation, and can detect the invalidation, propagate the invalidation to any further client tiers with the appropriate derivative type mapping, and refresh their cached values so that clients need not synchronously make the API requests again—ensuring that the client has access to the most up-to-date copy of data as inexpensively as possible—in terms of bandwidth and latency.
Type: Grant
Filed: March 17, 2020
Date of Patent: December 7, 2021
Assignee: salesforce.com, inc.
Inventor: Richard Perry Pack, III
-
Patent number: 11159636
Abstract: A data processing apparatus is provided, which includes receiving circuitry to receive a snoop request in respect of requested data on behalf of a requesting node. The snoop request includes an indication as to whether forwarding is to occur. Transmitting circuitry transmits a response to the snoop request and cache circuitry caches at least one data value. When forwarding is to occur and the at least one data value includes the requested data, the response includes the requested data and the transmitting circuitry transmits the response to the requesting node.
Type: Grant
Filed: February 8, 2017
Date of Patent: October 26, 2021
Assignee: ARM LIMITED
Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce
-
Patent number: 11157415
Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
Type: Grant
Filed: December 20, 2019
Date of Patent: October 26, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
-
Patent number: 11144470
Abstract: Method for managing a cache memory comprising: the transformation of a received set address, in order to find a word in the cache memory, into a transformed set address by means of a bijective transformation function; and the selection of one or more line tags stored in the cache memory at the transformed set address; in which: the transformation function is parameterized by a parameter q such that the transformed set address obtained depends both on the received set address and on the value of this parameter q; for all the non-zero values of the parameter q, the transformation function permutes at least 50% of the set addresses; and during the same execution of the process, a new value of the parameter q is repeatedly generated for modifying the transformation function.
Type: Grant
Filed: December 16, 2019
Date of Patent: October 12, 2021
Assignee: Commissariat A L'Energie Atomique et aux Energies Alternatives
Inventors: Thomas Hiscock, Mustapha El Majihi, Olivier Savry
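The abstract does not give the transformation function, but XOR with a key is one simple bijection that satisfies the stated properties; a C sketch:

```c
#include <stdint.h>

#define SET_BITS 10u
#define SET_MASK ((1u << SET_BITS) - 1u)

static uint32_t q;   /* transformation parameter, regenerated at run time */

/* XOR with a fixed q is bijective over set addresses (it is its own
 * inverse), and for any non-zero q it moves every set address, which
 * more than satisfies the "at least 50% permuted" property. */
uint32_t transform_set(uint32_t set_addr)
{
    return (set_addr ^ q) & SET_MASK;
}

/* Repeatedly generating a new q changes the permutation, so the same
 * received set address maps to different cache sets over time. */
void rekey(uint32_t fresh_random)
{
    q = fresh_random & SET_MASK;
}
```

In a real design, changing q also means lines cached under the old permutation must be handled (for example, flushed), a step the abstract does not cover.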
-
Patent number: 11138086
Abstract: A computing system for collecting hardware performance data includes a number of programmable counters associated with a number of units of a computing device. The computing system further includes an assignment module executed by a processor to assign a plurality of interleaving groups of counters based on a user-defined priority list of parameters.
Type: Grant
Filed: January 28, 2015
Date of Patent: October 5, 2021
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Raphael Gay, Peter Christian Peterson, Finagnon Thierry Dossou, Jr.
-
Patent number: 11119953
Abstract: A data access method. The method is applied to a first controller, and the method includes: receiving a destination address sent by each shared cache apparatus, where the destination address is used to indicate an address at which data is to be written into the shared cache apparatus; receiving information carrying the data; and sending the destination address and the data to the shared cache apparatus that sends the destination address, so that each shared cache apparatus stores the data in storage space to which the destination address points.
Type: Grant
Filed: January 15, 2020
Date of Patent: September 14, 2021
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Chao Zhou, Peiqing Zhou
-
Patent number: 11106467
Abstract: Apparatus and methods are disclosed for implementing incremental schedulers for out-of-order block-based processors, including field programmable gate array implementations. In one example of the disclosed technology, a processor includes an instruction scheduler formed by configuring one or more look up table RAMs to store ready state data for a plurality of instructions in an instruction block. The instruction scheduler further includes a plurality of queues that store ready state data for the processor and sends dependency information to ready determination logic on a first in/first out basis. The instruction scheduler selects one or more of the ready instructions to be issued and executed by the block-based processor.
Type: Grant
Filed: July 29, 2016
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aaron L. Smith, Jan S. Gray
-
Patent number: 11100111
Abstract: A streaming ingest platform can improve latency and expense issues related to uploading data into a cloud data system. The streaming ingest platform can organize the data to be ingested into per-table chunks and per-account blobs. This data may be committed and may be made available for query processing before it is ingested into the target source tables. This significantly improves latency issues. The streaming ingest platform can also accommodate uploading data from various sources with different processing and communication capabilities, such as Internet of Things (IOT) devices.
Type: Grant
Filed: April 9, 2021
Date of Patent: August 24, 2021
Assignee: Snowflake Inc.
Inventors: Tyler Arthur Akidau, Istvan Cseri, Tyler Jones, Daniel E. Sotolongo, Zhuo Zhang
-
Patent number: 11099952
Abstract: Populating cache of a virtual server. A data record is generated that is associated with a first virtual server. A set of data is saved that describes data in a cache that is associated with the first virtual server. In response to either (i) a failover of the first virtual server or (ii) a migration request for the first virtual server, a cache of a second virtual server is populated based on the set of data.
Type: Grant
Filed: November 6, 2018
Date of Patent: August 24, 2021
Assignee: International Business Machines Corporation
Inventors: Vamshikrishna Thatikonda, Sanket Rathi, Venkata N. S. Anumula
-
Patent number: 11093398
Abstract: Embodiments may include systems and methods for performing remote memory operations in a shared memory address space. An apparatus includes a first network controller coupled to a first processor core. The first network controller processes a remote memory operation request, which is generated by a first memory coherency agent based on a first memory operation for an application operating on the first processor core. The remote memory operation request is associated with a remote memory address that is local to a second processor core coupled to the first processor core. The first network controller forwards the remote memory operation request to a second network controller coupled to the second processor core. The second processor core and the second network controller are to carry out a second memory operation to extend the first memory operation as a remote memory operation. Other embodiments may be described and/or claimed.
Type: Grant
Filed: June 28, 2019
Date of Patent: August 17, 2021
Assignee: Intel Corporation
Inventors: Kshitij Doshi, Harald Servat, Francesc Guim Bernat
-
Patent number: 11036279
Abstract: An apparatus and method are provided for managing a cache. The cache is arranged to comprise a plurality of cache sections, where each cache section is powered independently of the other cache sections in the plurality of cache sections, and the apparatus has power control circuitry to control power to each of the cache sections. The power control circuitry is responsive to a trigger condition indicative of an ability to operate the cache in a power saving mode, to perform a latency evaluation process to determine a latency indication for each of the cache sections, and to control which of a subset of the cache sections to power off in dependence on the latency indication. This can allow the power consumption savings realised by turning off one or more cache sections to be optimised to take into account the current system state.
Type: Grant
Filed: April 29, 2019
Date of Patent: June 15, 2021
Assignee: Arm Limited
Inventor: Alex James Waugh
-
Patent number: 11032126
Abstract: A framework in a cloud network that may allow for debugging at multiple vantage points at different layers (e.g., layer 2, layer 3, etc.). The methods may provide tracer or measurement services that filter, capture, or forward flows that may include packets, calls, or protocols to look for particular signatures.
Type: Grant
Filed: October 7, 2019
Date of Patent: June 8, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Byoung-Jo Kim, Yang Xu, Muhammad Bilal Anwer
-
Patent number: 10997076
Abstract: An apparatus has first processing circuitry and second processing circuitry. The second processing circuitry has at least one hardware mechanism providing a greater level of fault protection or fault detection than is provided for the first processing circuitry. Coherency control circuitry controls access to data from at least part of a shared address space by the first and second processing circuitry according to an asymmetric coherency protocol in which a local-only update of data in a local cache of the first processing circuitry is restricted in comparison to a local-only update of data in a local cache of the second processing circuitry.
Type: Grant
Filed: September 14, 2016
Date of Patent: May 4, 2021
Assignee: ARM Limited
Inventors: Antony John Penton, Simon John Craske
-
Patent number: 10977035
Abstract: A data processing system includes at least one processing unit and a memory controller coupled to a system memory. The processing unit includes a processor core and a cache memory including an arithmetic logic unit (ALU). The cache memory is configured to receive, from the processor core, an atomic memory operation (AMO) request specifying a target address of a data granule to be updated by an AMO and a location indication. Based on the location indication having a first setting, the AMO indicated by the AMO request is performed in the cache memory utilizing the ALU. Based on the location indication having a different second setting, the cache memory issues the AMO request to the memory controller to cause the AMO to be performed at the memory controller.
Type: Grant
Filed: February 18, 2019
Date of Patent: April 13, 2021
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie
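A C sketch of the dispatch decision: the same fetch-and-add is performed either by the cache's ALU (a "near" atomic) or at the memory controller (a "far" atomic), selected by the request's location indication. The fetch-add operation and toy backing store are illustrative assumptions.

```c
#include <stdint.h>

#define MEM_WORDS 1024u
static uint64_t mem[MEM_WORDS];   /* toy backing store */

typedef enum { AMO_AT_CACHE, AMO_AT_MEMCTRL } amo_location;

typedef struct {
    uint64_t     target;    /* target address of the data granule */
    uint64_t     operand;
    amo_location where;     /* the request's location indication  */
} amo_req;

/* Both paths perform the same update; they differ only in where the
 * ALU doing the work sits (cache vs. memory controller). */
static uint64_t fetch_add(uint64_t addr, uint64_t v)
{
    uint64_t old = mem[addr % MEM_WORDS];
    mem[addr % MEM_WORDS] = old + v;
    return old;
}

static uint64_t cache_alu_fetch_add(uint64_t a, uint64_t v) { return fetch_add(a, v); }
static uint64_t memctrl_fetch_add(uint64_t a, uint64_t v)   { return fetch_add(a, v); }

uint64_t dispatch_amo(const amo_req *r)
{
    if (r->where == AMO_AT_CACHE)
        return cache_alu_fetch_add(r->target, r->operand);  /* first setting  */
    return memctrl_fetch_add(r->target, r->operand);        /* second setting */
}
```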