Snooping Patents (Class 711/146)
-
Patent number: 12222864
Abstract: Data can be rapidly and flexibly converted. A data conversion apparatus stores monitoring target management information in which a monitoring target is associated with a controller type representing a type of a controller configured to control the monitoring target, and data conversion rule information in which a data set in units of the controller type is registered, the data set in units of the controller type defining a conversion rule indicating a data conversion method for the controller type. A calculation unit loads the data conversion rule information into the cache memory, specifies a monitoring target corresponding to data received from an edge device, specifies a controller type corresponding to the specified monitoring target by referring to the monitoring target management information, reads a conversion rule corresponding to the specified controller type from the cache memory, and converts the data using the conversion rule.
Type: Grant
Filed: March 7, 2022
Date of Patent: February 11, 2025
Assignee: HITACHI, LTD.
Inventors: Naushad Shakoor, Koichi Okita, Yuta Sekiguchi
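The two-stage lookup the abstract describes (monitoring target → controller type → conversion rule) can be sketched as follows. This is a minimal illustration, not the patented implementation; the table contents, target and type names, and the division-by-ten rule are all hypothetical.

```python
# Hypothetical monitoring target management information: target -> controller type.
MONITORING_TARGETS = {"pump-01": "PLC_TYPE_A"}

# Hypothetical data conversion rule information, keyed by controller type.
# In the patent these rules are loaded into cache memory by the calculation unit.
CONVERSION_RULES = {"PLC_TYPE_A": lambda raw: raw / 10.0}

def convert(target: str, raw_value: float) -> float:
    """Specify the controller type for the target, then apply that type's rule."""
    controller_type = MONITORING_TARGETS[target]   # refer to management info
    rule = CONVERSION_RULES[controller_type]       # read rule for that type
    return rule(raw_value)
```

Because rules are keyed by controller type rather than by individual target, adding a new monitored device of an existing type requires only one new management-information row, not a new rule.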
-
Patent number: 12216580
Abstract: A peripheral device includes a processor, a memory interface, a host interface and a cache controller. The processor executes software code. The cache memory caches a portion of the software code. The memory interface communicates with a NVM storing a replica of the software code. The host interface communicates with hosts storing additional replicas of the software code. The cache controller is to determine whether each host is allocated for code fetching, to receive a request from the processor for a segment of the software code, when available in the cache memory to fetch the segment from the cache memory, when unavailable in the cache memory and at least one host is allocated, to fetch the segment from the hosts that are allocated, when unavailable in the cache memory and no host is allocated, to fetch the segment from the NVM, and to serve the fetched segment to the processor.
Type: Grant
Filed: August 28, 2023
Date of Patent: February 4, 2025
Assignee: Mellanox Technologies, Ltd.
Inventors: Yaniv Strassberg, Guy Harel, Gabi Liron, Yuval Itkin
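The fetch priority the claim walks through (cache first, then any host allocated for code fetching, then the NVM) amounts to a simple fallback chain. A minimal sketch, with caches and replicas modeled as plain dictionaries (the real controller operates on hardware interfaces, not dicts):

```python
def fetch_segment(seg, cache, allocated_hosts, nvm):
    """Return (source, data) for a code segment, following the claimed order:
    cache memory, then hosts allocated for code fetching, then the NVM."""
    if seg in cache:
        return "cache", cache[seg]
    for host in allocated_hosts:          # only hosts allocated for code fetching
        if seg in host:
            return "host", host[seg]
    return "nvm", nvm[seg]                # NVM always holds a full replica
```

The interesting design point is that host fetching is conditional on allocation: if no host is allocated, the controller falls straight through to the (typically slower) NVM.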
-
Patent number: 12216913
Abstract: A system and a method are disclosed that provide atomicity for large data writes to persistent memory of an object storage system. A segment of persistent memory is allocated to an application. The persistent memory includes non-volatile memory that is accessible in a random access, byte-addressable manner. The segment of persistent memory is associated with first and second bits of a bitmap. The first bit is set indicating that the segment of persistent memory has been allocated. Data is received from the application for storage in the segment of persistent memory, and the second bit is set indicating that data in the segment of persistent memory has been finalized and is ready for storage in a storage medium that is different from persistent memory. The atomicity of the data in persistent memory may be determined based on the first bit and the second bit being set.
Type: Grant
Filed: August 15, 2022
Date of Patent: February 4, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Angel Benedicto Aviles, Jr., Vinod Kumar Daga, Vamsikrishna Sadhu, Tejas Hunsur Krishna
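The two-bit scheme is easy to model: each segment owns two adjacent bits in a bitmap, one set at allocation time and one at finalize time, and a write is treated as atomic only when both are set. A minimal sketch under that reading (the bit layout and helper names are illustrative, not taken from the patent):

```python
ALLOCATED = 0b01   # first bit: segment has been allocated to the application
FINALIZED = 0b10   # second bit: data finalized, ready for the backing store

def set_bit(bitmap: int, segment: int, bit: int) -> int:
    """Set one of a segment's two bits; each segment owns 2 bits in the bitmap."""
    return bitmap | (bit << (2 * segment))

def is_atomic(bitmap: int, segment: int) -> bool:
    """A segment's data is atomic only when both of its bits are set."""
    bits = (bitmap >> (2 * segment)) & 0b11
    return bits == (ALLOCATED | FINALIZED)
```

A crash between allocation and finalization leaves only the first bit set, so recovery code can tell a torn large write apart from a completed one without scanning the data itself.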
-
Patent number: 12210455
Abstract: A computer system that records a replayable execution trace based on recording cache coherency protocol (CCP) messages into a first trace, and on recording memory snapshot(s) into a second trace. Based on determining that tracing of execution of a first execution context is to be enabled, the computer system initiates logging, into the second trace, of one or more memory snapshots of a memory space of the first execution context, and enables a hardware tracing feature of a processor. Enabling the tracing feature causes the processor to log, into the first trace, CCP message(s) generated in response to one or more memory accesses into the memory space of the first execution context. After enabling the hardware tracing feature of the processor, the computer system also logs or otherwise handles a write into the memory space of the first execution context by a second execution context.
Type: Grant
Filed: April 30, 2021
Date of Patent: January 28, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventor: Jordi Mola
-
Patent number: 12208812
Abstract: A vehicle device includes a plurality of CPU modules, a plurality of cache memories respectively provided for the plurality of CPU modules, a specifying unit configured to specify a shared region shared by the plurality of CPU modules, and a region arrangement unit configured to arrange the shared region specified by the specifying unit in a main memory.
Type: Grant
Filed: October 12, 2021
Date of Patent: January 28, 2025
Assignee: DENSO CORPORATION
Inventor: Nobuhiko Tanibata
-
Patent number: 12204471
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Grant
Filed: September 7, 2023
Date of Patent: January 21, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
-
Patent number: 12204454
Abstract: Systems, apparatuses, and methods for employing system probe filter aware last level cache insertion bypassing policies are disclosed. A system includes a plurality of processing nodes, a probe filter, and a shared cache. The probe filter monitors a rate of recall probes that are generated, and if the rate is greater than a first threshold, then the system initiates a cache partitioning and monitoring phase for the shared cache. Accordingly, the cache is partitioned into two portions. If the hit rate of a first portion is greater than a second threshold, then a second portion will have a non-bypass insertion policy since the cache is relatively useful in this scenario. However, if the hit rate of the first portion is less than or equal to the second threshold, then the second portion will have a bypass insertion policy since the cache is less useful in this case.
Type: Grant
Filed: October 29, 2021
Date of Patent: January 21, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Paul James Moyer, Jay Fleischman
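The two-threshold decision in the abstract reduces to a small policy function: the recall-probe rate gates whether the partitioning/monitoring phase runs at all, and the sampled hit rate of the first partition then selects the insertion policy for the second. A sketch under that reading (threshold values and policy names are illustrative):

```python
def choose_insertion_policy(recall_probe_rate, probe_threshold,
                            sample_hit_rate, hit_threshold):
    """Pick the insertion policy for the second cache partition.

    The probe filter's recall-probe rate triggers the partitioning phase;
    the first partition's hit rate then decides whether insertions into the
    second partition bypass the cache.
    """
    if recall_probe_rate <= probe_threshold:
        return "normal"        # no partitioning/monitoring phase needed
    if sample_hit_rate > hit_threshold:
        return "non-bypass"    # cache is proving useful: keep inserting
    return "bypass"            # cache is not useful: bypass insertion
```

Sampling usefulness on a small dedicated partition lets the policy adapt without perturbing the portion of the cache it is deciding about.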
-
Patent number: 12182661
Abstract: In some aspects, a hybrid quantum-classical computing platform may comprise: a first quantum processor unit (QPU); a second QPU; and a shared classical memory, the shared classical memory being connected to both the first QPU and the second QPU, wherein the shared classical memory is configured to share data between the first QPU and the second QPU. In some embodiments, the first QPU operates at a higher repetition rate and/or clock rate than the second QPU and the second QPU operates with a higher fidelity than the first QPU.
Type: Grant
Filed: November 13, 2020
Date of Patent: December 31, 2024
Assignee: Rigetti & Co, LLC
Inventors: Chad Tyler Rigetti, William J. Zeng, Blake Robert Johnson, Nikolas Anton Tezak
-
Patent number: 12147528
Abstract: While an application or a virtual machine (VM) is running, a device tracks accesses to cache lines to detect access patterns that indicate security attacks, such as cache-based side channel attacks or row hammer attacks. To enable the device to detect accesses to cache lines, the device is connected to processors via a coherence interconnect, and the application/VM data is stored in a local memory of the device. The device collects the cache lines of the application/VM data that are accessed while the application/VM is running into a buffer and the buffer is analyzed for access patterns that indicate security attacks.
Type: Grant
Filed: July 22, 2021
Date of Patent: November 19, 2024
Assignee: VMware LLC
Inventors: Irina Calciu, Andreas Nowatzyk, Pratap Subrahmanyam
-
Patent number: 12141482
Abstract: Data transformation and data quality checking is provided by reading data from a source datastore and storing the data into memory, performing in-memory processing of the data stored in memory, where the data is maintained in-memory for performance of the in-memory processing thereof, and where the in-memory processing includes performing one or more transformations on the data stored in memory, in which the data stored in memory is transformed and stored back into the memory, and applying one or more data quality rules to the data stored in-memory, and based on performing the in-memory processing of the data stored and maintained in memory for the in-memory processing, loading to a target datastore at least some of the data processed by the in-memory processing.
Type: Grant
Filed: September 23, 2021
Date of Patent: November 12, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Hemant Kumar Sivaswamy, Pushpender Kumar Garg, Rohit Jain
-
Patent number: 12141601
Abstract: A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions.
Type: Grant
Filed: August 28, 2023
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
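The admission rule here is a pure classification: while a global operation is in flight, only the named non-blocking transaction classes may enter the pipeline. A minimal sketch (the transaction-type strings are illustrative labels, not hardware encodings):

```python
# Transaction classes from the claim: reads and non-victim writes block;
# responses, snoops, and victims do not.
BLOCKING = {"read", "non_victim_write"}
NON_BLOCKING = {"response", "snoop", "victim"}

def admit(txn_type: str, global_op_in_progress: bool) -> bool:
    """Gate pipeline entry during a global L2 operation."""
    if not global_op_in_progress:
        return True                    # normal operation: everything enters
    return txn_type in NON_BLOCKING    # global op: only non-blocking enters
```

Letting snoops, responses, and victims through is what keeps the coherence protocol live (and deadlock-free) while the global operation drains.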
-
Patent number: 12141065
Abstract: In at least one embodiment, processing can include determining, by a first node, an update to a metadata (MD) page, wherein the first node includes a first cache; sending, from the first node to a second node, a commit message including the update to the MD page; receiving, at the second node, the commit message from the first node; and storing, by the second node, an updated version of the MD page in a second cache of the second node only if the second cache of the second node includes a cached copy of the MD page, wherein the updated version of the MD page, as stored in the second cache of the second node, is constructed by applying the update to the cached copy of the MD page.
Type: Grant
Filed: March 1, 2023
Date of Patent: November 12, 2024
Assignee: Dell Products L.P.
Inventors: Ami Sabo, Vladimir Shveidel, Dror Zalstein
-
Patent number: 12135646
Abstract: A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
Type: Grant
Filed: May 30, 2023
Date of Patent: November 5, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Timothy David Anderson, Kai Chirca
-
Patent number: 12105655
Abstract: A system for optimizing AHB bus data transmission performance includes a master; a decoder connected to the master; a first multiplexer connected to the decoder; and a plurality of slaves connected to the first multiplexer, the decoder, and the master, where the decoder is configured to output a slave selection signal, and determine a slave in transmission communication with the master based on the slave selection signal. The first multiplexer is configured to receive a transmission complete signal output by each slave, and select a transmission complete signal of a corresponding slave based on a first selection signal and output it to the corresponding slave, where the first selection signal is formed by beating the slave selection signal.
Type: Grant
Filed: May 26, 2022
Date of Patent: October 1, 2024
Assignee: Suzhou MetaBrain Intelligent Technology Co., Ltd.
Inventor: Zenghe Wang
-
Patent number: 12099448
Abstract: A cache memory subsystem includes virtually-indexed L1 and PIPT L2 set-associative caches having an inclusive allocation policy such that: when a first copy of a memory line specified by a physical memory line address (PMLA) is allocated into an L1 entry, a second copy of the line is also allocated into an L2 entry; when the second copy is evicted, the first copy is also evicted. For each value of the PMLA, the second copy can be allocated into only one L2 set, and an associated physical address proxy (PAP) for the PMLA includes a set index and way number that uniquely identifies the entry. For each value of the PMLA there exist two or more different L1 sets into which the first copy can be allocated, and when the L2 evicts the second copy, the L1 uses the PAP of the PMLA to evict the first copy.
Type: Grant
Filed: May 24, 2022
Date of Patent: September 24, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
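The key idea is that an L2 (set index, way) pair is a compact, unique name for a physical line, so the L1 can tag entries with that proxy instead of a full physical address. A toy sketch of the back-invalidation path under that reading; the encoding, geometry, and search loop are illustrative, not the patented microarchitecture (which would not linearly scan the L1):

```python
def pap(l2_set_index: int, l2_way: int, num_ways: int = 8) -> int:
    """Physical address proxy: since a PMLA maps to exactly one L2 set, the
    (set index, way) pair uniquely identifies the line's single L2 entry."""
    return l2_set_index * num_ways + l2_way

def on_l2_evict(l1_tags, pap_value):
    """Inclusive policy: when L2 evicts a line, find and evict the L1 copy
    by matching the PAP stored in the L1 tag (l1_tags: sets x ways of PAPs)."""
    for s, cache_set in enumerate(l1_tags):
        for w, entry in enumerate(cache_set):
            if entry == pap_value:
                cache_set[w] = None      # back-invalidate the first copy
                return (s, w)
    return None                          # line was not resident in L1
```

The proxy matters precisely because the L1 is virtually indexed: the same physical line may legally live in more than one L1 set, so the L2 cannot simply recompute a single L1 index at eviction time.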
-
Patent number: 12093184
Abstract: A processor-based system for allocating a higher-level cache line in a higher-level cache memory in response to an eviction request of a lower-level cache line is disclosed. The processor-based system determines whether the cache line is opportunistic, sets an opportunistic indicator to indicate that the lower-level cache line is opportunistic, and communicates the lower-level cache line and the opportunistic indicator. The processor-based system determines, based on the opportunistic indicator of the lower-level cache line, whether a higher-level cache line of a plurality of higher-level cache lines in the higher-level cache memory has less or equal importance than the lower-level cache line. In response, the processor-based system replaces the higher-level cache line in the higher-level cache memory with the lower-level cache line and associates the opportunistic indicator with the lower-level cache line in the higher-level cache memory.
Type: Grant
Filed: February 15, 2023
Date of Patent: September 17, 2024
Assignee: QUALCOMM Incorporated
Inventor: Ramkumar Srinivasan
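The allocation decision can be read as: an evicted lower-level line displaces a higher-level line only if some resident line is of less-or-equal importance. A sketch under that reading, with lines as dictionaries and a numeric "importance" field standing in for whatever priority metric the hardware uses (both are assumptions for illustration):

```python
def pick_victim(higher_lines, incoming_line):
    """Return the index of a higher-level line the incoming (opportunistic)
    lower-level line may replace, or None if every resident line is more
    important and the incoming line should be dropped instead."""
    for idx, line in enumerate(higher_lines):
        if line["importance"] <= incoming_line["importance"]:
            return idx            # replace here; the opportunistic flag
                                  # travels with the line into this level
    return None
```

Carrying the opportunistic indicator up the hierarchy is what keeps a speculatively useful line from ever displacing data the replacement policy considers more valuable.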
-
Patent number: 12093180
Abstract: A device includes a memory controller and a cache memory coupled to the memory controller. The cache memory has a first set of cache lines associated with a first memory block and comprising a first plurality of cache storage locations, as well as a second set of cache lines associated with a second memory block and comprising a second plurality of cache storage locations. A first location of the second plurality of cache storage locations comprises cache tag data for both the first set of cache lines and the second set of cache lines.
Type: Grant
Filed: June 29, 2022
Date of Patent: September 17, 2024
Assignee: Rambus Inc.
Inventors: Michael Raymond Miller, Dennis Doidge, Collins Williams
-
Patent number: 12079475
Abstract: A ferroelectric memory chiplet in a multi-dimensional packaging. The multi-dimensional packaging includes a first die comprising a switch and a first plurality of input-output transceivers. The multi-dimensional packaging includes a second die comprising a processor, wherein the second die includes a second plurality of input-output transceivers coupled to the first plurality of input-output transceivers. The multi-dimensional packaging includes a third die comprising a coherent cache or memory-side buffer, wherein the coherent cache or memory-side buffer comprises ferroelectric memory cells, wherein the coherent cache or memory-side buffer is coupled to the second die via I/Os. The dies are wafer-to-wafer bonded or coupled via micro-bumps, copper-to-copper hybrid bond, wire bond, flip-chip ball grid array routing, chip-on-wafer substrate, or embedded multi-die interconnect bridge.
Type: Grant
Filed: April 13, 2021
Date of Patent: September 3, 2024
Assignee: Kepler Computing Inc.
Inventors: Amrita Mathuriya, Christopher B. Wilkerson, Rajeev Kumar Dokania, Debo Olaosebikan, Sasikanth Manipatruni
-
Patent number: 12056073
Abstract: An address space field is used in conjunction with a normal address field to allow indication of an address space for the particular address value. In one instance, one address space value is used to indicate the bypassing of the address translation used between address spaces. A different address space value is designated for conventional operation, where address translations are performed. Other address space values are used to designate different transformations of the address values or the data. This technique provides a simplified format for handling address values and the like between different devices having different address spaces, simplifying overall computer system design and operation.
Type: Grant
Filed: September 16, 2022
Date of Patent: August 6, 2024
Assignee: Texas Instruments Incorporated
Inventors: Brian Karguth, Chuck Fuoco, Chunhua Hu, Todd Christopher Hiers
-
Patent number: 12050536
Abstract: The present disclosure includes apparatuses and methods for compute enabled cache. An example apparatus comprises a compute component, a memory, and a controller coupled to the memory. The controller is configured to operate on a block select and a subrow select as metadata to a cache line to control placement of the cache line in the memory to allow for a compute enabled cache.
Type: Grant
Filed: March 6, 2023
Date of Patent: July 30, 2024
Inventor: Richard C. Murphy
-
Patent number: 12032500
Abstract: In one embodiment, a fabric circuit is to receive requests for ownership and data commits from an agent. The fabric circuit includes a control circuit to maintain statistics regarding the requests for ownership and the data commits and throttle the fabric circuit based at least in part on the statistics. Other embodiments are described and claimed.
Type: Grant
Filed: September 16, 2020
Date of Patent: July 9, 2024
Assignee: Intel Corporation
Inventors: Swadesh Choudhary, Ajit Krisshna Nandyal Lakshman, Doddaballapur Jayasimha
-
Patent number: 12035636
Abstract: A magnetic memory includes a first spin-orbital-transfer-spin-torque-transfer (SOT-STT) hybrid magnetic device disposed over a substrate, a second SOT-STT hybrid magnetic device disposed over the substrate, and a SOT conductive layer connected to the first and second SOT-STT hybrid magnetic devices. Each of the first and second SOT-STT hybrid magnetic devices includes a first magnetic layer, as a magnetic free layer, a spacer layer disposed under the first magnetic layer, and a second magnetic layer, as a magnetic reference layer, disposed under the spacer layer. The SOT conductive layer is disposed over the first magnetic layer of each of the first and second SOT-STT hybrid magnetic devices.
Type: Grant
Filed: April 27, 2023
Date of Patent: July 9, 2024
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
Inventors: Ji-Feng Ying, Jhong-Sheng Wang, Tsann Lin
-
Patent number: 12020037
Abstract: The techniques disclosed herein implement a centralized lighting module configured to control a diverse set of lighting-enabled peripheral devices. The set of lighting-enabled peripheral devices is diverse with respect to a type and a manufacturer. The lighting module is referred to as a centralized lighting module because the lighting module is part of an operating system of a computing device. Consequently, a user of the computing device no longer has to download and learn to use multiple different lighting applications if the user wants to create a diverse lighting ecosystem in which lighting-enabled peripheral devices from different manufacturers are connected to the computing device. Similarly, a developer of a computing application no longer has to engage and interact with multiple application programming interfaces (APIs) and software development kits (SDKs) if the developer wants users of their computing application to be able to create a diverse lighting ecosystem.
Type: Grant
Filed: June 29, 2022
Date of Patent: June 25, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Connor Colin Marwan Al-Joundi, Eric Norman Badger, Tyler Duckworth, Stephanie Ling Diao, Emily Lynn Bender, Jerome Stephen Healy, Jan-Kristian Markiewicz, Sophia Sixue Chen
-
Patent number: 12019564
Abstract: Methods and apparatuses related to access to data stored in quarantined memory media are described. Memory systems can include multiple types of memory media (e.g., volatile and/or non-volatile), and data (e.g., information included therein) stored in the memory media is often subject to risks of being undesirably exposed to the public. For example, requests to write data in the memory media can often be made and accepted without a user's awareness, which can lead to the undesirable exposure of the data. According to embodiments of the present disclosure, a particular portion and/or location in the memory media can provide a data protection scheme such that data stored in the particular location can be prevented from being transferred out of the computing system.
Type: Grant
Filed: January 19, 2023
Date of Patent: June 25, 2024
Assignee: Micron Technology, Inc.
Inventors: Radhika Viswanathan, Bhumika Chhabra, Carla L. Christensen, Zahra Hosseinimakarem
-
Patent number: 12001343
Abstract: A data processing system includes a plurality of processing nodes communicatively coupled to a system fabric. Each of the processing nodes includes a respective plurality of processor cores. Logical partition (LPAR) information for each of a plurality of LPARs is maintained in a register set in each of the processor cores, where the LPAR information indicates, for each of the LPARs, which of the processing nodes may hold an address translation entry for each LPAR. Based on the LPAR information, a first processor core selects a broadcast scope for a multicast request on the system fabric that includes fewer than all of the plurality of processing nodes and issues the multicast request with the selected broadcast scope. The first processor core updates the LPAR information in the register set of a second processor core in another of the plurality of processing nodes via an inter-processor interrupt.
Type: Grant
Filed: December 30, 2022
Date of Patent: June 4, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Florian Auernhammer
-
Patent number: 11983538
Abstract: Techniques are disclosed relating to a processor load-store unit. In some embodiments, the load-store unit is configured to execute load/store instructions in parallel using first and second pipelines and first and second tag memory arrays. In tag write conflict situations, the load-store unit may arbitrate between the first and second pipelines to ensure the first and second tag memory array contents remain identical. In some embodiments, a data cache tag replay scheme is utilized. In some embodiments, executing load/store instructions in parallel with fills, probes, and store-updates, using separate but identical tag memory arrays, may advantageously improve performance.
Type: Grant
Filed: April 18, 2022
Date of Patent: May 14, 2024
Assignee: Cadence Design Systems, Inc.
Inventors: Robert T. Golla, Ajay A. Ingle
-
Patent number: 11966330
Abstract: Examples described herein relate to processor circuitry to issue a cache coherence message to a central processing unit (CPU) cluster by selection of a target cluster and issuance of the request to the target cluster, wherein the target cluster comprises the cluster or the target cluster is directly connected to the cluster. In some examples, the selected target cluster is associated with a minimum number of die boundary traversals. In some examples, the processor circuitry is to read an address range for the cluster to identify the target cluster using a single range check over memory regions including local and remote clusters. In some examples, issuance of the cache coherence message to a cluster is to cause the cache coherence message to traverse one or more die interconnections to reach the target cluster.
Type: Grant
Filed: June 5, 2020
Date of Patent: April 23, 2024
Assignee: Intel Corporation
Inventors: Vinit Mathew Abraham, Jeffrey D. Chamberlain, Yen-Cheng Liu, Eswaramoorthi Nallusamy, Soumya S. Eachempati
-
Patent number: 11966337
Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a data item key corresponding to a request from a user profile operating on a media player and receiving a version identifier corresponding to a first version of an application operating on the media player. It is determined that a shared cache includes a first value and second value for the data item key. A key component is generated corresponding to the user profile. Both the generated key component and the data item key are provided to the shared cache, and the first value of the data item as stored in the shared cache is received. The first value of the first version of the data item is updated.
Type: Grant
Filed: March 1, 2023
Date of Patent: April 23, 2024
Assignee: Roku, Inc.
Inventor: Bill Ataras
-
Patent number: 11960395
Abstract: The present disclosure generally relates to more efficient use of a delta buffer. To utilize the delta buffer, an efficiency can be gained by utilizing absolute delta entries and relative delta entries. The absolute delta entry will include the type of delta entry, the L2P table index, the L2P table offset, and the PBA. The relative delta entry will include the type of delta entry, the L2P table offset, and the PBA offset. The relative delta entry will utilize about half of the storage space of the absolute delta entry. The relative delta entry can be used after an absolute delta entry so long as the relative delta entry is for data stored in the same block as the previous delta entry. If data is stored in a different block, then the delta entry will be an absolute delta entry.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 16, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Amir Shaharabany, Shay Vaza
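The encoding choice can be sketched as a single branch: if the new delta lands in the same L2P table/block as the previous absolute entry, emit a compact relative entry (table offset plus PBA offset); otherwise emit a full absolute entry. Field names and the same-block test below are illustrative simplifications, not the on-media format:

```python
def encode_delta(prev_abs, table_index, table_offset, pba):
    """Encode one L2P delta entry.

    prev_abs: the most recent absolute entry (or None), carrying 'index'
    and 'pba'. A relative entry is legal only while writes stay in the
    same block/table region as that absolute entry.
    """
    if prev_abs is not None and prev_abs["index"] == table_index:
        return {"type": "rel",               # ~half the size of an absolute entry
                "offset": table_offset,
                "pba_off": pba - prev_abs["pba"]}
    return {"type": "abs",
            "index": table_index,
            "offset": table_offset,
            "pba": pba}
```

Since sequential writes tend to stay within a block, most entries in a busy delta buffer end up relative, roughly doubling how many deltas fit before a flush.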
-
Patent number: 11954033
Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
Type: Grant
Filed: October 19, 2022
Date of Patent: April 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
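The demote flow can be modeled with a region-granular directory and per-line cache states: an access to a region the directory tracks as exclusive triggers a "superprobe" that demotes the matching cached lines to shared. A toy sketch under that reading (dict-based directory and caches are stand-ins for the hardware structures):

```python
def access_region(directory, region, caches):
    """On a memory access to `region`, send a demote superprobe if the
    directory holds the region in the Exclusive coherency state.
    Returns the number of cache lines demoted."""
    if directory.get(region) != "exclusive":
        return 0                       # no demotion needed
    demoted = 0
    for cache in caches:               # superprobe fans out to the caches
        for line in cache:
            if line["region"] == region and line["state"] == "exclusive":
                line["state"] = "shared"
                demoted += 1
    directory[region] = "shared"       # directory now tracks region as shared
    return demoted
```

Tracking whole regions instead of single lines keeps the directory small; the cost is that one superprobe may demote several lines at once.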
-
Patent number: 11941428
Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include one or more queues and a transaction pipeline. The I/O agent circuit may issue, to the transaction pipeline from a queue of the one or more queues, a transaction of a series of transactions enqueued in a particular order. The I/O agent circuit may generate, at the transaction pipeline, a determination to return the transaction to the queue based on a detection of one or more conditions being satisfied. Based on the determination, the I/O agent circuit may reject, at the transaction pipeline, up to a threshold number of transactions that issued from the queue after the transaction issued. The I/O agent circuit may insert the transaction at a head of the queue such that the transaction is enqueued at the queue sequentially first for the series of transactions according to the particular order.
Type: Grant
Filed: March 31, 2022
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Sagi Lahav, Lital Levy-Rubin, Gaurav Garg, Gerard R. Williams, III, Samer Nassar, Per H. Hammarlund, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan, Jeff Gonion
-
Patent number: 11941295
Abstract: A data storage device and method for providing an adaptive data path are disclosed. In one embodiment, a data storage device is in communication with a host comprising a first processor (e.g., a graphics processing unit (GPU)), a second processor (e.g., a central processing unit (CPU)), and a queue. The data storage device chooses a data path to use to communicate with the queue based on whether the queue is associated with the first processor or with the second processor. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
Type: Grant
Filed: January 11, 2022
Date of Patent: March 26, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Judah Gamliel Hahn, Ariel Navon
-
Patent number: 11934548
Abstract: Methods for centralized access control for cloud relational database management system resources are performed by systems and devices. The methods utilize a central policy storage, managed externally to database servers, which stores external policies for access to internal database resources at up to fine granularity. Database servers in the processing system each receive external access policies that correspond to users of the system by push or pull operations from the central policy storage, and store the external access policies in a cache of the database servers for databases. For resource access, access conditions are determined via policy engines of database servers based on an external access policy in the cache that corresponds to a user, responsive to a resource access request from a device of the user specifying the internal resource. Data associated with the resource is provided to the user based on the access condition being met.
Type: Grant
Filed: August 12, 2021
Date of Patent: March 19, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yueren Wang, Elnata Degefa, Andreas Wolter, Steven Richard Gott, Nitish Gupta, Raghav Kaushik, Rakesh Khanduja, Shafi Ahmad, Dilli Dorai Minnal Arumugam, Pankaj Prabhakar Naik, Nikolas Christopher Ogg
-
Patent number: 11928056
Abstract: The present technology relates to an electronic device. A memory controller that increases a hit ratio of a cache memory includes a memory buffer configured to store command data corresponding to a request received from a host, and a cache memory configured to cache the command data. The cache memory stores the command data by allocating cache lines based on a component that outputs the command data and a flag included in the command data.
Type: Grant
Filed: February 19, 2021
Date of Patent: March 12, 2024
Assignee: SK hynix Inc.
Inventor: Do Hun Kim
-
Patent number: 11907753
Abstract: An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode, the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
Type: Grant
Filed: November 7, 2022
Date of Patent: February 20, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, David Matthew Thompson
-
Patent number: 11907125
Abstract: A computer-implemented method is provided. The method includes determining whether a rejection of a request is required and determining whether the request is software forward progress (SFP)-likely or SFP-unlikely upon determining that the rejection of the request is required. The method also includes executing a first pseudo random decision to set or not set a requested state of the request in an event the request is SFP-likely or SFP-unlikely, respectively, and rejecting the request following execution of the second pseudo random decision.
Type: Grant
Filed: April 5, 2022
Date of Patent: February 20, 2024
Assignee: International Business Machines Corporation
Inventors: Gregory William Alexander, Tu-An T. Nguyen, Deanna Postles Dunn Berger, Timothy Bronson, Christian Jacobi
-
Patent number: 11899562
Abstract: A tracing coprocessor that records execution trace data based on a cache coherency protocol (CCP) message. The tracing coprocessor comprises logic that causes the tracing coprocessor to listen on a bus that is communicatively coupled to a primary processor that executes executable code instructions. The logic also causes the tracing coprocessor to, based on listening on the bus, identify at least one CCP message relating to activity at a processor cache. The logic also causes the tracing coprocessor to identify, from the at least one CCP message, a memory cell consumption by the primary processor. The logic also causes the tracing coprocessor to initiate logging, into an execution trace, at least a memory cell data value consumed by the primary processor in connection with execution of at least one executable code instruction.
Type: Grant
Filed: July 20, 2021
Date of Patent: February 13, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Jordi Mola
-
Patent number: 11880316
Abstract: Example methods and systems for input output (IO) request handling based on tracking information are described. One example may involve a computer system configuring, in a cache, a zero-filled logical memory page that is mappable to multiple logical block addresses of a virtual disk. In response to detecting a first IO request to perform zero writing at a logical block address, the computer system may store tracking information indicating that zero writing has been issued. In response to detecting a second IO request to perform a read at the logical block address, the computer system may determine that zero writing has been issued for the logical block address based on the tracking information. The zero-filled logical memory page may be fetched from the cache to respond to the second IO request, thereby servicing the second IO request from the cache instead of the virtual disk.
Type: Grant
Filed: February 4, 2022
Date of Patent: January 23, 2024
Assignee: VMware, Inc.
Inventor: Kashish Bhatia
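The mechanism lends itself to a compact sketch: a single shared zero-filled page plus a per-LBA record of zero writes, so reads of zero-written addresses never touch the disk. This is a hypothetical illustration (names and the 4 KiB page size are invented, not from the patent):

```python
class ZeroWriteCache:
    """Sketch: one cached zero-filled page plus per-LBA tracking of zero writes."""
    ZERO_PAGE = bytes(4096)   # zero-filled logical memory page kept in cache

    def __init__(self, virtual_disk):
        self.virtual_disk = virtual_disk   # lba -> page bytes
        self.zero_written = set()          # tracking info: LBAs with zero writes issued

    def write_zeroes(self, lba):
        # Record that zero writing was issued rather than writing the disk now.
        self.zero_written.add(lba)

    def read(self, lba):
        # Serve reads of zero-written LBAs from the cached zero page.
        if lba in self.zero_written:
            return self.ZERO_PAGE
        return self.virtual_disk[lba]
```

Note that one physical zero page backs every zero-written LBA, which is what makes the cache-side shortcut cheap.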
-
Patent number: 11880260
Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes an entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread by consuming less power and with lower performance than the first processor.
Type: Grant
Filed: June 25, 2020
Date of Patent: January 23, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Elliot H. Mednick, Edward McLellan
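A scheduler for such a system has to match a thread's required ISA features against each processor's supported subset, preferring the lower-power core when both qualify. A hypothetical sketch of that routing decision (feature names and the power field are invented):

```python
def pick_processor(required_features, processors):
    """Sketch: route an instruction thread to the lowest-power processor
    whose supported ISA-feature subset covers what the thread needs."""
    candidates = [p for p in processors if required_features <= p["features"]]
    if not candidates:
        raise RuntimeError("no processor supports the required ISA features")
    # Among capable processors, prefer the one that consumes less power.
    return min(candidates, key=lambda p: p["power"])
```

Threads that only need the common subset land on the low-power core; threads touching the full feature set fall back to the fully featured one.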
-
Patent number: 11860948
Abstract: Described are methods, systems and computer readable media for keyed row data selection and processing.
Type: Grant
Filed: September 30, 2020
Date of Patent: January 2, 2024
Assignee: Deephaven Data Labs LLC
Inventors: Charles Wright, Ryan Caudy, David R. Kent, IV, Andrew Baranec, Mark Zeldis, Radu Teodorescu
-
Patent number: 11836525
Abstract: A system includes a memory, a processor in communication with the memory, and an operating system (“OS”) executing on the processor. The processor belongs to a processor socket. The OS is configured to pin a workload of a plurality of workloads to the processor belonging to the processor socket. Each respective processor belonging to the processor socket shares a common last-level cache (“LLC”). The OS is also configured to measure an LLC occupancy for the workload, reserve the LLC occupancy for the workload thereby isolating the workload from other respective workloads of the plurality of workloads sharing the processor socket, and maintain isolation by monitoring the LLC occupancy for the workload.
Type: Grant
Filed: December 17, 2020
Date of Patent: December 5, 2023
Assignee: Red Hat, Inc.
Inventors: Orit Wasserman, Marcel Apfelbaum
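The measure-reserve-monitor loop can be illustrated with a toy capacity model: a measured occupancy is carved out of the shared LLC, and monitoring then checks the workload against its reservation. This is an invented sketch (capacity units and names are assumptions, not the patented method):

```python
class LLCIsolator:
    """Sketch: measure a pinned workload's LLC occupancy, reserve that share,
    then maintain isolation by monitoring against the reservation."""

    def __init__(self, llc_capacity):
        self.llc_capacity = llc_capacity
        self.reserved = {}   # workload -> reserved occupancy

    def reserve(self, workload, measured_occupancy):
        # Reservation must fit in the capacity not already reserved by others.
        free = self.llc_capacity - sum(self.reserved.values())
        if measured_occupancy > free:
            raise RuntimeError("not enough unreserved LLC capacity")
        self.reserved[workload] = measured_occupancy

    def monitor(self, workload, current_occupancy):
        # Isolation holds while the workload stays within its reservation.
        return current_occupancy <= self.reserved[workload]
```

On real hardware this role is played by cache-allocation and occupancy-monitoring features rather than a software ledger like this.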
-
Patent number: 11836085
Abstract: Techniques for performing cache operations are provided. The techniques include recording an entry indicating that a cache line is exclusive-upgradeable; removing the cache line from a cache; and converting a request to insert the cache line into the cache into a request to insert the cache line in the cache in an exclusive state.
Type: Grant
Filed: October 29, 2021
Date of Patent: December 5, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
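The three steps of the abstract map naturally onto a small state machine: record on eviction, then convert the next insertion request for that line into an exclusive-state insertion. A hypothetical sketch under those assumptions (state names follow common MESI-style usage, not the patent text):

```python
class ExclusiveUpgradeTracker:
    """Sketch: remember evicted lines recorded as exclusive-upgradeable, and
    convert their next insertion request into an exclusive-state insertion."""

    def __init__(self):
        self.exclusive_upgradeable = set()   # addresses recorded on eviction
        self.cache = {}                      # address -> coherence state

    def evict(self, address, was_upgraded):
        self.cache.pop(address, None)
        if was_upgraded:
            # Record an entry indicating the line is exclusive-upgradeable.
            self.exclusive_upgradeable.add(address)

    def insert(self, address, state="shared"):
        # Insertion requests for recorded lines become exclusive insertions.
        if address in self.exclusive_upgradeable:
            state = "exclusive"
        self.cache[address] = state
        return state
```

The payoff is avoiding a later shared-to-exclusive upgrade (and its coherence traffic) for lines that historically needed exclusivity anyway.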
-
Patent number: 11809745
Abstract: Storage devices include a memory array comprised of a plurality of memory devices. As memory array density increases, multi-pass programming is utilized to reduce negative effects to neighboring memory devices. The use of multi-pass programming requires longer access to the data being programmed. To avoid adding additional lower density or controller memory, data within a host memory is accessed multiple times as needed to provide pieces of data to the memory array, which is configured to comply with the utilized multi-pass programming method. The expected order of the multi-pass programming method can be determined to generate one or more memory pipeline instruction processing queues to direct the components of the storage device memory pipeline to access, re-access, and process the host data in a specific order necessary for delivery to the memory array to comply with the utilized multi-pass programming method.
Type: Grant
Filed: May 13, 2021
Date of Patent: November 7, 2023
Assignee: Western Digital Technologies, Inc.
Inventor: Rishi Mukhopadhyay
-
Patent number: 11805179
Abstract: A method of intelligent persistent mobile device management connectivity, including establishing a session between a mobile device and a mobile device management provider, directing the mobile device by the mobile device management provider to perform a successive operation, maintaining the established session between the mobile device and the mobile device management provider while the mobile device is online and periodically checking the mobile device management communication at a communication frequency, wherein the communication frequency is based on a performance feedback parameter.
Type: Grant
Filed: January 27, 2022
Date of Patent: October 31, 2023
Assignee: Addigy, Inc.
Inventors: Jason Dettbarn, Javier Carmona, Carlos Ruiz
-
Patent number: 11803484
Abstract: A processor applies a software hint policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different software hint policy for data associated with cache entries in each region of the cache. One test region applies a software hint policy under which software hints are followed. The other test region applies a software hint policy under which software hints are ignored. One of the software hint policies is selected for application to a non-test region of the cache.
Type: Grant
Filed: October 28, 2021
Date of Patent: October 31, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul Moyer
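This is the classic set-dueling pattern: two small test regions each run one candidate policy, and the bulk of the cache adopts whichever performs better. A minimal sketch using miss counts as the access metric (the metric choice and names are assumptions, not from the patent):

```python
class HintPolicySelector:
    """Sketch of test-region dueling: one region follows software hints, one
    ignores them, and the non-test region adopts whichever misses less."""

    def __init__(self):
        self.misses = {"follow": 0, "ignore": 0}   # per-test-region access metric

    def record_miss(self, test_region):
        self.misses[test_region] += 1

    def policy_for_non_test_region(self):
        # Select the hint policy of the test region with the better metric.
        return min(self.misses, key=self.misses.get)
```

Because the test regions are a small fixed slice of the cache, the experiment runs continuously at negligible cost while the winning policy governs everything else.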
-
Patent number: 11782832
Abstract: In a computer system, a processor and an I/O device controller communicate with each other via a coherence interconnect and according to a cache coherence protocol. Registers of the I/O device controller are mapped to the cache coherent memory space to allow the processor to treat the registers as cacheable memory. As a result, latency of processor commands executed by the I/O device controller is decreased, and the size of data stored in the I/O device controller that can be accessed by the processor is increased from the size of a single register to the size of an entire cache line.
Type: Grant
Filed: August 25, 2021
Date of Patent: October 10, 2023
Assignee: VMware, Inc.
Inventors: Isam Wadih Akkawi, Andreas Nowatzyk, Pratap Subrahmanyam, Nishchay Dua, Adarsh Seethanadi Nayak, Venkata Subhash Reddy Peddamallu, Irina Calciu
-
Patent number: 11784164
Abstract: Described is a packaging technology to improve performance of an AI processing system. An IC package is provided which comprises: a substrate; a first die on the substrate, and a second die stacked over the first die. The first die includes memory and the second die includes computational logic. The first die comprises DRAM having bit-cells. The memory of the first die may store input data and weight factors. The computational logic of the second die is coupled to the memory of the first die. In one example, the second die is an inference die that applies fixed weights for a trained model to input data to generate an output. In one example, the second die is a training die that enables learning of the weights. Ultra high-bandwidth is enabled by placing the first die below the second die. The two dies are wafer-to-wafer bonded or coupled via micro-bumps.
Type: Grant
Filed: September 10, 2021
Date of Patent: October 10, 2023
Assignee: KEPLER COMPUTING INC.
Inventors: Rajeev Kumar Dokania, Sasikanth Manipatruni, Amrita Mathuriya, Debo Olaosebikan
-
Patent number: 11768625
Abstract: A storage device may include: a memory device including a plurality of pages; a cache memory device including a first cache memory which caches first data among data stored in the plurality of pages and a second cache memory which caches second data among the data stored in the plurality of pages; and a memory controller for counting a number of times that each of the plurality of pages is read and a number of times that each of the plurality of pages is written, based on a read request or a write request received from a host, and moving the first data from the first cache memory to the second cache memory when the first data is stored in a first page and a number of times that the first page is read and a number of times that the first page is written satisfy a predetermined condition.
Type: Grant
Filed: December 21, 2021
Date of Patent: September 26, 2023
Assignee: SK hynix Inc.
Inventor: Kyung Soo Lee
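The migration rule reduces to per-page read and write counters plus a "predetermined condition" that triggers the move between cache tiers. The sketch below invents concrete thresholds purely for illustration; the patent does not specify them:

```python
class TieredCache:
    """Sketch: data migrates from a first to a second cache once its page's
    read and write counts cross fixed thresholds (threshold values invented)."""
    READ_THRESHOLD = 3
    WRITE_THRESHOLD = 1

    def __init__(self):
        self.first, self.second = {}, {}   # two cache tiers: page -> data
        self.reads, self.writes = {}, {}   # per-page access counters

    def on_write(self, page, data):
        self.writes[page] = self.writes.get(page, 0) + 1
        self.first[page] = data
        self._maybe_move(page)

    def on_read(self, page):
        self.reads[page] = self.reads.get(page, 0) + 1
        self._maybe_move(page)
        return self.second.get(page, self.first.get(page))

    def _maybe_move(self, page):
        # Move first-cache data to the second cache once both counts qualify.
        if (page in self.first
                and self.reads.get(page, 0) >= self.READ_THRESHOLD
                and self.writes.get(page, 0) >= self.WRITE_THRESHOLD):
            self.second[page] = self.first.pop(page)
```

In effect the counters identify hot pages, and only those earn a slot in the second tier.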
-
Patent number: 11755485
Abstract: The invention relates to a device for use in maintaining cache coherence in a multiprocessor computing system. The snoop filter device is connectable with a plurality of cache elements, where each cache element comprises a number of cache agents. The snoop filter device comprises a plurality of snoop filter storage locations, where each snoop filter storage location is mapped to one cache element.
Type: Grant
Filed: March 13, 2020
Date of Patent: September 12, 2023
Assignee: NUMASCALE AS
Inventors: Thibaut Palfer-Sollier, Steffen Persvold, Helge Simonsen, Mario Lodde, Thomas Moen, Kai Arne Midjås, Einar Rustad, Goutam Debnath
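The core idea of a snoop filter — one storage location per cache element recording which lines it may hold, so snoops go only where needed — can be sketched briefly. This is a generic illustration of snoop filtering, not this patent's specific device:

```python
class SnoopFilter:
    """Sketch: snoop filter storage locations mapped one-to-one to cache
    elements; recorded addresses let later snoops skip uninvolved caches."""

    def __init__(self, cache_elements):
        # One snoop filter storage location per cache element.
        self.locations = {element: set() for element in cache_elements}

    def record(self, element, address):
        # Note that this cache element may hold the line at this address.
        self.locations[element].add(address)

    def snoop_targets(self, address, requester):
        # Only cache elements known to hold the line need to be snooped.
        return [e for e, held in self.locations.items()
                if address in held and e != requester]
```

Without the filter, every coherence request would have to broadcast to all cache elements; the filter trades a little tracking state for far less snoop traffic.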
-
Patent number: 11755494
Abstract: Techniques for performing cache operations are provided. The techniques include, for a memory access class, detecting a threshold number of instances in which cache lines in an exclusive state in a cache are changed to an invalid state or a shared state without being in a modified state; in response to the detecting, treating first coherence state agnostic requests for cache lines for the memory access class as requests for cache lines in a shared state; detecting a reset event for the memory access class; and in response to detecting the reset event, treating second coherence state agnostic requests for cache lines for the memory access class as coherence state agnostic requests.
Type: Grant
Filed: October 29, 2021
Date of Patent: September 12, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
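Put simply: if exclusive lines for a class keep getting dropped without ever being modified, exclusivity was wasted, so state-agnostic requests are demoted to shared requests until a reset event. A hypothetical sketch of that adaptation (the threshold value and names are invented):

```python
class CoherenceRequestAdaptor:
    """Sketch: after enough exclusive lines are invalidated or shared without
    modification, demote state-agnostic requests to shared-state requests."""
    THRESHOLD = 4   # invented threshold number of wasted-exclusive instances

    def __init__(self):
        self.wasted_exclusive = 0   # E -> I/S transitions with no modification

    def on_line_dropped(self, was_modified):
        # Count instances where the exclusive state was never actually used.
        if not was_modified:
            self.wasted_exclusive += 1

    def resolve_request(self, requested_state):
        # Past the threshold, agnostic requests become shared requests.
        if requested_state == "agnostic" and self.wasted_exclusive >= self.THRESHOLD:
            return "shared"
        return requested_state

    def reset(self):
        # A reset event restores normal agnostic handling.
        self.wasted_exclusive = 0
```

The benefit is less coherence traffic: shared requests do not force other caches to give up their copies the way exclusive grants do.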