Hierarchical Caches Patents (Class 711/122)
-
Patent number: 12141071
Abstract: Provided is a processor that includes a load and store unit (LSU) and a cache memory, and transfers data information from a store queue in the LSU to the cache memory. The cache memory requests an information packet from the LSU when the cache memory determines that an available entry exists in a store queue within the cache memory. The LSU acknowledges the request and transfers an information packet to the cache memory. The LSU anticipates that an additional available entry exists in the cache memory, transmits an additional acknowledgement to the cache memory, and transfers an additional information packet, before receiving an additional request from the cache memory.
Type: Grant
Filed: July 21, 2022
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Shakti Kapoor, Nelson Wu, Manoj Dusanapudi
-
Patent number: 12135649
Abstract: In one aspect, a delta prefetcher disclosed herein bases its predictions on both past memory accesses and predictively prefetched memory accesses. More specifically, the delta prefetcher disclosed herein bases its prediction on both the difference or “delta” between memory addresses of data previously fetched from memory and the difference or “delta” between addresses of data predictively fetched from memory. The delta prefetcher tracks the delta memory accesses by utilizing two distinct tables. The fetch table tracks the memory deltas for each memory operation, such as a LOAD or STORE instruction, that the CPU has executed. The delta table predicts the next memory address to prefetch based on the last prefetched memory accesses.
Type: Grant
Filed: February 14, 2023
Date of Patent: November 5, 2024
Assignee: QUALCOMM Incorporated
Inventors: Ramkumar Srinivasan, Gerard Williams, Varun Palivela
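The two-table split is straightforward to model in software. Below is a minimal sketch, assuming the fetch table is indexed by instruction address (PC) and records the last address and last delta per instruction, while the delta table maps an observed delta to the delta that tended to follow it; the class and method names are illustrative, not from the patent.

```python
class DeltaPrefetcher:
    def __init__(self):
        self.fetch_table = {}   # pc -> (last_addr, last_delta)
        self.delta_table = {}   # observed delta -> predicted next delta

    def access(self, pc, addr):
        """Record an access and return a predicted prefetch address, if any."""
        prefetch_addr = None
        if pc in self.fetch_table:
            last_addr, last_delta = self.fetch_table[pc]
            delta = addr - last_addr
            # Learn: the delta that followed last_delta was `delta`.
            if last_delta is not None:
                self.delta_table[last_delta] = delta
            # Predict: if we have seen what tends to follow `delta`,
            # prefetch one stride ahead of the current address.
            if delta in self.delta_table:
                prefetch_addr = addr + self.delta_table[delta]
            self.fetch_table[pc] = (addr, delta)
        else:
            self.fetch_table[pc] = (addr, None)
        return prefetch_addr

    def prefetch_issued(self, pc, addr):
        # The patent also folds predictively fetched addresses back into
        # its history; modeled here by training on the prefetch address.
        self.access(pc, addr)
```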
-
Patent number: 12135655
Abstract: Provided are a computer program product, system, and method for saving track metadata format information for tracks demoted from cache for use when the demoted track is later staged into cache. A track is demoted from the cache and indicated in a demoted track list. The track format information for the demoted track is saved, wherein the track format information indicates a layout of data in the track. The saved track format information for the demoted track is used when the demoted track is staged back into the cache.
Type: Grant
Filed: July 27, 2017
Date of Patent: November 5, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
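The mechanism amounts to remembering a track's layout descriptor at demotion time so a later stage-in can reuse it instead of rebuilding it. A minimal sketch, assuming dict-backed structures and an opaque format descriptor; all names are illustrative.

```python
demoted_tracks = []   # ids of tracks demoted from the cache
saved_formats = {}    # track id -> saved track format information
cache = {}            # track id -> (track data, format)

def demote(track_id, fmt):
    cache.pop(track_id, None)
    demoted_tracks.append(track_id)   # indicate in the demoted track list
    saved_formats[track_id] = fmt     # remember the data layout

def stage(track_id, read_from_disk):
    data = read_from_disk(track_id)
    fmt = saved_formats.get(track_id)  # reuse the saved layout if present
    cache[track_id] = (data, fmt)
    return fmt
```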
-
Patent number: 12117937
Abstract: A microprocessor includes a virtually-indexed L1 data cache that has an allocation policy that permits multiple synonyms to be co-resident. Each L2 entry is uniquely identified by a set index and a way number. A store unit, during a store instruction execution, receives a store physical address proxy (PAP) for a store physical memory line address (PMLA) from an L1 entry hit upon by a store virtual address, and writes the store PAP to a store queue entry. The store PAP comprises the set index and the way number of an L2 entry that holds a line specified by the store PMLA. The store unit, during the store commit, reads the store PAP from the store queue, looks up the store PAP in the L1 to detect synonyms, writes the store data to one or more of the detected synonyms, and evicts the non-written detected synonyms.
Type: Grant
Filed: January 8, 2024
Date of Patent: October 15, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
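Because a PAP is just the (set index, way number) pair of the L2 entry holding the line, it packs into far fewer bits than a full physical memory line address. A sketch of the packing, assuming an L2 with 1024 sets and 8 ways; the sizes are illustrative, not from the patent.

```python
SET_BITS, WAY_BITS = 10, 3   # 1024 sets, 8 ways (illustrative)

def make_pap(set_index: int, way: int) -> int:
    """Pack an L2 (set, way) pair into a compact physical address proxy."""
    assert 0 <= set_index < (1 << SET_BITS) and 0 <= way < (1 << WAY_BITS)
    return (way << SET_BITS) | set_index

def pap_set(pap: int) -> int:
    return pap & ((1 << SET_BITS) - 1)

def pap_way(pap: int) -> int:
    return pap >> SET_BITS
```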
-
Patent number: 12093754
Abstract: A processor includes a plurality of cores to which individual destination notification dedicated lines are coupled. A destination core selection circuit receives a plurality of packets having some of the cores as destinations, respectively, for arbitration, and selects one first core from among the cores serving as the destinations of the plurality of packets. A data matching circuit compares first transmission data included in a packet for the first core with second transmission data included in a packet for the core other than the first core serving as a destination to participate in the arbitration, extracts one or more second cores of which the second transmission data matches the first transmission data, designates the first core and the second core as the destinations by using the destination notification dedicated lines, and transmits the first transmission data via a data notification line.
Type: Grant
Filed: August 29, 2022
Date of Patent: September 17, 2024
Assignee: Fujitsu Limited
Inventor: Tadafumi Akazawa
-
Patent number: 12093181
Abstract: A technique for operating a cache is disclosed. The technique includes, based on a workload change, identifying a first allocation permissions policy; operating the cache according to the first allocation permissions policy; based on set sampling, identifying a second allocation permissions policy; and operating the cache according to the second allocation permissions policy.
Type: Grant
Filed: June 28, 2022
Date of Patent: September 17, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Chintan S. Patel, Alexander J. Branover, Benjamin Tsien, Edgar Munoz, Vydhyanathan Kalyanasundharam
-
Patent number: 12086064
Abstract: An apparatus includes first CPU and second CPU cores, an L1 cache subsystem coupled to the first CPU core and comprising an L1 controller, and an L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes an L2 memory and an L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory.
Type: Grant
Filed: June 22, 2022
Date of Patent: September 10, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Pramod Kumar Swami, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
-
Patent number: 12038884
Abstract: An overlay network is augmented to provide more efficient data storage by processing a dataset of high dimension into an equivalent dataset of lower dimension, wherein the data reduction reduces the amount of actual physical data but not necessarily its informational value. Data to be processed (dimensionally-reduced) is received by an ingestion layer and supplied to a learning-based storage reduction application that implements the data reduction technique. The application applies a data reduction algorithm and stores the resulting dimensionally-reduced data sets in the native data storage or third party cloud. To recover the original higher-dimensional data, an associated reverse algorithm is implemented. In general, the application converts an N dimensional data set to a K dimensional data set, where K<<N. The N dimensional dataset has a high dimension, and the K dimensional dataset has a low dimension.
Type: Grant
Filed: June 27, 2023
Date of Patent: July 16, 2024
Assignee: Akamai Technologies, Inc.
Inventor: Indrajit Banerjee
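The patent does not disclose a specific reduction algorithm; PCA is one well-known learning-based linear reduction with an associated reverse transform, so the sketch below uses it purely as a stand-in for the N-to-K conversion and its inverse. numpy only; the function names are illustrative.

```python
import numpy as np

def fit_projection(X: np.ndarray, k: int):
    """Learn a K-dimensional basis for the N-dimensional rows of X."""
    mean = X.mean(axis=0)
    # Top-k principal directions from the SVD of the centered data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]           # basis: shape (K, N)

def reduce_dims(X, mean, basis):  # N-dim -> K-dim (the stored form)
    return (X - mean) @ basis.T

def recover(Z, mean, basis):      # K-dim -> approximate N-dim original
    return Z @ basis + mean
```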
-
Patent number: 12032931
Abstract: Disclosed are compiling methods and apparatuses, where a compiling method includes receiving a single-core-based code and input data for an operation to be performed based on the single-core-based code, generating kernel clusters by performing graph clustering based on one or more operation kernels in the single-core-based code and the input data, and generating a multi-core-based code based on the kernel clusters.
Type: Grant
Filed: March 29, 2021
Date of Patent: July 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Keunmo Park
-
Patent number: 12019560
Abstract: Process isolation for a PIM device includes: receiving, from a process, a call to allocate a virtual address space where the process stores a PIM configuration context; allocating the virtual address space including mapping a physical address space including PIM device configuration registers to the virtual address space only if the physical address space is not mapped to another process's virtual address space; and programming the PIM device configuration space according to the configuration context. When a PIM command is executed, a translation mechanism determines whether there is a valid mapping of a virtual address of the PIM command to a physical address of a PIM resource, such as a LIS entry. If a valid mapping exists, the translation is completed and the resource is accessed, but if there is not a valid mapping, the translation fails and the process is blocked from accessing the PIM resource.
Type: Grant
Filed: December 20, 2021
Date of Patent: June 25, 2024
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Sooraj Puthoor, Muhammad Amber Hassaan, Ashwin Aji, Michael L. Chu, Nuwan Jayasena
-
Patent number: 12013808
Abstract: Embodiments are generally directed to a multi-tile architecture for graphics operations. An embodiment of an apparatus includes a multi-tile architecture for graphics operations including a multi-tile graphics processor, the multi-tile processor including one or more dies; multiple processor tiles installed on the one or more dies; and a structure to interconnect the processor tiles on the one or more dies, wherein the structure is to enable communications between the processor tiles.
Type: Grant
Filed: March 14, 2020
Date of Patent: June 18, 2024
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Ben Ashbaugh, Scott Janus, Aravindh Anantaraman, Abhishek R. Appu, Niranjan Cooray, Varghese George, Arthur Hunter, Brent E. Insko, Elmoustapha Ould-Ahmed-Vall, Selvakumar Panneer, Vasanth Ranganathan, Joydeep Ray, Kamal Sinha, Lakshminarayanan Striramassarma, Prasoonkumar Surti, Saurabh Tangri
-
Patent number: 11995324
Abstract: Disclosed are devices, methods, systems, media, and other implementations that include a method comprising accessing, during execution of a process, a memory element, determining whether data stored in the accessed memory element includes security data representative of locations that, if accessed, indicate a potential system violation condition, determining, in response to a determination that the accessed memory element includes the security data, whether execution of the process involves access of one or more memory locations in the accessed memory element containing the security data, and performing one or more remedial actions in response to a determination that the one or more memory locations in the memory element containing the security data are being accessed.
Type: Grant
Filed: January 16, 2020
Date of Patent: May 28, 2024
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Lakshminarasimhan Sethumadhavan, Hiroshi Sasaki, Mohamed Tarek Bnziad Mohamed Hassan, Miguel A. Arroyo
-
Patent number: 11989128
Abstract: A node of the computing environment obtains an exclusive fetch request of a cache line shared by, at least, the node and a manager node of the computing environment. The exclusive fetch request includes a state indication regarding processing of the exclusive fetch request by the manager node. The node processes the exclusive fetch request, based on the state indication included with the exclusive fetch request regarding processing of the exclusive fetch request by the manager node.
Type: Grant
Filed: December 15, 2022
Date of Patent: May 21, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Winston Herring, Gregory William Alexander, Timothy Bronson, Jason D Kohl
-
Patent number: 11966284
Abstract: A fault-tolerant computer system includes a plurality of processors configured to simultaneously execute identical sets of processor-executable instructions, each of the plurality of processors containing a processor core including one or more registers and a local memory, an arbiter configured to read each of the registers of the plurality of processors, detect incorrect register values, and overwrite the registers containing the incorrect register values with corrected register values, and a memory scrubber configured to read each address of the local memories of the plurality of processors, detect incorrect memory values, and overwrite addresses containing the incorrect memory values with corrected memory values. In various embodiments, the computer system may be implemented using one or more field programmable gate arrays (FPGAs).
Type: Grant
Filed: October 10, 2023
Date of Patent: April 23, 2024
Assignees: MONTANA STATE UNIVERSITY, RESILIENT COMPUTING, LLC
Inventors: Brock J. Lameres, Christopher Michel Major
-
Patent number: 11960400
Abstract: A cache memory circuit capable of dealing with multiple conflicting requests to a given cache line is disclosed. In response to receiving an acquire request for the given cache line from a particular lower-level cache memory circuit, the cache memory circuit sends probe requests regarding the given cache line to other lower-level cache memory circuits. In situations where a different lower-level cache memory circuit is simultaneously trying to evict the given cache line that the particular lower-level cache memory circuit is trying to obtain a copy of, the cache memory circuit performs a series of operations to service both requests and ensure that the particular lower-level cache memory circuit receives a copy of the given cache line that includes any changes in the evicted copy of the given cache line.
Type: Grant
Filed: April 26, 2022
Date of Patent: April 16, 2024
Assignee: Cadence Design Systems, Inc.
Inventors: Robert T. Golla, Matthew B. Smittle
-
Patent number: 11954037
Abstract: A computing system includes a volatile memory, a cache coupled with the volatile memory, and a processing device coupled with the cache and at least one of a storage device or a network port. The processing device is to: generate a plurality of virtual addresses that are sequentially numbered for data that is to be at least one of processed or transferred in response to an input/output (I/O) request; allocate, for the data, a continuous range of physical addresses of the volatile memory; generate a set of hash-based values based on mappings between the plurality of virtual addresses and respective physical addresses of the continuous range of physical addresses; identify a unique cache line of the cache that corresponds to each respective hash-based value of the set of hash-based values; and cause the data to be directly stored in the unique cache lines of the cache.
Type: Grant
Filed: February 10, 2022
Date of Patent: April 9, 2024
Assignee: NVIDIA Corporation
Inventors: Ankit Sharma, Shridhar Rasal
-
Patent number: 11947419
Abstract: An operation method of a storage device includes: receiving a write request including an object identifier and data from an external device; performing a hash operation on the data to generate a hash value; determining whether an entry associated with the hash value is empty in a table; storing the data in an area of the storage device corresponding to a physical address and updating the entry to include the physical address and a reference count, when it is determined that the entry is empty; and increasing the reference count included in the entry without performing a store operation associated with the data, when it is determined that the entry is not empty, and an error message is returned to the external device when the entry associated with the hash value is not present in the table.
Type: Grant
Filed: November 29, 2021
Date of Patent: April 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jooyoung Hwang
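The write path reduces to a hash lookup followed by either a store-plus-refcount or a refcount bump. A minimal sketch, assuming a hash-to-(physical address, reference count) table and a flat byte store; allocate() and the table layout are illustrative, not the patent's.

```python
import hashlib

table = {}       # hash -> [physical_address, reference_count]
storage = {}     # physical_address -> data
_next_addr = 0

def allocate(data: bytes) -> int:
    """Place data at the next free physical address (illustrative)."""
    global _next_addr
    addr = _next_addr
    storage[addr] = data
    _next_addr += len(data)
    return addr

def write(object_id: str, data: bytes) -> None:
    h = hashlib.sha256(data).hexdigest()
    entry = table.get(h)
    if entry is None:
        # Empty entry: store the data and start a reference count.
        table[h] = [allocate(data), 1]
    else:
        # Duplicate data: skip the store, just bump the reference count.
        entry[1] += 1
```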
-
Patent number: 11947821
Abstract: The present disclosure provides methods, systems, and non-transitory computer readable media for managing a primary storage unit of an accelerator. The methods include assessing activity of the accelerator; assigning, based on the assessed activity of the accelerator, a lease to a group of one or more pages of data on the primary storage unit, wherein the assigned lease indicates a lease duration; and marking, in response to the expiration of the lease duration indicated by the lease, the group of one or more pages of data as an eviction candidate.
Type: Grant
Filed: June 15, 2020
Date of Patent: April 2, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Yongbin Gu, Pengcheng Li, Tao Zhang, Yuan Xie
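In software terms, each page group carries an expiry time, and groups whose leases have lapsed become eviction candidates. A hedged sketch, assuming a monotonic clock and a caller-chosen lease duration derived from the assessed accelerator activity; the structures are illustrative.

```python
import time

leases = {}   # page_group_id -> lease expiry timestamp

def assign_lease(group_id, duration_s: float):
    """Grant a lease whose duration reflects accelerator activity."""
    leases[group_id] = time.monotonic() + duration_s

def eviction_candidates():
    """Mark groups whose lease duration has expired as candidates."""
    now = time.monotonic()
    return [g for g, expiry in leases.items() if expiry <= now]
```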
-
Patent number: 11940923
Abstract: Technologies are described for cost based management of cache entries stored in a computer memory. In one example, a plurality of cache entries may be stored at a cache in a computer memory and the cache entries may have a cost measure associated with individual cache entries. A cost measure may represent a computing cost of an application to generate a cache entry. An incoming cache entry may be received at the cache, where the incoming cache entry has a cost measure associated with the incoming cache entry. In response to receiving the incoming cache entry, a cache entry that has a lower cost measure than the cost measure for other cache entries may be identified for eviction from the cache. The cache entry identified for eviction may be evicted from the cache, and the incoming cache entry may be written into the cache stored in the computer memory.
Type: Grant
Filed: December 11, 2019
Date of Patent: March 26, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Andrew Samnick, Jacob Shannan Carr, Sean Robert Connell
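The eviction rule is simply "evict the entry that is cheapest to regenerate." A compact sketch, assuming each entry carries its cost measure at insertion time; the class shape is illustrative.

```python
class CostAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}   # key -> (value, cost measure)

    def put(self, key, value, cost: float):
        """Insert an entry, evicting the lowest-cost entry if full."""
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict the resident entry cheapest to regenerate.
            victim = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[victim]
        self.entries[key] = (value, cost)

    def get(self, key):
        value, _ = self.entries[key]
        return value
```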
-
Patent number: 11940940
Abstract: A processing device has a plurality of interfaces and a plurality of processors. During different phases of execution of a computer program, different processors are associated with different interfaces, such that the connectivity between processors and interfaces for the sending of egress data and the receiving of ingress data may change during execution of that computer program. The change in this connectivity is directed by the compiled code running on the processors. The compiled code selects which buses associated with which interfaces, given processors are to connect to for receipt of ingress data. Furthermore, the compiled code causes control messages to be sent to circuitry associated with the interfaces, so as to control which buses associated with which processors, given interfaces are to connect to.
Type: Grant
Filed: April 12, 2022
Date of Patent: March 26, 2024
Assignee: GRAPHCORE LIMITED
Inventors: Daniel Wilkinson, Stephen Felix, Simon Knowles, Graham Cunningham, David Lacey
-
Patent number: 11940918
Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined to be a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
Type: Grant
Filed: February 13, 2023
Date of Patent: March 26, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Kai Chirca, David Matthew Thompson
-
Patent number: 11935153
Abstract: Data processing methods and devices are provided. A processing device comprises memory and a processor. The memory, which comprises a cache, is configured to store portions of data. The processor is configured to issue a store instruction to store one of the portions of data, provide identifying information associated with the one portion of data, compress the one portion of data, and store the compressed one portion of data across multiple lines of the cache using the identifying information. In an example, the one portion of data is a block of pixels, and the processor is configured to request pixel data for a pixel of a compressed block of pixels, send additional requests for data for other pixels determined to belong to the compressed pixel block and provide an indication that the requests are for pixel data belonging to the compressed block of pixels.
Type: Grant
Filed: December 28, 2020
Date of Patent: March 19, 2024
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Sergey Korobkov, Jimshed B. Mirza, Anthony Hung-Cheong Chan
-
Patent number: 11893423
Abstract: A parallel processing unit (PPU) can be divided into partitions. Each partition is configured to operate similarly to how the entire PPU operates. A given partition includes a subset of the computational and memory resources associated with the entire PPU. Software that executes on a CPU partitions the PPU for an admin user. A guest user is assigned to a partition and can perform processing tasks within that partition in isolation from any other guest users assigned to any other partitions. Because the PPU can be divided into isolated partitions, multiple CPU processes can efficiently utilize PPU resources.
Type: Grant
Filed: September 5, 2019
Date of Patent: February 6, 2024
Assignee: NVIDIA CORPORATION
Inventors: Jerome F. Duluk, Jr., Gregory Scott Palmer, Jonathon Stuart Ramsey Evans, Shailendra Singh, Samuel H. Duncan, Wishwesh Anil Gandhi, Lacky V. Shah, Sonata Gale Wen, Feiqi Su, James Leroy Deming, Alan Menezes, Pranav Vaidya, Praveen Joginipally, Timothy John Purcell, Manas Mandal
-
Patent number: 11886345
Abstract: Configuring an address-to-SC unit (A2SU) of each of a plurality of CPU chips based on a number of valid SC chips in the computer system is disclosed. The A2SU is configured to correlate each of a plurality of memory addresses with a respective one of the valid SC chips. In response to detecting a change in the number of valid SC chips, pausing operation of the computer system including operation of a cache of each of the plurality of CPU chips; while operation of the computer system is paused, reconfiguring the A2SU in each of the plurality of CPU chips based on the change in the number of valid SC chips; and in response to reconfiguring the A2SU, resuming operation of the computer system.
Type: Grant
Filed: August 16, 2022
Date of Patent: January 30, 2024
Assignee: International Business Machines Corporation
Inventor: Burkhard Steinmacher-Burow
-
Patent number: 11868263
Abstract: A microprocessor includes a virtually-indexed L1 data cache that has an allocation policy that permits multiple synonyms to be co-resident. Each L2 entry is uniquely identified by a set index and a way number. A store unit, during a store instruction execution, receives a store physical address proxy (PAP) for a store physical memory line address (PMLA) from an L1 entry hit upon by a store virtual address, and writes the store PAP to a store queue entry. The store PAP comprises the set index and the way number of an L2 entry that holds a line specified by the store PMLA. The store unit, during the store commit, reads the store PAP from the store queue, looks up the store PAP in the L1 to detect synonyms, writes the store data to one or more of the detected synonyms, and evicts the non-written detected synonyms.
Type: Grant
Filed: May 24, 2022
Date of Patent: January 9, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
-
Patent number: 11868644
Abstract: In one set of embodiments, a hardware module of a computer system can receive a stream of addresses corresponding to memory units being accessed by a central processing unit (CPU) of the computer system. The hardware module can further generate a frequency estimate for each address in the stream of addresses, the frequency estimate being indicative of a number of times a memory unit identified by the address has been accessed by the CPU, and can determine, based on the generated frequency estimates, a set of n most frequently accessed memory units.
Type: Grant
Filed: July 21, 2022
Date of Patent: January 9, 2024
Assignee: VMWARE, INC.
Inventors: Andreas Georg Nowatzyk, Isam Wadih Akkawi, Pratap Subrahmanyam, Adarsh Seethanadi Nayak, Nishchay Dua
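A count-min sketch is one hardware-friendly way to approximate per-address access counts in bounded space; the patent does not name a specific estimator, so the sketch below uses it purely as a stand-in, followed by a top-n pass over the estimates. Widths, depths, and names are illustrative.

```python
import heapq

class CountMin:
    """Bounded-space frequency estimator (illustrative sizes)."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def add(self, addr: int) -> int:
        est = float("inf")
        for i in range(self.depth):
            j = hash((i, addr)) % self.width
            self.rows[i][j] += 1
            est = min(est, self.rows[i][j])
        return est   # estimated access count for addr so far

def top_n(address_stream, n):
    """Return the n most frequently accessed addresses (estimated)."""
    cm, counts = CountMin(), {}
    for addr in address_stream:
        counts[addr] = cm.add(addr)
    return heapq.nlargest(n, counts, key=counts.get)
```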
-
Patent number: 11868260
Abstract: A method of caching large data objects of greater than 1 GB, comprising: populating a sharded cache with large data objects backfilled from a data store; servicing large data object requests from a plurality of worker nodes via the sharded cache, comprising deterministically addressing objects within the sharded cache; and if a number of requests for an object within a time exceeds a threshold: after receiving a request from a worker node for the object, sending the worker node a redirect message directed to a hot cache, wherein the hot cache is to backfill from a hot cache backfill, and wherein the hot cache backfill is to backfill from the sharded cache.
Type: Grant
Filed: December 23, 2021
Date of Patent: January 9, 2024
Assignee: GM CRUISE HOLDINGS LLC
Inventors: Hui Dai, Seth Alexander Bunce
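The request path has two parts: deterministic shard addressing by hashing the object key, and a per-object request-rate check that triggers a redirect to the hot cache. A hedged sketch; the shard count, threshold, and window length are illustrative, not from the patent.

```python
import hashlib
import time

NUM_SHARDS, THRESHOLD, WINDOW_S = 16, 100, 60.0
hits = {}   # object key -> recent request timestamps

def shard_for(key: str) -> int:
    # Deterministic addressing: every node computes the same shard.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_SHARDS

def route(key: str):
    """Route a request to a shard, or redirect a hot object."""
    now = time.monotonic()
    recent = [t for t in hits.get(key, []) if now - t < WINDOW_S]
    recent.append(now)
    hits[key] = recent
    if len(recent) > THRESHOLD:
        return ("redirect", "hot-cache")   # hot object: send redirect
    return ("shard", shard_for(key))
```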
-
Patent number: 11847008
Abstract: Technologies for providing efficient detection of idle poll loops include a compute device. The compute device has a compute engine that includes a plurality of cores and a memory. The compute engine is to determine a ratio of unsuccessful operations to successful operations over a predefined time period of a core of the plurality of cores that is assigned to continually poll, within the predefined time period, a memory address for a change in status and determine whether the determined ratio satisfies a reference ratio of unsuccessful operations to successful operations. The reference ratio is indicative of a change in the operation of the assigned core. The compute engine is further to selectively increase or decrease a power usage of the assigned core as a function of whether the determined ratio satisfies the reference ratio. Other embodiments are also described and claimed.
Type: Grant
Filed: April 12, 2018
Date of Patent: December 19, 2023
Assignee: Intel Corporation
Inventors: David Hunt, Niall Power, Kevin Devey, Changzheng Wei, Bruce Richardson, Eliezer Tamir, Andrew Cunningham, Chris MacNamara, Nemanja Marjanovic, Rory Sexton, John Browne
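The core of the technique is a ratio test per sampling interval. A minimal sketch, assuming counters of unsuccessful vs. successful poll operations and a reference ratio above which the core is treated as spinning on an idle queue; the values and the set_power callback are illustrative.

```python
REFERENCE_RATIO = 50.0   # unsuccessful : successful (illustrative)

def adjust_power(unsuccessful: int, successful: int, set_power):
    """Raise or lower core power based on the observed poll ratio."""
    ratio = unsuccessful / max(successful, 1)
    if ratio >= REFERENCE_RATIO:
        set_power("low")    # core is mostly polling an idle address
    else:
        set_power("high")   # core is doing useful work

# Example usage with a stand-in power control:
adjust_power(unsuccessful=980, successful=12,
             set_power=lambda level: print("power ->", level))
```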
-
Patent number: 11797206
Abstract: Embodiments for migrating hash values for backup data blocks in a network of data protection targets (DPTs) and a common data protection target implementing a Gold image library management system in which backups of Gold images used as templates for physical machines and virtual machines are stored on the CDPT. The CDPT and each DPT stores backup data split into chunks that are uniquely identified by a respective hash of its contents, and maintains data structures comprising the hash, chunk size, chunk data, and a list of DPT and CDPT identifiers. The hashes are partitioned into a set of buckets in the CDPT. A Bloom filter is generated for each bucket of hashes, and stored in each DPT so that each DPT stores Bloom filters for all CDPTs in the network. Each DPT checks its list of hashes against the Bloom filters in each of the DPTs to determine whether to keep or free chunks of data.
Type: Grant
Filed: March 12, 2021
Date of Patent: October 24, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Arun Murti, Mark Malamut, Stephen Smaldone
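A Bloom filter gives each DPT a compact, false-positive-only membership test over a bucket of hashes. A self-contained sketch, assuming k hash functions derived by salting a single hash; the filter size and salt scheme are illustrative.

```python
class BloomFilter:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m // 8)

    def _positions(self, item: str):
        # Derive k bit positions by salting one hash (illustrative).
        return [hash((i, item)) % self.m for i in range(self.k)]

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # False means definitely absent; True may be a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# A DPT would test each chunk hash against the per-bucket filters;
# a miss in every filter means no CDPT holds that chunk's hash.
f = BloomFilter()
f.add("chunk-hash-abc123")
assert f.might_contain("chunk-hash-abc123")
```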
-
Patent number: 11768777
Abstract: Methods, apparatus, and processor-readable storage media for application aware cache management are provided herein. An example computer-implemented method includes maintaining a data structure comprising at least one entry indicative of an importance of at least one of a plurality of applications associated with a storage system; and controlling whether or not a particular data item requested by one of the plurality of applications is cached in a cache memory of the storage system based at least in part on the at least one entry of the data structure.
Type: Grant
Filed: January 22, 2020
Date of Patent: September 26, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Jai P. Gahlot, Shiv S. Kumar
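The control decision can be as simple as comparing an application's importance entry against an admission threshold. A minimal sketch; the table contents and threshold are illustrative, not from the patent.

```python
# Data structure of per-application importance entries (illustrative).
importance = {"analytics": 0.9, "backup": 0.1}
ADMISSION_THRESHOLD = 0.5

def should_cache(app_id: str) -> bool:
    """Admit a requested data item only for sufficiently important apps."""
    return importance.get(app_id, 0.0) >= ADMISSION_THRESHOLD

assert should_cache("analytics") and not should_cache("backup")
```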
-
Patent number: 11748263
Abstract: Improvements to internet cache protocols are disclosed. In certain embodiments, a client-facing proxy server can query peer servers to determine whether they have a copy of an object that the proxy server needs. The peer servers can respond based on object information that the peer servers stored about objects they have in cache, where the peers recorded such object information previously when ingesting the objects into their cache and stored it separately from the objects for fast access (e.g. in RAM vs. on disk). This information can be expressed in a compact way using just a few object flags, and enables the peer server to quickly respond and with detail about the status of objects they hold. The proxy server can make an intelligent decision about which peer to use, and indeed whether to use a peer at all.
Type: Grant
Filed: November 15, 2021
Date of Patent: September 5, 2023
Assignee: Akamai Technologies, Inc.
Inventors: Dmitry Sotnikov, Denis Emelyanov, Dvir Tuberg, Arnon Shoshany, Michael Hakimi, Kfir Zigdon
-
Patent number: 11748262
Abstract: Techniques for implementing an apparatus, which includes a memory system that provides data storage via multiple hierarchical memory levels, are provided. The memory system includes a cache that implements a first memory level and a memory array that implements a second memory level higher than the first memory level. Additionally, the memory system includes one or more memory controllers that determine a predicted data access pattern expected to occur during an upcoming control horizon, based at least in part on first context of first data to be stored in the memory system, second context of second data previously stored in the memory system, or both, and control which of the multiple hierarchical memory levels implemented in the memory system store the first data, the second data, or both, based at least in part on the predicted data access pattern.
Type: Grant
Filed: August 25, 2021
Date of Patent: September 5, 2023
Assignee: Micron Technology, Inc.
Inventor: Anton Korzh
-
Patent number: 11724714
Abstract: Systems and methods for autonomous vehicle state management are provided. An offboard state service can obtain a vehicle command associated with changing an onboard-defined state of an autonomous vehicle. Data indicative of the command can be provided to a computing system onboard the autonomous vehicle. The onboard computing system can initiate a vehicle action in accordance with the command and provide a command response indicative of an updated vehicle status for the onboard-defined state to the state service. In response, the state service can update a current status for the onboard-defined state as represented by a vehicle profile corresponding to the autonomous vehicle. The state service can generate an update event indicative of the vehicle state and the updated current status and publish the update event to a distributed event stream such that a number of devices subscribed to the event stream can receive data indicative of the update event.
Type: Grant
Filed: November 10, 2020
Date of Patent: August 15, 2023
Assignee: Uber Technologies, Inc.
Inventor: Zelin Liu
-
Patent number: 11698893
Abstract: In accordance with an embodiment, described herein is a system and method for use of lock-less data structures and processes with a multidimensional database computing environment. Lock-less algorithms or processes can be implemented with specific hardware-level instructions so as to provide atomicity. A memory stores an index cache retaining a plurality of index pages of the multidimensional database. A hash table indexes index pages in the index cache, wherein the hash table is accessible by a plurality of threads in parallel through application of the lock-less process.
Type: Grant
Filed: March 1, 2021
Date of Patent: July 11, 2023
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Young Joon Kim, Vilas Ketkar, Shubhagam Gupta, Haritha Gongalore
-
Patent number: 11687497
Abstract: An overlay network is augmented to provide more efficient data storage by processing a dataset of high dimension into an equivalent dataset of lower dimension, wherein the data reduction reduces the amount of actual physical data but not necessarily its informational value. Data to be processed (dimensionally-reduced) is received by an ingestion layer and supplied to a learning-based storage reduction application that implements the data reduction technique. The application applies a data reduction algorithm and stores the resulting dimensionally-reduced data sets in the native data storage or third party cloud. To recover the original higher-dimensional data, an associated reverse algorithm is implemented. In general, the application converts an N dimensional data set to a K dimensional data set, where K<<N. The N dimensional dataset has a high dimension, and the K dimensional dataset has a low dimension.
Type: Grant
Filed: July 20, 2021
Date of Patent: June 27, 2023
Assignee: Akamai Technologies Inc.
Inventor: Indrajit Banerjee
-
Patent number: 11687378
Abstract: Embodiments include a multi-tenant cloud system that receives a request for an authenticate action for a user. Embodiments create an authenticate target action and register a cache listener for a cache that includes a filter to listen for a target action response that is responsive to the authenticate target action, the filter listing a plurality of bridges assigned to an on-premise active directory. Embodiments randomly select one of the plurality of bridges and send the authenticate target action to the active directory via the selected bridge. Embodiments wait for a cache callback and, at the cache callback, receive a target action response that includes a result of the authenticate action.
Type: Grant
Filed: May 18, 2020
Date of Patent: June 27, 2023
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Ashish Bhargava, Gary Cole, Gregg Wilson
-
Patent number: 11688122
Abstract: An embodiment of an electronic processing system may include an application processor, persistent storage media communicatively coupled to the application processor, and a graphics subsystem communicatively coupled to the application processor. The system may include one or more of a draw call re-orderer communicatively coupled to the application processor and the graphics subsystem to re-order two or more draw calls, a workload re-orderer communicatively coupled to the application processor and the graphics subsystem to re-order two or more work items in an order independent mode, a queue primitive included in at least one of the two or more draw calls to define a producer stage and a consumer stage, and an order-independent executor communicatively coupled to the application processor and the graphics subsystem to provide tile-based order independent execution of a compute stage. Other embodiments are disclosed and claimed.
Type: Grant
Filed: February 2, 2022
Date of Patent: June 27, 2023
Assignee: Intel Corporation
Inventors: Devan Burke, Adam T. Lake, Jeffery S. Boles, John H. Feit, Karthik Vaidyanathan, Abhishek R. Appu, Joydeep Ray, Subramaniam Maiyuran, Altug Koker, Balaji Vembu, Murali Ramadoss, Prasoonkumar Surti, Eric J. Hoekstra, Gabor Liktor, Jonathan Kennedy, Slawomir Grajewski, Elmoustapha Ould-Ahmed-Vall
-
Patent number: 11653035
Abstract: Systems, methods, and devices of the various embodiments disclosed herein may provide a protocol and architecture for decentralization of content delivery. Various embodiments may provide a client based method for content delivery from content delivery networks (CDNs) via tiered caches of content hosted by Internet Service Providers (ISPs). In various embodiments, content delivery protocol (CDP) messages may enable clients to discover local cache network topologies and request content from a CDN based on a discovered local cache network topology. In various embodiments, security may be provided for the content delivery by the use of key encryption and/or file hashing.
Type: Grant
Filed: April 30, 2021
Date of Patent: May 16, 2023
Assignee: Charter Communications Operating, LLC
Inventor: Jody Lee Beck
-
Patent number: 11641337
Abstract: This document relates to a CDN balancing mitigation system. An implementing CDN can deploy systems and techniques to monitor the domains of content provider customers with an active DNS scanner and detect which are using other CDNs on the same domain. This information can be used as an input signal for identifying and implementing adjustments to CDN configuration. Both automated and semi-automated adjustments are possible. The system can issue configuration adjustments or recommendations to the implementing CDN's servers or to its personnel. These might include “above-SLA” treatments intended to divert traffic to the implementing CDN. The effectiveness can be measured with the multi-CDN balance subsequently observed. The scanning and adjustment workflow can be permanent, temporary, or cycled. Treatments may include a variety of things, such as more cache storage, routing to less loaded servers, and so forth.
Type: Grant
Filed: January 18, 2022
Date of Patent: May 2, 2023
Assignee: Akamai Technologies, Inc.
Inventors: Martin T. Flack, Utkarsh Goel
-
Patent number: 11636115
Abstract: A system comprises a data source storing data, a data processing unit (DPU) comprising an integrated circuit having programmable processor cores and a hardware-based regular expression (RegEx) engine, and a control node configured to generate a data flow graph for configuring the DPUs to execute the analytical operation to be performed on the data. The analytical operation specifies a query having at least one query predicate. A controller is configured to receive the data flow graph and, in response, configures the DPU to input the data as one or more data streams, and configures the RegEx engine to operate according to one or more deterministic finite automata (DFAs) or non-deterministic finite automata (NFAs) to evaluate the query predicate against the data by applying one or more regular expressions to the one or more data streams.
Type: Grant
Filed: September 26, 2019
Date of Patent: April 25, 2023
Assignee: FUNGIBLE, INC.
Inventor: Satyanarayana Lakshmipathi Billa
-
Patent number: 11625333
Abstract: Methods, systems, and devices for configurable flush operation speed are described. Before executing a flush operation at a first portion of a cache including single-level cells (SLCs), a memory system may communicate parameters associated with data stored in the first portion of the cache to a host system. The host system may then identify another portion of the cache (e.g., including either SLCs or multi-level cells (MLCs)) for the flush operation based on the parameters and a speed of a flush operation associated with the other portions of the cache. The host system may indicate the identified portion of the cache to the memory system and the memory system may execute a flush operation at the first portion of the cache. For example, the memory system may write a subset of the data stored at the first portion of the cache to a second portion of the cache.
Type: Grant
Filed: April 26, 2021
Date of Patent: April 11, 2023
Assignee: Micron Technology, Inc.
Inventor: Giuseppe Cariello
-
Patent number: 11620060
Abstract: Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices. In response to memory read access requests, a determination is made to where data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses.
Type: Grant
Filed: December 28, 2018
Date of Patent: April 4, 2023
Assignee: Intel Corporation
Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
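The read path checks near memory first, falls back to far memory, and promotes frequently read data. A compact sketch, assuming dict-backed stores and a simple per-address access counter; the promotion threshold is illustrative.

```python
near, far, counts = {}, {}, {}
PROMOTE_AT = 4   # accesses before copying to near memory (illustrative)

def read(addr):
    """Serve a read from near memory if possible; promote hot data."""
    if addr in near:                  # near memory: lower latency
        return near[addr]
    data = far[addr]                  # far memory: higher latency
    counts[addr] = counts.get(addr, 0) + 1
    if counts[addr] >= PROMOTE_AT:    # frequently accessed: copy nearer
        near[addr] = data
    return data
```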
-
Patent number: 11615024
Abstract: A multiprocessor data processing system includes multiple vertical cache hierarchies supporting a plurality of processor cores, a system memory, and an interconnect fabric coupled to the system memory and the multiple vertical cache hierarchies. Based on a request of a requesting processor core among the plurality of processor cores, a master in the multiprocessor data processing system issues, via the interconnect fabric, a read-type memory access request. The master receives via the interconnect fabric at least one beat of conditional data issued speculatively on the interconnect fabric by a controller of the system memory prior to receipt by the controller of a systemwide coherence response for the read-type memory access request. The master forwards the at least one beat of conditional data to the requesting processor core.
Type: Grant
Filed: August 4, 2021
Date of Patent: March 28, 2023
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Michael S. Siegel, Guy L. Guthrie, Bernard C. Drerup
-
Patent number: 11593276
Abstract: A cache system includes a cache memory having a plurality of blocks, a dirty line list storing status information of a predetermined number of dirty lines among dirty lines in the plurality of blocks, and a cache controller controlling a data caching operation of the cache memory and providing statuses and variation of statuses of the dirty lines, according to the data caching operation, to the dirty line list. The cache controller performs a control operation to always store status information of a least-recently-used (LRU) dirty line into a predetermined storage location of the dirty line list.
Type: Grant
Filed: January 26, 2022
Date of Patent: February 28, 2023
Assignee: SK hynix Inc.
Inventors: Seung Gyu Jeong, Dong Gun Kim
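Keeping the LRU dirty line at a fixed, predetermined location falls out naturally from insertion-ordered bookkeeping. A sketch using an OrderedDict, where the front entry is always the least-recently-used tracked dirty line; the capacity is illustrative.

```python
from collections import OrderedDict

class DirtyLineList:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.lines = OrderedDict()   # line address -> status information

    def mark_dirty(self, line, status):
        """Record or refresh a dirty line's status (MRU goes to the end)."""
        self.lines.pop(line, None)
        self.lines[line] = status
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)   # stop tracking the oldest

    def lru_dirty_line(self):
        # The "predetermined storage location": always the front entry.
        return next(iter(self.lines), None)
```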
-
Patent number: 11586539
Abstract: A processing system selectively allocates space to store a group of one or more cache lines at a cache level of a cache hierarchy having a plurality of cache levels based on memory access patterns of a software application executing at the processing system. The processing system generates bit vectors indicating which cache levels are to allocate space to store groups of one or more cache lines based on the memory access patterns, which are derived from data granularity and movement information. Based on the bit vectors, the processing system provides hints to the cache hierarchy indicating the lowest cache level that can exploit the reuse potential for particular data.
Type: Grant
Filed: December 13, 2019
Date of Patent: February 21, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Weon Taek Na, Jagadish B. Kotra, Yasuko Eckert, Steven Raasch, Sergey Blagodurov
-
Patent number: 11567971
Abstract: A method of processing data in a system having a host and a storage node may include performing a shuffle operation on data stored at the storage node, wherein the shuffle operation may include performing a shuffle write operation, and performing a shuffle read operation, wherein at least a portion of the shuffle operation is performed by an accelerator at the storage node. A method for partitioning data may include sampling, at a device, data from one or more partitions based on a number of samples, transferring the sampled data from the device to a host, determining, at the host, one or more splitters based on the sampled data, communicating the one or more splitters from the host to the device, and partitioning, at the device, data for the one or more partitions based on the one or more splitters.
Type: Grant
Filed: December 4, 2020
Date of Patent: January 31, 2023
Inventors: Hui Zhang, Joo Hwan Lee, Yiqun Zhang, Armin Haj Aboutalebi, Xiaodong Zhao, Praveen Krishnamoorthy, Andrew Chang, Yang Seok Ki
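Once the host has derived splitters from the sampled data, the device-side partitioning is a binary search of each value against the sorted splitters. A minimal sketch of that step; the sampling and the host/device transfer are elided.

```python
import bisect

def partition(values, splitters):
    """Bucket values using sorted splitters: k splitters -> k+1 partitions."""
    parts = [[] for _ in range(len(splitters) + 1)]
    for v in values:
        parts[bisect.bisect_right(splitters, v)].append(v)
    return parts

# Example: splitters [10, 20] yield three partitions.
assert partition([5, 15, 25, 10], [10, 20]) == [[5, 10], [15], [25]]
```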
-
Patent number: 11550723
Abstract: An apparatus, method, and system for memory bandwidth aware data prefetching is presented. The method may comprise monitoring a number of request responses received in an interval at a current prefetch request generation rate, comparing the number of request responses received in the interval to at least a first threshold, and adjusting the current prefetch request generation rate to an updated prefetch request generation rate by selecting the updated prefetch request generation rate from a plurality of prefetch request generation rates, based on the comparison. The request responses may be NACK or RETRY responses. The method may further comprise either retaining a current prefetch request generation rate or selecting a maximum prefetch request generation rate as the updated prefetch request generation rate in response to an indication that prefetching is accurate.
Type: Grant
Filed: August 27, 2018
Date of Patent: January 10, 2023
Assignee: Qualcomm Incorporated
Inventors: Niket Choudhary, David Scott Ray, Thomas Philip Speier, Eric Robinson, Harold Wade Cain, III, Nikhil Narendradev Sharma, Joseph Gerald McDonald, Brian Michael Stempel, Garrett Michael Drapala
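The adjustment steps through a fixed ladder of generation rates driven by the NACK/RETRY count per interval, with an accuracy override. A hedged sketch; the rate ladder and thresholds are illustrative, not from the patent.

```python
RATES = [1, 2, 4, 8]    # prefetch requests per interval (illustrative)
T_HIGH, T_LOW = 64, 8   # NACK/RETRY thresholds (illustrative)

def adjust(rate_idx: int, nacks_in_interval: int, accurate: bool) -> int:
    """Return the index of the updated prefetch generation rate."""
    if accurate:
        return len(RATES) - 1             # accurate prefetching: go max
    if nacks_in_interval > T_HIGH:
        return max(rate_idx - 1, 0)       # memory is busy: throttle down
    if nacks_in_interval < T_LOW:
        return min(rate_idx + 1, len(RATES) - 1)   # headroom: ramp up
    return rate_idx                       # in band: retain current rate
```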
-
Patent number: 11507441
Abstract: A method of performing a remotely-initiated procedure on a computing device is provided. The method includes (a) receiving, by memory of the computing device, a request from a remote device via remote direct memory access (RDMA); (b) in response to receiving the request, assigning processing of the request to one core of a plurality of processing cores of the computing device, wherein assigning includes the one core receiving a completion signal from a shared completion queue (Shared CQ) of the computing device, the Shared CQ being shared between the plurality of cores; and (c) in response to assigning, performing, by the one core, a procedure described by the request. An apparatus, system, and computer program product for performing a similar method are also provided.
Type: Grant
Filed: January 21, 2021
Date of Patent: November 22, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Leonid Ravich, Yuri Chernyavsky
-
Patent number: 11507420
Abstract: Systems and methods for scheduling tasks using sliding time windows are provided. In certain embodiments, a system for scheduling the execution of tasks includes at least one processing unit configured to execute multiple tasks, wherein each task in the multiple tasks is scheduled to execute within a scheduler instance in multiple scheduler instances, each scheduler instance in the multiple scheduler instances being associated with a set of time windows in the multiple time windows and with a set of processing units in the at least one processing unit in each time window, each time window in the multiple time windows having a start time and an allotted duration, and the scheduler instance associated with the time windows begins executing associated tasks no earlier than the start time and executes for no longer than the allotted duration, and wherein the start time is slidable to earlier moments in time.
Type: Grant
Filed: September 8, 2020
Date of Patent: November 22, 2022
Assignee: Honeywell International Inc.
Inventors: Srivatsan Varadarajan, Larry James Miller, Arthur Kirk McCready, Aaron R. Larson, Richard Frost, Ryan Lawrence Roffelsen
-
Patent number: 11481598
Abstract: A computer-implemented method for creating an auto-scaled predictive analytics model includes determining, via a processor, whether a queue size of a service master queue is greater than zero. Responsive to determining that the queue size is greater than zero, the processor fetches a count of requests in a plurality of requests in the service master queue and a type for each of the requests. The processor derives a value for time required for each of the requests and retrieves a number of available processing nodes based on the time required for each of the requests. The processor then auto-scales a processing node number responsive to determining that a total execution time for all of the requests in the plurality of requests exceeds a predetermined time value and outputs an auto-scaled predictive analytics model based on the processing node number and queue size.
Type: Grant
Filed: November 27, 2017
Date of Patent: October 25, 2022
Assignee: International Business Machines Corporation
Inventors: Mahadev Khapali, Shashank V. Vagarali
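The scaling decision reduces to summing the estimated execution time of the queued requests by type and sizing the node pool so the total fits within the predetermined time value. A condensed sketch; the per-type time estimates and target are illustrative, not from the patent.

```python
import math

TIME_PER_TYPE = {"train": 30.0, "score": 2.0}   # seconds (illustrative)
TARGET_S = 60.0                                  # predetermined time value

def scale_nodes(queue):
    """Return the node count needed to finish the queue within TARGET_S.

    queue: list of request-type strings from the service master queue.
    """
    if not queue:                                # queue size is zero
        return 1
    total = sum(TIME_PER_TYPE.get(t, 1.0) for t in queue)
    return max(1, math.ceil(total / TARGET_S))   # auto-scaled node number

# Example: four training requests need two nodes to meet the target.
assert scale_nodes(["train"] * 4) == 2
```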