Multiple Caches Patents (Class 711/119)
-
Patent number: 12164385
Abstract: An assigned subgroup that includes a plurality of entries is traversed by a prefetcher. It is determined that an expected number of entries associated with the assigned subgroup have been traversed. In response to determining that the expected number of entries associated with the assigned subgroup have been traversed, it is determined that a last read entry associated with the assigned subgroup does not correspond to a last entry associated with the assigned subgroup. The prefetcher is preempted by stopping the prefetcher from obtaining a list of entries associated with a remaining portion of the assigned subgroup.
Type: Grant
Filed: January 17, 2024
Date of Patent: December 10, 2024
Assignee: Cohesity, Inc.
Inventors: Amandeep Gautam, Venkata Ranga Radhanikanth Guturi
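A minimal sketch of the preemption check described in this abstract, using hypothetical Subgroup and Prefetcher types that are not part of the patent; it only illustrates stopping the prefetcher once the expected number of entries has been traversed without reaching the subgroup's last entry.

    # Sketch (assumed names): preempt a prefetcher when the expected number of
    # entries has been traversed but the last read entry is not the final entry.
    from dataclasses import dataclass

    @dataclass
    class Subgroup:
        entries: list          # entries belonging to this assigned subgroup
        expected_count: int    # number of entries expected to be traversed

    class Prefetcher:
        def __init__(self, subgroup: Subgroup):
            self.subgroup = subgroup
            self.traversed = 0
            self.last_read = None
            self.active = True

        def read_next(self):
            if not self.active or self.traversed >= len(self.subgroup.entries):
                return None
            self.last_read = self.subgroup.entries[self.traversed]
            self.traversed += 1
            # Once the expected number of entries has been traversed, check
            # whether the last read entry is the subgroup's last entry; if not,
            # stop obtaining the list of entries for the remaining portion.
            if (self.traversed >= self.subgroup.expected_count
                    and self.last_read is not self.subgroup.entries[-1]):
                self.active = False   # preempt the prefetcher
            return self.last_read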
-
Patent number: 12111762
Abstract: An embodiment of an integrated circuit may comprise a core, and a cache controller coupled to the core, the cache controller including circuitry to identify data from a working set for dynamic inclusion in a next level cache based on an amount of re-use of the next level cache, send a shared copy of the identified data to a requesting core of one or more processor cores, and maintain a copy of the identified data in the next level cache. Other embodiments are disclosed and claimed.
Type: Grant
Filed: December 22, 2020
Date of Patent: October 8, 2024
Assignee: Intel Corporation
Inventors: Ayan Mandal, Leon Polishuk, Oz Shitrit, Joseph Nuzman
-
Patent number: 12099400
Abstract: This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault reading data from memory, the streaming engine identifies the data element triggering the fault, preferably storing this address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger.
Type: Grant
Filed: February 6, 2023
Date of Patent: September 24, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Joseph Zbiciak, Timothy D. Anderson, Duc Bui, Kai Chirca
-
Patent number: 12099443
Abstract: Techniques are provided for implementing and managing a multi-modal write cache for a data storage system. For example, a storage control system is configured to perform a write caching method which comprises the storage control system receiving an input/output (I/O) write request from a client application to write data to a primary storage volume, comparing a current I/O workload associated with the client application to an I/O workload threshold, and writing the data of the I/O write request to one of (i) a persistent write cache in a persistent storage volume and (ii) a non-persistent write cache in a non-persistent storage volume, based at least in part on a result of comparing the current I/O workload to the I/O workload threshold.
Type: Grant
Filed: July 13, 2023
Date of Patent: September 24, 2024
Assignee: Dell Products L.P.
Inventors: Doron Tal, Yosef Shatsky
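A minimal sketch of the routing decision this abstract describes, with assumed names (route_write, current_iops, iops_threshold) and plain lists standing in for the two cache volumes; it is an illustration, not the patented implementation.

    # Sketch (assumed names): route a write to a persistent or non-persistent
    # write cache by comparing the current I/O workload to a threshold.
    def route_write(data, current_iops, iops_threshold,
                    persistent_cache, non_persistent_cache):
        """Append data to one of two write caches based on current workload."""
        if current_iops <= iops_threshold:
            # Light workload: favour durability, use the persistent cache.
            persistent_cache.append(data)
            return "persistent"
        # Heavy workload: favour latency, use the non-persistent cache.
        non_persistent_cache.append(data)
        return "non-persistent"

    p_cache, np_cache = [], []
    print(route_write(b"block-1", current_iops=500, iops_threshold=1000,
                      persistent_cache=p_cache, non_persistent_cache=np_cache))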
-
Patent number: 12086066
Abstract: A cache architecture for an array of identical cores arranged in a grid. Each of the cores includes interconnections to neighboring cores in the grid, a memory, and an algorithmic logic unit. A first core of the array is configured to receive a memory access request for data from at least one core of the array of cores configured to perform a computational operation. A second core of the array is configured to determine whether the requested data is present in a cache memory via a cache index including addresses in the cache memory. A third core of the array is configured as the cache memory. The memory of the third core is used as the cache memory. An address of the requested data from the cache index is passed to the third core to output the requested data.
Type: Grant
Filed: March 15, 2023
Date of Patent: September 10, 2024
Assignee: Cornami, Inc.
Inventor: Martin Alan Franz, II
-
Patent number: 12061551
Abstract: An access counter associated with a segment of a memory device is maintained. An access notification for a first line of the segment is received. An access type associated with the access notification is identified. A first value of the access counter is changed by a second value based on the access type. Based on the first value of the access counter, a memory management scheme is implemented.
Type: Grant
Filed: August 26, 2022
Date of Patent: August 13, 2024
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
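A minimal sketch of the access-type-weighted counter described above; the weights, the watermark, and the promotion action are illustrative assumptions, not values from the patent.

    # Sketch (assumed weights): change a per-segment counter by a second value
    # chosen from the access type, and act once a watermark is crossed.
    ACCESS_WEIGHTS = {"read": 1, "write": 2, "prefetch": 0}   # illustrative

    class SegmentCounter:
        def __init__(self, hot_watermark=100):
            self.value = 0
            self.hot_watermark = hot_watermark

        def on_access(self, access_type: str):
            self.value += ACCESS_WEIGHTS.get(access_type, 1)
            if self.value >= self.hot_watermark:
                self.apply_management_scheme()

        def apply_management_scheme(self):
            # Placeholder for the memory management scheme (e.g., promotion
            # of the segment to a faster tier); resets the counter afterwards.
            print("segment promoted")
            self.value = 0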
-
Patent number: 12045498
Abstract: Disclosed are a solid state drive and a write operation method. The solid state drive comprises: a controller, receiving write data from outside and comprising a first cache unit for storing the write data; a Flash memory, receiving the write data sent by the first cache unit according to a first instruction of the controller; and a second cache unit, storing the write data from the first cache unit as backup data, and sending the backup data to the Flash memory according to a second instruction of the controller. The second instruction is obtained after the write data fails to be written into the Flash memory under the first instruction, so that the backup data can still be called if the write operation fails. By combining the advantages of the first and second cache units, the efficiency and quality of write operations are improved and bandwidth requirements are lowered.
Type: Grant
Filed: April 25, 2022
Date of Patent: July 23, 2024
Assignee: MAXIO TECHNOLOGY (HANGZHOU) CO., LTD.
Inventors: Wei Xu, Zihua Xiao, Hui Jiang, Zhengliang Chen
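A minimal sketch of the backup-and-retry flow described above, using a hypothetical Flash stub and list-based cache units; the names and failure model are assumptions for illustration only.

    # Sketch (assumed interfaces): keep a backup copy of the write data in a
    # second cache unit and replay it if the first write to flash fails.
    class FlashWriteError(Exception):
        pass

    class Flash:
        """Stand-in for the Flash memory; fails the first program if asked to."""
        def __init__(self, fail_first=False):
            self.cells, self._fail = [], fail_first
        def program(self, data):
            if self._fail:
                self._fail = False
                raise FlashWriteError("program failed")
            self.cells.append(data)

    def write_with_backup(flash, first_cache, second_cache, data):
        first_cache.append(data)       # first cache unit holds the write data
        second_cache.append(data)      # second cache unit keeps a backup copy
        try:
            flash.program(first_cache.pop())    # first instruction: write from cache 1
        except FlashWriteError:
            flash.program(second_cache[-1])     # second instruction: replay the backup
        second_cache.pop()             # drop the backup once the data is committed

    write_with_backup(Flash(fail_first=True), [], [], b"page-0")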
-
Patent number: 12047477
Abstract: A network device includes a statelet storage storing statelets that retain state information associated with a packet flow through the network device and that the network device can interact with to control processing performed on packets of the data flow. The network device implements a set of instructions that interpret commands in the data packets to manage and interact with statelets. The statelets in the statelet storage are organized by a statelet key that is derived from information identifying the packet flow. Responsive to the commands in the packets, the network device can create, read, write, or delete statelets from the statelet storage. The statelet storage includes multiple statelets, each statelet including multiple fields. The network device may access the statelets to control/monitor a packet flow using information in a network data plane without receiving control information from a network control plane.
Type: Grant
Filed: August 31, 2020
Date of Patent: July 23, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Alexander Clemm, Uma S. Chunduri, Renwei Li
-
Patent number: 12038901
Abstract: Methods and system for a database management system (DBMS) in which a leader thread is elected from concurrent transaction threads stored in one or more data nodes. While the leader thread copies its own thread transaction log onto a reserved portion of the shared log buffer, the leader thread permits other transaction threads to attach to a thread chain starting with the leader thread. Once the leader has completed copying its thread transaction log onto the shared log buffer, it then reserves a portion of the shared log buffer and copies the member thread transaction logs onto the shared log buffer, so that contention for the shared log buffer is reduced.
Type: Grant
Filed: February 7, 2022
Date of Patent: July 16, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Ronen Grosman, Ping Chen
-
Patent number: 12038860
Abstract: Systems, methods and computer software are disclosed for fronthaul. In one embodiment a method is disclosed, comprising: providing a virtual Radio Access Network (vRAN) having a centralized unit (CU) and a distributed unit (DU); and interconnecting the CU and DU over an Input/Output (I/O) bus using Peripheral Component Interconnect-Express (PCIe); wherein the CU and the DU include a PCI to optical converter and an optical to PCI converter.
Type: Grant
Filed: January 3, 2023
Date of Patent: July 16, 2024
Assignee: Parallel Wireless, Inc.
Inventors: Ofir Ben Ari Katzav, David Johnston, Steven Paul Papa
-
Patent number: 12038913
Abstract: Disclosed is a method for managing a database, which is performed by a first database server including at least one processor constituting a cluster jointly with at least one second database server. The method for managing a database may include loading, on a buffer cache, a first data block based on a first transaction for modifying the first data block located in a sharing storage shared jointly with the at least one second database server. The method may include modifying the first data block loaded on the buffer cache. The method may include determining to flush a first log generated by the modification of the first data block to the sharing storage.
Type: Grant
Filed: October 26, 2022
Date of Patent: July 16, 2024
Assignee: TmaxTibero Co., Ltd.
Inventors: Jaemin Oh, Hakju Kim, Dongyun Yang, Sangyoung Park
-
Patent number: 12032683
Abstract: Log entries and baseline log entries have timestamps, and can be structured over columns of respective data types. Temporal inconsistency can be identified by comparing a probability distribution of time differences between the timestamps of the log entries with a probability distribution of time differences between the timestamps of the baseline log entries. Data type inconsistency can be identified by comparing a data type of each column of the log entries with a data type of a corresponding column of the baseline log entries. Columnar inconsistency can be identified by comparing a number of the columns of the log entries with a number of the columns of the baseline log entries. In response to identification of temporal, data type, and/or columnar inconsistency, it is detected that an abnormality exists in collecting the log entries.
Type: Grant
Filed: July 29, 2021
Date of Patent: July 9, 2024
Assignee: Micro Focus LLC
Inventors: Manish Marwah, Martin Arlitt
-
Patent number: 12014206
Abstract: A method includes receiving, by a first stage in a pipeline, a first transaction from a previous stage in the pipeline; in response to the first transaction comprising a high priority transaction, processing the high priority transaction by sending it to a buffer; receiving a second transaction from the previous stage; in response to the second transaction comprising a low priority transaction, processing the low priority transaction by monitoring a full signal from the buffer while sending the low priority transaction to the buffer; in response to the full signal being asserted and no high priority transaction being available from the previous stage, pausing processing of the low priority transaction; in response to the full signal being asserted and a high priority transaction being available from the previous stage, stopping processing of the low priority transaction and processing the high priority transaction; and in response to the full signal being de-asserted, processing the low priority transaction by sending the low priority transaction to the buffer.
Type: Grant
Filed: October 3, 2022
Date of Patent: June 18, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson
-
Patent number: 11977486
Abstract: A computer system includes a processor core and a memory system in signal communication with the processor core. The memory system includes a first cache and a second cache. The first cache is arranged at a first level of a hierarchy in the memory system and is configured to store a plurality of first-cache entries. The second cache is arranged at a second level of the hierarchy that is lower than the first level, and stores a plurality of second-cache entries. The first cache maintains a directory that contains information for each of the first-cache entries. The second cache maintains a shadow pointer directory (SPD) that includes one or more SPD entries that map each of the first-cache entries to a corresponding second-cache entry at a lower-level cache location.
Type: Grant
Filed: April 4, 2022
Date of Patent: May 7, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ashraf ElSharif, Richard Joseph Branciforte, Gregory William Alexander, Deanna Postles Dunn Berger, Timothy Bronson, Aaron Tsai, Taylor J. Pritchard, Markus Kaltenbach, Christian Jacobi, Michael A. Blake
-
Patent number: 11971855
Abstract: Methods, apparatus, and processor-readable storage media for supporting multiple operations in transaction logging for a cloud enabled file system are provided herein. An example computer-implemented method includes obtaining a plurality of file system operations to be performed on a cloud enabled file system; executing the plurality of file system operations as a single file system transaction; and maintaining a transaction log for the single transaction, the transaction log comprising information for one or more sub-transactions that were completed in conjunction with said executing, wherein the one or more sub-transactions correspond to at least a portion of the plurality of file system operations.
Type: Grant
Filed: May 19, 2020
Date of Patent: April 30, 2024
Assignee: EMC IP Holding Company LLC
Inventor: Priyamrita Ghosh
-
Patent number: 11972034
Abstract: A computer system and associated methods are disclosed for mitigating side-channel attacks using a shared cache. The computer system includes a host having a main memory and a shared cache. The host executes a virtual machine manager (VMM) that determines respective security keys for a plurality of co-located virtual machines (VMs). A cache controller for the shared cache includes a scrambling function that scrambles addresses of memory accesses performed by threads of the VMs according to the respective security keys. Different cache tiers may implement different scrambling functions optimized to the architecture of each cache tier. Security keys may be periodically updated to further reduce predictability of shared cache to memory address mappings.
Type: Grant
Filed: October 29, 2020
Date of Patent: April 30, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Martin Pohlack, Pawel Wieczorkiewicz, Uwe Dannowski
-
Patent number: 11966398
Abstract: A method for storing video data includes: when receiving I-frame data to be stored, detecting whether written data exists in the video cache space; when detecting that written data exists in the video cache space, reading a target writing position of the I-frame data to be stored and determining whether the target writing position is located within a position range corresponding to the written data in the first cache space; when determining that the target writing position is located within the position range, writing, based on the target writing position, the I-frame data to be stored to the first cache space for caching and detecting whether the first cache space is full; and when detecting that the first cache space is full, writing all the video data in the video cache space to a memory space of the terminal device for storage and emptying the video cache space.
Type: Grant
Filed: October 11, 2019
Date of Patent: April 23, 2024
Assignee: ZHEJIANG UNIVIEW TECHNOLOGIES CO., LTD.
Inventors: Zuohua Wu, Qiang Ding
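A minimal sketch of the write-and-flush flow described above, with an assumed dictionary-based first cache space and an illustrative capacity; names such as store_iframe and CACHE_CAPACITY are hypothetical.

    # Sketch (assumed structures): write an I-frame at its target position only
    # if it falls inside the already-written range, and flush when the cache fills.
    CACHE_CAPACITY = 4        # illustrative number of slots in the first cache space

    def store_iframe(first_cache, written_range, target_pos, iframe, storage):
        lo, hi = written_range
        if not (lo <= target_pos <= hi):
            return False              # target position lies outside the written data
        first_cache[target_pos] = iframe
        if len(first_cache) >= CACHE_CAPACITY:
            storage.extend(first_cache.values())   # persist all cached video data
            first_cache.clear()                    # then empty the video cache space
        return True

    cache, disk = {}, []
    store_iframe(cache, written_range=(0, 7), target_pos=3, iframe=b"I", storage=disk)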
-
Patent number: 11966385
Abstract: In various examples, there is provided a computer-implemented method for writing transaction log entries to a transaction log for a database system. At least part of the database system is configured to be executed within a trusted execution environment. The transaction log is stored outside of the trusted execution environment. The method maintains a first secure count representing a number of transaction log entries which have been written to the transaction log for transactions which have been committed to the database and writes a transaction log entry to the transaction log. In other examples, there is also provided a computer-implemented method for restoring a database system using transaction log entries received from the transaction log and a current value of the first secure count.
Type: Grant
Filed: August 25, 2021
Date of Patent: April 23, 2024
Assignee: Microsoft Technology Licensing, LLC.
Inventors: Christian Priebe, Kapil Vaswani, Manuel Silverio da Silva Costa
-
Patent number: 11954022
Abstract: Provided are a storage device, system, and method for throttling host writes in a host buffer to a storage device. The storage device is coupled to a host system having a host buffer that includes reads and writes to pages of the storage device. Garbage collection consolidates valid data from pages in the storage device to fewer pages. A determination is made as to whether a processing measurement at the storage device satisfies a threshold. A timer value is set to a positive value in response to determining that the processing measurement satisfies the threshold. The timer is started to run for the timer value. Writes from the host buffer are blocked while the timer is running. Writes remain in the host buffer while the timer is running. A write is accepted from the host buffer to process in response to expiration of the timer.
Type: Grant
Filed: March 2, 2022
Date of Patent: April 9, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew S. Reuter, Timothy J. Fisher, Aaron Daniel Fry, Jenny L. Brown, John Carrington Cates, Austin Eberle
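A minimal sketch of the timer-based throttle described above; the metric, threshold, and pause length are assumptions, and the WriteThrottle class is hypothetical rather than the patented design.

    # Sketch (assumed metric): block host writes for a fixed interval whenever a
    # processing measurement (e.g., garbage-collection backlog) meets a threshold.
    import time

    class WriteThrottle:
        def __init__(self, threshold, pause_seconds):
            self.threshold = threshold
            self.pause_seconds = pause_seconds
            self.timer_expires = 0.0

        def observe(self, processing_measurement):
            if processing_measurement >= self.threshold:
                # Start the timer; writes stay in the host buffer until it expires.
                self.timer_expires = time.monotonic() + self.pause_seconds

        def accept_write(self, host_buffer):
            if time.monotonic() < self.timer_expires:
                return None            # timer running: leave writes in the buffer
            return host_buffer.pop(0) if host_buffer else None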
-
Patent number: 11954028
Abstract: There is disclosed a method of storing an encoded block of data in memory comprising encoding a block of data elements and determining a memory location (26) at which the encoded block of data is to be stored. The memory location (26) at which the encoded block of data is stored is then indicated in a header (406) for the encoded block of data by including in the header a memory address value (407) together with a modifier value (500) representing a modifier that is to be applied to the memory address value (407) when determining the memory location (26). When the encoded block of data is to be retrieved, the header (406) is read and processed to determine the memory location (26).
Type: Grant
Filed: March 31, 2022
Date of Patent: April 9, 2024
Assignee: Arm Limited
Inventors: Edvard Fielding, Jian Wang, Jakob Axel Fries, Carmelo Giliberto
-
Patent number: 11947581
Abstract: A plurality of personalized news feeds are generated from input feeds including digital content items based on a dynamic taxonomy data structure. Entities are extracted from the input feeds and relationship strengths are obtained for the extracted entities and the digital content items. The dynamic taxonomy data structure is updated with the extracted entities and entries for the digital content news items are included at the corresponding branches based on the relationship strengths. Attributes are obtained for the entities and those entities corresponding to the trending topics are identified. Personalized news feeds are generated including the digital content items listed under the entities. Digital content items are added or removed from the digital content feeds based on one or more entity attributes.
Type: Grant
Filed: May 18, 2021
Date of Patent: April 2, 2024
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Srikanth G Rao, Tarun Singhal, Mathangi Sandilya, Issac Abraham Alummoottil, Raja Sekhar Velagapudi, Rahel James Kale, Ankur Garg, Jayaprakash Nooji Shekar, Omkar Sudhakar Deorukhkar, Veera Raghavan Valayaputhur
-
Patent number: 11940911
Abstract: Techniques are provided for implementing a persistent key-value store for caching client data, journaling, and/or crash recovery. The persistent key-value store may be hosted as a primary cache that provides read and write access to key-value record pairs stored within the persistent key-value store. The key-value record pairs are stored within multiple chains in the persistent key-value store. Journaling is provided for the persistent key-value store such that incoming key-value record pairs are stored within active chains, and data within frozen chains is written in a distributed manner across distributed storage of a distributed cluster of nodes. If there is a failure within the distributed cluster of nodes, then the persistent key-value store may be reconstructed and used for crash recovery.
Type: Grant
Filed: December 17, 2021
Date of Patent: March 26, 2024
Assignee: NetApp, Inc.
Inventors: Sudheer Kumar Vavilapalli, Asif Imtiyaz Pathan, Parag Sarfare, Nikhil Mattankot, Stephen Wu, Amit Borase
-
Patent number: 11934307
Abstract: An apparatus and method are provided for receiving a request from a plurality of processing units, where multiple of those processing units have associated cache storage. A snoop unit is used to implement a cache coherency protocol when a request is received that identifies a cacheable memory address. The snoop unit has snoop filter storage comprising a plurality of snoop filter tables organized in a hierarchical arrangement. The snoop filter tables comprise a primary snoop filter table at a highest level in the hierarchy, and each snoop filter table at a lower level in the hierarchy forms a backup snoop filter table for an adjacent snoop filter table at a higher level in the hierarchy. Each snoop filter table is arranged as a multi-way set associative storage structure, and each backup snoop filter table has a different number of sets than are provided in the adjacent snoop filter table.
Type: Grant
Filed: January 18, 2021
Date of Patent: March 19, 2024
Assignee: Arm Limited
Inventors: Joshua Randall, Jesse Garrett Beu
-
Patent number: 11937164
Abstract: A method for processing a data packet at a node in a Bluetooth Mesh network, comprising: (a) determining a one-hop device cache list of the node, wherein the one-hop device cache list comprises an address of one or more one-hop nodes; (b) when the node sends a data packet, checking whether a destination address of the data packet is the same as an address stored in the one-hop device cache list; if yes, setting a TTL value of the data packet to 0 and sending the data packet; otherwise, setting the TTL value of the data packet to be greater than a specified TTL threshold, and sending the data packet; and (c) when the node forwards a data packet, checking whether the destination address of the data packet is the same as an address stored in the one-hop device cache list; if yes, setting the TTL value of the data packet to 1 and forwarding the data packet; otherwise, decrementing the TTL value of the data packet by 1 and forwarding the data packet.
Type: Grant
Filed: February 25, 2020
Date of Patent: March 19, 2024
Assignee: ESPRESSIF SYSTEMS (SHANGHAI) CO., LTD.
Inventors: Yizan Zhou, Swee Ann Teo
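A minimal sketch of the TTL selection in steps (b) and (c) above; the threshold value and function names are assumptions, and the one-hop cache is modeled as a plain set of addresses.

    # Sketch (assumed constants): pick the TTL for an outgoing or forwarded
    # packet from a one-hop neighbour cache list.
    TTL_THRESHOLD = 5                      # illustrative "specified TTL threshold"
    DEFAULT_SEND_TTL = TTL_THRESHOLD + 1   # greater than the threshold

    def ttl_for_send(destination, one_hop_cache):
        # One-hop destination: no relaying needed at all.
        return 0 if destination in one_hop_cache else DEFAULT_SEND_TTL

    def ttl_for_forward(packet_ttl, destination, one_hop_cache):
        # Destination is a direct neighbour: one more hop is enough.
        if destination in one_hop_cache:
            return 1
        return packet_ttl - 1              # otherwise decrement as usual

    print(ttl_for_send("node-a", {"node-a", "node-b"}))   # -> 0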
-
Patent number: 11928352
Abstract: Systems and methods are described for performing persistent inflight tracking of operations (Ops) within a cross-site storage solution. According to one embodiment, a method comprises maintaining state information regarding a data synchronous replication status for a first storage object of a primary storage cluster and a second storage object of a secondary storage cluster. The state information facilitates automatic triggering of resynchronization for data replication between the first storage object and the second storage object. The method includes performing persistent inflight tracking of I/O operations with a first Op log of the primary storage cluster and a second Op log of the secondary storage cluster, establishing and comparing Op ranges for the first and second Op logs, and determining a relation between the Op range of the first Op log and the Op range of the second Op log to prevent divergence of Ops in the first and second Op logs and to support parallel split of the Ops.
Type: Grant
Filed: October 26, 2021
Date of Patent: March 12, 2024
Assignee: NetApp, Inc.
Inventors: Krishna Murthy Chandraiah Setty Narasingarayanapeta, Preetham Shenoy, Divya Kathiresan, Rakesh Bhargava
-
Patent number: 11907528
Abstract: Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into the cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
Type: Grant
Filed: July 20, 2021
Date of Patent: February 20, 2024
Assignee: Texas Instruments Incorporated
Inventors: Kai Chirca, Daniel Wu, Matthew David Pierson
-
Patent number: 11908546
Abstract: A system includes a plurality of host processors and a plurality of HMC devices configured as a distributed shared memory for the host processors. An HMC device includes a plurality of integrated circuit memory die including at least a first memory die arranged on top of a second memory die and at least a portion of the memory of the memory die is mapped to include at least a portion of a memory coherence directory; and a logic base die including at least one memory controller configured to manage three-dimensional (3D) access to memory of the plurality of memory die by at least one second device, and logic circuitry configured to determine memory coherence state information for data stored in the memory of the plurality of memory die, communicate information regarding the access to memory, and include the memory coherence information in the communicated information.
Type: Grant
Filed: October 8, 2020
Date of Patent: February 20, 2024
Assignee: Micron Technology, Inc.
Inventor: Richard C Murphy
-
Patent number: 11899937
Abstract: Systems and methods of a memory allocation buffer to reduce heap fragmentation. In one embodiment, the memory allocation buffer structures a memory arena dedicated to a target region that is one of a plurality of regions in a server in a database cluster such as an HBase cluster. The memory arena has a chunk size (e.g., 2 MB) and an offset pointer. Data objects in write requests targeted to the region are received and inserted into the memory arena at a location specified by the offset pointer. When the memory arena is filled, a new one is allocated. When a MemStore of the target region is flushed, all memory arenas for the target region are freed up. This reduces the heap fragmentation that is responsible for long and/or frequent garbage collection pauses.
Type: Grant
Filed: March 3, 2020
Date of Patent: February 13, 2024
Assignee: Cloudera, Inc.
Inventor: Todd Lipcon
-
Patent number: 11893062
Abstract: Technologies described herein can be used for the bulk lazy loading of structured data from a database. A request can be received to initialize an application data structure (such as a structured data object, a hierarchical data structure, an object graph, etc.). The data structure can be analyzed to identify a plurality of child objects of the data structure. Database records associated with the plurality of child objects can then be identified. A loaded child record table can be inspected to determine which of the identified database records are not stored in a cache. A request can be generated, comprising one or more queries to retrieve the uncached subset of database records from the database. Once the uncached subset of records is received from the database, these records can be used, along with the cached subset of the identified database records, to initialize the plurality of child objects of the application data structure.
Type: Grant
Filed: May 14, 2019
Date of Patent: February 6, 2024
Assignee: SAP SE
Inventors: Frank Emminghaus, Wendeng Li, Zhijie Ai
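A minimal sketch of the bulk lazy-loading idea described above, with an in-memory dictionary standing in for the database and a hypothetical fetch_many callable standing in for the bulk query; it is illustrative only.

    # Sketch (assumed schema): load only the child records that are not already
    # cached with one bulk query, then return records for every child object.
    def load_children(child_ids, cache, fetch_many):
        """fetch_many(ids) is assumed to run one query returning {id: record}."""
        uncached = [cid for cid in child_ids if cid not in cache]
        if uncached:
            cache.update(fetch_many(uncached))     # one round trip for all misses
        return [cache[cid] for cid in child_ids]   # cached + freshly loaded records

    DB = {1: "rec-1", 2: "rec-2", 3: "rec-3"}      # stand-in for the database
    cache = {2: "rec-2"}
    print(load_children([1, 2, 3], cache, lambda ids: {i: DB[i] for i in ids}))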
-
Patent number: 11892955
Abstract: System and method for analyzing CXL flits at read bypass detection logic to identify bypass memory read requests and transmitting the identified bypass memory read requests over a read request bypass path directly to a transaction/application layer of the CXL memory controller, wherein the read request bypass path does not include an arbitration/multiplexing layer and a link layer of the CXL memory controller, thereby reducing the latency inherent in a CXL memory controller.
Type: Grant
Filed: May 10, 2022
Date of Patent: February 6, 2024
Assignee: Microchip Technology Inc.
Inventors: Sanjay Goyal, Larrie Simon Carr, Patrick Bailey
-
Patent number: 11888938
Abstract: Systems and methods for optimizing distributed computing systems are disclosed, such as for processing raw data from data sources (e.g., structured, semi-structured, key-value paired, etc.) in applications of big data. A process for utilizing multiple processing cores for data processing can include receiving raw input data and a first portion of digested input data from a data source client through an input/output bus at a first processor core, receiving, from the first processor core, the raw input data and first portion of digested input data by a second processor core, digesting the received raw input data by the second processor core to create a second portion of digested input data, receiving the second portion of digested input data by the first processor core, and writing, by the first processor core, the first portion of digested input data and the second portion of digested input data to a storage medium.
Type: Grant
Filed: July 29, 2022
Date of Patent: January 30, 2024
Assignee: Elasticflash, Inc.
Inventors: Darshan Bharatkumar Rawal, Pradeep Jnana Madhavarapu, Naoki Iwakami
-
Patent number: 11880350
Abstract: Resource lock ownership identification is provided across processes and systems of a clustered computing environment by a method which includes saving by a process, based on a user acquiring a resource lock, a lock information record to a shared data structure of the clustered computing environment. The lock information record includes user identification data identifying the user-owner of the resource lock acquired on a system executing the process. The method also includes referencing, by another process, the lock information record of the shared data structure to ascertain the user identification data identifying the user-owner of the resource lock, and thereby facilitate processing within the clustered computing environment.
Type: Grant
Filed: June 8, 2021
Date of Patent: January 23, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David Kenneth McKnight, Yichong Zhang, Dung Thi Tang, Onno Van den Troost
-
Patent number: 11876677
Abstract: Some embodiments of the invention provide a method for WAN (wide area network) optimization for a WAN that connects multiple sites, each of which has at least one router. At a gateway router deployed to a public cloud, the method receives, from at least two routers at at least two sites, multiple data streams destined for a particular centralized datacenter. The method performs a WAN optimization operation to aggregate the multiple streams into one outbound stream that is WAN optimized for forwarding to the particular centralized datacenter. The method then forwards the WAN-optimized data stream to the particular centralized datacenter.
Type: Grant
Filed: December 6, 2022
Date of Patent: January 16, 2024
Assignee: VMware LLC
Inventors: Igor Golikov, Aran Bergman, Lior Gal, Avishay Yanai, Israel Cidon, Alex Markuze, Eyal Zohar
-
Patent number: 11868254
Abstract: An electronic device includes a cache, a memory, and a controller. The controller stores an epoch counter value in metadata for a location in the memory when a cache block evicted from the cache is stored in the location. The controller also controls how the cache block is retained in the cache based at least in part on the epoch counter value when the cache block is subsequently retrieved from the location and stored in the cache.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Nuwan Jayasena
-
Patent number: 11868613
Abstract: A method includes defining a plurality of data storage policies, each of the plurality of data storage policies providing rules for storing data among a plurality of data storage locations, each of the plurality of data storage locations having a data storage cost and a data retrieval cost associated therewith; determining a baseline policy distribution among the plurality of data storage policies for an entity; receiving new data items corresponding to the entity; storing the new data items in the plurality of data storage locations using the plurality of data storage policies based on the baseline policy distribution; and determining, using an artificial intelligence engine, a selected one of the plurality of data storage policies to use in storing the new data items corresponding to the entity based on the data storage cost for each of the plurality of data storage locations and the data retrieval cost for each of the plurality of storage locations.
Type: Grant
Filed: January 15, 2021
Date of Patent: January 9, 2024
Assignee: CHANGE HEALTHCARE HOLDINGS LLC
Inventors: Philippe Raffy, Jean-Francois Pambrun, David Dubois, Ashish Kumar
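A minimal sketch of one way a cost-based policy choice like the one above could be expressed; the cost model (storage cost plus retrieval cost weighted by an estimated access probability) and the example numbers are assumptions, not the patented method.

    # Sketch (assumed cost model): choose the storage policy with the lowest
    # expected cost across its storage locations.
    def expected_cost(policy, access_probability):
        return sum(loc["storage_cost"] + access_probability * loc["retrieval_cost"]
                   for loc in policy["locations"])

    def select_policy(policies, access_probability):
        return min(policies, key=lambda p: expected_cost(p, access_probability))

    policies = [
        {"name": "hot",  "locations": [{"storage_cost": 5, "retrieval_cost": 1}]},
        {"name": "cold", "locations": [{"storage_cost": 1, "retrieval_cost": 9}]},
    ]
    print(select_policy(policies, access_probability=0.1)["name"])   # -> "cold"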
-
Patent number: 11863623
Abstract: Storage devices and systems are capable of dynamically managing QoS requirements associated with host applications via a management interface. The management interface may enable the storage devices to: (i) decide which data needs to be transferred back to the hosts, (ii) choose to skip portions of the data transferred back to the hosts to improve throughput and maintain low cost, and (iii) operate contention resolutions with host applications. Furthermore, storage devices and systems may achieve a virtual throughput that may be greater than their actual physical throughput. The management interface may also be operated at an application level, which advantageously allows the devices and systems the capability of managing contention resolutions of host applications, and managing (changing, observing, fetching, etc.) one or more QoS requirements for each host application.
Type: Grant
Filed: February 24, 2021
Date of Patent: January 2, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Dinesh Kumar Agarwal, Amit Sharma
-
Patent number: 11855898
Abstract: Methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems include inspecting a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms for each of the incoming packets is selected based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.
Type: Grant
Filed: March 14, 2019
Date of Patent: December 26, 2023
Assignee: F5, Inc.
Inventor: William Ross Baumann
-
Patent number: 11847057
Abstract: Disclosed herein are system, method, and computer program product embodiments for utilizing an extended cache to access an object store efficiently. An embodiment operates by executing a database transaction, thereby causing pages to be written from a buffer cache to an extended cache and to an object store. The embodiment determines a transaction type of the database transaction. The transaction type can be a read-only transaction or an update transaction. The embodiment determines a phase of the database transaction based on the determined transaction type. The phase can be an execution phase or a commit phase. The embodiment then applies a caching policy to the extended cache for the evicted pages based on the determined transaction type of the database transaction and the determined phase of the database transaction.
Type: Grant
Filed: December 20, 2022
Date of Patent: December 19, 2023
Assignee: SAP SE
Inventors: Sagar Shedge, Nishant Sharma, Nawab Alam, Mohammed Abouzour, Gunes Aluc, Anant Agarwal
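A minimal sketch of choosing an extended-cache policy from the transaction type and phase, as the abstract outlines; the specific policy names returned here are illustrative assumptions rather than the policies claimed in the patent.

    # Sketch (assumed policy names): pick a caching policy for pages evicted to
    # the extended cache based on transaction type and phase.
    def caching_policy(transaction_type, phase):
        if transaction_type == "read-only":
            return "cache-on-read"      # keep pages available for later reads
        if transaction_type == "update" and phase == "execution":
            return "bypass"             # avoid polluting the extended cache
        if transaction_type == "update" and phase == "commit":
            return "write-through"      # pages also persisted to the object store
        raise ValueError("unknown transaction type or phase")

    print(caching_policy("update", "commit"))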
-
Patent number: 11836093
Abstract: A method and an apparatus are provided for managing a cache for storing content by: determining popularity of the content based on content requests received during a current time slot for the content; transmitting information about the popularity of the content to a time-to-live (TTL) controller and receiving, from the TTL controller, TTL values for each popularity level determined by the TTL controller based on the information about the popularity; and managing the content based on the TTL values for each popularity level.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 5, 2023
Assignee: Electronics and Telecommunications Research Institute
Inventors: Chunglae Cho, Seungjae Shin, Seung Hyun Yoon, Hong Seok Jeon
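A minimal sketch of assigning a per-popularity-level TTL as described above; the level thresholds and TTL values are made-up placeholders for what the TTL controller would supply each time slot.

    # Sketch (assumed mapping): give each cached content item a TTL chosen from
    # its popularity level for the current time slot.
    ttl_by_level = {"high": 3600, "medium": 600, "low": 60}   # seconds, illustrative

    def popularity_level(request_count):
        if request_count >= 1000:
            return "high"
        return "medium" if request_count >= 100 else "low"

    def ttl_for_content(request_count):
        return ttl_by_level[popularity_level(request_count)]

    print(ttl_for_content(250))   # -> 600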
-
Patent number: 11829257
Abstract: Due to the threat of virus attacks and ransomware, an apparatus and methods for protecting backup storage devices from malicious software virus attacks are explored. An independent backup storage system is connected to a primary storage server over an undiscoverable communications line. The backup storage system is a read-only backup storage system most of the time, buffering the backup storage system from a virus or attack on the primary storage server. The backup storage system changes from a read-only backup storage system to a read/write backup storage system only during a backup window of time where data is backed up to the backup storage system. A snapshot of the backup data is maintained in the backup storage system and can be made available at numerous points of time in the past if the data on the primary storage server becomes corrupted.
Type: Grant
Filed: May 18, 2023
Date of Patent: November 28, 2023
Assignee: Spectra Logic Corporation
Inventors: David Lee Trachy, Joshua Daniel Carter
-
Patent number: 11822817
Abstract: Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). In some embodiments, a data storage device includes a main non-volatile memory (NVM), a host command queue that lists pending host read and host write commands, and a write cache which temporarily stores write data sets pending transfer to the NVM responsive to execution of the associated host write commands in the host command queue. A collision prediction circuit predicts a rate of future collisions involving the cached write data sets. A storage manager directs storage of the write data sets to a first target location responsive to the rate of future collisions being at a first level, and directs storage of the write data sets to a different, second target location responsive to the rate of future collisions being at a different, second level.
Type: Grant
Filed: July 27, 2021
Date of Patent: November 21, 2023
Assignee: Seagate Technology LLC
Inventor: Christopher Smith
-
Patent number: 11822479
Abstract: Techniques for performing cache operations are provided. The techniques include recording an indication that providing exclusive access of a first cache line to a first processor is deemed problematic; detecting speculative execution of a store instruction by the first processor to the first cache line; and in response to the detecting, refusing to provide exclusive access of the first cache line to the first processor, based on the indication.
Type: Grant
Filed: October 29, 2021
Date of Patent: November 21, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
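A minimal sketch of the refusal logic described above, with an assumed set of flagged line addresses and hypothetical function names; it only models the decision, not the coherence machinery around it.

    # Sketch (assumed state): record cache lines for which granting exclusive
    # access is deemed problematic, and refuse exclusivity for speculative stores.
    problematic_lines = set()

    def mark_problematic(line_addr):
        problematic_lines.add(line_addr)

    def request_exclusive(line_addr, speculative):
        # Refuse exclusive access only for speculative stores to flagged lines.
        if speculative and line_addr in problematic_lines:
            return False
        return True

    mark_problematic(0x1000)
    print(request_exclusive(0x1000, speculative=True))    # -> False
    print(request_exclusive(0x1000, speculative=False))   # -> True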
-
Patent number: 11803593
Abstract: A system for receiving and propagating efficient search updates includes one or more processors configured to receive, from a first external system via a network, a first entity change request to modify data in an entity associated with the first external system. The first entity change request is saved in an entity store. The received entity change request is pushed from the entity store to an event publisher for forwarding to a streaming service. The first entity change request is classified and forwarded, from the streaming service, to a search index database. The search index is then updated based on the classified entity change request.
Type: Grant
Filed: February 14, 2020
Date of Patent: October 31, 2023
Assignee: COUPANG CORP.
Inventor: Seung Won Lee
-
Patent number: 11797230
Abstract: In one example in accordance with the present disclosure, an electronic device is described. The example electronic device includes a NAND flash device to store a static data component of a variable. The example electronic device also includes a NOR flash device to store a dynamic data component of the variable. The electronic device further includes a controller to write the static data component of the variable to the NAND flash device. This controller is also to write the dynamic data component of the variable to the NOR flash device.
Type: Grant
Filed: December 14, 2021
Date of Patent: October 24, 2023
Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Jeffrey Kevin Jeansonne, Khoa Huynh, Mason Andrew Gunyuzlu
-
Patent number: 11797446
Abstract: A multi-purpose server cache directory in a computing environment is provided. One of a plurality of operation modes may be selectively enabled or disabled, by a cache directory, based on a computation phase, data type, and data pattern for caching data in a cache having a plurality of address tags in the cache directory greater than a number of data lines in a cache array.
Type: Grant
Filed: October 29, 2021
Date of Patent: October 24, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bulent Abali, Alper Buyuktosunoglu, Brian Robert Prasky, Jang-Soo Lee, Deanna Postles Dunn Berger
-
Patent number: 11789661
Abstract: A variety of applications can include apparatus and/or methods of operating the apparatus in which functionalities of a memory device of the apparatus can be extended by changing data flow behaviour associated with standard commands used between a host platform and the memory device. Such functionalities can include debug capabilities. In an embodiment, a standard write command and data using a standard protocol to write to a memory device is received in the memory device, where the data is setup information to enable an extension component in the memory device. An extension component includes instructions in the memory device to execute operations on components of the memory device. The memory device can execute operations of the enabled extension component in the memory device based on the setup information. Additional apparatus, systems, and methods are disclosed.
Type: Grant
Filed: May 18, 2022
Date of Patent: October 17, 2023
Assignee: Micron Technology, Inc.
Inventors: Angelo Della Monica, Eric Kwok Fung Yuen, Pasquale Cimmino, Massimo Iaculo, Francesco Falanga
-
Patent number: 11789838
Abstract: Disclosed are methods, systems, and computer-readable medium for preventing system crashes, including loading a resource from a real resource location; receiving a registration request from a resource user; registering the resource user by updating a resource owner registration list to indicate the resource user registration; receiving a first unload request and determining that the resource user is registered by accessing the registration list; upon determining that the resource user is registered, denying the first unload request; generating a stop use request; transmitting the stop use request to the resource user; receiving a deregistration request from the resource user, based on the stop use request; deregistering the resource user by updating the resource owner registration list; receiving a second unload request after deregistering the resource user; and approving the second unload request to unload the resource.
Type: Grant
Filed: January 31, 2022
Date of Patent: October 17, 2023
Assignee: MicroStrategy Incorporated
Inventors: Yi Luo, Kaijie Yang, Xianting Lu, Sigit Pambudi
-
Patent number: 11782716
Abstract: Systems, methods, and apparatuses relating to circuitry to implement individually revocable capabilities for enforcing temporal memory safety are described. In one embodiment, a hardware processor comprises an execution unit to execute an instruction to request access to a block of memory through a pointer to the block of memory, and a memory controller circuit to allow access to the block of memory when an allocated object tag in the pointer is validated with an allocated object tag in an entry of a capability table in memory that is indexed by an index value in the pointer, wherein the memory controller circuit is to clear the allocated object tag in the capability table when a corresponding object is deallocated.
Type: Grant
Filed: November 2, 2021
Date of Patent: October 10, 2023
Assignee: Intel Corporation
Inventors: Michael LeMay, Vedvyas Shanbhogue, Deepak Gupta, Ravi Sahita, David M. Durham, Willem Pinckaers, Enrico Perla
-
Patent number: 11778062
Abstract: A system architecture can be used to facilitate communication among applications that are native and/or non-native to an application environment. The system architecture can include a first application environment executed on a client-side computing device. The first application environment can execute software applications that are native thereto. The first application environment can further execute software applications that are native thereto, but which software applications themselves comprise second application environments of types different from the first application environment, and which software applications can therefore execute additional software applications that are non-native to the first application environment. The first application environment can further execute a computation engine that is configured to store and execute instructions received from the first software application, the second software application, or both.
Type: Grant
Filed: July 11, 2022
Date of Patent: October 3, 2023
Assignee: Palantir Technologies Inc.
Inventors: Peter Wilczynski, Christopher Hammett, Lloyd Ho, Sharon Hao
-
Patent number: 11749347
Abstract: In certain aspects, a memory device includes an array of memory cells in columns and rows, word lines respectively coupled to the rows, bit lines respectively coupled to the columns, and a peripheral circuit coupled to the array of memory cells through the bit lines and the word lines and configured to program a select row based on a current data page. Each memory cell is configured to store a piece of N-bit data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuit includes page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multipurpose storage unit, and N−1 data storage units.
Type: Grant
Filed: June 22, 2021
Date of Patent: September 5, 2023
Assignee: YANGTZE MEMORY TECHNOLOGIES CO., LTD.
Inventor: Weijun Wan