Multiple Caches Patents (Class 711/119)
  • Patent number: 11971855
    Abstract: Methods, apparatus, and processor-readable storage media for supporting multiple operations in transaction logging for a cloud enabled file system are provided herein. An example computer-implemented method includes obtaining a plurality of file system operations to be performed on a cloud enabled file system; executing the plurality of file system operations as a single file system transaction; and maintaining a transaction log for the single transaction, the transaction log comprising information for one or more sub-transactions that were completed in conjunction with said executing, wherein the one or more sub-transactions correspond to at least a portion of the plurality of file system operations.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: April 30, 2024
    Assignee: EMC IP Holding Company LLC
    Inventor: Priyamrita Ghosh
  • Patent number: 11972034
    Abstract: A computer system and associated methods are disclosed for mitigating side-channel attacks using a shared cache. The computer system includes a host having a main memory and a shared cache. The host executes a virtual machine manager (VMM) that determines respective security keys for a plurality of co-located virtual machines (VMs). A cache controller for the shared cache includes a scrambling function that scrambles addresses of memory accesses performed by threads of the VMs according to the respective security keys. Different cache tiers may implement different scrambling functions optimized to the architecture of each cache tier. Security keys may be periodically updated to further reduce predictability of shared cache to memory address mappings.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: April 30, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Martin Pohlack, Pawel Wieczorkiewicz, Uwe Dannowski
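    A minimal behavioral sketch (in Python) of the per-VM address scrambling described in the abstract above, assuming the scrambling function is a simple XOR of the set index with a per-VM key; the key width, cache geometry, and VM identifiers are illustrative and not taken from the patent.
      import secrets

      class ScrambledCache:
          """Toy model: per-VM keys scramble how addresses map to cache sets."""
          def __init__(self, num_sets=1024, line_size=64):
              self.num_sets = num_sets
              self.line_size = line_size
              self.keys = {}          # vm_id -> scrambling key (assigned by the VMM)
              self.sets = {}          # set index -> tags currently cached there

          def assign_key(self, vm_id):
              # The VMM assigns (and can periodically rotate) a per-VM security key.
              self.keys[vm_id] = secrets.randbits(32)

          def set_index(self, vm_id, address):
              # Scramble the set index with the VM's key so co-located VMs map the
              # same address bits to different, hard-to-predict cache sets.
              plain_index = (address // self.line_size) % self.num_sets
              return (plain_index ^ self.keys[vm_id]) % self.num_sets

          def access(self, vm_id, address):
              idx = self.set_index(vm_id, address)
              tag = address // (self.line_size * self.num_sets)
              hit = tag in self.sets.setdefault(idx, set())
              self.sets[idx].add(tag)
              return hit

      cache = ScrambledCache()
      cache.assign_key("vm-a")
      cache.assign_key("vm-b")
      addr = 0x7F3200
      print(cache.set_index("vm-a", addr) != cache.set_index("vm-b", addr))  # usually True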
  • Patent number: 11966398
    Abstract: A method for storing video data includes, when receiving the I-frame data to be stored, detecting whether the written data exists in the video cache space; when detecting that the written data exists in the video cache space, reading a target writing position of the I-frame data to be stored and determining whether the target writing position is located within a position range corresponding to the written data in the first cache space; when determining the target writing position is located within the position range, writing, based on the target writing position, the I-frame data to be stored to the first cache space for caching and detecting whether the first cache space is full; and when detecting that the first cache space is full, writing all the video data in the video cache space to a memory space of the terminal device for storage and emptying the video cache space.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: April 23, 2024
    Assignee: ZHEJIANG UNIVIEW TECHNOLOGIES CO., LTD.
    Inventors: Zuohua Wu, Qiang Ding
  • Patent number: 11966385
    Abstract: In various examples, there is provided a computer-implemented method for writing transaction log entries to a transaction log for a database system. At least part of the database system is configured to be executed within a trusted execution environment. The transaction log is stored outside of the trusted execution environment. The method maintains a first secure count representing a number of transaction log entries which have been written to the transaction log for transactions which have been committed to the database and writes a transaction log entry to the transaction log. In other examples, there is also provided a computer-implemented method for restoring a database system using transaction log entries received from the transaction log and a current value of the first secure count.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: April 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christian Priebe, Kapil Vaswani, Manuel Silverio da Silva Costa
  • Patent number: 11954022
    Abstract: Provided are a storage device, system, and method for throttling host writes in a host buffer to a storage device. The storage device is coupled to a host system having a host buffer that includes reads and writes to pages of the storage device. Garbage collection consolidates valid data from pages in the storage device to fewer pages. A determination is made as to whether a processing measurement at the storage device satisfies a threshold. A timer value is set to a positive value in response to determining that the processing measurement satisfies the threshold. The timer is started to run for the timer value. Writes from the host buffer are blocked while the timer is running. Writes remain in the host buffer while the timer is running. A write is accepted from the host buffer to process in response to expiration of the timer.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: April 9, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew S. Reuter, Timothy J. Fisher, Aaron Daniel Fry, Jenny L. Brown, John Carrington Cates, Austin Eberle
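    A rough sketch of the timer-based write-throttling flow from the abstract above, assuming a single device-side processing measurement (for example, garbage-collection backlog) and a fixed timer value; the threshold, measurement, and timer policy are placeholders, not the patent's.
      import time

      class WriteThrottle:
          """Toy model: block host-buffer writes while a throttle timer is running."""
          def __init__(self, threshold, timer_seconds):
              self.threshold = threshold          # measurement level that triggers throttling
              self.timer_seconds = timer_seconds  # positive timer value set when throttling
              self.timer_expires = 0.0            # monotonic timestamp; 0.0 means no timer

          def update_measurement(self, processing_measurement):
              # If the processing measurement satisfies the threshold, start the timer.
              if processing_measurement >= self.threshold and time.monotonic() >= self.timer_expires:
                  self.timer_expires = time.monotonic() + self.timer_seconds

          def try_accept_write(self, write):
              # Writes stay in the host buffer while the timer is running.
              if time.monotonic() < self.timer_expires:
                  return False                    # host retries later; write remains buffered
              return True                         # timer expired (or never set): accept the write

      throttle = WriteThrottle(threshold=80, timer_seconds=0.005)
      throttle.update_measurement(processing_measurement=92)   # device is busy -> start timer
      print(throttle.try_accept_write(b"page-0"))              # False while the timer runs
      time.sleep(0.01)
      print(throttle.try_accept_write(b"page-0"))              # True after expiration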
  • Patent number: 11954028
    Abstract: There is disclosed a method of storing an encoded block of data in memory comprising encoding a block of data elements and determining a memory location (26) at which the encoded block of data is to be stored. The memory location (26) at which the encoded block of data is stored is then indicated in a header (406) for the encoded block of data by including in the header a memory address value (407) together with a modifier value (500) representing a modifier that is to be applied to the memory address value (407) when determining the memory location (26). When the encoded block of data is to be retrieved, the header (406) is read and processed to determine the memory location (26).
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: April 9, 2024
    Assignee: Arm Limited
    Inventors: Edvard Fielding, Jian Wang, Jakob Axel Fries, Carmelo Giliberto
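    An illustrative sketch of storing a block's location as a header holding a memory address value plus a modifier, as in the abstract above; treating the modifier as an offset in fixed-size granules is an assumption made for this example, not the patent's definition of the modifier.
      GRANULE = 256  # assumed granule size in which the modifier is expressed

      def make_header(address_value, modifier):
          """Pack the memory address value and the modifier into a small header."""
          return {"address_value": address_value, "modifier": modifier}

      def resolve_location(header):
          """Apply the modifier to the address value to recover the memory location."""
          return header["address_value"] + header["modifier"] * GRANULE

      header = make_header(address_value=0x10000, modifier=3)
      print(hex(resolve_location(header)))  # 0x10300: where the encoded block is stored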
  • Patent number: 11947581
    Abstract: A plurality of personalized news feeds are generated from input feeds including digital content items based on a dynamic taxonomy data structure. Entities are extracted from the input feeds and relationship strengths are obtained for the extracted entities and the digital content items. The dynamic taxonomy data structure is updated with the extracted entities and entries for the digital content news items are included at the corresponding branches based on the relationship strengths. Attributes are obtained for the entities and those entities corresponding to the trending topics are identified. Personalized news feeds are generated including the digital content items listed under the entities. Digital content items are added or removed from the digital content feeds based on one or more entity attributes.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: April 2, 2024
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Srikanth G Rao, Tarun Singhal, Mathangi Sandilya, Issac Abraham Alummoottil, Raja Sekhar Velagapudi, Rahel James Kale, Ankur Garg, Jayaprakash Nooji Shekar, Omkar Sudhakar Deorukhkar, Veera Raghavan Valayaputhur
  • Patent number: 11940911
    Abstract: Techniques are provided for implementing a persistent key-value store for caching client data, journaling, and/or crash recovery. The persistent key-value store may be hosted as a primary cache that provides read and write access to key-value record pairs stored within the persistent key-value store. The key-value record pairs are stored within multiple chains in the persistent key-value store. Journaling is provided for the persistent key-value store such that incoming key-value record pairs are stored within active chains, and data within frozen chains is written in a distributed manner across distributed storage of a distributed cluster of nodes. If there is a failure within the distributed cluster of nodes, then the persistent key-value store may be reconstructed and used for crash recovery.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: March 26, 2024
    Assignee: NetApp, Inc.
    Inventors: Sudheer Kumar Vavilapalli, Asif Imtiyaz Pathan, Parag Sarfare, Nikhil Mattankot, Stephen Wu, Amit Borase
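    A toy sketch of the chained key-value layout described above: incoming key-value pairs accumulate in an active chain, full chains are frozen and handed to a flush callback that stands in for the distributed storage of the cluster; chain sizing, indexing, and the recovery path are assumptions for illustration.
      class ChainedKVStore:
          """Toy model of a persistent key-value store with active and frozen chains."""
          def __init__(self, chain_capacity, flush_to_distributed_storage):
              self.chain_capacity = chain_capacity
              self.flush = flush_to_distributed_storage
              self.active_chain = []      # incoming (key, value) records land here
              self.frozen_chains = []     # immutable chains awaiting/after distribution
              self.index = {}             # key -> value for fast reads (primary-cache role)

          def put(self, key, value):
              self.active_chain.append((key, value))
              self.index[key] = value
              if len(self.active_chain) >= self.chain_capacity:
                  # Freeze the active chain and write it out across the cluster.
                  frozen = self.active_chain
                  self.frozen_chains.append(frozen)
                  self.active_chain = []
                  self.flush(frozen)

          def get(self, key):
              return self.index.get(key)

          def rebuild_after_crash(self, recovered_chains):
              # Crash recovery: reconstruct the in-memory index from persisted chains.
              self.index = {k: v for chain in recovered_chains for k, v in chain}

      store = ChainedKVStore(chain_capacity=2, flush_to_distributed_storage=print)
      store.put("a", 1)
      store.put("b", 2)   # chain is full: frozen and "flushed"
      print(store.get("a"))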
  • Patent number: 11934307
    Abstract: An apparatus and method are provided for receiving a request from a plurality of processing units, where multiple of those processing units have associated cache storage. A snoop unit is used to implement a cache coherency protocol when a request is received that identifies a cacheable memory address. The snoop unit has snoop filter storage comprising a plurality of snoop filter tables organized in a hierarchical arrangement. The snoop filter tables comprise a primary snoop filter table at a highest level in the hierarchy, and each snoop filter table at a lower level in the hierarchy forms a backup snoop filter table for an adjacent snoop filter table at a higher level in the hierarchy. Each snoop filter table is arranged as a multi-way set associative storage structure, and each backup snoop filter table has a different number of sets than are provided in the adjacent snoop filter table.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: March 19, 2024
    Assignee: Arm Limited
    Inventors: Joshua Randall, Jesse Garrett Beu
  • Patent number: 11937164
    Abstract: A method for processing a data packet at a node in a Bluetooth Mesh network, comprising: (a) determining a one-hop device cache list of the node, wherein the one-hop device cache list comprises an address of one or more one-hop nodes; (b) when the node sends a data packet, checking whether a destination address of the data packet is the same as an address stored in the one-hop device cache list; if yes, setting a TTL value of the data packet to 0 and sending the data packet; otherwise, setting the TTL value of the data packet to be greater than a specified TTL threshold, and sending the data packet; and (c) when the node forwards a data packet, checking whether the destination address of the data packet is the same as an address stored in the one-hop device cache list; if yes, setting the TTL value of the data packet to 1 and forwarding the data packet; otherwise, deducting the TTL value of the data packet by 1 and forwarding the data packet.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: March 19, 2024
    Assignee: ESPRESSIF SYSTEMS (SHANGHAI) CO., LTD.
    Inventors: Yizan Zhou, Swee Ann Teo
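    A minimal sketch mirroring the TTL rules enumerated in the abstract above, assuming packets are plain dictionaries and that the one-hop device cache list is a set of neighbor addresses; the TTL threshold value is an assumption.
      TTL_THRESHOLD = 5  # assumed value of the specified TTL threshold

      def send_packet(packet, one_hop_cache):
          # Originating node: a one-hop neighbor needs no relaying (TTL 0); any other
          # destination gets a TTL above the threshold so relays will forward it.
          if packet["dst"] in one_hop_cache:
              packet["ttl"] = 0
          else:
              packet["ttl"] = TTL_THRESHOLD + 1
          return packet

      def forward_packet(packet, one_hop_cache):
          # Relaying node: if the destination is a direct neighbor, one more hop suffices.
          if packet["dst"] in one_hop_cache:
              packet["ttl"] = 1
          else:
              packet["ttl"] -= 1
          return packet

      one_hop = {"0x0003", "0x0007"}                                # this node's neighbors
      print(send_packet({"dst": "0x0007"}, one_hop))                # TTL set to 0
      print(forward_packet({"dst": "0x0009", "ttl": 6}, one_hop))   # TTL decremented to 5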
  • Patent number: 11928352
    Abstract: Systems and methods are described for performing persistent inflight tracking of operations (Ops) within a cross-site storage solution. According to one embodiment, a method comprises maintaining state information regarding a data synchronous replication status for a first storage object of a primary storage cluster and a second storage object of a secondary storage cluster. The state information facilitates automatic triggering of resynchronization for data replication between the first storage object and the second storage object. The method includes performing persistent inflight tracking of I/O operations with a first Op log of the primary storage cluster and a second Op log of the secondary storage cluster, establishing and comparing Op ranges for the first and second Op logs, and determining a relation between the Op range of the first Op log and the Op range of the second Op log to prevent divergence of Ops in the first and second Op logs and to support parallel split of the Ops.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: March 12, 2024
    Assignee: NetApp, Inc.
    Inventors: Krishna Murthy Chandraiah Setty Narasingarayanapeta, Preetham Shenoy, Divya Kathiresan, Rakesh Bhargava
  • Patent number: 11908546
    Abstract: A system includes a plurality of host processors and a plurality of HMC devices configured as a distributed shared memory for the host processors. An HMC device includes a plurality of integrated circuit memory die including at least a first memory die arranged on top of a second memory die and at least a portion of the memory of the memory die is mapped to include at least a portion of a memory coherence directory; and a logic base die including at least one memory controller configured to manage three-dimensional (3D) access to memory of the plurality of memory die by at least one second device, and logic circuitry configured to determine memory coherence state information for data stored in the memory of the plurality of memory die, communicate information regarding the access to memory, and include the memory coherence information in the communicated information.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: February 20, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Richard C Murphy
  • Patent number: 11907528
    Abstract: Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into the cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 20, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Daniel Wu, Matthew David Pierson
  • Patent number: 11899937
    Abstract: Systems and methods of a memory allocation buffer to reduce heap fragmentation. In one embodiment, the memory allocation buffer structures a memory arena dedicated to a target region that is one of a plurality of regions in a server in a database cluster such as an HBase cluster. The memory arena has a chunk size (e.g., 2 MB) and an offset pointer. Data objects in write requests targeted to the region are received and inserted to the memory arena at a location specified by the offset pointer. When the memory arena is filled, a new one is allocated. When a MemStore of the target region is flushed, the entire memory arenas for the target region are freed up. This reduces heap fragmentation that is responsible for long and/or frequent garbage collection pauses.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: February 13, 2024
    Assignee: Cloudera, Inc.
    Inventor: Todd Lipcon
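    A compact sketch of the arena idea above: a region-dedicated chunk plus an offset pointer, bump-allocating incoming objects and freeing whole arenas on flush; chunk management details and the returned location format are assumptions for illustration.
      class MemStoreArena:
          """Toy model of a region-dedicated memory arena that bump-allocates objects."""
          CHUNK_SIZE = 2 * 1024 * 1024      # 2 MB chunks, as in the abstract

          def __init__(self):
              self.chunks = [bytearray(self.CHUNK_SIZE)]
              self.offset = 0               # offset pointer into the current chunk

          def insert(self, data: bytes):
              if self.offset + len(data) > self.CHUNK_SIZE:
                  # Current arena chunk is full: allocate a fresh one.
                  self.chunks.append(bytearray(self.CHUNK_SIZE))
                  self.offset = 0
              chunk = self.chunks[-1]
              chunk[self.offset:self.offset + len(data)] = data
              location = (len(self.chunks) - 1, self.offset)
              self.offset += len(data)
              return location               # (chunk index, offset) where the object was copied

          def flush(self):
              # MemStore flush: drop whole arenas at once instead of many small objects.
              self.chunks = [bytearray(self.CHUNK_SIZE)]
              self.offset = 0

      arena = MemStoreArena()
      print(arena.insert(b"row-key-1:value"))   # (0, 0)
      print(arena.insert(b"row-key-2:value"))   # (0, 15)
      arena.flush()                             # frees the region's arenas wholesale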
  • Patent number: 11892955
    Abstract: System and method for analyzing CXL flits at read bypass detection logic to identify bypass memory read requests and transmitting the identified bypass memory read requests over a read request bypass path directly to a transaction/application layer of the CXL memory controller, wherein the read request bypass path does not include an arbitration/multiplexing layer and a link layer of the CXL memory controller, thereby reducing the latency inherent in a CXL memory controller.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: February 6, 2024
    Assignee: Microchip Technology Inc.
    Inventors: Sanjay Goyal, Larrie Simon Carr, Patrick Bailey
  • Patent number: 11893062
    Abstract: Technologies described herein can be used for the bulk lazy loading of structured data from a database. A request can be received to initialize an application data structure (such as a structured data object, a hierarchical data structure, an object graph, etc.). The data structure can be analyzed to identify a plurality of child objects of the data structure. Database records associated with the plurality of child objects can then be identified. A loaded child record table can be inspected to determine which of the identified database records are not stored in a cache. A request can be generated, comprising one or more queries to retrieve the uncached subset of database records from the database. Once the uncached subset of records is received from the database, these records can be used, along with the cached subset of the identified database records, to initialize the plurality of child objects of the application data structure.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: February 6, 2024
    Assignee: SAP SE
    Inventors: Frank Emminghaus, Wendeng Li, Zhijie Ai
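    A small sketch of the bulk lazy-loading flow above: look up which child records are already cached, fetch only the uncached subset with a single batched query, and return the merged set; the record identifiers and the fetch_many callable are hypothetical stand-ins for the application's real cache and database query.
      def initialize_children(child_records, loaded_child_table, fetch_many):
          """Return the rows needed to initialize child objects, loading only what
          is missing from the cache. `fetch_many` stands in for one bulk DB query."""
          wanted = set(child_records)
          cached = {rid: loaded_child_table[rid] for rid in wanted if rid in loaded_child_table}
          missing = wanted - cached.keys()
          if missing:
              fetched = fetch_many(sorted(missing))   # one batched query instead of N lazy loads
              loaded_child_table.update(fetched)      # remember them for later initializations
              cached.update(fetched)
          return cached

      # Hypothetical usage: two of three child records are already cached.
      cache = {"order-1": {"total": 10}, "order-2": {"total": 20}}
      rows = initialize_children(
          ["order-1", "order-2", "order-3"],
          cache,
          fetch_many=lambda ids: {rid: {"total": 0} for rid in ids},  # stand-in for the DB
      )
      print(sorted(rows))   # ['order-1', 'order-2', 'order-3']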
  • Patent number: 11888938
    Abstract: Systems and methods for optimizing distributed computing systems are disclosed, such as for processing raw data from data sources (e.g., structured, semi-structured, key-value paired, etc.) in applications of big data. A process for utilizing multiple processing cores for data processing can include receiving raw input data and a first portion of digested input data from a data source client through an input/output bus at a first processor core, receiving, from the first processor core, the raw input data and first portion of digested input data by a second processor core, digesting the received raw input data by the second processor core to create a second portion of digested input data, receiving the second portion of digested input data by the first processor core, and writing, by the first processor core, the first portion of digested input data and the second portion of digested input data to a storage medium.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: January 30, 2024
    Assignee: Elasticflash, Inc.
    Inventors: Darshan Bharatkumar Rawal, Pradeep Jnana Madhavarapu, Naoki Iwakami
  • Patent number: 11880350
    Abstract: Resource lock ownership identification is provided across processes and systems of a clustered computing environment by a method which includes saving by a process, based on a user acquiring a resource lock, a lock information record to a shared data structure of the clustered computing environment. The lock information record includes user identification data identifying the user-owner of the resource lock acquired on a system executing the process. The method also includes referencing, by another process, the lock information record of the shared data structure to ascertain the user identification data identifying the user-owner of the resource lock, and thereby facilitate processing within the clustered computing environment.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: January 23, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David Kenneth McKnight, Yichong Zhang, Dung Thi Tang, Onno Van den Troost
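    A minimal sketch of the shared lock-ownership record described above, with a module-level dictionary standing in for the cluster-wide shared data structure; the record fields and resource names are illustrative.
      import time

      SHARED_LOCK_TABLE = {}   # stands in for the cluster-wide shared data structure

      def acquire_lock(resource, user, system):
          """The acquiring process saves a lock information record identifying the owner."""
          SHARED_LOCK_TABLE[resource] = {
              "user": user,                 # user identification data (the lock owner)
              "system": system,             # system on which the owning process runs
              "acquired_at": time.time(),
          }

      def who_holds(resource):
          """Another process references the shared record to identify the lock owner."""
          record = SHARED_LOCK_TABLE.get(resource)
          return None if record is None else (record["user"], record["system"])

      acquire_lock("DATASET.MEMBER", user="jsmith", system="SYSA")
      print(who_holds("DATASET.MEMBER"))   # ('jsmith', 'SYSA')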
  • Patent number: 11876677
    Abstract: Some embodiments of the invention provide a method for WAN (wide area network) optimization for a WAN that connects multiple sites, each of which has at least one router. At a gateway router deployed to a public cloud, the method receives, from at least two routers of at least two sites, multiple data streams destined for a particular centralized datacenter. The method performs a WAN optimization operation to aggregate the multiple streams into one outbound stream that is WAN optimized for forwarding to the particular centralized datacenter. The method then forwards the WAN-optimized data stream to the particular centralized datacenter.
    Type: Grant
    Filed: December 6, 2022
    Date of Patent: January 16, 2024
    Assignee: VMware LLC
    Inventors: Igor Golikov, Aran Bergman, Lior Gal, Avishay Yanai, Israel Cidon, Alex Markuze, Eyal Zohar
  • Patent number: 11868613
    Abstract: A method includes defining a plurality of data storage policies, each of the plurality of data storage policies providing rules for storing data among a plurality of data storage locations, each of the plurality of data storage locations having a data storage cost and a data retrieval cost associated therewith; determining a baseline policy distribution among the plurality of data storage policies for an entity; receiving new data items corresponding to the entity; storing the new data items in the plurality of data storage locations using the plurality of data storage policies based on the baseline policy distribution; and determining, using the artificial intelligence engine, a selected one of the plurality of data storage policies to use in storing the new data items corresponding to the entity based on the data storage cost for each of the plurality of data storage locations, and the data retrieval cost for each of the plurality of storage locations.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: January 9, 2024
    Assignee: CHANGE HEALTHCARE HOLDINGS LLC
    Inventors: Philippe Raffy, Jean-Francois Pambrun, David Dubois, Ashish Kumar
  • Patent number: 11868254
    Abstract: An electronic device includes a cache, a memory, and a controller. The controller stores an epoch counter value in metadata for a location in the memory when a cache block evicted from the cache is stored in the location. The controller also controls how the cache block is retained in the cache based at least in part on the epoch counter value when the cache block is subsequently retrieved from the location and stored in the cache.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Nuwan Jayasena
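    A toy sketch of the epoch-counter idea above: record the current epoch in the memory location's metadata when a block is evicted, and consult that value to decide how the block is retained when it is later refilled; the specific retention rule (old epoch means insert at the LRU end) is an assumption for illustration.
      from collections import OrderedDict

      class EpochAwareCache:
          """Toy model: eviction stamps an epoch in memory-side metadata; refill uses it."""
          def __init__(self, capacity):
              self.capacity = capacity
              self.lines = OrderedDict()   # block address -> data, ordered oldest -> newest
              self.memory_metadata = {}    # memory location -> epoch counter value at eviction
              self.epoch = 0

          def advance_epoch(self):
              self.epoch += 1

          def insert(self, address, data):
              stale_epoch = self.memory_metadata.get(address)
              if len(self.lines) >= self.capacity:
                  victim, _ = self.lines.popitem(last=False)    # evict the oldest block
                  self.memory_metadata[victim] = self.epoch     # stamp its memory location
              self.lines[address] = data
              # Assumed retention policy: blocks evicted many epochs ago are likely cold,
              # so reinsert them at the LRU end; recently useful blocks stay at the MRU end.
              if stale_epoch is not None and self.epoch - stale_epoch > 2:
                  self.lines.move_to_end(address, last=False)

      cache = EpochAwareCache(capacity=2)
      cache.insert(0x100, "a")
      cache.insert(0x140, "b")
      cache.advance_epoch(); cache.advance_epoch(); cache.advance_epoch()
      cache.insert(0x180, "c")          # evicts 0x100, recording the current epoch
      cache.insert(0x100, "a again")    # refilled; retention depends on the stored epoch
      print(list(cache.lines))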
  • Patent number: 11863623
    Abstract: Storage devices and systems are capable of dynamically managing QoS requirements associated with host applications via a management interface. The management interface may enable the storage devices to: (i) decide which data needs to be transferred back to the hosts, (ii) choose to skip portions of the data transferred back to the hosts to improve throughput and maintain low cost, and (iii) operate contention resolutions with host applications. Furthermore, storage devices and systems may achieve a virtual throughput that may be greater than their actual physical throughput. The management interface may also be operated at an application level, which advantageously allows the devices and systems the capabilities of managing contention resolutions of host applications, and managing (changing, observing, fetching, etc.) one or more QoS requirements for each host application.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: January 2, 2024
    Assignee: Western Digital Technologies, Inc.
    Inventors: Dinesh Kumar Agarwal, Amit Sharma
  • Patent number: 11855898
    Abstract: Methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems include inspecting a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms for each of the incoming packets are selected based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: December 26, 2023
    Assignee: F5, Inc.
    Inventor: William Ross Baumann
  • Patent number: 11847057
    Abstract: Disclosed herein are system, method, and computer program product embodiments for utilizing an extended cache to access an object store efficiently. An embodiment operates by executing a database transaction, thereby causing pages to be written from a buffer cache to an extended cache and to an object store. The embodiment determines a transaction type of the database transaction. The transaction type can be a read-only transaction or an update transaction. The embodiment determines a phase of the database transaction based on the determined transaction type. The phase can be an execution phase or a commit phase. The embodiment then applies a caching policy to the extended cache for the evicted pages based on the determined transaction type of the database transaction and the determined phase of the database transaction.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: December 19, 2023
    Assignee: SAP SE
    Inventors: Sagar Shedge, Nishant Sharma, Nawab Alam, Mohammed Abouzour, Gunes Aluc, Anant Agarwal
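    A small sketch of choosing an extended-cache policy from the transaction type and phase, as the abstract above describes; the specific actions returned are assumed for illustration, since the abstract only says that a policy keyed on these two inputs is applied.
      def extended_cache_policy(transaction_type, phase):
          """Toy decision table: how evicted buffer-cache pages are treated in the
          extended cache, keyed by the determined transaction type and phase."""
          assert transaction_type in ("read_only", "update")
          assert phase in ("execution", "commit")
          if transaction_type == "read_only":
              return "cache_evicted_pages"              # reads benefit from a larger warm cache
          if phase == "execution":
              return "cache_evicted_pages_tentatively"  # the update may still roll back
          return "cache_and_persist_to_object_store"    # commit phase of an update

      print(extended_cache_policy("update", "commit"))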
  • Patent number: 11836093
    Abstract: A method and an apparatus are provided for managing a cache for storing content by determining popularity of the content based on content requests received during a current time slot for the content; transmitting information about the popularity of the content to a time-to-live (TTL) controller and receiving, from the TTL controller, TTL values for each popularity level determined by the TTL controller based on the information about the popularity; and managing the content based on the TTL values for each popularity level.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 5, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Chunglae Cho, Seungjae Shin, Seung Hyun Yoon, Hong Seok Jeon
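    A toy sketch of the popularity-to-TTL loop above: count requests per content item during a time slot, hand the popularity information to a TTL controller, and re-stamp cached content with the per-level TTLs it returns; the level boundaries, TTL units, and controller behavior are assumptions.
      class PopularityTTLCache:
          """Toy model: per-slot request counts drive per-popularity-level TTLs."""
          def __init__(self, ttl_controller, levels=(10, 100)):
              self.ttl_controller = ttl_controller   # callable: popularity stats -> {level: ttl}
              self.levels = levels                   # request-count boundaries between levels
              self.request_counts = {}               # content id -> requests in current slot
              self.expiry = {}                       # content id -> remaining TTL (in slots)

          def record_request(self, content_id):
              self.request_counts[content_id] = self.request_counts.get(content_id, 0) + 1

          def popularity_level(self, content_id):
              count = self.request_counts.get(content_id, 0)
              return sum(count >= boundary for boundary in self.levels)   # 0, 1, or 2

          def end_of_slot(self):
              # Report popularity, receive per-level TTLs, and re-stamp cached content.
              ttl_by_level = self.ttl_controller(self.request_counts)
              for content_id in list(self.request_counts):
                  self.expiry[content_id] = ttl_by_level[self.popularity_level(content_id)]
              self.request_counts.clear()

      controller = lambda stats: {0: 1, 1: 5, 2: 20}   # assumed TTLs (in slots) per level
      cache = PopularityTTLCache(controller)
      for _ in range(120):
          cache.record_request("video-42")
      cache.record_request("video-7")
      cache.end_of_slot()
      print(cache.expiry)   # {'video-42': 20, 'video-7': 1}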
  • Patent number: 11829257
    Abstract: Due to the threat of virus attacks and ransomware, an apparatus and methods for protecting backup storage devices from malicious software virus attacks are explored. An independent backup storage system is connected to a primary storage server over an undiscoverable communications line. The backup storage system is a read-only backup storage system most of the time, buffering the backup storage system from a virus or attack on the primary storage server. The backup storage system changes from a read-only backup storage system to a read/write backup storage system only during a backup window of time where data is backed up to the backup storage system. A snapshot of the backup data is maintained in the backup storage system and can be made available at numerous points of time in the past if the data on the primary storage server becomes corrupted.
    Type: Grant
    Filed: May 18, 2023
    Date of Patent: November 28, 2023
    Assignee: Spectra Logic Corporation
    Inventors: David Lee Trachy, Joshua Daniel Carter
  • Patent number: 11822479
    Abstract: Techniques for performing cache operations are provided. The techniques include recording an indication that providing exclusive access of a first cache line to a first processor is deemed problematic; detecting speculative execution of a store instruction by the first processor to the first cache line; and in response to the detecting, refusing to provide exclusive access of the first cache line to the first processor, based on the indication.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: November 21, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul J. Moyer
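    A minimal sketch of the refusal logic above: a directory-like structure remembers cache lines for which granting exclusive access is deemed problematic, and declines exclusive access when the requesting store is speculative; how a line becomes "problematic" is not modeled and is left as an assumption.
      class CoherenceDirectory:
          """Toy model: refuse exclusive access for speculative stores to flagged lines."""
          def __init__(self):
              self.problematic_lines = set()

          def mark_problematic(self, line_address):
              # Record the indication (e.g., the line is heavily shared or contended).
              self.problematic_lines.add(line_address)

          def request_exclusive(self, requester, line_address, speculative):
              if speculative and line_address in self.problematic_lines:
                  return False    # refuse: don't migrate the line for a store that may squash
              return True         # otherwise grant exclusive access as usual

      directory = CoherenceDirectory()
      directory.mark_problematic(0xBEEF00)
      print(directory.request_exclusive("cpu0", 0xBEEF00, speculative=True))   # False
      print(directory.request_exclusive("cpu0", 0xBEEF00, speculative=False))  # True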
  • Patent number: 11822817
    Abstract: Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). In some embodiments, a data storage device includes a main non-volatile memory (NVM), a host command queue that lists pending host read and host write commands, and a write cache which temporarily stores write data sets pending transfer to the NVM responsive to execution of the associated host write commands in the host command queue. A collision prediction circuit predicts a rate of future collisions involving the cached write data sets. A storage manager directs storage of the write data sets to a first target location responsive to the rate of future collisions being at a first level, and directs storage of the write data sets to a different, second target location responsive to the rate of future collisions being at a different, second level.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: November 21, 2023
    Assignee: Seagate Technology LLC
    Inventor: Christopher Smith
  • Patent number: 11803593
    Abstract: A system for receiving and propagating efficient search updates includes one or more processors configured to receive, from a first external system via a network, a first entity change request to modify data in an entity associated with the first external system. The first entity change request is saved in an entity store. The received entity change request is pushed from the entity store to an event publisher for forwarding to a streaming service. The first entity change request is classified and forwarded, from the streaming service, to a search index database. The search index is then updated based on the classified entity change request.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: October 31, 2023
    Assignee: COUPANG CORP.
    Inventor: Seung Won Lee
  • Patent number: 11797230
    Abstract: In one example in accordance with the present disclosure, an electronic device is described. The example electronic device includes a NAND flash device to store a static data component of a variable. The example electronic device also includes a NOR flash device to store a dynamic data component of the variable. The electronic device further includes a controller to write the static data component of the variable to the NAND flash device. This controller is also to write the dynamic data component of the variable to the NOR flash device.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: October 24, 2023
    Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Jeffrey Kevin Jeansonne, Khoa Huynh, Mason Andrew Gunyuzlu
  • Patent number: 11797446
    Abstract: A multi-purpose server cache directory in a computing environment is provided. One of a plurality of operation modes may be selectively enabled or disabled, by a cache directory, based on a computation phase, data type, and data pattern for caching data in a cache having a plurality of address tags in the cache directory greater than a number of data lines in a cache array.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: October 24, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bulent Abali, Alper Buyuktosunoglu, Brian Robert Prasky, Jang-Soo Lee, Deanna Postles Dunn Berger
  • Patent number: 11789661
    Abstract: A variety of applications can include apparatus and/or methods of operating the apparatus in which functionalities of a memory device of the apparatus can be extended by changing data flow behaviour associated with standard commands used between a host platform and the memory device. Such functionalities can include debug capabilities. In an embodiment, a standard write command and data using a standard protocol to write to a memory device is received in the memory device, where the data is setup information to enable an extension component in the memory device. An extension component includes instructions in the memory device to execute operations on components of the memory device. The memory device can execute operations of the enabled extension component in the memory device based on the setup information. Additional apparatus, systems, and methods are disclosed.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: October 17, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Angelo Della Monica, Eric Kwok Fung Yuen, Pasquale Cimmino, Massimo Iaculo, Francesco Falanga
  • Patent number: 11789838
    Abstract: Disclosed are methods, systems, and computer-readable medium for preventing system crashes, including loading a resource from a real resource location; receiving a registration request from a resource user; registering the resource user by updating a resource owner registration list to indicate the resource user registration; receiving a first unload request and determining that the resource user is registered by accessing the registration list; upon determining that the resource user is registered, denying the first unload request; generating a stop use request; transmitting the stop use request to the resource user; receiving a deregistration request from the resource user, based on the stop use request; deregistering the resource user by updating the resource owner registration list; receiving a second unload request after deregistering the resource user; and approving the second unload request to unload the resource.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: October 17, 2023
    Assignee: MicroStrategy Incorporated
    Inventors: Yi Luo, Kaijie Yang, Xianting Lu, Sigit Pambudi
  • Patent number: 11782716
    Abstract: Systems, methods, and apparatuses relating to circuitry to implement individually revocable capabilities for enforcing temporal memory safety are described. In one embodiment, a hardware processor comprises an execution unit to execute an instruction to request access to a block of memory through a pointer to the block of memory, and a memory controller circuit to allow access to the block of memory when an allocated object tag in the pointer is validated with an allocated object tag in an entry of a capability table in memory that is indexed by an index value in the pointer, wherein the memory controller circuit is to clear the allocated object tag in the capability table when a corresponding object is deallocated.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: October 10, 2023
    Assignee: Intel Corporation
    Inventors: Michael LeMay, Vedvyas Shanbhogue, Deepak Gupta, Ravi Sahita, David M. Durham, Willem Pinckaers, Enrico Perla
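    A toy sketch of the individually revocable capability check above: the "pointer" carries a table index and an allocated-object tag, access is allowed only while that tag still matches the indexed capability-table entry, and deallocation clears the entry; representing the pointer as a dictionary is purely illustrative.
      class CapabilityMemory:
          """Toy model: pointer tags validated against a capability table; free revokes."""
          def __init__(self, entries=16):
              self.capability_table = [None] * entries   # index -> allocated object tag
              self.next_tag = 1

          def allocate(self, index):
              tag = self.next_tag
              self.next_tag += 1
              self.capability_table[index] = tag
              return {"index": index, "tag": tag}        # the "pointer" handed to software

          def access(self, pointer):
              # Memory-controller check: the pointer's tag must match the entry it indexes.
              return self.capability_table[pointer["index"]] == pointer["tag"]

          def deallocate(self, pointer):
              self.capability_table[pointer["index"]] = None   # revokes all stale pointers

      mem = CapabilityMemory()
      p = mem.allocate(index=3)
      print(mem.access(p))      # True: live allocation
      mem.deallocate(p)
      print(mem.access(p))      # False: use-after-free attempt is caught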
  • Patent number: 11778062
    Abstract: A system architecture can be used to facilitate communication among applications that are native and/or non-native to an application environment. The system architecture can include a first application environment executed on a client-side computing device. The first application environment can execute software applications that are native thereto. The first application environment can further execute software applications that are native thereto, but which software applications themselves comprise second application environments of types different from the first application environment, and which software applications can therefore execute additional software applications that are non-native to the first application environment. The first application environment can further execute a computation engine that is configured to store and execute instructions received from the first software application, the second software application, or both.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: October 3, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Peter Wilczynski, Christopher Hammett, Lloyd Ho, Sharon Hao
  • Patent number: 11749347
    Abstract: In certain aspects, a memory device includes an array of memory cells in columns and rows, word lines respectively coupled to rows, bit lines respectively coupled to the columns, and a peripheral circuit coupled to the array of memory cells through the bit lines and the word lines and configured to program a select row based on a current data page. Each memory cell is configured to store a piece of N-bits data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuit includes page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multipurpose storage unit, and N−1 data storage units.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: September 5, 2023
    Assignee: YANGTZE MEMORY TECHNOLOGIES CO., LTD.
    Inventor: Weijun Wan
  • Patent number: 11734177
    Abstract: A memory interface for interfacing between a memory bus and a cache memory, comprising: a plurality of bus interfaces configured to transfer data between the memory bus and the cache memory; and a plurality of snoop processors configured to receive snoop requests from the memory bus; wherein each snoop processor is associated with a respective bus interface and each snoop processor is configured, on receiving a snoop request, to determine whether the snoop request relates to the bus interface associated with that snoop processor and to process the snoop request in dependence on that determination.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: August 22, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Martin John Robinson, Mark Landers
  • Patent number: 11726916
    Abstract: A method, computer program product, and computing system for defining a normal IO write mode for writing data to a storage system including: writing the data to a cache memory system of a first storage node, writing the data to a journal of the first storage node, sending a notification concerning the data to a second storage node, writing one or more metadata entries concerning the data to a journal of the second storage node, sending an acknowledgment signal to the host device, and writing the data to the storage array. A request may be received to enter a testing IO write mode. In response to receiving the request, the data may be written to the cache memory system. The writing of the data to the journal may be bypassed. The acknowledgment signal may be sent to the host device. The data may be written to the storage array.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: August 15, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Geng Han, Vladimir Shveidel, Uri Shabi
  • Patent number: 11726913
    Abstract: Provided are a computer program product, system, and method for using track status information on active or inactive status of track to determine whether to process a host request on a fast access channel. A host request to access a target track is received on a first channel to the host. A determination is made as to whether the target track has active or inactive status. The target track has active status when at least one process currently maintains a lock on the target track that prevents access and the target track has inactive status when no process maintains a lock on the target track that prevents access. Fail is returned to the host to cause the host to resend the host request on a second channel in response to the target track having the active status. The first channel has lower latency than the second channel.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: August 15, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Beth Ann Peterson, Matthew G. Borlick
  • Patent number: 11720498
    Abstract: An arithmetic processing device including: request issuing units configured to issue an access request to a storage device; and banks each of which includes: a first cache area including first entries; a second cache area including second entries; a control unit; and a determination unit that determines a cache hit or a cache miss for each of the banks, wherein the control unit performs: in response that the access requests simultaneously received from the request issuing units make the cache miss, storing the data, which is read from the storage device respectively by the access requests, in one of the first entries and one of the second entries; and in response that the access requests simultaneously received from the request issuing units make the cache hit in the first and second cache areas, outputting the data retained in the first and second entries, to each of issuers of the access requests.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: August 8, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Katsuhiro Yoda
  • Patent number: 11704251
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place the local variables in the proper memory bank.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: July 18, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Hui Geng
  • Patent number: 11687459
    Abstract: Example implementations relate to cache coherency protocols as applied to a memory block range. Exclusive ownership of a range of blocks of memory in a default shared state may be tracked by a directory. The directory may be associated with a first processor of a set of processors. When a request is received from a second processor of the set of processors to read one or more blocks of memory absent from the directory, one or more blocks may be transmitted in the default shared state to the second processor. The blocks absent from the directory may not be tracked in the directory.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: June 27, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Michael Malewicki, Thomas McGee, Michael S. Woodacre
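    A minimal sketch of the default-shared protocol above: the directory tracks only blocks that are exclusively owned, and any block absent from the directory is served in the default shared state without creating a tracking entry; the response tuples are an illustrative convention.
      class RangeDirectory:
          """Toy model: only exclusive ownership is tracked; absent blocks are shared."""
          def __init__(self):
              self.exclusive_owner = {}   # block address -> owning processor

          def acquire_exclusive(self, block, processor):
              self.exclusive_owner[block] = processor

          def handle_read(self, block, requester):
              owner = self.exclusive_owner.get(block)
              if owner is None:
                  # Block absent from the directory: hand it out shared, keep it untracked.
                  return ("shared", None)
              # Otherwise the owner must be involved (e.g., downgrade or forward the data).
              return ("owned_elsewhere", owner)

      directory = RangeDirectory()
      directory.acquire_exclusive(0x2000, "cpu1")
      print(directory.handle_read(0x1000, "cpu2"))   # ('shared', None): untracked default
      print(directory.handle_read(0x2000, "cpu2"))   # ('owned_elsewhere', 'cpu1')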
  • Patent number: 11687417
    Abstract: Due to the threat of virus attacks and ransomware, an apparatus and methods for protecting backup storage devices from malicious software virus attacks are explored. An independent backup storage system is connected to a primary storage server over an undiscoverable communications line. The backup storage system is a read-only backup storage system most of the time, buffering the backup storage system from a virus or attack on the primary storage server. The backup storage system changes from a read-only backup storage system to a read/write backup storage system only during a backup window of time where data is backed up to the backup storage system. A snapshot of the backup data is maintained in the backup storage system and can be made available at numerous points of time in the past if the data on the primary storage server becomes corrupted.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: June 27, 2023
    Assignee: Spectra Logic Corporation
    Inventors: David Lee Trachy, Joshua Daniel Carter
  • Patent number: 11683394
    Abstract: Systems and methods for isolating applications associated with multiple tenants within a computing platform receive a request from a client associated with a tenant for running an application on a computing platform. Hosts connected to the platform are associated with a network address and configured to run applications associated with multiple tenants. A host is identified based at least in part on the request. One or more broadcast domain(s) including the identified hosts are generated. The broadcast domains are isolated in the network at a data link layer. A unique tenant identification number corresponding to the tenant is assigned to the broadcast domains. In response to launching the application on the host: the unique tenant identification number is assigned to the launched application and is added to the network address of the host; and the network address of the host is sent to the client associated with the tenant.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: June 20, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Greg DeArment, Divyanshu Arora, Jason Hoch, Mark Elliot, Matthew Williamson, Robert Kruszewski, Steven Austin
  • Patent number: 11677624
    Abstract: An indication that a client system has connected to a server system that is associated with a network file system may be received. In response to the indication that the client system has connected to the server system, a number of client systems that are connected to the server system may be determined. The network file system may be configured in view of the determined number of client systems that are connected to the server system. Access to the network file system may be provided to the client system in response to configuring the network file system in view of the determined number of client systems that are connected to the server system.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: June 13, 2023
    Assignee: Red Hat, Inc.
    Inventors: Poornima Gurusiddaiah, Amar Tumballi Suryanarayan
  • Patent number: 11675703
    Abstract: A processing system includes an interconnect fabric coupleable to a local memory and at least one compute cluster coupled to the interconnect fabric. The compute cluster includes a processor core and a cache hierarchy. The cache hierarchy has a plurality of caches and a throttle controller configured to throttle a rate of memory requests issuable by the processor core based on at least one of an access latency metric and a prefetch accuracy metric. The access latency metric represents an average access latency for memory requests for the processor core and the prefetch accuracy metric represents an accuracy of a prefetcher of a cache of the cache hierarchy.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: June 13, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William L. Walker, William E. Jones
  • Patent number: 11669471
    Abstract: A method, computer program product, and computing system for receiving an input/output (IO) command for processing data within a storage system. An IO command-specific entry may be generated in a register based upon, at least in part, the IO command. A compare-and-swap operation may be performed on the IO command-specific entry to determine an IO command state associated with the IO command. The IO command may be processed based upon, at least in part, the IO command state associated with the IO command.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: June 6, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Eldad Zinger, Ran Anner, Amit Engel
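    A small sketch of using a compare-and-swap on a per-command register entry to learn (and possibly advance) the IO command state, as the abstract above describes; the state names and the lock standing in for a hardware/atomic CAS primitive are assumptions.
      import threading

      class IORegister:
          """Toy model: per-IO-command entries in a register, updated via compare-and-swap."""
          def __init__(self):
              self._entries = {}
              self._lock = threading.Lock()   # stands in for an atomic CAS primitive

          def create_entry(self, command_id, state="submitted"):
              self._entries[command_id] = state

          def compare_and_swap(self, command_id, expected, new):
              # Atomically: if the entry equals `expected`, replace it with `new`.
              # Either way, return the observed value, which reveals the IO command state.
              with self._lock:
                  current = self._entries[command_id]
                  if current == expected:
                      self._entries[command_id] = new
                  return current

      registers = IORegister()
      registers.create_entry("io-17")
      print(registers.compare_and_swap("io-17", "submitted", "in_flight"))  # 'submitted': we won
      print(registers.compare_and_swap("io-17", "submitted", "in_flight"))  # 'in_flight': already taken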
  • Patent number: 11663207
    Abstract: Systems, devices, and techniques are disclosed for translation of tenant identifiers. A record may be received. A value of a tenant identifier for the record may be determined from a key for the record or a scan descriptor. The value of the tenant identifier in the key for the record may be replaced with a new value for the tenant identifier. A bitmap stored in a record header of the record may be used to identify columns of the record that stored an encoded value of the tenant identifier. An encoded new value of the tenant identifier may be stored in columns identified by the bitmap stored in the record header that include an attribute indicating that tenant identifier translation is enabled.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: May 30, 2023
    Assignee: Salesforce, Inc.
    Inventor: Thomas Fanghaenel
  • Patent number: 11656995
    Abstract: A method comprising receiving a memory access request comprising an address of data to be accessed and determining an access granularity of the data to be accessed based on the address of the data to be accessed. The method further includes, in response to determining that the data to be accessed has a first access granularity, generating first cache line metadata associated with the first access granularity and in response to determining that the data to be accessed has a second access granularity, generating second cache line metadata associated with the second access granularity. The method further includes storing the first cache line metadata and the second cache line metadata in a single cache memory component.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Dhawal Bavishi, Robert M. Walker
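    A toy sketch of storing metadata for two access granularities in one cache component, as described above; deriving the granularity from an address-range rule, and the specific 64-byte and 4096-byte granularities, are assumptions made for this example.
      class GranularityAwareCache:
          """Toy model: granularity-tagged cache line metadata in a single component."""
          SMALL_REGION_LIMIT = 0x4000_0000   # assumed boundary between the two granularities
          FINE, COARSE = 64, 4096            # assumed first/second access granularities (bytes)

          def __init__(self):
              self.metadata = {}             # single cache memory component for both kinds

          def access_granularity(self, address):
              return self.FINE if address < self.SMALL_REGION_LIMIT else self.COARSE

          def handle_request(self, address):
              granularity = self.access_granularity(address)
              line = address - (address % granularity)
              # Generate cache line metadata tagged with the granularity it belongs to.
              self.metadata[(line, granularity)] = {"valid": True, "granularity": granularity}
              return self.metadata[(line, granularity)]

      cache = GranularityAwareCache()
      print(cache.handle_request(0x1234))         # 64-byte-granularity metadata
      print(cache.handle_request(0x8000_1234))    # 4096-byte-granularity metadata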
  • Patent number: 11645390
    Abstract: A next generation antivirus (NGAV) security solution in a virtualized computing environment includes a security sensor at a virtual machine that runs on a host and a security engine remote from the host. The integrity of the NGAV security solution is increased, by providing a verification as to whether a verdict issued by the security engine has been successfully enforced by the security sensor to prevent execution of malicious code at the virtual machine.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: May 9, 2023
    Assignee: VMWARE, INC.
    Inventors: Shirish Vijayvargiya, Vasantha Kumar Dhanasekar, Sachin Shinde, Rayanagouda Bheemanagouda Patil