Based On Data Size Patents (Class 711/171)
  • Patent number: 12260918
    Abstract: A memory device includes: a memory cell array including a security region configured to store security data; and a security management circuit configured to store a guard key and, responsive to receiving a data operation command for the security region, limit a data operation for the security region by comparing the guard key with an input password that is received by the memory device.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 25, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yoo-jung Lee, Jang-seok Choi, Duk-sung Kim, Hyun-joong Kim
  • Patent number: 12229595
    Abstract: A method for allocating on-chip memory of a neural processing unit is performed by one or more processors, and includes deallocating an allocated chunk in an on-chip memory area for which use of the memory is finished and converting it into a cached chunk, receiving an on-chip memory allocation request for specific data, determining whether there is a cached chunk of one or more cached chunks that is allocable for the specific data, based on a comparison between a size of the specific data and the size of the one or more cached chunks, and, based on a result of determining whether there is the cached chunk that is allocable for the specific data, allocating the specific data to a specific cached chunk of the one or more cached chunks or allocating the specific data to at least a portion of a free chunk.
    Type: Grant
    Filed: May 23, 2024
    Date of Patent: February 18, 2025
    Assignee: REBELLIONS INC.
    Inventor: Minhoo Kang
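A minimal sketch of the cached-chunk scheme the abstract above describes, assuming a best-fit reuse policy; the class name, method names, and data layout are hypothetical and not taken from the patent.

```python
class CachedChunkAllocator:
    """Toy model of cached-chunk reuse: freed on-chip chunks are kept as
    cached chunks and handed back out when a later request fits in one."""

    def __init__(self, total_size):
        self.total_size = total_size
        self.free_offset = 0      # start of the never-allocated free chunk
        self.cached = []          # (offset, size) chunks available for reuse

    def deallocate(self, offset, size):
        # Deallocation converts the chunk into a cached chunk rather than
        # returning it to the free chunk immediately.
        self.cached.append((offset, size))

    def allocate(self, data_size):
        # Prefer the smallest cached chunk large enough for the data.
        fitting = [c for c in self.cached if c[1] >= data_size]
        if fitting:
            chunk = min(fitting, key=lambda c: c[1])
            self.cached.remove(chunk)
            return chunk[0]
        # Otherwise allocate the data to a portion of the free chunk.
        if self.free_offset + data_size > self.total_size:
            raise MemoryError("on-chip memory exhausted")
        offset = self.free_offset
        self.free_offset += data_size
        return offset
```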
  • Patent number: 12229420
    Abstract: A compression-expansion control apparatus has a reconfiguration portion capable of configuring one or more compression circuits which compress data in plain text and/or one or more expansion circuits which expand the compressed data on a programmable logical circuit component, a waiting-time observing portion which observes processing waiting-time from when compression processing was requested till when the compression processing is started and processing waiting-time from when expansion processing was requested till when the expansion processing is started, a calculating portion which determines the number or a ratio of the compression circuits and the expansion circuits in the reconfiguration portion on the basis of the processing waiting-time of the compression processing and the processing waiting-time of the expansion processing, and a switching portion which executes reconfiguration of the compression circuit and/or the expansion circuit in the reconfiguration portion on the basis of the number or the ratio.
    Type: Grant
    Filed: March 9, 2023
    Date of Patent: February 18, 2025
    Assignee: HITACHI VANTARA, LTD.
    Inventors: Tomoyuki Kamazuka, Yuusaku Kiyota
  • Patent number: 12229009
    Abstract: Techniques for performing processing to recover metadata may include: creating shadow top structures, and performing processing that uses the shadow top structures to recover information for an index node associated with an object of a file system having a file system logical address space. One of the shadow top structures is created for each metadata (MD) top node of a MD mapping structure used to determine storage locations of data stored at corresponding logical addresses in the file system logical address space. Each MD top node is used in determining storage locations for a specified subrange of logical addresses of the file system logical address space. Each shadow top structure corresponding to a MD top node describes each file system object mapped to a logical address included in the specified subrange of logical addresses of the file system address space associated with the corresponding MD top node.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: February 18, 2025
    Assignee: EMC IP Holding Company LLC
    Inventors: Rohit K. Chawla, William C. Davenport
  • Patent number: 12192458
    Abstract: A method for decoding a picture from a bitstream, the picture comprising a number of units and being partitioned into a number of spatial segments by a partition structure. The method includes decoding one or more code words in the bitstream; determining that the partition structure is uniform based on the one or more code words; determining the number of spatial segments based on the one or more code words; determining a segment unit size; and deriving the sizes and/or locations for spatial segments in the picture from the one or more code words. Deriving the sizes and/or locations for spatial segments in the picture comprises a first loop over the number of spatial segments in a first dimension or direction. A number of remaining segment units in the first dimension or direction to be segmented is calculated inside the first loop.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: January 7, 2025
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Mitra Damghanian, Martin Pettersson, Rickard Sjöberg
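The uniform derivation described in the abstract above can be sketched as the loop below, which recomputes the remaining segment units inside the loop; the ceiling-based even split is an assumption about how the uniform partition is realized, not wording from the claims.

```python
def uniform_segment_sizes(picture_units, num_segments):
    """Derive per-segment sizes (in segment units) along one dimension for a
    uniform partition, updating the remaining units inside the loop."""
    sizes = []
    remaining_units = picture_units
    for i in range(num_segments):
        remaining_segments = num_segments - i
        size = -(-remaining_units // remaining_segments)  # ceiling division
        sizes.append(size)
        remaining_units -= size
    return sizes

# Example: a picture 22 segment units wide split into 4 columns -> [6, 6, 5, 5]
```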
  • Patent number: 12182175
    Abstract: An information handling system acquires data chunks for a duration of at least one time slice, determines an overwrite frequency for the duration of the time slice of each of the data chunks, clusters the data chunks according to the overwrite frequency, and determines an overwrite frequency label for each cluster of the data chunks. The system may also determine a read frequency for the duration of the time slice of each of the data chunks, cluster the data chunks based on the read frequency, and determine a read frequency label for each of the cluster of the data chunks. The system may also construct a sorted tree based on the overwrite frequency label, the read frequency label, and a virtual logical block address of each of the data chunks.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: December 31, 2024
    Assignee: Dell Products L.P.
    Inventors: Weilan Pu, Jie Wang, Jian Kang
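A rough sketch of the labeling-and-sorting flow in the abstract above; it substitutes fixed-threshold bucketing for the patent's clustering step and a sorted list for the sorted tree, and all names and thresholds are illustrative assumptions.

```python
from collections import Counter

def frequency_label(count, thresholds=(1, 10)):
    """Coarse label for a per-time-slice access count; the three-way split
    and the threshold values are illustrative only."""
    low, high = thresholds
    if count <= low:
        return "cold"
    if count <= high:
        return "warm"
    return "hot"

def build_sorted_index(overwritten_vlbas, read_vlbas):
    """overwritten_vlbas / read_vlbas: virtual LBAs touched during one time
    slice. Returns chunk records ordered by (overwrite label, read label,
    virtual LBA) -- a flat stand-in for the sorted tree in the abstract."""
    ow = Counter(overwritten_vlbas)
    rd = Counter(read_vlbas)
    rank = {"hot": 0, "warm": 1, "cold": 2}
    records = [
        {"vlba": vlba,
         "overwrite_label": frequency_label(ow[vlba]),
         "read_label": frequency_label(rd[vlba])}
        for vlba in sorted(set(ow) | set(rd))
    ]
    records.sort(key=lambda r: (rank[r["overwrite_label"]],
                                rank[r["read_label"]],
                                r["vlba"]))
    return records
```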
  • Patent number: 12153529
    Abstract: A memory system includes a memory resource and a smart controller. The memory resource includes semiconductor memory devices, the semiconductor memory devices are divided into a first semiconductor memory and a second semiconductor memory for each of a plurality of channels, and the first semiconductor memory and the second semiconductor memory belong to different ranks. The smart controller, connected to the semiconductor memory devices through the channels, controls the semiconductor memory devices by communicating with a plurality of hosts through a compute express link (CXL) interface, and each of the plurality of hosts drives at least one virtual machine. The smart controller controls a power mode of the memory resource by managing an idle memory region from among a plurality of memory regions of the plurality of semiconductor memory devices at a rank level without intervention of the plurality of hosts, the plurality of memory regions storing data.
    Type: Grant
    Filed: February 16, 2023
    Date of Patent: November 26, 2024
    Assignees: SAMSUNG ELECTRONICS CO., LTD., INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: Kiseok Oh, Jaewook Lee, Wenjing Jin, Jongsung Lee, Juyun Jung
  • Patent number: 12135892
    Abstract: A processor of each node of a storage system executes rebalancing to transfer a volume between pool volumes for the node in such a way as to equalize throughput of a data input/output process and/or data capacities between the pool volumes making up the pool. When a plurality of external volumes are transferred to a node in the storage system, the storage management apparatus determines a transfer destination node for the volumes and order of transferring the volumes and executes volume transfer in such a way as to equalize throughput of the data input/output process and/or data capacities between the pool volumes making up the pool after completion of or during the volume transfer.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: November 5, 2024
    Assignee: Hitachi, Ltd.
    Inventors: Takanobu Suzuki, Tsukasa Shibayama, Akira Deguchi
  • Patent number: 12131020
    Abstract: Memory devices are disclosed. A memory device may include dynamic cache, static cache, and a memory controller. The memory controller may be configured to disable the static cache responsive to a number of program/erase (PE) cycles consumed by the static cache being greater than an endurance of the static cache. The memory controller may also be configured to disable the dynamic cache responsive to a number of PE cycles consumed by the dynamic cache being greater than an endurance of the dynamic cache. Associated methods and systems are also disclosed.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: October 29, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Kishore K. Muchherla, Ashutosh Malshe, Sampath K. Ratnam, Peter Feeley, Michael G. Miller, Christopher S. Hale, Renato C. Padilla
  • Patent number: 12130750
    Abstract: Computer systems often employ virtual address translation hierarchies in which virtual memory addresses are mapped to physical memory. Use of the virtual address translation hierarchy speeds up the virtual address translation when the required mapping is stored in one of the higher levels of the hierarchy. To reduce a number of misses occurring in the virtual address translation hierarchy, huge memory pages may be selectively employed, which map larger continuous regions of virtual memory to continuous regions of physical memory, thereby increasing the coverage of each entry in the virtual address translation hierarchy. The present disclosure provides hardware support for optimizing this huge memory page selection.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: October 29, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Aninda Manocha, Zi Yan, David Nellans
  • Patent number: 12131506
    Abstract: Aspects of the disclosure include methods, apparatuses, and non-transitory computer-readable storage mediums for point cloud compression. An apparatus includes processing circuitry that encodes information associated with a current point of a plurality of points of a point cloud. The plurality of points is partitioned into multiple bounding boxes. The processing circuitry determines whether a first size of a hash table is greater than or equal to a predetermined maximum size of the hash table. The processing circuitry removes information associated with non-boundary points in the multiple bounding boxes from the hash table based on the first size of the hash table being greater than or equal to the predetermined maximum size of the hash table. The processing circuitry stores the encoded information associated with the current point into the hash table.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: October 29, 2024
    Assignee: Tencent America LLC
    Inventors: Xiang Zhang, Wen Gao, Shan Liu
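A small sketch of the bounded hash table behavior described above, assuming each entry records whether its point is a boundary point; the key/value layout is hypothetical.

```python
def store_encoded_point(hash_table, key, encoded_info, is_boundary, max_size):
    """Insert an encoded point while keeping the hash table bounded: when the
    table has reached max_size, entries for non-boundary points are removed
    first, as the abstract describes, before the new entry is stored."""
    if len(hash_table) >= max_size:
        # Drop information associated with non-boundary points.
        for k in [k for k, v in hash_table.items() if not v["boundary"]]:
            del hash_table[k]
    hash_table[key] = {"info": encoded_info, "boundary": is_boundary}
    return hash_table
```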
  • Patent number: 12118245
    Abstract: The techniques disclosed herein enable systems to efficiently interface with zoned namespace (ZNS) storage devices through a specialized management entity. To achieve this, the management entity receives write requests from a file system containing data intended for storage at the ZNS device. In response, the management entity selects a zone from the ZNS device to write the file data to. Accordingly, the file data is written by appending the file data to the zone at a location indicated by a write pointer. When the write operation is completed, the offset of the file data within the zone is observed and recorded by the file system in file metadata. In contrast to typical systems which allocate locations at the storage device prior to writing, appending file data and then recording the location enables improved efficiency in file system operations; namely, write operations can be issued to the ZNS device non-serially.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: October 15, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Rajsekhar Das, Neeraj Kumar Singh
  • Patent number: 12105628
    Abstract: Disclosed herein are an apparatus and method for managing cache memory. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors. The at least one program reads an s1-tag and an s2-tag of cache memory upon receiving an access request address for reading data in response to a request to access the cache memory, checks whether the access request address matches the value of the s1-tag and the value of the s2-tag, and reads the data from data memory when the access request address matches both the value of the s1-tag and the value of the s2-tag.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: October 1, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Hyun-Mi Kim
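A sketch of the two-part tag check described above, under the assumption that the s1-tag and s2-tag are the upper and lower portions of a conventional cache tag; the bit-field widths are placeholders.

```python
def cache_read(line, address, offset_bits=6, index_bits=8, s2_bits=10):
    """Read from data memory only when the request address matches both the
    s1-tag and the s2-tag of the selected line. The split of the tag into an
    upper (s1) and lower (s2) part and the field widths are assumptions."""
    tag = address >> (offset_bits + index_bits)
    s1_part = tag >> s2_bits                 # compared against the s1-tag
    s2_part = tag & ((1 << s2_bits) - 1)     # compared against the s2-tag
    if s1_part == line["s1_tag"] and s2_part == line["s2_tag"]:
        return line["data"]                  # hit: both tag parts match
    return None                              # miss: access the next level
```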
  • Patent number: 12099725
    Abstract: A method includes determining a logical saturation of a memory device in a memory sub-system and adjusting a code rate of the memory device based on the logical saturation, wherein the code rate represents a ratio of user data to a combination of the user data and error correction data.
    Type: Grant
    Filed: August 17, 2022
    Date of Patent: September 24, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Kishore Kumar Muchherla, Mustafa N. Kaynak, Jonathan S. Parry, Sivagnanam Parthasarathy, Akira Goda
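As a worked illustration of the abstract above, the sketch below maps logical saturation to a code rate; the breakpoints and the direction of the adjustment (spending spare space on extra parity while the drive is lightly filled) are assumptions, not taken from the patent.

```python
def select_code_rate(valid_logical_bytes, advertised_capacity_bytes):
    """Map logical saturation (how much of the advertised capacity holds valid
    user data) to a code rate, i.e. user data / (user data + ECC data)."""
    saturation = valid_logical_bytes / advertised_capacity_bytes
    if saturation < 0.50:
        return 0.85   # plenty of spare space: stronger ECC, more parity
    if saturation < 0.80:
        return 0.90
    return 0.94       # nearly full: give the space back to user data
```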
  • Patent number: 12099745
    Abstract: A data storage device includes a memory device and a memory controller. The memory device has a corresponding total storage capacity and includes multiple memory blocks. The total storage capacity is set to a maximum storage capacity provided by the memory blocks by default. The memory blocks include one or more predetermined memory blocks configured as a buffer to receive data from a host device. The memory controller is coupled to the memory device to access the memory device. In response to setting of a maximum amount of write data, the memory controller determines a value of the total storage capacity according to the maximum amount of write data, and determines a number of said one or more predetermined memory blocks according to the value of the total storage capacity and the maximum storage capacity.
    Type: Grant
    Filed: July 4, 2023
    Date of Patent: September 24, 2024
    Assignee: Silicon Motion, Inc.
    Inventor: Po-Wei Wu
  • Patent number: 12086430
    Abstract: This application discloses a mirrored memory configuration method and apparatus, and a computer storage medium, and belongs to the field of information processing technologies. The method includes the following: After a computer apparatus is started, if the computer apparatus is currently in an OS state and obtains a mirrored memory establishment request, the computer apparatus may switch from the OS state to a BIOS state through system interruption. Then the computer apparatus configures a mirroring relationship in the BIOS state, and switches to the OS state again after configuring the mirroring relationship, to reconfigure a mirrored memory.
    Type: Grant
    Filed: July 11, 2023
    Date of Patent: September 10, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Gang Liu, Fei Zhang
  • Patent number: 12072796
    Abstract: According to one embodiment, a computing system transmits to a storage device a write request designating a first logical address for identifying first data to be written and a length of the first data. The computing system receives from the storage device the first logical address and a first physical address indicative of both of a first block selected from blocks except a defective block by the storage device, and a first physical storage location in the first block to which the first data is written. The computing system updates a first table which manages mapping between logical addresses and physical addresses of the storage device and maps the first physical address to the first logical address.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: August 27, 2024
    Assignee: Kioxia Corporation
    Inventor: Shinichi Kanno
  • Patent number: 12061608
    Abstract: Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising processing a plurality of events using a stream processor and executing a deduplication process on the plurality of events using the stream processor. The plurality of events is outputted to a streaming queue and a close of books (COB) of a data transport is detected. Then, an audit process is initiated in response to detecting the COB signal, the audit process comprising comparing a set of raw events to a set of events in the streaming queue to identify a set of missing events, and replaying a set of missing events through the stream processor.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 13, 2024
    Assignee: YAHOO ASSETS LLC
    Inventors: Michael Pippin, David Willcox, Allie K. Watfa, George Aleksandrovich
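A minimal sketch of the audit step described above, assuming events carry an 'id' field that can be compared between the raw event set and the streaming queue.

```python
def audit_and_replay(raw_events, streamed_events, stream_processor):
    """Run after the close-of-books (COB) signal: compare the raw events with
    what reached the streaming queue and replay whatever is missing through
    the stream processor."""
    streamed_ids = {event["id"] for event in streamed_events}
    missing = [event for event in raw_events if event["id"] not in streamed_ids]
    for event in missing:
        stream_processor(event)   # replay the missing event
    return missing
```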
  • Patent number: 12039170
    Abstract: A hardware revocation engine for invalidating a pointer, that refers to a deallocated object, from memory in a memory constrained system. The hardware revocation engine has a revocation pipeline coupled to a pipeline of a main processor of the memory constrained system. The revocation pipeline shares access to memory with the main pipeline, the revocation pipeline comprising at least a first stage and a subsequent second stage. In a first cycle of the revocation pipeline, the first stage of the revocation pipeline loads a first pointer-sized value from the memory. In a second cycle: the second stage checks whether the first loaded pointer-sized value is a pointer referring to deallocated memory. In a third cycle: in response to the outcome of the check indicating that the first loaded pointer-sized value is a pointer referring to deallocated memory, the first stage invalidates the first pointer-sized value.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: July 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Thomas Chisnall, Hongyan Xia, Nathaniel Wesley Filardo, Robert McNeill Norton-Wright
  • Patent number: 12041030
    Abstract: A distributed memory data repository of connected data centres. The network load-balances by routing requests to different data centres for processing. The solution design provides a blueprint to implement a distributed memory data repository based defense system across multiple nodes with dynamic fail-over capabilities. The defense system runs independently on a single node, exclusively leveraging memory for data storage and implementing a communication channel to interact with other nodes.
    Type: Grant
    Filed: April 20, 2022
    Date of Patent: July 16, 2024
    Assignee: ROYAL BANK OF CANADA
    Inventor: Stéphane Harvey
  • Patent number: 12026552
    Abstract: A method for allocating on-chip memory of a neural processing unit is performed by one or more processors, and includes deallocating an allocated chunk in an on-chip memory area for which use of the memory is finished and converting it into a cached chunk, receiving an on-chip memory allocation request for specific data, determining whether there is a cached chunk of one or more cached chunks that is allocable for the specific data, based on a comparison between a size of the specific data and the size of the one or more cached chunks, and, based on a result of determining whether there is the cached chunk that is allocable for the specific data, allocating the specific data to a specific cached chunk of the one or more cached chunks or allocating the specific data to at least a portion of a free chunk.
    Type: Grant
    Filed: December 19, 2023
    Date of Patent: July 2, 2024
    Assignee: REBELLIONS INC.
    Inventor: Minhoo Kang
  • Patent number: 11989210
    Abstract: A device may identify unique segments within data objects, of an object corpus stored in a data structure, as elements, and may generate an embedding space based on unique elements and mappings of the data objects to embeddings. The device may estimate semantic proximities among the data objects based on the mappings, and may build a semantic cohesion network among the data objects based on the semantic proximities. The device may identify semantically cohesive data clusters in the semantic cohesion network, and may sort the data objects in the semantically cohesive data clusters. The device may determine, from the semantically cohesive and sorted data clusters, a home data cluster for a new data object, and may store bookkeeping details of the new data object in the data structure based on the new data object being semantically similar to the data object in the home data cluster.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: May 21, 2024
    Assignee: Accenture Global Solutions Limited
    Inventors: Janardan Misra, Naveen Gordhan Balani
  • Patent number: 11966584
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage device. The method includes: determining, based on the frequency of data access to the storage device, whether a data access component of the storage device will move; determining, if it is determined that the data access component will move, a first storage unit in the storage device based on a storage location of previously accessed data in the storage device, wherein the data access component is located at a first spatial location corresponding to the first storage unit; and sending a read request for data in a second storage unit in the storage device that is adjacent to the first storage unit, so as to cause the data access component to move from the first spatial location to a second spatial location corresponding to the second storage unit. The embodiments of the present disclosure can reduce the latency of data access to the storage device.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: April 23, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Bing Liu, Zheng Li
  • Patent number: 11922016
    Abstract: Disclosed is a compressed memory management method for a computer system having one or more processors (P1-PN), compressible main memory, secondary memory and an operating system. The compressible main memory has a compressed memory space comprising an active part directly accessible to said one or more processors (P1-PN), as well as an inactive part not directly accessible to said one or more processors (P1-PN) in the form of memory freed up by memory compression.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: March 5, 2024
    Assignee: ZEROPOINT TECHNOLOGIES AB
    Inventors: Chloe Alverti, Angelos Arelakis, Ioannis Nikolakopoulos, Per Stenström, Pedro Petersen Moura Trancoso
  • Patent number: 11916781
    Abstract: A network interface controller (NIC) capable of efficiently utilizing an output buffer is provided. The NIC can be equipped with an output buffer, a host interface, an injector logic block, and an allocation logic block. The output buffer can include a plurality of cells, each of which can be a unit of storage in the output buffer. If the host interface receives a command from a host device, the injector logic block can generate a packet based on the command. The allocation logic block can then determine whether the packet is a multi-cell packet. If the packet is a multi-cell packet, the allocation logic block can determine a virtual index for the packet. The allocation logic block can then store, in an entry in a data structure, the virtual index, and a set of physical indices of cells storing the packet.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: February 27, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Partha Pratim Kundu, David Charles Hewson
  • Patent number: 11907588
    Abstract: Aspects of the invention include identifying a first subsystem and a second subsystem of a plurality of subsystems respectively storing a first compressed data and a second compressed data, wherein the first compressed data and the second compressed data are fragments of a requested data. A compression method used to compress the first compressed data and second compressed data is identified. A first accelerator of the first subsystem and a second accelerator of the second subsystem are identified. The first compressed data from a first local memory of the first subsystem is offloaded to the first accelerator, and the second compressed data from a second local memory of the second subsystem is offloaded to the second accelerator, wherein offloading comprises providing a decompression method for the first compressed data and the second compressed data.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Vishnupriya R, Mehulkumar J. Patel, Manish Mukul
  • Patent number: 11874747
    Abstract: A method and system for stream optimized backups to a cloud object store. When considering data protection, many prominent applications engage in backup operations by streaming their respective data to the cloud; however, the stream(s) is/are often ill-optimized (e.g., non-uniform data rates, non-uniform block sizes, different backup types, non-uniform data types or formats, etc.) to be written into cloud storage. The disclosed method and system, accordingly, propose a dynamic framework through which any arbitrary backup stream may be optimized according to the profile of any specific cloud-based object data store.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: January 16, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Sunil Yadav, Amarendra Behera, Shelesh Chopra
  • Patent number: 11868369
    Abstract: Example resource management systems and methods are described. In one implementation, a resource manager is configured to manage data processing tasks associated with multiple data elements. An execution platform is coupled to the resource manager and includes multiple execution nodes configured to store data retrieved from multiple remote storage devices. Each execution node includes a cache and a processor, where the cache and processor are independent of the remote storage devices. A metadata manager is configured to access metadata associated with at least a portion of the multiple data elements.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: January 9, 2024
    Assignee: Snowflake Inc.
    Inventors: Thierry Cruanes, Benoit Dageville, Marcin Zukowski
  • Patent number: 11847507
    Abstract: Two or more semaphores can be used per queue for synchronization of direct memory access (DMA) transfers between a DMA engine and various computational engines by alternating the semaphores across sequential sets of consecutive DMA transfers in the queue. The DMA engine can increment a first semaphore after performing each DMA transfer of a first set of consecutive DMA transfers and a second semaphore after performing each DMA transfer of a second set of consecutive DMA transfers that is after the first set of consecutive DMA transfers in the queue. Each semaphore can be reset when all the computational engines that are dependent on the respective set of consecutive DMA transfers are done waiting on the given semaphore before performing respective operations. After reset, the first semaphore or the second semaphore can be reused for the next set of consecutive DMA transfers in the queue.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: December 19, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Drazen Borkovic
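A toy model of the alternating-semaphore idea described above, using a shared condition variable in place of hardware semaphores; the class and method names are invented for this sketch.

```python
import itertools
import threading

class AlternatingSemaphores:
    """Two counting semaphores used alternately across consecutive sets of DMA
    transfers in a queue, so one set can still be waited on while the other
    counter is incremented for the following set."""

    def __init__(self):
        self.counts = [0, 0]
        self.cond = threading.Condition()
        self._selector = itertools.cycle([0, 1])

    def next_set(self):
        # Pick which semaphore the upcoming set of consecutive transfers uses.
        return next(self._selector)

    def transfer_done(self, sem_idx):
        # Called by the (simulated) DMA engine after each transfer in the set.
        with self.cond:
            self.counts[sem_idx] += 1
            self.cond.notify_all()

    def wait_for(self, sem_idx, expected_transfers):
        # Called by a compute engine that depends on the whole set.
        with self.cond:
            self.cond.wait_for(lambda: self.counts[sem_idx] >= expected_transfers)

    def reset(self, sem_idx):
        # Once every dependent engine has finished waiting, the semaphore is
        # reset and can be reused for a later set of transfers.
        with self.cond:
            self.counts[sem_idx] = 0
```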
  • Patent number: 11810221
    Abstract: A device that can at least store a captured image and is attached/detached to/from an image capturing apparatus which includes a mounting part to/from which the device can be attached/detached is provided. The device has functions of obtaining image data related to an image captured by the image capturing apparatus, executing analysis processing on the image data, and storing the image data and a result of the analysis processing on the image data. The device executes control not to store a first result of the analysis processing in a case in which the first result is the same as a stored second result of previously executed analysis processing, and to store the first result in a case in which the first result is different from the second result.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: November 7, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tomoya Honjo
  • Patent number: 11803486
    Abstract: A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing write-memory commands that are not cached in the first sub-cache, the second sub-cache including privilege bits configured to store an indication that a corresponding cache line of the second sub-cache is associated with a level of privilege, and wherein the second sub-cache is further configured to receive a first write memory command for a memory address associated with a first level of privilege, store, in the second sub-cache, first data associated with the first write memory command and the level of privilege associated with the cache line, receive a second write memory command for the cache line, the second write memory command associated with a second level of privilege, merge the first level of privilege with the second level of privilege, and output the merged privilege level with the cache line.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: October 31, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
  • Patent number: 11797234
    Abstract: A system includes a cluster of nodes including a storage domain, a memory, and a processor. The processor is configured to receive a request to determine an amount of allocated blocks associated with a virtual disk comprising a first volume. Each volume that includes metadata associated with allocated blocks is designated into a first set. Each volume within the one or more layers that lacks metadata associated with allocated blocks and includes an allocation table is designated into a second set. Each volume within the one or more layers that is omitted from the first set and second set is designated into a third set. The amount of allocated blocks within the first volume is determined based on inspecting the metadata of each volume of the first set, inspecting each allocation table of each volume of the second set, and inspecting each block of each volume in the third set.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 24, 2023
    Assignee: Red Hat, Inc.
    Inventors: Arik Hadas, Daniel Erez
  • Patent number: 11762578
    Abstract: A computer-implemented method that includes managing a buffer pool of pages as a ring sub-chain, comprising pages linked in a ring, and a linear sub-chain, comprising pages linked in a line from a header, and moving a page between the linear sub-chain and the ring sub-chain based on a moving schema evaluating a chain management characteristic.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: September 19, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shuo Li, Xiaobo Wang, Sheng Yan Sun, Hong Mei Zhang
  • Patent number: 11755251
    Abstract: A system includes a virtual computational storage emulation module configured to provide a virtual computational storage device. The system further includes a storage element, where the virtual computational storage emulation module is configured to store data associated with the virtual computational storage device at the storage element. The system further includes a compute element. The virtual computational storage emulation module is configured to send a compute request associated with the virtual computational storage device to the compute element.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: September 12, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gayathiri Venkataraman, Vishwanath Maram, Matthew Shaun Bryson
  • Patent number: 11734440
    Abstract: A memory system component comprises transaction handling circuitry to receive memory access transactions. Each memory access transaction specifies at least: an issuing domain identifier which indicates an issuing security domain specified by an issuing master device for the memory access transaction, where the issuing security domain is one of a plurality of security domains; a target address; and a security check indication which indicates whether it is already known that the memory access transaction would pass a security checking procedure. The security checking procedure determines whether the memory access transaction indicating said issuing security domain is authorised to access the target address, based on control data indicative of which of the plurality of security domains are allowed to access the target address. The memory system component comprises control circuitry to determine, on the basis of the security check indication, whether the security checking procedure still needs to be performed.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: August 22, 2023
    Assignee: Arm Limited
    Inventor: Andrew Brookfield Swaine
  • Patent number: 11727022
    Abstract: Embodiments are disclosed for a method. The method includes receiving a plurality of local deltas for a query execution against a corresponding plurality of data sources hosted by a corresponding plurality of distributed nodes of a dynamic distributed network. The method also includes generating a combined delta by combining the local deltas. Additionally, the method includes generating a determined delta result by performing additional processing on the combined delta. Further, the method includes providing the determined delta result for one of the distributed nodes.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Robert Neugebauer, Ian Richard Finlay, Glenn Patrick Steffler, Mohammad Wasif Khan
  • Patent number: 11704037
    Abstract: A plurality of different views of data associated with a storage domain stored on a deduplicated storage are traversed to determine data chunks belonging to each view of the plurality of different views of data associated with the storage domain. A request for a metric associated with disk space utilization of a group of one or more selected views of data included in the plurality of different views of data associated with the storage domain that are stored on the deduplicated storage is received. Data chunks belonging to the one or more selected views of data associated with the storage domain of the group but not other views of the plurality of different views of data associated with the storage domain that are stored on the deduplicated storage are identified. An incremental disk space utilization of the group is determined, including by determining a total size of the identified data chunks.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 18, 2023
    Assignee: Cohesity, Inc.
    Inventors: Anirvan Duttagupta, Shreyas Talele, Anubhav Gupta
  • Patent number: 11698859
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to receive a first request to allocate a direct swap file associated with an application stored in a system memory on a persistent storage media, and map a linear and continuous space of the persistent storage media to the direct swap file associated with the application in response to the first request. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: July 11, 2023
    Assignee: SK Hynix NAND Product Solutions Corp.
    Inventor: Mariusz Barczak
  • Patent number: 11681456
    Abstract: A method of reducing write amplification in an append-only memory store of data records, by which the store is subdivided into streams, each of which for storing records having an update frequency within a variable range of update frequencies. By defining an update frequency that does not rely on time, statistical methods can be used to select the streams in which data records can be written. The range of update frequencies of each stream can be fixed or variable and based on the stored records. The memory allocated to each stream can be determined based on numerically solving an optimization problem that determines the write amplification resulting from different memory allocations in the streams.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: June 20, 2023
    Assignee: HUAWEI CLOUD COMPUTING TECHNOLOGIES CO., LTD.
    Inventors: Per-Ake Larson, Alexandre Depoutovitch
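A sketch of the stream-selection step described above, assuming fixed ascending frequency boundaries and a rewrite count as the time-free measure of update frequency; the patent also covers variable ranges chosen by numerically solving an optimization problem over write amplification.

```python
import bisect

def select_stream(update_count, boundaries):
    """Return the index of the stream whose update-frequency range covers the
    record. 'update_count' is how many times the record's key has been
    rewritten so far; 'boundaries' are the ascending range limits that
    separate the streams (both are illustrative assumptions)."""
    return bisect.bisect_right(boundaries, update_count)

# Example with boundaries [1, 4, 16]:
#   count 0 -> stream 0 (coldest), count 3 -> stream 1, count 50 -> stream 3
```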
  • Patent number: 11681471
    Abstract: The described technology is generally directed towards a streaming data storage system that can switch between a tiered mode of operation in which events are written to Tier-1 storage and later migrated to Tier-2 storage, and a direct mode of operation in which events are written to Tier-2 storage, bypassing the tiered mode. The switching from tiered mode to direct mode, and from direct mode to tiered mode, can be automatic and based on user configuration information. For example, an event size metric (e.g., average event size) can be evaluated against user defined thresholds to determine which mode to use. If the average event size goes below a low threshold value, the tiered mode is switched to and used for appending events to a segment of a data stream. If the average event size goes above a high threshold value, the direct mode is switched to and used.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: June 20, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: Andrei Paduroiu
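The mode switch described above can be sketched as a threshold check with simple hysteresis; the function shape and the decision to hold the current mode between the two thresholds are assumptions.

```python
def choose_mode(current_mode, avg_event_size, low_threshold, high_threshold):
    """Decide whether a segment should use tiered or direct writes based on
    its average event size, following the thresholds in the abstract."""
    if avg_event_size < low_threshold:
        return "tiered"     # small events: stage in Tier-1, migrate later
    if avg_event_size > high_threshold:
        return "direct"     # large events: write straight to Tier-2
    return current_mode     # in between: keep whatever mode is active
```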
  • Patent number: 11681625
    Abstract: Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate the identifications to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers but merely one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store content of received packets.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: June 20, 2023
    Assignee: Intel Corporation
    Inventors: Linden Cornett, Parthasarathy Sarangam, Jesse Brandeburg
  • Patent number: 11669444
    Abstract: According to one embodiment, a computing system transmits to a storage device a write request designating a first logical address for identifying first data to be written and a length of the first data. The computing system receives from the storage device the first logical address and a first physical address indicative of both of a first block selected from blocks except a defective block by the storage device, and a first physical storage location in the first block to which the first data is written. The computing system updates a first table which manages mapping between logical addresses and physical addresses of the storage device and maps the first physical address to the first logical address.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: June 6, 2023
    Assignee: Kioxia Corporation
    Inventor: Shinichi Kanno
  • Patent number: 11652760
    Abstract: A buffer logic unit of a packet processing device including a power gate controller. The buffer logic unit organizes and/or allocates available pages to packets for storing the packet data based on which of a plurality of separately accessible physical memories the pages are associated with. As a result, the power gate controller is able to more efficiently cut off power from one or more of the physical memories.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: May 16, 2023
    Assignee: Marvell Asia Pte., Ltd.
    Inventor: Enrique Musoll
  • Patent number: 11650747
    Abstract: Disclosed are various embodiments for high throughput reclamation of pages in memory. A first plurality of pages in a memory of the computing device are identified to reclaim. In addition, a second plurality of pages in the memory of the computing device are identified to reclaim. The first plurality of pages are prepared for storage on a swap device of the computing device. Then, a write request is submitted to a swap device to store the first plurality of pages. After submission of the write request, the second plurality of pages are prepared for storage on the swap device while the swap device completes the write request.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: May 16, 2023
    Assignee: VMware, Inc.
    Inventors: Emmanuel Amaro Ramirez, Marcos Kawazoe Aguilera, Pratap Subrahmanyam, Rajesh Venkatasubramanian
  • Patent number: 11620066
    Abstract: A method of operating a storage device with a memory includes partitioning an entire area of a first namespace into at least one area based on a reference size. The partitioning is performed in response to a namespace creating request from a host that includes size information corresponding to the entire area of the first namespace. The method further includes partitioning a logical address space of the memory into a plurality of segments, allocating a first segment of the plurality of segments to a first area of the at least one area, and storing mapping information of the first area and the first segment. A size of the logical address space is greater than a size of a physical storage space of the memory identified by the host.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: April 4, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaewon Song, Jaesub Kim, Sejeong Jang
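A sketch of the area/segment bookkeeping described above, assuming a one-to-one, in-order assignment of logical-address-space segments to reference-sized areas; the function and field names are illustrative.

```python
def create_namespace_map(namespace_size, reference_size, segment_size):
    """Partition a namespace into reference-sized areas and record, for each
    area, the logical-address-space segment allocated to it."""
    num_areas = -(-namespace_size // reference_size)     # ceiling division
    mapping = {}
    for area_index in range(num_areas):
        mapping[area_index] = {
            "area_offset": area_index * reference_size,
            "segment_base": area_index * segment_size,   # allocated segment
        }
    return mapping
```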
  • Patent number: 11606555
    Abstract: A method for decoding a picture from a bitstream, the picture comprising a number of units and being partitioned into a number of spatial segments by a partition structure. The method includes decoding one or more code words in the bitstream; determining that the partition structure is uniform based on the one or more code words; determining the number of spatial segments based on the one or more code words; determining a segment unit size; and deriving the sizes and/or locations for spatial segments in the picture from the one or more code words. Deriving the sizes and/or locations for spatial segments in the picture comprises a first loop over the number of spatial segments in a first dimension or direction. A number of remaining segment units in the first dimension or direction to be segmented is calculated inside the first loop.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: March 14, 2023
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Mitra Damghanian, Martin Pettersson, Rickard Sjöberg
  • Patent number: 11586513
    Abstract: The disclosed technology provides techniques, systems, and apparatus for containing and recovering from uncorrectable memory errors in distributed computing environment through migration of virtual machines and associated memory to a target host machine. An aspect of the disclosed technology includes a hypervisor or virtual machine manager that receives signaling of an uncorrectable memory error detected by a host machine. The virtual machine manager then uses information received via the signaling to identify virtual memory addresses or memory pages associated with the corrupted memory element so as to allow for containment and recovery from the error, and for live migration of the virtual machine.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: February 21, 2023
    Assignee: Google LLC
    Inventors: Jue Wang, Qiuyi Jia, Adam Ruprecht
  • Patent number: 11550673
    Abstract: The disclosed technology provides techniques, systems, and apparatus for containing and recovering from uncorrectable memory errors in distributed computing environment. An aspect of the disclosed technology includes a hypervisor or virtual machine manager that receives signaling of an uncorrectable memory error detected by a host machine. The virtual machine manager then uses information received via the signaling to identify virtual memory addresses or memory pages associated with the corrupted memory element so as to allow for containment and recovery from the error.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: January 10, 2023
    Assignee: Google LLC
    Inventors: Jue Wang, Yi Cao
  • Patent number: 11546411
    Abstract: Systems and methods are described for backing up confidential data using user devices on the same local network. In an example, a first user device can download a data file from a server. The first user device can connect to the server on the same local network as a second user device. A user can select to delete the file from the first user device. The first user device can send the data file to the second user device using a local Internet Protocol (“IP”) address of the second user device. The second user device can store the data file on its local storage. If the user chooses to retrieve the data file to the first user device again, and if the user devices are on the same local network, the first user device can retrieve the data file from the second user device instead of the server.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: January 3, 2023
    Assignee: VMWARE, INC.
    Inventors: Pranav Ashok Shenoy, Mohammed Lazim
  • Patent number: 11514970
    Abstract: A memory device according to an embodiment includes first and second interconnects, memory cells, and a control circuit. In a first process, the control circuit applies a write voltage of a first direction to a memory cell coupled to selected first and second interconnects, and applies a write voltage of a second direction to a memory cell coupled to the selected first interconnect and a non-selected second interconnect. In second processes of first to m-th trial processes, the control circuit applies the write voltage of the second direction to the memory cell coupled to the selected first and second interconnects, and omits a write operation in which the memory cell coupled to the selected first interconnect and the non-selected second interconnect is targeted.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: November 29, 2022
    Assignee: Kioxia Corporation
    Inventors: Marina Yamaguchi, Kensuke Ota, Kazuhiko Yamamoto, Masumi Saitoh