Patents Examined by Edward J Dudek
  • Patent number: 11614878
    Abstract: The present disclosure includes apparatuses and methods for data movement. An example apparatus includes a memory device that includes a plurality of subarrays of memory cells and sensing circuitry coupled to the plurality of subarrays. The sensing circuitry includes a sense amplifier and a compute component. The memory device also includes a plurality of subarray controllers. Each subarray controller of the plurality of subarray controllers is coupled to a respective subarray of the plurality of subarrays and is configured to direct performance of an operation with respect to data stored in the respective subarray of the plurality of subarrays. The memory device is configured to move a data value corresponding to a result of an operation with respect to data stored in a first subarray of the plurality of subarrays to a memory cell in a second subarray of the plurality of subarrays.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: March 28, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Perry V. Lea, Glen E. Hush
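A minimal sketch of the compute-and-move idea in 11614878 above: each subarray has its own controller that performs an operation on locally stored data, and the device moves the result to a cell in a second subarray. The classes, the single "and" operation, and the list-of-integers memory model are illustrative assumptions, not the patented circuitry.

```python
# Sketch: per-subarray controllers operate on local data; the device moves a
# result value into a cell of a different subarray. Purely illustrative model.

class Subarray:
    def __init__(self, cells):
        self.cells = list(cells)

    def compute(self, op, row_a, row_b):
        # The subarray controller directs an operation on data stored locally.
        if op == "and":
            return self.cells[row_a] & self.cells[row_b]
        raise ValueError(f"unsupported operation: {op}")

class MemoryDevice:
    def __init__(self, subarrays):
        self.subarrays = subarrays

    def compute_and_move(self, src, op, row_a, row_b, dst, dst_row):
        result = self.subarrays[src].compute(op, row_a, row_b)
        self.subarrays[dst].cells[dst_row] = result   # move result to second subarray
        return result

device = MemoryDevice([Subarray([0b1100, 0b1010, 0]), Subarray([0, 0, 0])])
print(device.compute_and_move(src=0, op="and", row_a=0, row_b=1, dst=1, dst_row=2))
print(device.subarrays[1].cells)
```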
  • Patent number: 11609707
    Abstract: Technologies are provided for supporting multi-actuator storage device access using logical addresses. Separate sets of logical addresses (such as logical block addresses) can be associated with different actuators of a storage device. For example, a first set of logical addresses can be assigned to storage locations on one or more storage media that is/are accessible using a first actuator of the storage device and a second set of logical addresses can be assigned to storage locations on one or more storage media that is/are accessible using a second actuator of the storage device. The storage device can receive a data access request containing a logical address and can identify a logical address set to which the logical address belongs. The storage device can use an actuator associated with the logical address set to access a storage location assigned to the logical address.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: March 21, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Keun Soo Jo, Munif M. Farhan, Andrew Kent Warfield, Seth W. Markle, Roey Rivnay
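A minimal sketch of the logical-address routing in 11609707 above, assuming each actuator owns one contiguous range of logical block addresses; the class and method names are hypothetical, not from the patent.

```python
# Sketch: route logical block addresses (LBAs) to actuators by LBA set.
# Assumes each actuator owns one contiguous LBA range.

class MultiActuatorDrive:
    def __init__(self, lba_ranges):
        # lba_ranges: list of (first_lba, last_lba) tuples, one per actuator.
        self.lba_ranges = lba_ranges

    def actuator_for(self, lba):
        """Identify the LBA set the address belongs to and return its actuator id."""
        for actuator_id, (first, last) in enumerate(self.lba_ranges):
            if first <= lba <= last:
                return actuator_id
        raise ValueError(f"LBA {lba} is outside every configured LBA set")

    def read(self, lba, length):
        actuator = self.actuator_for(lba)
        # A real device would queue the request on that actuator's head assembly.
        return f"actuator {actuator}: read {length} blocks at LBA {lba}"


drive = MultiActuatorDrive([(0, 499_999), (500_000, 999_999)])
print(drive.read(12_345, 8))    # served by actuator 0
print(drive.read(750_000, 8))   # served by actuator 1
```

Keeping the mapping as address ranges lets the device pick an actuator with a single range lookup rather than per-block metadata.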
  • Patent number: 11604712
    Abstract: A method is provided for a hyper-converged storage-compute system to implement an active-active failover architecture for providing Internet Small Computer System Interface (iSCSI) target service. The method intelligently selects multiple hosts to become storage nodes that process iSCSI input/output (I/O) for a target. The method further enables iSCSI persistent reservation (PR) to handle iSCSI I/Os from multiple initiators.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: March 14, 2023
    Assignee: VMWARE, INC.
    Inventors: Zhaohui Guo, Yang Yang, Haitao Zhou, Jian Zhao, Zhou Huang, Jin Feng
  • Patent number: 11604609
    Abstract: Methods, systems, and devices for techniques for command sequence adjustment are described. A memory system or a host system may adjust an order of a set of commands in a queue if the memory system or host system determines that a subset of the commands in the queue are part of a test mode, for example by determining whether each command of the subset corresponds to a same size of data. The set of commands may be reordered such that the subset of commands associated with the test mode are continuous or back-to-back. In some cases, the subset of commands associated with the test mode may be reordered such that logical addresses (e.g., logical block addresses) of the subset of commands are continuous.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: March 14, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Yanhua Bi
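A rough sketch of the reordering described in 11604609 above, assuming the test-mode subset is detected as the commands sharing a common data size and is then grouped back-to-back in ascending logical-address order; the `Command` fields and the `min_run` threshold are assumptions.

```python
# Sketch: detect a test-mode subset (commands with the same data size)
# and reorder the queue so that subset is contiguous and LBA-ordered.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Command:
    lba: int
    size: int  # bytes of data transferred

def reorder_queue(queue, min_run=4):
    sizes = Counter(cmd.size for cmd in queue)
    test_size, count = sizes.most_common(1)[0]
    if count < min_run:
        return list(queue)  # no test-mode subset detected
    test_cmds = sorted((c for c in queue if c.size == test_size), key=lambda c: c.lba)
    other_cmds = [c for c in queue if c.size != test_size]
    # Place the test-mode commands back-to-back, in continuous LBA order.
    return other_cmds + test_cmds

queue = [Command(100, 4096), Command(7, 512), Command(300, 4096),
         Command(200, 4096), Command(400, 4096), Command(9, 512)]
for cmd in reorder_queue(queue):
    print(cmd)
```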
  • Patent number: 11604596
    Abstract: A storage device may include: a memory device including a plurality of memory blocks; a buffer memory device to store event information; and a memory controller configured to: upon occurrence of a predetermined event during a write operation on an event page, store, in the buffer memory device, the event information for the event page, and control the memory device to perform a test read operation to read at least one page in the plurality of memory blocks other than the event page, based on the event information; and, upon failure of the test read operation, control the memory device to perform a migration operation of moving, to a replacement block, data stored in valid pages, except the page on which the test read operation has failed, among the pages included in the memory block on which the test read operation fails.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: March 14, 2023
    Assignee: SK hynix Inc.
    Inventors: Tae Ha Kim, Jee Yul Kim, Hyeong Ju Na
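A loose sketch of the event-handling flow in 11604596 above: record event information when an event interrupts a write, test-read another page in the block, and migrate the block's remaining valid pages to a replacement block if that read fails. The dictionary-based blocks and the `bad_pages` failure model are purely illustrative.

```python
# Sketch: on a write-time event, log event info, test-read another page,
# and on failure migrate valid pages (except the failed page) to a replacement block.

def handle_event(block, event_page, event_log, replacement_block):
    # Store event information for the page being written when the event occurred.
    event_log.append({"block": block["id"], "event_page": event_page})

    # Test-read a page in the block other than the event page.
    test_page = next(p for p in range(len(block["pages"])) if p != event_page)
    if read_ok(block, test_page):
        return "test read passed"

    # Test read failed: migrate valid pages, excluding the failed page itself.
    for page, data in enumerate(block["pages"]):
        if data is not None and page != test_page:
            replacement_block["pages"].append(data)
    return "migrated to replacement block"

def read_ok(block, page):
    return page not in block.get("bad_pages", set())  # stand-in for a real NAND read

block = {"id": 7, "pages": [b"a", None, b"c", b"d"], "bad_pages": {0}}
replacement = {"id": 99, "pages": []}
log = []
print(handle_event(block, event_page=3, event_log=log, replacement_block=replacement))
print(replacement["pages"], log)
```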
  • Patent number: 11599476
    Abstract: Embodiments of the present disclosure relate to a memory system and an operating method thereof. According to the embodiments of the present disclosure, the memory system may monitor, in a state in which the address mapping information corresponding to a target device capable of inputting and outputting data corresponding to a specific address is first address mapping information, a first performance pattern, which is a performance pattern for the target device; input information on the first performance pattern to an artificial intelligence engine which analyzes the performance pattern based on an artificial intelligence model and outputs address mapping information for the target device; and remap second address mapping information, which is the address mapping information output by the artificial intelligence engine, as the address mapping information corresponding to the target device.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: March 7, 2023
    Assignee: SK hynix Inc.
    Inventor: Sang Don Yoon
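A very loose sketch of the remapping loop in 11599476 above, with simple stubs standing in for the performance monitor and the artificial-intelligence engine; every function name and returned value here is hypothetical.

```python
# Sketch of the flow: monitor a performance pattern under the current mapping,
# ask a model for a new mapping, then install it. The "engine" is a stub.

def monitor_pattern(device):
    # Stand-in for collecting access statistics for the target device.
    return {"sequential_ratio": 0.9, "read_ratio": 0.7}

def ai_engine(pattern):
    # Stand-in for a trained model; returns a proposed address mapping.
    if pattern["sequential_ratio"] > 0.8:
        return "interleaved-mapping"
    return "linear-mapping"

def remap(device, current_mapping):
    pattern = monitor_pattern(device)
    proposed = ai_engine(pattern)
    if proposed != current_mapping:
        print(f"remapping {device}: {current_mapping} -> {proposed}")
        return proposed
    return current_mapping

print(remap("nand-die-0", "linear-mapping"))
```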
  • Patent number: 11599269
    Abstract: Reducing file write latency includes receiving incoming data, from a data source, for storage in a file and a target storage location for the incoming data, and determining whether the target storage location corresponds to a cache entry. Based on at least the target storage location not corresponding to a cache entry, the incoming data is written to a block pre-allocated for cache misses and the writing of the incoming data to the pre-allocated block is journaled. The writing of the incoming data is acknowledged to the data source. A process executing in parallel with the above commits the incoming data in the pre-allocated block with the file. Using this parallel process to commit the incoming data in the file removes high-latency operations (e.g., reading pointer blocks from the storage media) from a critical input/output path and results in more rapid write acknowledgement.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 7, 2023
    Assignee: VMware, Inc.
    Inventors: Prasanth Jose, Gurudutt Kumar Vyudayagiri Jagannath
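A small sketch of the low-latency write path in 11599269 above: a cache miss is written to a pre-allocated block, journaled, and acknowledged immediately, while a parallel worker later folds the data into the file. The in-memory dictionaries stand in for the on-disk structures and are assumptions for illustration.

```python
# Sketch: on a cache miss, write to a pre-allocated block, journal the write,
# acknowledge immediately, and let a parallel step commit the data into the file.

import queue, threading

journal = []
commit_queue = queue.Queue()
cache = {}           # target offset -> resolved file block
preallocated = {}    # journal sequence -> (offset, data) awaiting commit
file_blocks = {}     # offset -> data, the "committed" file view

def write(offset, data):
    if offset in cache:
        file_blocks[offset] = data            # fast path: cache hit
    else:
        seq = len(journal)
        preallocated[seq] = (offset, data)    # write to pre-allocated block
        journal.append(("prealloc-write", seq, offset))
        commit_queue.put(seq)                 # commit happens off the critical path
    return "ack"                              # acknowledge to the data source

def committer():
    while True:
        seq = commit_queue.get()
        offset, data = preallocated.pop(seq)
        file_blocks[offset] = data            # high-latency pointer-block work goes here
        cache[offset] = True
        commit_queue.task_done()

threading.Thread(target=committer, daemon=True).start()
print(write(4096, b"hello"))
commit_queue.join()
print(file_blocks)
```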
  • Patent number: 11599477
    Abstract: A method, system, and computer program product for maintaining a cache obtain request data associated with a plurality of previously processed requests for aggregated data; predict, based on the request data, (i) a subset of the aggregated data associated with a subsequent request and (ii) a first time period associated with the subsequent request; determine, based on the first time period and a second time period associated with a performance of a data aggregation operation that generates the aggregated data, a third time period associated with instructing a memory controller managing a cache to evict cached data stored in the cache and load the subset of the aggregated data into the cache; and provide an invalidation request to the memory controller managing the cache to evict the cached data stored in the cache and load the subset of the aggregated data into the cache during the third time period.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: March 7, 2023
    Assignee: Visa International Service Association
    Inventors: Abhinav Sharma, Sonny Thanh Truong
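A brief sketch of the timing computation in 11599477 above: given the predicted time of the next request (the first time period) and how long the aggregation takes (the second time period), pick when to tell the memory controller to evict and preload (the third time period). The safety margin and function names are assumptions.

```python
# Sketch: work backward from the predicted request time and the aggregation
# duration to decide when the cache should be invalidated and preloaded.

from datetime import datetime, timedelta

def schedule_preload(predicted_request_time: datetime,
                     aggregation_duration: timedelta,
                     safety_margin: timedelta = timedelta(minutes=5)) -> datetime:
    # Start early enough that aggregation finishes before the predicted request.
    return predicted_request_time - aggregation_duration - safety_margin

def issue_invalidation(controller, subset_keys, when):
    # Stand-in for sending the invalidation/preload request to the memory controller.
    print(f"at {when}: ask {controller} to evict stale entries and load {subset_keys}")

when = schedule_preload(datetime(2023, 3, 7, 9, 0), timedelta(minutes=20))
issue_invalidation("cache-controller-1", ["merchant-totals-2023-03"], when)
```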
  • Patent number: 11593213
    Abstract: Systems, methods, and machine-storage media for classifying snapshot image processing are described. The system receives read requests to read snapshot information. Each read request includes an offset identifying a storage location and a length. The snapshot information includes snapshots including a full snapshot and at least one incremental snapshot. The read requests include a first read request to read data from the snapshot information. The system generates a first plurality of read events including a second plurality of read events that are generated by processing the first read request. The second plurality of read events includes a first read event and a second read event. The system identifies whether utilizing a cache optimizes the job based on the first plurality of read events.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: February 28, 2023
    Assignee: Rubrik, Inc.
    Inventors: Jonathan Youngha Yoo, Adam Gee, Vivek Sanjay Jain, Junyong Lee
  • Patent number: 11593285
    Abstract: A memory system includes a memory device, a memory controller configured to control the memory device, and an interface device configured to perform an interfacing operation for transmission of a control signal and data between the memory device and the memory controller. The interface device activates a blocking function for the interfacing operation in response to a configuration command of the memory controller including a blocking activation signal and performs an interface configuration operation in response to an interface configuration command of the memory controller while the blocking function is activated.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: February 28, 2023
    Assignee: SK hynix Inc.
    Inventor: Chang Kyun Park
  • Patent number: 11586362
    Abstract: A master profile may be created defining a plurality of values for a plurality of storage system parameters. The master profile may be stored and applied to a plurality of storage systems. In some embodiments, one or more values defined in the master profile may be changed and the resulting plurality of parameter values stored in a new master profile. Current values of storage system parameters may be monitored, for example, determined according to a predefined schedule or in response to user input, and the current values may be compared against the values defined in the master profiles. The results of these comparisons may be recorded as part of compliance information that indicates the extent of compliance of the parameter values of a storage system with the master profile parameter values. The compliance information may be included as part of a compliance report, notification or some other communication.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 21, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Finbarr O'Riordan, Audrey O'Sullivan, Tim O'Connor, Derek Barrett, Anna Odziemczyk, Sean Flanagan
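A small sketch of the compliance check in 11586362 above: compare a storage system's current parameter values against the master profile and record any deviations. The parameter names are invented for illustration.

```python
# Sketch: compare current storage system parameter values against a master
# profile and record the differences as compliance information.

def check_compliance(master_profile: dict, current_values: dict) -> dict:
    deviations = {
        name: {"expected": expected, "actual": current_values.get(name)}
        for name, expected in master_profile.items()
        if current_values.get(name) != expected
    }
    return {
        "compliant": not deviations,
        "deviations": deviations,
    }

master = {"raid_level": "RAID-6", "compression": True, "snapshot_schedule": "hourly"}
current = {"raid_level": "RAID-6", "compression": False, "snapshot_schedule": "hourly"}
print(check_compliance(master, current))
```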
  • Patent number: 11586538
    Abstract: Provided is a storage device including a power management integrated circuit chip; multiple non-volatile memories configured to receive power from the power management integrated circuit chip; and a controller configured to control the non-volatile memories, wherein the controller checks a state of the power during a read operation and a write operation on the non-volatile memories and, when a power failure is detected in at least one of the non-volatile memories, implements a power failure detection mode regarding the read operation and the write operation on all of the non-volatile memories.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: February 21, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kwang-Kyu Bang, Young-Min Kim, Kwan-Bin Yim
  • Patent number: 11586354
    Abstract: The role of a node component of a distributed application may be changed without the need to terminate a current OS process implementing the node component. A first component on a first node of a distributed file server may be designated as a control path master and configured to execute a first group of services defined for the control path master as part of a first OS process. One or more other components on one or more other nodes of the distributed file server may be designated as a control path agent and configured to execute a second group of services defined for the control path agent as part of a respective second OS process. The control path master may be changed to a control path agent, and a control path agent may be changed to a control path master, without having to reboot the control path component in question.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 21, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Bathulwar Akash, Piyush Tibrewal, Aditya Sriram Mattaparthi, Suprava Das
  • Patent number: 11586358
    Abstract: Systems and methods for building file system images using cached logical volume snapshots. An example method may comprise: producing a buildroot descriptor in view of a list of identifiers of software packages to be included into a new file system image; and responsive to locating, in a storage memory, a logical volume snapshot associated with the buildroot descriptor, creating the new file system image using the logical volume snapshot.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: February 21, 2023
    Assignee: Red Hat, Inc.
    Inventors: Michael Simacek, Miroslav Suchy
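A compact sketch of the snapshot-caching idea in 11586358 above: derive a buildroot descriptor from the requested package list and reuse a cached logical volume snapshot when one matches. The hash-based descriptor and the dictionary snapshot store are assumptions, not Red Hat's implementation.

```python
# Sketch: compute a buildroot descriptor from the package list and reuse a
# cached logical volume snapshot when one is associated with that descriptor.

import hashlib

snapshot_store = {}  # descriptor -> snapshot identifier (stand-in for LVM snapshots)

def buildroot_descriptor(package_ids):
    # Order-independent digest of the requested package set.
    return hashlib.sha256("\n".join(sorted(package_ids)).encode()).hexdigest()

def build_image(package_ids):
    descriptor = buildroot_descriptor(package_ids)
    snapshot = snapshot_store.get(descriptor)
    if snapshot is not None:
        return f"image built from cached snapshot {snapshot}"
    snapshot_store[descriptor] = f"lv-snap-{descriptor[:8]}"
    return "image built from scratch; snapshot cached for next time"

print(build_image(["gcc-12", "glibc-2.36", "make-4.3"]))
print(build_image(["glibc-2.36", "make-4.3", "gcc-12"]))  # cache hit: same package set
```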
  • Patent number: 11586910
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes computation nodes at multiple layers. The NNIC includes multiple value computation circuits for computing output values of computation nodes. The NNIC includes a set of memories for storing the output values of computation nodes for use as input values to computation nodes in subsequent layers of the neural network. The NNIC includes a set of write control circuits for writing the computed output values to the set of memories. Upon receiving a set of computed output values, a write control circuit (i) temporarily stores the set of computed output values in a cache when adding the set of computed output values to the cache does not cause the cache to fill up and (ii) writes data in the cache to the set of memories when the cache fills up.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: February 21, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
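A minimal sketch of the write-control behavior in 11586910 above: buffer computed output values in a small cache while they fit, and write the cache out to memory when the next set would overflow it. The capacity and list-based "memories" are illustrative.

```python
# Sketch: a write-control stage buffers output values in a cache and writes the
# cache to the output memories only when the next batch would not fit.

class WriteControl:
    def __init__(self, cache_capacity=8):
        self.cache_capacity = cache_capacity
        self.cache = []
        self.memory = []  # stand-in for the NNIC's output-value memories

    def accept(self, output_values):
        if len(self.cache) + len(output_values) <= self.cache_capacity:
            self.cache.extend(output_values)      # (i) cache has room: just buffer
        else:
            self.memory.extend(self.cache)        # (ii) cache would fill: write it out
            self.cache = list(output_values)

    def flush(self):
        self.memory.extend(self.cache)
        self.cache = []

wc = WriteControl()
for batch in ([1, 2, 3], [4, 5, 6], [7, 8, 9]):
    wc.accept(batch)
wc.flush()
print(wc.memory)
```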
  • Patent number: 11586363
    Abstract: In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may boot an operating system; after booting the operating system, determine that a solid state drive has been hot added to a Peripheral Component Interconnect Express (PCIe) port; suppress discovery of the solid state drive by the operating system; determine a policy associated with the solid state drive; determine that a current configuration associated with the solid state drive does not match a configuration associated with the policy associated with the solid state drive; determine that the configuration associated with the policy can be applied to the solid state drive; apply the configuration associated with the policy to the solid state drive without utilizing the operating system; and inform the operating system that the solid state drive has been communicatively coupled to at least one processor via a PCIe root complex.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: February 21, 2023
    Assignee: Dell Products L.P.
    Inventors: Rajeswari Ayyaswamy, Senthil Kumar Parangusam, James Peter Giannoules, Sheshadri Pathpalya Raghavendra Rao, Aniruddha Suresh Herekar
  • Patent number: 11579771
    Abstract: A composite layout to store one or more extents of a data object in a first storage system and one or more extents of the data object in a second, different storage system. The first storage system may be configured for the efficient storage of small chunks of data, e.g., chunks of data smaller than the addressable block size of the storage devices used by the storage systems.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: February 14, 2023
    Assignee: Seagate Technology LLC
    Inventors: John Michael Bent, Nikita Danilov, Kenneth K. Claffey, Raj Bahadur Das
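A short sketch of the composite layout in 11579771 above: each extent is placed in one of two storage systems depending on whether it is smaller than the addressable block size. The threshold constant and store names are assumptions.

```python
# Sketch: build a composite layout by routing small extents to one storage
# system and block-sized (or larger) extents to another.

BLOCK_SIZE = 4096  # assumed addressable block size of the second system's devices

def place_extents(extents):
    """extents: list of (offset, data) pairs; returns a composite layout map."""
    layout = {"small_store": [], "block_store": []}
    for offset, data in extents:
        target = "small_store" if len(data) < BLOCK_SIZE else "block_store"
        layout[target].append((offset, len(data)))
    return layout

extents = [(0, b"x" * 100), (4096, b"y" * 8192), (12288, b"z" * 512)]
print(place_extents(extents))
```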
  • Patent number: 11580036
    Abstract: An apparatus includes a processor, configured to designate a memory region in a memory, and to issue (i) memory-access commands for accessing the memory and (ii) a conditional-fence command associated with the designated memory region. Memory-Access Control Circuitry (MACC) is configured, in response to identifying the conditional-fence command, to allow execution of the memory-access commands that access addresses within the designated memory region, and to defer the execution of the memory-access commands that access addresses outside the designated memory region, until completion of all the memory-access commands that were issued before the conditional-fence command.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: February 14, 2023
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ilan Pardo, Shahaf Shuler, George Elias, Nizan Atias, Adi Maymon
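An illustrative sketch of the conditional-fence semantics in 11580036 above: commands addressing the designated region execute immediately, while commands outside it are deferred until every command issued before the fence has completed. The bookkeeping here is a toy model, not the MACC hardware.

```python
# Sketch: conditional fence. Accesses inside the designated region proceed;
# accesses outside it wait for all pre-fence commands to drain.

class ConditionalFence:
    def __init__(self, region_start, region_end):
        self.region = (region_start, region_end)
        self.pre_fence_pending = 3   # pretend three earlier commands are still in flight
        self.deferred = []

    def submit(self, address, command):
        start, end = self.region
        if start <= address <= end:
            return f"execute now: {command} @ {address:#x}"   # inside designated region
        if self.pre_fence_pending > 0:
            self.deferred.append((address, command))          # defer until drain completes
            return f"deferred: {command} @ {address:#x}"
        return f"execute now: {command} @ {address:#x}"

    def complete_pre_fence_command(self):
        self.pre_fence_pending -= 1
        if self.pre_fence_pending == 0:
            released, self.deferred = self.deferred, []
            return [f"execute now: {cmd} @ {addr:#x}" for addr, cmd in released]
        return []

fence = ConditionalFence(0x1000, 0x1FFF)
print(fence.submit(0x1200, "store"))   # within region: allowed
print(fence.submit(0x8000, "load"))    # outside region: deferred
for _ in range(3):
    print(fence.complete_pre_fence_command())
```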
  • Patent number: 11573725
    Abstract: A storage system includes an object storage server and a storage client. The object storage server obtains an object migration policy of a source bucket, where the object migration policy indicates a condition for migrating an object from the source bucket to a destination bucket in a plurality of buckets, and the object storage server migrates a first object in the source bucket to the destination bucket according to the object migration policy when determining that the first object meets the object migration policy of the source bucket.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: February 7, 2023
    Assignee: HUAWEI CLOUD COMPUTING TECHNOLOGIES CO., LTD.
    Inventors: Shugang Tian, Pingchang Bai
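A small sketch of the policy-driven migration in 11573725 above, assuming the migration condition is an age threshold on the object's last-modified time; the policy fields and bucket representation are invented for illustration.

```python
# Sketch: move objects that satisfy the source bucket's migration policy
# (here, an age threshold) to the destination bucket.

from datetime import datetime, timedelta, timezone

def migrate(source_bucket, dest_bucket, policy, now=None):
    now = now or datetime.now(timezone.utc)
    min_age = timedelta(days=policy["min_age_days"])
    moved = []
    for name, obj in list(source_bucket.items()):
        if now - obj["last_modified"] >= min_age:   # object meets the migration policy
            dest_bucket[name] = source_bucket.pop(name)
            moved.append(name)
    return moved

now = datetime(2023, 2, 7, tzinfo=timezone.utc)
source = {
    "logs/old.gz": {"last_modified": datetime(2022, 1, 1, tzinfo=timezone.utc)},
    "logs/new.gz": {"last_modified": datetime(2023, 2, 1, tzinfo=timezone.utc)},
}
dest = {}
print(migrate(source, dest, {"min_age_days": 30}, now=now))  # ['logs/old.gz']
```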
  • Patent number: 11573722
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to provide an interface to a pooled memory that is configured as a combination of local memory and remote memory, wherein the remote memory is shared between multiple compute nodes, allocate respective memory portions of the pooled memory to respective tenants, associate respective memory balloons with the respective tenants that correspond to the allocated respective memory portions, and manage the respective memory balloons based on the respective tenants and two or more memory tiers associated with the pooled memory. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Rasika Subramanian, Lidia Warnes, Francesc Guim Bernat, Mark A. Schmisseur, Durgesh Srivastava
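A rough sketch of the balloon-per-tenant accounting in 11573722 above: each tenant's allocation from the pooled memory is tracked as a balloon split across a local and a remote tier, and inflating the balloon on a tier returns memory to the shared pool. Tier names and sizes are assumptions.

```python
# Sketch: pooled memory spanning a local and a remote tier, with a per-tenant
# "balloon" tracking how much of each tier the tenant currently holds.

class PooledMemory:
    def __init__(self, local_mb, remote_mb):
        self.free = {"local": local_mb, "remote": remote_mb}
        self.balloons = {}  # tenant -> {tier: MB}

    def allocate(self, tenant, request_mb):
        balloon = self.balloons.setdefault(tenant, {"local": 0, "remote": 0})
        for tier in ("local", "remote"):         # prefer the faster local tier
            take = min(request_mb, self.free[tier])
            self.free[tier] -= take
            balloon[tier] += take
            request_mb -= take
        return balloon

    def reclaim(self, tenant, tier, mb):
        # Inflating the balloon on one tier returns memory to the shared pool.
        taken = min(mb, self.balloons[tenant][tier])
        self.balloons[tenant][tier] -= taken
        self.free[tier] += taken

pool = PooledMemory(local_mb=1024, remote_mb=4096)
print(pool.allocate("tenant-a", 1536))   # spills past local into the remote tier
pool.reclaim("tenant-a", "remote", 256)
print(pool.free)
```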