Control Technique Patents (Class 711/154)
  • Patent number: 11360885
    Abstract: In an embodiment, a system includes a plurality of memory components that each include a plurality of management groups. Each management group includes a plurality of sub-groups. The system also includes a processing device that is operatively coupled with the plurality of memory components to perform wear-leveling operations that include maintaining a sub-group-level delta write count (DWC) for each of the sub-groups of each of the management groups of a memory component in the plurality of memory components. The wear-leveling operations also include determining, in connection with a write operation to a first sub-group of a first management group of the memory component, that a sub-group-level DWC for the first sub-group equals a management-group-move threshold, and responsively triggering a management-group-move operation from the first management group to a second management group of the memory component. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: June 14, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Paul Stonelake, Ning Chen, Fangfang Zhu, Alex Tang
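    The wear-leveling policy above boils down to keeping one counter per sub-group and acting when it reaches a threshold. The following is a minimal Python sketch under assumptions not taken from the patent: the class names, the threshold value, and the choice of the group with the lowest aggregate count as the move destination are all illustrative.

      MANAGEMENT_GROUP_MOVE_THRESHOLD = 1_000  # assumed value

      class ManagementGroup:
          def __init__(self, group_id, num_sub_groups):
              self.group_id = group_id
              self.dwc = [0] * num_sub_groups  # sub-group-level delta write counts

      class WearLeveler:
          def __init__(self, groups):
              self.groups = groups  # list of ManagementGroup (assume at least two)

          def on_write(self, group_idx, sub_group_idx):
              group = self.groups[group_idx]
              group.dwc[sub_group_idx] += 1
              # Trigger the move when the sub-group counter reaches the threshold.
              if group.dwc[sub_group_idx] == MANAGEMENT_GROUP_MOVE_THRESHOLD:
                  self.move_management_group(group_idx)

          def move_management_group(self, src_idx):
              # Destination selection is not specified in the abstract; picking the
              # group with the lowest aggregate count is an assumption.
              dst_idx = min((i for i in range(len(self.groups)) if i != src_idx),
                            key=lambda i: sum(self.groups[i].dwc))
              print(f"move management group {src_idx} -> {dst_idx}")
              self.groups[src_idx].dwc = [0] * len(self.groups[src_idx].dwc)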
  • Patent number: 11360701
    Abstract: A controller device is disclosed. The controller device comprises a communication interface that is configured to receive a data operation request via an interconnect bus. The controller device comprises an integrated interconnect protocol component that is configured to handle communication via the interconnect bus that supports coherency across a plurality of different processing devices external to the controller device. An integrated memory or storage controller component on the same controller device is configured to handle the data operation request including by being configured to manage communication with a memory or data storage device external to the controller device.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: June 14, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Narsing Krishna Vijayrao, Christian Markus Petersen
  • Patent number: 11354273
    Abstract: Embodiments are directed to managing data in a file system. The file system includes storage nodes that may be associated with storage volumes that may have different capacities for storing data. A storage capacity of the file system may be determined based on a number of stripes of data that fit in the file system such that each stripe may be comprised of chunks that have a same chunk storage capacity. Slots in the file system that each match the chunk storage capacity may be determined based on the storage volumes. The chunks may be assigned to the slots in the file system based on the capacity of the storage nodes such that a number of chunks allocated to a same storage volume or a same storage node may be based on protection factor information.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: June 7, 2022
    Assignee: Qumulo, Inc.
    Inventors: Kevin Ross O'Neill, Yuxi Bai, Tali Magidson, Philip Michael Bunge, Carson William Boden
  • Patent number: 11354038
    Abstract: Aspects of the present disclosure provide a computer-implemented method that includes providing a layered index to variable length data, the layered index comprising a plurality of layers. Each layer of the plurality of layers has an index array, a block offset array, and a per-block size array. The index array identifies a next level index of a plurality of indices or data. The indices represent a delta value from a first index of a block. The block offset array identifies a starting location of the index array. The per-block size array identifies a shared integer size of a block of indices. The method further includes performing a random access read of the variable length data using the layered index. (A simplified code sketch follows this entry.)
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: June 7, 2022
    Assignee: International Business Machines Corporation
    Inventors: Jinho Lee, Frank Liu
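    The index described above trades pointer size for an extra lookup: offsets within a block are stored as small deltas from the block's first offset, at a shared per-block width. The sketch below is a deliberate simplification that collapses the multiple layers into a single layer; the names DeltaIndex, block_base, block_width, and packed are assumptions, not terms from the patent.

      BLOCK = 4  # indices per block; kept small so the example stays readable

      class DeltaIndex:
          """Single-layer simplification of the layered index described above."""
          def __init__(self, offsets):
              self.block_base = []   # "block offset array": first offset of each block
              self.block_width = []  # "per-block size array": bytes per stored delta
              self.packed = []       # "index array": deltas packed at that shared width
              for b in range(0, len(offsets), BLOCK):
                  chunk = offsets[b:b + BLOCK]
                  deltas = [o - chunk[0] for o in chunk]
                  width = max(1, (max(deltas).bit_length() + 7) // 8)
                  self.block_base.append(chunk[0])
                  self.block_width.append(width)
                  self.packed.append(b"".join(d.to_bytes(width, "little") for d in deltas))

          def offset(self, i):
              b, j = divmod(i, BLOCK)
              w = self.block_width[b]
              delta = int.from_bytes(self.packed[b][j * w:(j + 1) * w], "little")
              return self.block_base[b] + delta

      # Random access into variable-length records: slice between adjacent offsets.
      records = [b"a", b"bbbb", b"cc", b"ddddddd", b"e"]
      blob, offsets, pos = b"".join(records), [], 0
      for r in records:
          offsets.append(pos)
          pos += len(r)
      offsets.append(pos)                            # sentinel end offset
      idx = DeltaIndex(offsets)
      print(blob[idx.offset(3):idx.offset(4)])       # b'ddddddd'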
  • Patent number: 11354056
    Abstract: A computing system having memory components of different tiers. The computing system further includes a controller, operatively coupled between a processing device and the memory components, to: receive from the processing device first data access requests that cause first data movements across the tiers in the memory components; service the first data access requests after the first data movements; predict, by applying data usage information received from the processing device in a prediction model trained via machine learning, second data movements across the tiers in the memory components; and perform the second data movements before receiving second data access requests, where the second data movements reduce third data movements across the tiers caused by the second data access requests.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: June 7, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Samir Mittal, Anirban Ray, Gurpreet Anand
  • Patent number: 11347633
    Abstract: A data storage system includes a memory device including a plurality of memory cells which are coupled to a plurality of row lines, and configured to communicate with a host device through at least one port; and a memory controller configured to select one of a first precharge policy and a second precharge policy according to a precharge control signal, and control the row lines based on access addresses for the row lines according to the selected precharge policy, wherein, under the first precharge policy, one of a first precharge scheme and a second precharge scheme is applied, and under the second precharge policy, both the first and second precharge schemes are applied at different times.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: May 31, 2022
    Assignee: SK hynix Inc.
    Inventor: Chan Jin Park
  • Patent number: 11347429
    Abstract: A memory system having memory components and a processing device to: communicate with a host system to obtain, from the host system, at least one host specified parameter during booting up of the host system; execute first firmware to process requests from the host system using the at least one host specified parameter, the requests including storing data into the memory components and retrieving data from the memory components; install second firmware while running the first firmware; store the at least one host specified parameter; and reboot into executing the second firmware using the at least one host specified parameter, without rebooting of the host system.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: May 31, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Alex Frolikov
  • Patent number: 11347297
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers. (A minimal code sketch of the idle-counting policy follows this entry.)
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: May 31, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
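    The sleep policy above is essentially an idle-cycle counter per bank. A minimal Python sketch follows; the threshold value and the BankSleepController name are assumptions, and the real design is a hardware pipeline rather than software.

      SLEEP_AFTER_IDLE_CYCLES = 64  # assumed threshold

      class BankSleepController:
          """Track per-bank idle cycles and flag banks for sleep (sketch)."""
          def __init__(self, num_banks):
              self.idle_cycles = [0] * num_banks
              self.asleep = [False] * num_banks

          def clock_tick(self, accessed_banks):
              """Call once per clock cycle with the set of banks accessed that cycle."""
              for bank in range(len(self.idle_cycles)):
                  if bank in accessed_banks:
                      self.idle_cycles[bank] = 0
                      self.asleep[bank] = False            # an access wakes the bank
                  elif not self.asleep[bank]:
                      self.idle_cycles[bank] += 1
                      if self.idle_cycles[bank] >= SLEEP_AFTER_IDLE_CYCLES:
                          self.asleep[bank] = True         # "send" the sleep signal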
  • Patent number: 11347428
    Abstract: Techniques for processing I/O operations may include: receiving, at a data storage system, a write operation that writes first data to a target logical address of a log, wherein the data storage system includes a first storage tier of rotating non-volatile storage devices and a second tier of non-volatile solid state storage devices; storing the first data of the target logical address in a first level cache; destaging the first data from the first level cache to a first physical storage location in the first storage tier; and determining, in accordance with first read activity information for the target logical address, whether to store the first data for the target logical address in a second level cache including at least a portion of the non-volatile solid state storage devices of the second tier. The second level cache is a content addressable caching layer that caches data based on read activity.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: May 31, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Nickolay A. Dalmatov, Assaf Natanzon
  • Patent number: 11340617
    Abstract: Without human intervention, an autonomous mobile device (AMD) uses motors to move through a physical space along a path, controlled by computing devices processing sensor data from sensors. Electrical components such as the motors and computing devices generate heat during operation. The heat generated by the motors depends on time of operation and how hard they are being driven. The more data the computing devices process, the greater the heat generated. An overheated component may protectively shut down or fail. Internal temperature is used to constrain AMD movement. The AMD path may be planned to avoid overheating along the way and avoid arriving too hot to complete an expected or scheduled task at the destination. Speed along the path may be less than maximum speed, and the amount of data sent to the computing devices is also reduced, reducing heat generation and allowing the electrical components time to cool off.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: May 24, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Nadim Awad
  • Patent number: 11341035
    Abstract: Systems and methods for aligning needs of virtual devices with hardware resources. The performance of virtual devices is tested using different groupings to determine mappings or relationships between the virtual devices and the physical devices from which they are drawn. Based on the results of the tests, spindle groups can be optimized.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: May 24, 2022
    Assignee: EMC IP Holding Company LLC
    Inventor: Charles J. Hickey
  • Patent number: 11334480
    Abstract: An efficient control technology for non-volatile memory is shown. A non-volatile memory provides a storage space that is divided into blocks. When write data issued by the host is programmed to the non-volatile memory, the programming order of the blocks is recorded. Garbage collection is based on the recorded programming order. Sequential data can be collected to the destination block in sequence. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 17, 2022
    Assignee: SILICON MOTION, INC.
    Inventors: Jie-Hao Lee, Yi-Kang Chang, Hsuan-Ping Lin
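    The abstract reduces to two steps: record the order in which blocks are programmed, then walk that order during garbage collection. A small Python sketch under assumed names (BlockManager, valid_pages) is shown below.

      class BlockManager:
          """Sketch: remember the order blocks were programmed and reuse it for GC."""
          def __init__(self):
              self.programming_order = []          # block ids, oldest first

          def program_block(self, block_id):
              self.programming_order.append(block_id)

          def garbage_collect(self, valid_pages, destination_block):
              # Visiting source blocks in their recorded programming order means
              # logically sequential data is copied to the destination in sequence.
              for block_id in self.programming_order:
                  for page in valid_pages.get(block_id, []):
                      destination_block.append((block_id, page))

      mgr = BlockManager()
      for blk in (7, 3, 9):                         # host writes land in this order
          mgr.program_block(blk)
      dest = []
      mgr.garbage_collect({3: ["p0"], 7: ["p1", "p2"], 9: ["p3"]}, dest)
      print(dest)    # [(7, 'p1'), (7, 'p2'), (3, 'p0'), (9, 'p3')]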
  • Patent number: 11336307
    Abstract: A memory system includes a nonvolatile semiconductor memory, and a controller configured to maintain a plurality of log likelihood ratio (LLR) tables for correcting data read from the nonvolatile semiconductor memory, determine an order in which the LLR tables are referred to, based on a physical location of a target unit storage region of a read operation, and carry out correcting of data read from the target unit storage region, using one of the LLR tables selected according to the determined order.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: May 17, 2022
    Assignee: Kioxia Corporation
    Inventor: Takuya Haga
  • Patent number: 11327681
    Abstract: A memory system with at least one namespace includes a memory device and a controller. The memory device includes a plurality of single-level cell (SLC) buffers and a plurality of memory blocks, wherein each memory block includes a plurality of memory cells, each memory cell storing multi-bit data, and is allocated for a respective one of a plurality of zones, wherein each of the at least one namespace is divided by at least some of the plurality of zones. The controller is configured to receive a program request related to at least one application program executed by a host, to determine at least one zone designated by the at least one application program as an open state, and to control the memory device to perform a program operation on at least one memory block allocated for an open state zone.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: May 10, 2022
    Assignee: SK hynix Inc.
    Inventors: Hee Chan Shin, Young Ho Ahn, Yong Seok Oh, Jhu Yeong Jhin
  • Patent number: 11328756
    Abstract: A semiconductor system includes a controller configured to output a clock, a command and an address; and a semiconductor device configured to generate a flag signal by detecting an input time of the command, which is input in synchronization with the clock in a write auto-precharge operation based on the command, and configured to generate an internal address for performing the write auto-precharge operation, by serializing the address and then parallelizing the flag signal and the serialized address.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: May 10, 2022
    Assignee: SK hynix Inc.
    Inventors: Geun Ho Choi, Ki Hun Kwon
  • Patent number: 11327657
    Abstract: The present disclosure relates to a memory system and an operating method thereof. The memory system may include a shared memory device to store data, a sharing manager to store operation policy information and to autonomously generate a first internal command by using the operation policy information during an auto mode started in response to receiving an auto mode start command from a host, and a memory controller to generate a second internal command for controlling the shared memory device in response to the first internal command.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 10, 2022
    Assignee: SK hynix Inc.
    Inventor: Min Soo Lim
  • Patent number: 11321230
    Abstract: A memory system may include: a memory device including a plurality of memory dies suitable for storing data; and a controller operatively coupled to the memory dies of the memory device via a plurality of channels, the controller being suitable for checking the plurality of channels, independently selecting best transmission channels and best reception channels among the plurality of channels according to states of the channels, requesting performance of command operations corresponding to received commands through the best transmission channels to the memory dies, and receiving performance results of the command operations through the best reception channels from the memory dies.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: May 3, 2022
    Assignee: SK hynix Inc.
    Inventors: Ik-Sung Oh, Jin-Woong Kim
  • Patent number: 11321242
    Abstract: Techniques for implementing early acknowledgement for translation lookaside buffer (TLB) shootdowns are provided. In one set of embodiments, a first (i.e., remote) processing core of a computer system can receive an inter-processor interrupt (IPI) from a second (i.e., initiator) processing core of the computer system for performing a TLB shootdown of the first processing core. Upon receiving the IPI, an interrupt handler of the first processing core can communicate an acknowledgement to the second processing core that the TLB of the first processing core has been flushed, prior to actually flushing the TLB. (A simplified code sketch follows this entry.)
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: May 3, 2022
    Assignee: VMware, Inc.
    Inventors: Michael Wei, Nadav Amit, Amy Tai
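    The key idea above is acknowledging the shootdown before the flush actually happens, so the initiator is not stalled on the remote core. The sketch below models the two cores as Python threads and the IPI as a queue message; this is a conceptual illustration only, since the real mechanism lives in an OS or hypervisor interrupt path.

      import threading, queue

      ack = threading.Event()
      ipi_queue = queue.Queue()

      def flush_local_tlb():
          print("remote core: TLB actually flushed")

      def remote_core_interrupt_handler():
          ipi_queue.get()      # receive the shootdown IPI
          ack.set()            # acknowledge *before* flushing (the early ack)
          flush_local_tlb()    # the flush completes afterwards

      def initiator_core():
          ipi_queue.put("tlb-shootdown")
          ack.wait()           # the initiator resumes as soon as the ack arrives
          print("initiator core: ack received, continuing")

      remote = threading.Thread(target=remote_core_interrupt_handler)
      remote.start()
      initiator_core()
      remote.join()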
  • Patent number: 11323246
    Abstract: A system stores transaction data in a ring chain architecture. A ring chain comprises blocks of data stored as a length-limited block chain in a ring buffer configuration. A block of transactions is stored on a ring chain until enough new blocks are added to overwrite the ring buffer with new blocks. The system stores multiple ring chains that update at varying frequencies. A new block on a lower frequency ring chain stores an aggregation of data from the blocks that were added to a higher frequency ring chain in the time since the previous addition of a block to the lower frequency ring chain. Thus, a system of ring chains stores progressively summarized state transition data over progressively longer time intervals while maintaining immutability of the record and reducing storage requirements. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: May 3, 2022
    Assignee: The Bank of New York Mellon
    Inventors: Daniel DeValve, Swaminathan Bhaskar, Hood Qaim-Maqami
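    A ring chain is a hash chain whose blocks live in a fixed-size ring buffer, with slower chains summarizing faster ones. The Python sketch below is a minimal illustration; the aggregation ratio N, the capacities, and the use of SHA-256 over JSON are assumptions rather than details from the patent.

      import hashlib, json

      class RingChain:
          """Length-limited hash chain kept in a ring buffer (sketch)."""
          def __init__(self, capacity):
              self.capacity, self.blocks, self.next_slot = capacity, [], 0
              self.prev_hash = "0" * 64

          def append(self, payload):
              block = {"prev": self.prev_hash, "payload": payload}
              block["hash"] = hashlib.sha256(
                  json.dumps(block, sort_keys=True).encode()).hexdigest()
              if len(self.blocks) < self.capacity:
                  self.blocks.append(block)
              else:
                  self.blocks[self.next_slot] = block      # overwrite the oldest block
              self.next_slot = (self.next_slot + 1) % self.capacity
              self.prev_hash = block["hash"]
              return block

      # Two chains at different frequencies: every N fast-chain blocks are
      # summarized into one block on the slow chain (N is an assumed ratio).
      fast, slow, N, pending = RingChain(16), RingChain(16), 4, []
      for tx in range(10):
          pending.append(fast.append({"tx": tx})["hash"])
          if len(pending) == N:
              slow.append({"aggregated": pending})
              pending = []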
  • Patent number: 11320987
    Abstract: Methods, systems, and devices for memory can include techniques for identifying first quantities of write counts for a first plurality of super management units (SMUs) in a mapped region of a memory sub-system, identifying, by a hardware component of the memory sub-system, a first SMU of the first plurality that includes a fewest quantity of write counts of the first quantity of write counts, and performing a wear-leveling operation based at least in part on a first quantity of write counts of the first SMU of the first plurality in the mapped region being less than a second quantity of write counts of a second SMU of a second plurality of SMUs in an unmapped region of the memory sub-system. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: May 3, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Fangfang Zhu, Wei Wang, Jiangli Zhu, Ying Yu Tai
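    The trigger condition above compares the least-written mapped SMU against an SMU in the unmapped region. A minimal Python sketch follows; which unmapped SMU is used for the comparison, and what the resulting operation looks like, are assumptions.

      def wear_level_step(mapped_writes, unmapped_writes):
          """mapped_writes / unmapped_writes: dicts mapping SMU id -> write count."""
          coldest_mapped = min(mapped_writes, key=mapped_writes.get)
          # The abstract does not say which unmapped SMU is compared; using the
          # coldest unmapped SMU here is an assumption.
          coldest_unmapped = min(unmapped_writes, key=unmapped_writes.get)
          if mapped_writes[coldest_mapped] < unmapped_writes[coldest_unmapped]:
              return ("swap", coldest_mapped, coldest_unmapped)
          return ("no-op", None, None)

      print(wear_level_step({"smu0": 120, "smu1": 40}, {"smu7": 90, "smu8": 300}))
      # ('swap', 'smu1', 'smu7')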
  • Patent number: 11316530
    Abstract: A method, system, and computer program product for data compression in storage clients. In some embodiments, a storage client for accessing a storage service from a computer program is provided. A compression method is provided in the storage client to reduce the size of data objects. The frequency of compressing data from the computer program, or the compression algorithm itself, is varied based on an assessment of the costs and benefits of compressing the data. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventor: Arun Iyengar
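    The abstract describes adjusting how often data is compressed based on the measured cost and benefit of compression. The Python sketch below is one crude way to do that; the thresholds, the doubling/halving policy, and the use of zlib are assumptions for illustration.

      import time, zlib

      class AdaptiveCompressor:
          """Sketch: vary how often objects are compressed based on measured
          cost (CPU time) and benefit (space saved)."""
          def __init__(self):
              self.compress_every = 1     # 1 = compress every object
              self.counter = 0

          def put(self, data: bytes):
              self.counter += 1
              if self.counter % self.compress_every:
                  return False, data                    # skipped compression this time
              start = time.perf_counter()
              compressed = zlib.compress(data)
              cost = time.perf_counter() - start
              benefit = 1 - len(compressed) / max(len(data), 1)
              # Assumed policy: back off when compression is expensive or barely
              # shrinks the object, lean in when it pays off.
              if benefit < 0.1 or cost > 0.01:
                  self.compress_every = min(self.compress_every * 2, 64)
              else:
                  self.compress_every = max(self.compress_every // 2, 1)
              return True, compressed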
  • Patent number: 11314450
    Abstract: A method of operating a storage device includes receiving, at the storage device, a meta information transfer command based on a data read request. The meta information transfer command is received from a host device. The method further includes receiving, at the storage device, a data read command corresponding to the data read request and the meta information transfer command. The data read command is received from the host device. The method further includes receiving, at the storage device, a plurality of meta data corresponding to the data read request and the meta information transfer command. The plurality of meta data is received from the host device. The method further includes performing a data read operation, at the storage device, based on the data read command and the plurality of meta data.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: April 26, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dong-Woo Kim, Dong-Min Kim, Song-Ho Yoon, Wook-Han Jeong
  • Patent number: 11311801
    Abstract: A cloud gaming system includes a storage system and a compute system connected through a PCIe fabric. The compute system generates a command buffer for a read operation, writes the command buffer to compute system memory, and notifies the storage system about the command buffer. The storage system reads the command buffer in the compute system memory and processes the command buffer to read requested data. In one embodiment, the storage system writes the requested data in the compute system memory and notifies the compute system about the requested data in the compute system memory, and the compute system reads the requested data from its memory. In another embodiment, the storage system writes the requested data in the storage system memory and notifies the compute system about the requested data in the storage system memory, and the compute system reads the requested data from the storage system memory.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: April 26, 2022
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Roelof Roderick Colenbrander
  • Patent number: 11307794
    Abstract: Memory systems, memory controllers, and operation methods of the memory systems are disclosed. In one example aspect, the memory system may suspend a target operation, such as a program operation or an erase operation, based on whether or not to execute a first operation of resetting a reference read bias when a failure occurs in a read operation executed after the target operation is suspended, and on the number of times the target operation has been suspended. In this way, the memory system may reduce the delay associated with the suspension of program operations and erase operations.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: April 19, 2022
    Assignee: SK hynix Inc.
    Inventors: Seung Gu Ji, Hyung Min Lee
  • Patent number: 11308432
    Abstract: An approach is provided for generating an interface. Using natural language processing and natural language understanding, ingredients are derived from audio data from a customer's spoken order of ingredients of a food item. Locations of the ingredients in a preparation area are identified. A location of an employee is determined. Distances of the ingredient locations to the employee location are calculated. A sequence of selecting the ingredients is determined so that the employee selecting the ingredients in the sequence optimizes a speed at which the food item is prepared. An overlay to an interface for viewing in an augmented reality (AR) headset worn by the employee is generated and displayed. The overlay includes indicators overlaying an image of the preparation area. The indicators mark the ingredients, mark the sequence of selecting the ingredients, and distinguish the ingredients from other ingredients in the preparation area.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: April 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Jacob Thomas Covell, Clarissa Ho, Robert Huntington Grant, Zachary A. Silverstein
  • Patent number: 11301150
    Abstract: The present technology relates to an electronic device. A memory controller according to the present technology has improved map update performance. The memory controller controls a memory device that stores logical to physical (L2P) map data indicating a mapping relationship between a logical address and a physical address of data. The memory controller includes a map data storage and a map data manager. The map data storage stores physical to logical (P2L) map data generated based on a logical address corresponding to a request received from a host. The map data manager performs a map update operation for the L2P map data by using some of the entire P2L map data stored in the map data storage, according to an amount of the P2L map data stored in the map data storage.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: April 12, 2022
    Assignee: SK hynix Inc.
    Inventor: Hye Mi Kang
  • Patent number: 11301140
    Abstract: A storage server includes an interface to a storage over fabric network, a plurality of input/output (I/O) queues (IOQs), a plurality of non-volatile data storage devices to store data received from a host computer system over the interface to the storage over fabric network, and a processor to set a maximum number of the IOQs to be provisioned for the host computer system and a maximum depth of the IOQs to be provisioned for the host computer system.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: April 12, 2022
    Assignee: Intel Corporation
    Inventors: Phil C. Cayton, Rajalaxmi Angadi, David B. Minturn
  • Patent number: 11294892
    Abstract: A database-management system (DBMS) archives a record of a database table by updating the record's unique "Archived" field. This indicates that the record should be considered to have been archived despite the fact that the record has not been physically moved to a distinct archival storage area. When a query requests access to the table, the DBMS determines whether the query requests access to only archived data, only active data, or both. If both, the DBMS searches the entire table. Otherwise, the DBMS scans each record's Archived field to consider only those records that satisfy the query's requirement for either archived or active data. If the DBMS incorporates Multi-Version Concurrency Control (MVCC) technology, the DBMS combines this procedure with MVCC's time-based version-selection mechanism. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Luis Eduardo Oliveira Lizardo, Felix Beier, Knut Stolze, Reinhold Geiselhart
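    Because archiving only flips a per-record field, query processing is a matter of deciding which records to scan. A minimal Python sketch follows; the column name 'archived' and the row-dict representation are assumptions, and the MVCC integration is omitted.

      def rows_for_query(table, wants_archived, wants_active):
          """table: list of row dicts, each with a boolean 'archived' field."""
          if wants_archived and wants_active:
              return list(table)                       # both kinds: scan everything
          if wants_archived:
              return [row for row in table if row["archived"]]
          return [row for row in table if not row["archived"]]

      def archive(row):
          row["archived"] = True    # mark in place; no physical move to archive storage

      table = [{"id": 1, "archived": False}, {"id": 2, "archived": False}]
      archive(table[1])
      print(rows_for_query(table, wants_archived=False, wants_active=True))
      # [{'id': 1, 'archived': False}]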
  • Patent number: 11294820
    Abstract: A memory sub-system configured to manage programming mode transitions to accommodate a constant size of data transfer between a host system and the memory sub-system. The memory sub-system counts single-page transitions of atomic programming modes performed within the memory sub-system and determines whether or not to allow any two-page transition of atomic programming modes based on whether an odd or even number of the single-page transitions have been counted. When an odd number of the transitions have been counted, no two-page transition is allowed; otherwise, one or more two-page transitions are allowable. A next transition of atomic programming modes is selected based on the determining of whether or not to allow any two-page transitions. (A minimal code sketch of the parity rule follows this entry.)
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 5, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Sanjay Subbarao, James Fitzpatrick
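    The decision above is a parity check on the count of single-page transitions. The Python sketch below captures just that rule; the method names and the way the next transition is chosen are assumptions.

      class TransitionScheduler:
          """Sketch of the parity rule: a two-page transition of atomic
          programming modes is allowed only after an even number of
          single-page transitions."""
          def __init__(self):
              self.single_page_transitions = 0

          def record_single_page_transition(self):
              self.single_page_transitions += 1

          def two_page_transition_allowed(self):
              return self.single_page_transitions % 2 == 0

          def next_transition(self):
              # With odd parity another single-page transition is forced;
              # otherwise a two-page transition may be selected (policy assumed).
              return "two-page" if self.two_page_transition_allowed() else "single-page"

      s = TransitionScheduler()
      s.record_single_page_transition()
      print(s.next_transition())    # 'single-page' (odd count so far)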
  • Patent number: 11288049
    Abstract: Described are various embodiments of a source-to-source compiler, compilation method, and computer-readable medium for predictable memory management. One embodiment is described as a memory management system operable on input source code for an existing computer program, the system comprising: a computer-readable medium having computer-readable code portions stored thereon to implement, when executed, a deterministic memory manager (DMM), wherein said code portions comprise smart pointer code portions and associated node pointer code portions for implementing a smart pointer that automatically corrects for memory misallocations in target memory allocation source code portions.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: March 29, 2022
    Inventor: Philippe Bouchard
  • Patent number: 11288142
    Abstract: The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: March 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Elden Gregory Bishop, Jeffrey Chao
  • Patent number: 11288001
    Abstract: Aspects include receiving a request from a requesting system to move data from a source memory on a source system to a target memory on a target system. The receiving is at a first hardware engine configured to access the source memory and the target memory. In response to receiving the request, the first hardware engine reads the data from the source memory and writes the data to the target memory. In response to the reading being completed, the first hardware engine transmits a data clearing request to a second hardware engine that is configured to access the source memory. The data clearing request specifies a location of the data in the source memory to be cleared.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: March 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Scot Rider, Marcel Schaal
  • Patent number: 11275683
    Abstract: Example embodiments of the present disclosure provide a method, an apparatus, a device and a computer-readable storage medium for storage management. The method for storage management includes: obtaining an available channel mode of a plurality of channels in a memory of a data processing system, the available channel mode indicating availabilities of the plurality of channels, and each of the plurality of channels being associated with a set of addresses in the memory; obtaining a channel data-granularity of the plurality of channels, the channel data-granularity indicating a size of a data block that can be carried on each channel; obtaining a target address of data to be transmitted in the memory; and determining a translated address corresponding to the target address based on the available channel mode and the channel data-granularity.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: March 15, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xianglun Leng, Yong Wang, Wei Qi, Zhengze Qiu, Yang Yan
  • Patent number: 11269764
    Abstract: A storage system and method for adaptive scheduling of background operations are provided. In one embodiment, after a storage system completes a host operation in the memory, the storage system remains in a high power mode for a period of time, after which the storage system enters a low-power mode. The storage system estimates whether there will be enough time to perform a background operation in the memory during the period of time without the background operation being interrupted by another host operation. In response to estimating that there will be enough time to perform the background operation in the memory without the background operation being interrupted by another host operation, the storage system performs the background operation in the memory. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: March 8, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Judah Gamliel Hahn, Alexander Bazarsky, Ariel Navon, David Gur
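    The scheduling decision above is an estimate: will the background operation fit in the remaining high-power window without colliding with the next host command? A minimal Python sketch under assumed inputs (window length, expected duration, mean host inter-arrival time) is shown below.

      def should_run_background_op(high_power_window_s, expected_bg_duration_s,
                                   mean_host_interarrival_s):
          """Run the background operation only if it is expected to finish inside
          the remaining high-power window and before the next host command is
          likely to arrive (both inputs are assumed estimates)."""
          fits_in_window = expected_bg_duration_s <= high_power_window_s
          unlikely_interrupted = expected_bg_duration_s <= mean_host_interarrival_s
          return fits_in_window and unlikely_interrupted

      # 40 ms garbage-collection step, 100 ms left in the high-power window,
      # host commands arriving roughly every 60 ms on average:
      print(should_run_background_op(0.100, 0.040, 0.060))   # True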
  • Patent number: 11270745
    Abstract: A method of foreground auto-calibrating a data reception window for a DRAM system is disclosed. The method comprises receiving data strobe and data from a DRAM of the DRAM system, capturing a data strobe clock according to the received data strobe, generating three time points with a period of the data strobe clock, sampling the data at the three time points, to obtain three sampled data, determining whether to adjust positions of the three time points according to a comparison among the three sampled data, and configuring the valid data reception window according to the positions of the three time points when determining not to adjust the positions of the three time points.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: March 8, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Shih-Chang Chen, Chun-Chi Yu, Chih-Wei Chang, Kuo-Wei Chi, Fu-Chin Tsai, Shih-Han Lin, Gerchih Chou
  • Patent number: 11269615
    Abstract: Methods, apparatus, and processor-readable storage media for automatically orchestrating deployments of software-defined storage stacks are provided herein. An example computer-implemented method includes obtaining a software-defined storage deployment request from at least one user; determining a request type associated with the software-defined storage deployment request by processing at least a portion of payload content of the software-defined storage deployment request; orchestrating one or more tasks required for carrying out the requested software-defined storage deployment based at least in part on the determined request type and the processed payload content; and performing at least one automated action based at least in part on the one or more orchestrated tasks.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: March 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Alexander Hoppe, Alik Saring, Ian D. Bibby, Trevor H. Dawe, Sean R. Gallacher
  • Patent number: 11269559
    Abstract: According to one embodiment, a data processing device includes a user space, which includes a user space thread including a plurality of coroutines, and a file system. The file system is configured to: allocate a plurality of processes generated by an application to the plurality of coroutines; check the plurality of coroutines in order; when a first process included in the plurality of processes is allocated to a first coroutine included in the plurality of coroutines, write a first IO request based on the first process in a submission queue; and when the submission queue is filled, or when checking the plurality of coroutines is finished, transmit the first IO request written in the submission queue to a storage device.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: March 8, 2022
    Assignee: Kioxia Corporation
    Inventors: Hidekazu Tadokoro, Takeshi Ishihara, Yohei Hasegawa
  • Patent number: 11264082
    Abstract: A memory device comprises a first memory area including a first memory cell array having a plurality of first memory cells each for storing N-bit data, where N is a natural number, and a first peripheral circuit for controlling the first memory cells according to an N-bit data access scheme and disposed below the first memory cell array, a second memory area including a second memory cell array having a plurality of second memory cells each for storing M-bit data, where M is a natural number greater than N, and a second peripheral circuit for controlling the second memory cells according to an M-bit data access scheme and disposed below the second memory cell array, wherein the first memory area and the second memory area are included in a single semiconductor chip and share an input and output interface, and a controller configured to generate calculation data by applying a weight stored in the first memory area to sensing data in response to receiving the sensing data obtained by an external sensor, and sto
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: March 1, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Taehong Kwon, Daeseok Byeon, Chanho Kim, Taehyo Kim
  • Patent number: 11256749
    Abstract: A graph data processing method and a distributed system are disclosed. The distributed system includes a master node and a plurality of worker nodes. The master node obtains graph data and divides the graph data to obtain P shards, where the P shards include a first shard and a second shard. The master node determines at least two edge sets from each shard, schedules at least two edge sets included in the first shard to at least two worker nodes for processing, and schedules an associate edge set included in the second shard to the at least two worker nodes for processing, where the associate edge set is an edge set that includes an outgoing edge of a target vertex corresponding to the first shard.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 22, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yinglong Xia, Jian Xu, Mingzhen Xia
  • Patent number: 11252065
    Abstract: The disclosure describes methods and systems for performing time synchronization in a heterogeneous system. In one example, a method includes, for each secondary device of one or more secondary devices in a network, determining, by a computing system, one or more time synchronization characteristics for the respective secondary device; and generating, by the computing system and based on at least the respective one or more time synchronization characteristics for each respective secondary device of the one or more secondary devices in the network, a time synchronization report for the network, wherein the one or more time synchronization characteristics include health data for the one or more secondary devices.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: February 15, 2022
    Assignee: Equinix, Inc.
    Inventors: Yakov Kamen, Yury Kamen, Alex Wilms, Ankur Sharma, David Gofman, Danjue Li, Stanley Chernavsky
  • Patent number: 11249648
    Abstract: Various implementations described herein relate to systems and methods for defining an optimal transfer and processing unit (OTPU) size for communicating messages for a plurality of non-volatile memory (NVM) sets of a non-volatile memory of an SSD. Each of the plurality of NVM sets corresponds to one of a plurality of regions of the non-volatile memory. Each of the plurality of regions includes a plurality of dies.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: February 15, 2022
    Assignee: Kioxia Corporation
    Inventor: Amit Rajesh Jain
  • Patent number: 11249915
    Abstract: Methods and systems are disclosed for populating a fail-over cache. When host computer systems in a system each have a content based read cache, the methods and systems provide several functions applied in different orders for determining blocks that are to be included in the fail-over cache. Each function attempts a different strategy for combining the contents of the caches of each host computer system into the fail-over cache. If any strategy is successful, then the fail-over cache is placed into service. If all of the strategies fail, then an eviction strategy is employed in which blocks are evicted from each cache until the combination of caches meets a requirement of the fail-over cache, which, in one embodiment, is the size of the fail-over cache. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 15, 2022
    Assignee: VMware, Inc.
    Inventors: Vikas Suryawanshi, Kashish Bhatia, Zubraj Singha
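    The population logic above tries a series of combining strategies and falls back to eviction only if none fits. The Python sketch below illustrates that control flow; the union strategy, the eviction order, and the capacity check by entry count are assumptions.

      def union_of_caches(host_caches):
          merged = {}
          for cache in host_caches:
              merged.update(cache)          # cache: dict of block id -> block data
          return merged

      def populate_failover_cache(host_caches, capacity, strategies):
          """Try each combining strategy in order; fall back to eviction."""
          for strategy in strategies:
              candidate = strategy(host_caches)
              if candidate is not None and len(candidate) <= capacity:
                  return candidate                      # first strategy that fits wins
          # Every strategy failed: evict from the largest cache until the union fits.
          caches = [dict(c) for c in host_caches]
          while len(union_of_caches(caches)) > capacity:
              biggest = max(caches, key=len)
              biggest.pop(next(iter(biggest)))          # eviction policy is an assumption
          return union_of_caches(caches)

      strategies = [union_of_caches]        # e.g. try the simple union first
      print(populate_failover_cache([{1: "a"}, {1: "a", 2: "b"}],
                                    capacity=1, strategies=strategies))   # {2: 'b'}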
  • Patent number: 11249722
    Abstract: A semiconductor device includes a dynamic reconfiguration processor that performs data processing for input data sequentially input and outputs the results of data processing sequentially as output data, an accelerator including a parallel arithmetic part that performs arithmetic operation in parallel between the output data from the dynamic reconfiguration processor and each of a plurality of predetermined data, and a data transfer unit that selects the plurality of arithmetic operation results by the accelerator in order and outputs them to the dynamic reconfiguration processor.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: February 15, 2022
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Taro Fujii, Takao Toi, Teruhito Tanaka, Katsumi Togawa
  • Patent number: 11249687
    Abstract: Every time a node computer in a computer system receives multidimensional data from a data source, the receiving node computer: writes the multidimensional data; reads the multidimensional data; analyzes the read multidimensional data; and outputs a result of the analysis. Such writing of the multidimensional data is data locality EC processing (Erasure Coding with data locality). The data locality EC processing is to: write all of one or more data chunks constituting the multidimensional data to the node computer; and write, for each of the one or more data chunks, a parity of the data chunk to one or more node computers other than the receiving node computer.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: February 15, 2022
    Assignee: HITACHI, LTD.
    Inventor: Tomonori Esaka
  • Patent number: 11243846
    Abstract: Embodiments for replicating data in a disaggregated computing system. A memory pool is allocated, where the memory pool includes allocated memory elements at a first site and allocated memory elements at a second site. The allocated memory elements are mapped at the first site to the allocated memory elements at the second site. A replication operation is initiated to mirror data stored within the allocated memory elements at the first site to the allocated memory elements at the second site. The allocated memory elements at the first site are directly connected through an independent networking connection to the allocated memory elements at the second site such that the replication operation is processed exclusively through compute resources at the first site.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: February 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Valentina Salapura, John A. Bivens, Min Li, Ruchi Mahindru, Eugen Schenfeld
  • Patent number: 11244421
    Abstract: Memories and methods for storing untransformed primitive blocks of variable size in a memory structure of a graphics processing system, the untransformed primitive blocks having been generated by geometry processing logic of the graphics processing system. The method includes: storing an untransformed primitive block in the memory structure, and increasing, by a predetermined amount, a current total amount of memory allocated for storing untransformed primitive blocks; determining an unused amount of the current total amount of memory allocated for storing untransformed primitive blocks; receiving a new untransformed primitive block for storing in the memory structure, and determining whether a size of the new untransformed primitive block is less than or equal to the unused amount; and if it is determined that the size of the new untransformed primitive block is less than or equal to the unused amount, storing the new untransformed primitive block in the memory structure. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: February 8, 2022
    Assignee: Imagination Technologies Limited
    Inventor: Robert Brigg
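    The bookkeeping above grows the allocation by a fixed step on every store and admits a new block only if it fits in the unused slack. A minimal Python sketch follows; the growth-step value and what happens when a block does not fit are assumptions.

      GROWTH_STEP = 64 * 1024        # the "predetermined amount" (value assumed)

      class PrimitiveBlockStore:
          """Bookkeeping sketch for the allocation scheme described above."""
          def __init__(self):
              self.allocated = 0     # current total memory allocated for blocks
              self.used = 0          # bytes occupied by stored blocks

          def store(self, block_size):
              self.used += block_size
              self.allocated += GROWTH_STEP    # grow the allocation by the fixed step

          def try_store(self, new_block_size):
              unused = self.allocated - self.used
              if new_block_size <= unused:     # fits in already-allocated slack
                  self.store(new_block_size)
                  return True
              return False   # the abstract does not say what happens in this case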
  • Patent number: 11237980
    Abstract: A file page table management method. The file page table management method is applied to a storage system in which a file system is created in a memory. According to the file page table management method, a mapping manner of a file page table can be dynamically adjusted based on an access type of an access request for accessing the memory, thereby improving memory access efficiency and saving memory space.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: February 1, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Huan Zhang, Jun Xu, Guanyu Zhu
  • Patent number: 11238152
    Abstract: Some examples relate generally to computer architecture software for data classification and information security and, in some more particular aspects, to verifying audit events in a file system.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: February 1, 2022
    Assignee: Rubrik, Inc.
    Inventors: Di Wu, Chenyang Zhou, Shanthi Kiran Pendyala
  • Patent number: 11237734
    Abstract: An apparatus having memory dies with a memory cell array divided into a plurality of data segments. A stagger circuit selects a common command signal and sets a column access signal to select a data segment to be accessed based on the common command signal and/or an individual command signal to perform a memory operation corresponding to the selected common command signal on the selected data segment. A data bus connects the memory cell arrays to form data units with each data unit including a data segment from each memory cell array and configured such that the data segments are connected in parallel to the data bus and use a same line of the data bus. The stagger circuits are configured such that data segments identified for activation in the plurality of memory dies are not part of a same data unit.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: February 1, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Yuan He
  • Patent number: 11238920
    Abstract: One example of the present disclosure includes performing a comparison operation in memory using a logical representation of a first value stored in a first portion of a number of memory cells coupled to a sense line of a memory array and a logical representation of a second value stored in a second portion of the number of memory cells coupled to the sense line of the memory array. The comparison operation compares the first value to the second value, and the method can include storing a logical representation of a result of the comparison operation in a third portion of the number of memory cells coupled to the sense line of the memory array.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 1, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kyle B. Wheeler, Troy A. Manning, Richard C. Murphy