Patents Examined by Jae U Yu
  • Patent number: 12159046
    Abstract: A data storage method and apparatus includes receiving a data write request, where the data write request carries to-be-written data, and the to-be-written data includes at least one data block; calculating a fingerprint of each data block, where the fingerprint uniquely identifies the data block; determining whether the fingerprint of each data block exists in a fingerprint list, where the fingerprint list includes a fingerprint corresponding to a data block stored in a high-speed storage medium and a fingerprint corresponding to a data block stored in a low-speed storage medium; and performing a deduplication operation on the to-be-written data.
    Type: Grant
    Filed: December 8, 2023
    Date of Patent: December 3, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Zhi Rao
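The deduplication flow described in the abstract can be sketched roughly as follows. Function and variable names are hypothetical, and the patent does not specify a hash algorithm, so SHA-256 stands in for the fingerprint computation:

```python
import hashlib

def deduplicate(blocks, fingerprint_list):
    # Hash each data block; the digest serves as the fingerprint that
    # uniquely identifies it. Blocks whose fingerprint already appears
    # in the fingerprint list are dropped from the write.
    unique = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in fingerprint_list:
            fingerprint_list.add(fp)
            unique.append(block)
    return unique
```

In the patented scheme the fingerprint list spans both the high-speed and low-speed tiers, so a match against either tier suppresses the duplicate write.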
  • Patent number: 12159214
    Abstract: Some embodiments provide a method for executing a neural network. The method writes a first input to a first set of physical memory banks in a unified memory shared by an input processing circuit and a neural network inference circuit that executes the neural network. While the neural network inference circuit is executing the network a first time to generate a first output for the first input, the method writes a second input to a second set of physical memory banks in the unified memory. The neural network inference circuit executes a same set of instructions to read the first input from the first set of memory banks in order to execute the network the first time and to read the second input from the second set of memory banks in order to execute the network a second time to generate a second output for the second input.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: December 3, 2024
    Assignee: Perceive Corporation
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
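A minimal sketch of the double-buffering idea above, assuming a simple even/odd split of bank sets (bank counts and names are illustrative, not from the patent):

```python
def physical_bank(logical_bank, pass_index, banks_per_set=8):
    # The inference circuit runs the same set of instructions on every
    # pass; the instructions name logical banks, and a per-pass offset
    # redirects them to the alternate physical bank set in the unified
    # memory, so input N+1 can be written while input N is processed.
    offset = (pass_index % 2) * banks_per_set
    return logical_bank + offset
```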
  • Patent number: 12153539
    Abstract: An append operation is provided for using a plurality of threads on a plurality of streaming multiprocessors of a graphics processing unit. The append operation writes results into a result buffer. Executing the append operation comprises claiming, by each given thread within the plurality of threads having a result to write, a portion of a selected write-combine buffer (WCB), writing, by the given thread, the result to the portion of the selected WCB, and in response to a flush condition being met for the selected WCB, copying contents of the selected WCB to the result buffer.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: November 26, 2024
    Assignee: Oracle International Corporation
    Inventors: Kangnyeon Kim, Weiwei Gong, James Kearney, Harshada Chavan
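A single-threaded sketch of the claim/write/flush cycle described above (the real mechanism runs per-thread on a GPU; the class and method names are assumptions):

```python
class WriteCombineBuffer:
    def __init__(self, capacity, result_buffer):
        self.capacity = capacity
        self.slots = []                  # portions claimed by threads
        self.result_buffer = result_buffer

    def append(self, value):
        # A thread claims a portion of the buffer and writes its result.
        self.slots.append(value)
        # Flush condition met: copy the buffer contents to the result
        # buffer and reset the buffer for reuse.
        if len(self.slots) >= self.capacity:
            self.result_buffer.extend(self.slots)
            self.slots.clear()
```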
  • Patent number: 12147696
    Abstract: Disclosed are various embodiments for garbage collection for object-based storage systems. A first set of objects stored by an object storage service that have been accessed within a previously defined date range is identified. Then, a second set of objects stored by the object storage service is identified based at least in part on a relationship to one or more objects in the first set of objects. Next, a third set of objects stored by the object storage service that have been created prior to a predefined date is identified. Then, a subset of objects which are members of the third set of objects and not members of the first set of objects or the second set of objects is identified. Finally, a retention action is performed on individual members of the subset of objects based at least in part on a retention policy.
    Type: Grant
    Filed: December 4, 2023
    Date of Patent: November 19, 2024
    Assignee: American Express Travel Related Services Company, Inc.
    Inventors: Lakshman Chaitanya, Arindam Chatterjee, Pratap Singh Rathore, Shourya Roy, Nitish Sharma, Swatee Singh, Mohammad Torkzahrani
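The selection logic above reduces to set algebra; a sketch under the assumption that object identifiers are hashable (function name is hypothetical):

```python
def retention_candidates(recently_accessed, related, old_enough):
    # Objects created before the cutoff are candidates for the retention
    # action, unless they were accessed in the defined date range or are
    # related to an object that was.
    return old_enough - (recently_accessed | related)
```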
  • Patent number: 12147344
    Abstract: Disclosed herein are methods, systems, and processes to provide coherency across disjoint caches in clustered environments. It is determined whether a data object is owned by an owner node, where the owner node is one of multiple nodes of a cluster. If the owner node for the data object is identified by the determining, a request for the data object is sent to the owner node. However, if the owner node for the data object is not identified by the determining, a node in the cluster is selected as the owner node, and the request for the data object is sent to the owner node.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: November 19, 2024
    Assignee: Veritas Technologies LLC
    Inventors: Bhushan Jagtap, Mark Hemment, Anindya Banerjee, Ranjit Noronha, Jitendra Patidar, Kundan Kumar, Sneha Pawar
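A sketch of the ownership lookup above, using a deterministic hash to elect an owner when none is recorded (the patent does not specify a selection policy; CRC32 is an assumption):

```python
import zlib

def route_request(obj_id, owners, nodes):
    # If no owner node is recorded for the data object, select one and
    # remember it; all later requests for the object route to that node,
    # keeping the disjoint per-node caches coherent.
    if obj_id not in owners:
        owners[obj_id] = nodes[zlib.crc32(obj_id.encode()) % len(nodes)]
    return owners[obj_id]
```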
  • Patent number: 12135648
    Abstract: A data prefetching method and apparatus, and a related storage device, are provided. Data samples are collected, and an AI chip trains a prefetching model on the data samples. The AI chip then sends the prefetching model to a processor, which reads to-be-read data into a cache based on the model; offloading the training to the AI chip reduces the computing burden of the processor.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: November 5, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng Lu, Jinhu Liu
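A toy sketch of the runtime side of the scheme above: the processor consults a model (here a caller-supplied predictor standing in for the AI-trained prefetching model) and pulls the predicted block into the cache:

```python
def prefetch(history, cache, read_backend, predict_next):
    # predict_next stands in for the model trained on the AI chip; the
    # processor performs only the cheap inference step, reading the
    # predicted to-be-read data into the cache ahead of demand.
    nxt = predict_next(history)
    if nxt not in cache:
        cache[nxt] = read_backend(nxt)
    return nxt
```

A trivial stride-1 predictor (`lambda h: h[-1] + 1`) can stand in for the trained model when exercising this sketch.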
  • Patent number: 12105992
    Abstract: A method of a flash memory controller includes: using a processor to issue and generate a command signal into a control logic circuit through a bus; buffering the command signal in a specific queue of a specific channel controller of the I/O circuit; and using the arbitrator to control the specific buffer storing first transmission history information of the specific communication interface.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: October 1, 2024
    Assignee: Silicon Motion, Inc.
    Inventors: Tsu-Han Lu, Hsiao-Chang Yen
  • Patent number: 12099721
    Abstract: A system is disclosed. The system may include a computer system, which may include a processor that may execute instructions of an application that accesses an object using an object command, and a memory storing the instructions of the application. The computer system may also include a conversion module to convert the object command to a key-value (KV) command. Finally, the system may include a storage device storing data for the object and processing the object using the KV command.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: September 24, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anand Subramanian, Oscar Prem Pinto
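A hypothetical mapping from object commands to key-value commands, illustrating what such a conversion module might do (the verbs and tuple format are assumptions, not the actual interface):

```python
def object_to_kv(op, bucket, key, value=None):
    # Flatten the object name into a single KV key, then translate the
    # object verb to the corresponding KV command.
    kv_key = f"{bucket}/{key}"
    if op == "PUT":
        return ("KV_STORE", kv_key, value)
    if op == "GET":
        return ("KV_RETRIEVE", kv_key)
    if op == "DELETE":
        return ("KV_DELETE", kv_key)
    raise ValueError(f"unsupported object command: {op}")
```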
  • Patent number: 12093179
    Abstract: A microprocessor includes a load/store unit that performs store-to-load forwarding, a PIPT L2 set-associative cache, a store queue having store entries, and a load queue having load entries. Each L2 entry is uniquely identified by a set index and a way. Each store/load entry holds, for an associated store/load instruction, a store/load physical address proxy (PAP) for a store/load physical memory line address (PMLA). The store/load PAP specifies the set index and the way of the L2 entry into which a cache line specified by the store/load PMLA is allocated. Each load entry also holds associated load instruction store-to-load forwarding information. The load/store unit compares the store PAP with the load PAP of each valid load entry whose associated load instruction is younger in program order than the store instruction and uses the comparison and associated forwarding information to check store-to-load forwarding correctness with respect to each younger load instruction.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 17, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
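The saving here comes from comparing a compact proxy instead of a full physical address: because each L2 entry is uniquely named by its set index and way, equal proxies imply equal physical memory line addresses. A sketch with assumed bit widths:

```python
def make_pap(set_index, way, way_bits=2):
    # A physical address proxy (PAP) packs the L2 set index and way.
    # Together they uniquely identify the L2 entry holding the line, so
    # the store queue can compare PAPs rather than full addresses when
    # checking store-to-load forwarding correctness.
    return (set_index << way_bits) | way
```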
  • Patent number: 12087365
    Abstract: Modulation of the source voltage in a NAND-flash array read waveform can enable improved read-disturb mitigation. For example, increasing the source line voltage to a voltage with a magnitude greater than the non-idle source voltage during the read operation when the array is idle (e.g., not during sensing) enables a reduction in read disturb without the complexity arising from the consideration of multiple read types. Additional improvement in FN disturb may also be obtained on the sub-blocks in the selected SGS by increasing the source line voltage during the selected wordline ramp when the array is idle.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: September 10, 2024
    Assignee: Intel NDTM US LLC
    Inventor: Narayanan Ramanan
  • Patent number: 12079126
    Abstract: A microprocessor includes a cache memory, a store queue, and a load/store unit. Each entry of the store queue holds store data associated with a store instruction. The load/store unit, during execution of a load instruction, makes a determination that an entry of the store queue holds store data that includes some but not all bytes of load data requested by the load instruction, cancels execution of the load instruction in response to the determination, and writes to an entry of a structure from which the load instruction is subsequently issuable for re-execution an identifier of a store instruction that is older in program order than the load instruction and an indication that the load instruction is not eligible to re-execute until the identified older store instruction updates the cache memory with store data.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 3, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
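The trigger condition above is a partial overlap between queued store bytes and requested load bytes. A byte-range sketch (names and return values are illustrative):

```python
def overlap_kind(store_addr, store_size, load_addr, load_size):
    # Classify whether a queued store supplies none, some, or all of the
    # bytes the load requests.
    store = set(range(store_addr, store_addr + store_size))
    load = set(range(load_addr, load_addr + load_size))
    hit = store & load
    if not hit:
        return "none"     # the load can ignore this store
    # "partial" is the case that cancels the load until the older store
    # has updated the cache; "all" permits forwarding.
    return "all" if load <= store else "partial"
```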
  • Patent number: 12079129
    Abstract: A microprocessor includes a physically-indexed physically-tagged second-level set-associative cache. A set index and a way uniquely identify each entry. A load/store unit, during store/load instruction execution: detects that first and second portions of store/load data are to be written/read to/from different first and second lines of memory specified by first and second store physical memory line addresses, and writes to a store/load queue entry first and second store physical address proxies (PAPs) for the first and second store physical memory line addresses (and, in the store execution case, all the store data). The first and second store PAPs comprise respective set indexes and ways that uniquely identify the respective entries of the second-level cache that hold copies of the respective first and second lines of memory. The entries of the store queue are absent storage for holding the first and second store physical memory line addresses.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 3, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12073220
    Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: August 27, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12066938
    Abstract: A cache memory circuit that evicts cache lines based on which cache lines are storing background data patterns is disclosed. The cache memory circuit can store multiple cache lines and, in response to receiving a request to store a new cache line, can select a particular one of previously stored cache lines. The selection may be performed based on data patterns included in the previously stored cache lines. The cache memory circuit can also perform accesses where the internal storage arrays are not activated in response to determining data in the location specified by the requested address is background data. In systems employing virtual addresses, a translation lookaside buffer can track the location of background data in the cache memory circuit.
    Type: Grant
    Filed: July 27, 2023
    Date of Patent: August 20, 2024
    Assignee: Apple Inc.
    Inventor: Michael R. Seningen
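A sketch of the eviction preference described above, assuming an all-zero line models the "background data pattern" (the patent's actual pattern detection and fallback policy are not specified here):

```python
def pick_victim(cache_lines, background=b"\x00" * 64):
    # Prefer evicting a line that holds only the background pattern:
    # it carries no information that the storage arrays must retain,
    # so the slot can be reclaimed cheaply.
    for idx, line in enumerate(cache_lines):
        if line == background:
            return idx
    return 0   # otherwise fall back to a default policy (assumption)
```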
  • Patent number: 12066943
    Abstract: The present disclosure relates to the field of hardware chip design, and particularly to an alias processing method and system based on L1D and L2 caches and a related device. A method for solving the alias problem of an L1D cache based on an L1D cache-L2 cache structure, and a corresponding system module, are disclosed. The method can maximize hardware resource efficiency without constraining the chip structure, hardware system type, operating system compatibility, or chip performance; meanwhile, the cache-based module does not significantly increase the power consumption of the whole system, and thus has good expandability.
    Type: Grant
    Filed: November 20, 2023
    Date of Patent: August 20, 2024
    Assignee: Rivai Technologies (Shenzhen) Co., Ltd.
    Inventors: Muyang Liu, Rong Chen, Zhilei Yang
  • Patent number: 12066935
    Abstract: A central processing unit (CPU) system including a CPU core can include an adaptive cache compressor, which is capable of monitoring a miss profile of a cache. The adaptive cache compressor can compare the miss profile to a miss threshold. Based on this comparison, the adaptive cache compressor can determine whether to enable compression of the cache.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: August 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Alper Buyuktosunoglu, Brian Robert Prasky, Deanna Postles Dunn Berger
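The decision above reduces to a miss-rate threshold check; a sketch with an assumed threshold value:

```python
def should_compress(misses, accesses, miss_threshold=0.10):
    # Compare the monitored miss profile against the miss threshold:
    # compression is enabled only when misses are frequent enough that
    # the extra effective capacity pays for the compression latency.
    miss_rate = misses / accesses if accesses else 0.0
    return miss_rate > miss_threshold
```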
  • Patent number: 12067260
    Abstract: A method of processing transactions associated with a command in a storage system is provided. The method includes receiving, at a first authority of the storage system, a command relating to user data; sending a transaction of the command from the first authority to a second authority of the storage system, wherein a token accompanies the transaction; and writing data in accordance with the transaction, as permitted by the token, into a partition that is allocated to the second authority in a storage device of the storage system.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: August 20, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: John Hayes, Robert Lee, Igor Ostrovsky, Peter Vajgel
  • Patent number: 12066936
    Abstract: An example cache memory includes a schedule module, control modules, a datapath, and an output module. The cache memory receives requests to read and/or write cache lines. The schedule module maintains a queue of the requests. The schedule module may assign the requests to the control modules based on the queue. A control module, which receives a request, controls the datapath to execute the request, i.e., to read or write the cache line. The control module can control the execution by the datapath from start to end. Multiple control modules may control parallel executions by the datapath. The output module outputs, e.g., to a processor, responses of the cache memory to the requests after the executions. A response may include a cache line. The cache memory may include a buffer that temporarily stores cache lines before the output to avoid deadlock in the datapath during the parallel executions of requests.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: August 20, 2024
    Assignee: Habana Labs Ltd.
    Inventors: Ehud Eliaz, Eitan Joshua, Yori Teichman, Ofer Eizenberg
  • Patent number: 12067267
    Abstract: A system includes a semiconductor device configured to store data and a controller communicatively coupled to the semiconductor device. The semiconductor device and the controller are configured to: in response to determining that particular data stored in the semiconductor device satisfies a reliability condition, obtain first readout data by reading the particular data at a first read voltage, and obtain second readout data by reading the particular data at a second read voltage. The second read voltage is different from the first read voltage. The semiconductor device and the controller are configured to compare the first readout data and the second readout data and obtain a comparison result; and, based on the comparison result, determine whether to perform an error correction process on the particular data.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: August 20, 2024
    Assignee: Macronix International Co., Ltd.
    Inventors: Shih-Chou Juan, Wei-Yan Jang
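A sketch of the comparison step above, treating the two readouts as bit patterns (the allowed-difference threshold is an assumption; the patent only requires a comparison result):

```python
def needs_error_correction(readout1, readout2, max_diff_bits=0):
    # Data read at two different read voltages should agree; differing
    # bits suggest marginal cells, so the error correction path is taken
    # when the difference exceeds the allowed threshold.
    diff_bits = bin(readout1 ^ readout2).count("1")
    return diff_bits > max_diff_bits
```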
  • Patent number: 12067252
    Abstract: Upon determining that a reference condition is satisfied, a storage device may determine target memory dies among a plurality of memory dies included in the storage device on the basis of temperatures of the plurality of memory dies, and then write data according to the determination of the target memory dies. For example, the storage device may write data to the target memory dies in an interleaving manner, may write data to a memory die that is not a target memory die only when data is not being written to any other of the plurality of memory dies, or both. The reference condition may relate to a temperature of the storage device.
    Type: Grant
    Filed: February 1, 2023
    Date of Patent: August 20, 2024
    Assignee: SK hynix Inc.
    Inventor: Jang Hun Yun
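A sketch of the target-die selection and the interleaved write pattern described above (the temperature limit and chunking scheme are assumptions):

```python
def select_target_dies(die_temps, temp_limit):
    # Target memory dies are those below the temperature limit.
    return [i for i, t in enumerate(die_temps) if t < temp_limit]

def interleave_writes(chunks, target_dies):
    # Stripe data chunks across the target dies in round-robin order,
    # modeling the interleaving manner of writes.
    return [(target_dies[i % len(target_dies)], chunk)
            for i, chunk in enumerate(chunks)]
```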