Prioritizing Patents (Class 711/158)
-
Patent number: 12079516
Abstract: System and techniques for host-preferred memory operation are described herein. At a memory-side cache of a memory device that includes accelerator hardware, a first memory operation can be received from a host. A determination is made that the first memory operation corresponds to a cache set based on an address of the first memory operation. A second memory operation can be received from the accelerator hardware. Another determination can be made that the second memory operation corresponds to the cache set. Here, the first memory operation can be enqueued in a host queue of the cache set and the second memory operation can be enqueued in an internal request queue of the cache set. The first memory operation and the second memory operation can be executed as each is dequeued.
Type: Grant
Filed: August 30, 2022
Date of Patent: September 3, 2024
Assignee: Micron Technology, Inc.
Inventors: Tony M. Brewer, Dean E. Walker
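The dual-queue idea above lends itself to a small illustration. Below is a minimal Python sketch, assuming a hypothetical set-index rule and a host-first dequeue policy that the abstract does not specify:

```python
from collections import deque

# Minimal sketch (hypothetical names): each cache set keeps two queues so
# host traffic and accelerator-internal traffic are enqueued separately.
class CacheSet:
    def __init__(self):
        self.host_queue = deque()      # operations arriving from the host
        self.internal_queue = deque()  # operations from the accelerator hardware

    def enqueue(self, op, from_host):
        (self.host_queue if from_host else self.internal_queue).append(op)

    def dequeue_and_execute(self, prefer_host=True):
        # Execute one operation per call; the real selection policy is not
        # given in the abstract, so host-preference is an illustrative default.
        q = self.host_queue if (prefer_host and self.host_queue) else self.internal_queue
        if q:
            print("executing", q.popleft())

def set_index(address, num_sets=4):
    return address % num_sets  # toy mapping from address to cache set

sets = [CacheSet() for _ in range(4)]
sets[set_index(0x12)].enqueue("host read 0x12", from_host=True)
sets[set_index(0x12)].enqueue("accel write 0x12", from_host=False)
sets[set_index(0x12)].dequeue_and_execute()
sets[set_index(0x12)].dequeue_and_execute()
```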
-
Patent number: 12067261
Abstract: A serial presence detect (SPD) device includes a region of nonvolatile memory for SPD data and an additional region for other (e.g., vendor) use. The additional region may be subdivided into write protect regions that can be individually and independently write protected. To configure the write protection, a password key scheme is used to enter a mode whereby the write protection attributes may be configured. Another password key scheme is used to exit the write protection configuration mode.
Type: Grant
Filed: July 5, 2022
Date of Patent: August 20, 2024
Assignee: Rambus Inc.
Inventors: Aws Shallal, Chen Chen
-
Patent number: 12050917
Abstract: Instruction information generation circuitry generates instruction information. Instruction information storage circuitry comprises a plurality of elements having physical sub-elements configured to temporarily store units of instruction information. Allocation circuitry is configured to receive, from the instruction information generation circuitry, given instruction information. It determines a mapping of a plurality of ordered virtual sub-elements, such that each virtual sub-element maps onto a respective one of said physical sub-elements. The given instruction information is stored into the virtual sub-elements of a given element, according to the mapping, such that at least one virtual sub-element lower in said order has a higher priority than at least one virtual sub-element higher in said order. Sub-element deactivation circuitry is configured to track usage of said virtual sub-elements across the plurality of elements and adaptively deactivate virtual sub-elements.
Type: Grant
Filed: December 30, 2021
Date of Patent: July 30, 2024
Assignee: Arm Limited
Inventors: Houdhaifa Bouzguarrou, Thibaut Elie Lanois, Guillaume Bolbenes, Jonatan Christoffer Lövgren
-
Patent number: 12045497
Abstract: One or more embodiments of the present specification provide disk storage-based data reading methods, apparatuses, and systems. A data reading instruction sent by a client device is received. The data reading instruction includes a service attribute. Location information corresponding to the service attribute is obtained from a pre-stored index table. The location information includes block heights and offsets of data blocks in which one or more data records are located. A block height sequence is generated by sequentially arranging the block heights. Mutually exclusive continuous block height intervals are determined from the block height sequence. One or more target data blocks corresponding to a block height interval are read from a disk. The one or more data records are obtained by querying the one or more target data blocks based on the location information, and are returned to the client device.
Type: Grant
Filed: April 18, 2022
Date of Patent: July 23, 2024
Assignee: Ant Blockchain Technology (Shanghai) Co., Ltd.
Inventor: Xinying Yang
-
Patent number: 12038856
Abstract: A memory controller includes a memory channel controller that uses multiple groups of command queue and arbiter pairs. Each arbiter is coupled to a respective command queue to select memory access commands from each command queue according to predetermined criteria. Each arbiter selects from among the memory access requests in each command queue independently based on the predetermined criteria and sends selected memory access requests to a selector that serves as a second level arbiter which sends the request to a memory subchannel.
Type: Grant
Filed: October 7, 2022
Date of Patent: July 16, 2024
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: James R. Magro, Kedarnath Balakrishnan, Brendan T. Mangan
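As a rough illustration of the two-level arbitration described above, here is a minimal Python sketch; the oldest-first criterion and the round-robin second-level policy are assumptions, since the abstract only says "predetermined criteria":

```python
from collections import deque

# Each command queue has its own first-level arbiter; a second-level selector
# picks among the first-level winners before dispatch to a memory subchannel.
class QueueArbiterPair:
    def __init__(self):
        self.queue = deque()

    def arbitrate(self):
        # First-level criterion here is simply oldest-first (illustrative).
        return self.queue.popleft() if self.queue else None

class SecondLevelSelector:
    def __init__(self, pairs):
        self.pairs = pairs
        self.next_index = 0

    def select(self):
        # Round-robin over the pairs, returning the first available command.
        for _ in range(len(self.pairs)):
            pair = self.pairs[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.pairs)
            cmd = pair.arbitrate()
            if cmd is not None:
                return cmd
        return None

pairs = [QueueArbiterPair() for _ in range(2)]
pairs[0].queue.extend(["read A", "read B"])
pairs[1].queue.append("write C")
selector = SecondLevelSelector(pairs)
print([selector.select() for _ in range(3)])  # ['read A', 'write C', 'read B']
```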
-
Patent number: 12032858
Abstract: A data storage system continually monitors a loading level of processing requests from host computers relative to a predetermined threshold. In response to the loading level not exceeding a predetermined threshold, a first identification request is responded to with a full response identifying all data blocks over a first complete range of a first bulk storage operation. In response to the loading level exceeding the predetermined threshold, a second identification request is responded to with a partial response identifying a subset of data blocks over only a portion of a second complete range of a second bulk storage operation. The partial response causes a host to first process the subset of data blocks and then send an additional identification request for additional blocks of the second complete range, effectively reducing the rate of bulk storage operations and their effect on other, latency-sensitive operations such as reads and writes.
Type: Grant
Filed: March 13, 2023
Date of Patent: July 9, 2024
Assignee: Dell Products L.P.
Inventors: Vasudevan Subramanian, Vamsi K. Vankamamidi, Maher Kachmar
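A minimal sketch of the full-versus-partial identification response described above, with made-up threshold and chunk values:

```python
# Below the load threshold the array identifies every block of the bulk
# operation; above it, only a chunk is returned and the host comes back for
# the rest. The numbers are illustrative assumptions only.
LOAD_THRESHOLD = 0.8
CHUNK = 256

def respond_to_identification(all_blocks, current_load):
    if current_load <= LOAD_THRESHOLD:
        return all_blocks, False        # full response, no follow-up needed
    return all_blocks[:CHUNK], True     # partial response, host re-requests

blocks = list(range(1000))
subset, needs_more = respond_to_identification(blocks, current_load=0.93)
print(len(subset), needs_more)  # 256 True
```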
-
Patent number: 12026107
Abstract: A command control system is provided which is configured to optimally set an output timing of a RAS command and an output timing of a CAS command for access requests different from each other. The command control system is configured to, when an output timing of a second RAS command is set in a first cycle time period which is a cycle starting from the reference time point, determine whether or not the second RAS command is output to a storage device in the first cycle time period in accordance with whether or not an output timing of a first CAS command is set in a second cycle time period constituted by a prescribed number of the cycles subsequent to the reference time point.
Type: Grant
Filed: July 8, 2020
Date of Patent: July 2, 2024
Assignee: PANASONIC AUTOMOTIVE SYSTEMS CO., LTD.
Inventor: Kazuhito Tanaka
-
Patent number: 12019908
Abstract: Some examples described herein provide buffer memory pool circuitry that comprises a plurality of buffer memory circuits that store an entry identifier, a payload portion, and a next-entry pointer. The buffer memory pool circuitry further comprises a processor configured to identify an allocation request for a first virtual channel associated with a sequence of buffer memory circuits and comprising a start pointer identifying an initial buffer memory circuit. The processor is further configured to program the first virtual channel circuit based on setting the start pointer for the first virtual channel circuit to be equal to the entry identifier of the initial buffer memory circuit. The processor is also configured to monitor usage. A length of the sequence of buffer memory circuits of the first virtual channel circuit is defined by a start pointer for a second virtual channel circuit subsequent to the first virtual channel circuit.
Type: Grant
Filed: July 29, 2021
Date of Patent: June 25, 2024
Assignee: XILINX, INC.
Inventors: Krishnan Srinivasan, Shishir Kumar, Sagheer Ahmad, Abbas Morshed, Aman Gupta
-
Patent number: 12013940
Abstract: Automatic detection of software that performs unauthorized privilege escalation is disclosed. Examples disclosed herein include detecting, in an event log, a first event associated with a start of execution of a process, the first event to identify a first privilege level associated with the process, and storing the first privilege level in a data structure associated with the process. Disclosed examples also include detecting, in the event log by executing an instruction with the at least one processor, a subsequent second event associated with the execution of the process, the second event to identify a second privilege level associated with the process. Disclosed examples further include at least one of terminating, pausing or suspending the process in response to the second privilege level being higher than the first privilege level.
Type: Grant
Filed: November 2, 2020
Date of Patent: June 18, 2024
Assignee: McAfee, LLC
Inventor: Eknath Venkataramani
-
Patent number: 12001342
Abstract: A computing system having memory components, including first memory and second memory. The computing system further includes a processing device, operatively coupled with the memory components, to: store a memory allocation value in association with a context of executing instructions; execute a set of instructions in the context; allocate, for execution of the set of instructions in the context, an amount of memory, including an amount of the first memory and an amount of the second memory; and access the amount of the second memory via the amount of the first memory during the execution of the set of instructions in the context.
Type: Grant
Filed: March 11, 2022
Date of Patent: June 4, 2024
Assignee: Micron Technology, Inc.
Inventors: Anirban Ray, Parag R. Maharana
-
Patent number: 12003561
Abstract: An end user premises device is provided that includes a memory, one or more transceivers, and one or more processors. The one or more transceivers are configured to communicate with one or more stations in a network and a client device. The one or more processors are configured to receive a first user request for data from the client device using the one or more transceivers, determine a first point in time for retrieving the data based on an amount of charge in batteries of the one or more stations in the network, retrieve, at the first point in time, the data from a remote server via the network using the one or more transceivers, store the data in the memory, and in response to a second user request, transmit the data to the client device using the one or more transceivers.
Type: Grant
Filed: October 19, 2022
Date of Patent: June 4, 2024
Assignee: Aalyria Technologies, Inc.
Inventors: Brian Barritt, Sharath Ananth
-
Patent number: 11995007
Abstract: A multi-bus protocol memory controller is disclosed. The memory controller utilizes shim circuits to translate between the various bus protocols used in the System on a Chip (SoC) and the bus protocol used by the memory controller. The use of shim circuits reduces the number of bridges required in the SoC and also increases performance. The memory controller is designed such that it may interface with any bus protocol, requiring only the design and inclusion of a shim circuit for that bus protocol.
Type: Grant
Filed: November 18, 2022
Date of Patent: May 28, 2024
Assignee: Silicon Laboratories Inc.
Inventors: Paul Ivan Zavalney, Rejoy Roy Mathews
-
Patent number: 11994992
Abstract: Provided is a takeover method for cache partition recovery, including: determining whether a cluster has a four-controller topology, and when having the four-controller topology, setting a four-controller topology flag for each cache partition of the cluster; in response to monitoring that the cluster is changed to a cluster having a dual-controller topology and including a first node and a second node, determining whether a third node and a fourth node that exit the cluster belong to a same sub-cluster, and when belonging to the same sub-cluster, further determining whether cache partitions of the sub-cluster are set with the four-controller topology flag; and when set with the four-controller topology flag, further determining whether the sub-cluster is in a single-partition mode or dual-partition mode, and respectively taking over, by the first node and the second node, the third node and the fourth node based on the single-partition mode or dual-partition mode.
Type: Grant
Filed: April 29, 2022
Date of Patent: May 28, 2024
Assignee: SHANDONG YINGXIN COMPUTER TECHNOLOGIES CO., LTD.
Inventors: Hongsheng Hou, Wenzhi Liu
-
Patent number: 11989142
Abstract: An accelerator is disclosed. A circuit may process a data to produce a processed data. A first tier storage may include a first capacity and a first latency. A second tier storage may include a second capacity and a second latency. The second capacity may be larger than the first capacity, and the second latency may be slower than the first latency. A bus may be used to transfer at least one of the data or the processed data between the first tier storage and the second tier storage.
Type: Grant
Filed: January 27, 2022
Date of Patent: May 21, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Marie Mai Nguyen, Rekha Pitchumani, Zongwang Li, Yang Seok Ki, Krishna Teja Malladi
-
Patent number: 11989444
Abstract: A memory controller that controls a nonvolatile memory in response to commands from a host includes a normal transfer queue and a priority transfer queue, a transfer packet priority determination unit, a transfer queue selector, and a transfer packet selector. The transfer packet priority determination unit determines whether a transfer packet is a priority packet based on transmission information of the transfer packet. The transfer queue selector selects the priority transfer queue and stores the transfer packet in the priority transfer queue if the transfer packet is determined as a priority packet, and selects the normal transfer queue and stores the transfer packet in the normal transfer queue if the transfer packet is not determined as a priority packet. The transfer packet selector transfers to the host a priority packet stored in the priority transfer queue preferentially with respect to a normal packet stored in the normal transfer queue.
Type: Grant
Filed: June 16, 2023
Date of Patent: May 21, 2024
Assignee: Kioxia Corporation
Inventor: Daisuke Uchida
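The queue-selection flow can be sketched briefly; the "urgent" field standing in for the packet's transmission information is a hypothetical placeholder:

```python
from collections import deque

# Minimal sketch: priority packets go to the priority transfer queue and are
# drained to the host before normal packets.
normal_q, priority_q = deque(), deque()

def is_priority(packet):
    return packet.get("urgent", False)  # assumed criterion for illustration

def enqueue(packet):
    (priority_q if is_priority(packet) else normal_q).append(packet)

def transfer_next():
    q = priority_q if priority_q else normal_q
    return q.popleft() if q else None

enqueue({"id": 1})
enqueue({"id": 2, "urgent": True})
print(transfer_next(), transfer_next())  # {'id': 2, 'urgent': True} {'id': 1}
```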
-
Patent number: 11984105
Abstract: Disclosed herein are systems and methods for minimizing fan noise during a data backup. In one exemplary aspect, a method may comprise initiating, at a computing device, a data backup at a first backup rate, wherein the computing device comprises a fan that regulates an internal temperature of the computing device. The method may comprise calculating a noise level of the fan. The method may comprise comparing the noise level to a threshold noise level. In response to determining that the noise level exceeds the threshold noise level based on the comparison, the method may comprise reducing a backup rate of the data backup to a second backup rate, such that the noise level equals the threshold noise level. The method may comprise performing the data backup at the second backup rate.
Type: Grant
Filed: June 3, 2021
Date of Patent: May 14, 2024
Assignee: Acronis International GmbH
Inventors: Vladimir Simonov, Serguei Beloussov, Stanislav Protasov
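A toy illustration of the rate reduction, with made-up units and a proportional scaling rule that the abstract does not prescribe:

```python
# If the measured fan noise exceeds the threshold, scale the backup rate down
# so the fan can settle back to the threshold level (illustrative policy only).
def adjust_backup_rate(rate_mb_s, noise_db, threshold_db):
    if noise_db <= threshold_db:
        return rate_mb_s
    return rate_mb_s * (threshold_db / noise_db)

print(adjust_backup_rate(200.0, noise_db=48.0, threshold_db=40.0))  # ~166.7 MB/s
```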
-
Patent number: 11983121
Abstract: Provided is a cache memory device including a command reception unit for packetizing each of read commands and write commands and classifying them as even or odd; a cache scheduler comprising a first reorder scheduling queue for receiving commands classified as even numbers from the command reception unit and scheduling the commands classified as even numbers for cache memory accesses and a second reorder scheduling queue for receiving commands classified as odd numbers from the command reception unit and scheduling the commands classified as odd numbers for cache memory accesses; and an access execution unit for performing cache memory accesses via a cache tag to scheduled commands classified as even numbers and scheduled commands classified as odd numbers.
Type: Grant
Filed: November 15, 2023
Date of Patent: May 14, 2024
Assignee: METISX CO., LTD.
Inventors: Do Hun Kim, Keebum Shin, Kwangsun Lee
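A minimal sketch of the even/odd steering into two reorder scheduling queues; classifying by address parity is an assumption made only for illustration:

```python
from collections import deque

even_queue, odd_queue = deque(), deque()

def classify_and_enqueue(kind, address):
    # Packetize the command and steer it by an assumed even/odd parity rule.
    packet = {"kind": kind, "address": address}
    (even_queue if address % 2 == 0 else odd_queue).append(packet)

for addr in (0x10, 0x11, 0x12, 0x15):
    classify_and_enqueue("read", addr)

print(len(even_queue), len(odd_queue))  # 2 2
```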
-
Patent number: 11977525
Abstract: A method, system and computer-readable storage medium for transferring data segments from a first computing system to a second computing system. Prior to transfer of the data segments, the first system calculates the compressibility ratio of each segment and compares the compressibility ratio to a preset threshold. Based on the comparison, the first system assigns a compressibility hint to each segment. The first system transfers the segments to the second system, together with the corresponding compressibility hint. The second system stores each segment in a compressible region or in a non-compressible region based on the hint. Then the second system compresses the compressible region and stores the compressed region in a container, and stores the non-compressible region uncompressed in the container.
Type: Grant
Filed: March 4, 2021
Date of Patent: May 7, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Jagannathdas Rath, Kalyan C. Gunda
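A small sketch of how a sender might compute such a compressibility hint; the zlib-based ratio and the 1.2 threshold are assumptions, not the patented method:

```python
import os
import zlib

THRESHOLD = 1.2  # assumed minimum compression ratio worth pursuing

def compressibility_hint(segment: bytes) -> bool:
    # Ratio of original size to compressed size; >= threshold means "compressible".
    ratio = len(segment) / max(1, len(zlib.compress(segment)))
    return ratio >= THRESHOLD

compressible, incompressible = [], []
for seg in (b"a" * 4096, os.urandom(4096)):
    (compressible if compressibility_hint(seg) else incompressible).append(seg)

print(len(compressible), len(incompressible))  # typically 1 1
```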
-
Patent number: 11967393
Abstract: A semiconductor device includes a clock gating circuit and a control circuit. The clock gating circuit outputs a gated clock signal based on a clock signal. Transitions of the clock signal are output in the gated clock signal in response to a clock enable signal having an enable value and are disabled from being output in the gated clock signal in response to the clock enable signal having a disable value. The control circuit includes a first portion that operates based on the clock signal. The first portion sets the clock enable signal to the disable value in response to a disable control and sets the clock enable signal to the enable value in response to a wakeup control. The control circuit includes a second portion that operates based on the gated clock signal. The second portion provides the disable control to the first portion during an operation.
Type: Grant
Filed: September 10, 2021
Date of Patent: April 23, 2024
Assignee: Yangtze Memory Technologies Co., Ltd.
Inventors: Jian Luo, Zhuqin Duan
-
Patent number: 11961547
Abstract: Methods, systems, and devices for techniques for memory system refresh are described. In some cases, a memory system may prioritize refreshing blocks of memory cells containing control information for the file system of the memory system. For example, the memory system may identify a block of memory cells containing control information and adjust an error threshold for refreshing the block of memory cells to be lower than an error threshold for refreshing blocks of memory cells containing data other than control information. Additionally or alternatively, the memory system may perform a refresh control operation for the block of memory cells with a higher frequency (e.g., more frequently) than for other blocks of memory cells.
Type: Grant
Filed: February 9, 2022
Date of Patent: April 16, 2024
Assignee: Micron Technology, Inc.
Inventors: Qi Dong, Poorna Kale
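The differing refresh thresholds can be illustrated in a few lines; the numeric thresholds are made up:

```python
# Blocks holding file-system control information are refreshed at a lower
# bit-error threshold than ordinary data blocks, so they are refreshed earlier.
CONTROL_THRESHOLD = 8
DATA_THRESHOLD = 32

def needs_refresh(bit_errors, holds_control_info):
    threshold = CONTROL_THRESHOLD if holds_control_info else DATA_THRESHOLD
    return bit_errors >= threshold

print(needs_refresh(10, holds_control_info=True))   # True
print(needs_refresh(10, holds_control_info=False))  # False
```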
-
Patent number: 11960728
Abstract: An interface circuit of a memory device including a plurality of memory dies may be provided. The interface circuit includes a plurality of registers corresponding to the plurality of memory dies, respectively, the plurality of registers each configured to store command information related to a data operation command; a demultiplexer circuit configured to provide input command information to a selected register from among the plurality of registers according to at least one of a first address or a first chip selection signal, the input command information being received from outside the interface circuit; and a multiplexer circuit configured to receive output command information from the selected register from among the plurality of registers and output the output command information according to at least one of a second address or a second chip selection signal.
Type: Grant
Filed: November 29, 2021
Date of Patent: April 16, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Daehoon Na, Jangwoo Lee, Jeongdon Ihm
-
Patent number: 11940934
Abstract: An accelerator is disclosed. A circuit may process a data to produce a processed data. A first tier storage may include a first capacity and a first latency. A second tier storage may include a second capacity and a second latency. The second capacity may be larger than the first capacity, and the second latency may be slower than the first latency. A bus may be used to transfer at least one of the data or the processed data between the first tier storage and the second tier storage.
Type: Grant
Filed: January 27, 2022
Date of Patent: March 26, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Marie Mai Nguyen, Rekha Pitchumani, Zongwang Li, Yang Seok Ki, Krishna Teja Malladi
-
Patent number: 11941300
Abstract: Methods, systems, and devices for integrating a pivot table in a logical-to-physical mapping of a memory system are described. The memory system may receive a read command and read a first entry of a first subset of mapping and a second entry of a second subset of mapping. The second entry may include at least a portion of a pivot table associated with physical addresses of a non-volatile memory device. The memory system may retrieve data from a physical address identified in the pivot table, rather than access a different portion of the logical-to-physical mapping. The memory system may transmit, to a host system, the data retrieved from the physical address identified in the pivot table.
Type: Grant
Filed: October 21, 2022
Date of Patent: March 26, 2024
Inventors: Giuseppe D'Eliseo, Luca Porzio, Stephen Hanna
-
Patent number: 11921564
Abstract: In one embodiment, an apparatus includes: a port circuit to receive a configuration write from a source circuit; a save restore memory coupled to the port circuit to store information of a plurality of control and status registers (CSRs); and a configuration network coupled to the port circuit, the configuration network coupled to a plurality of nodes, each of the plurality of nodes comprising at least one CSR. The port circuit may be configured to send the configuration write to a first node of the plurality of nodes and to the save restore memory. Other embodiments are described and claimed.
Type: Grant
Filed: February 28, 2022
Date of Patent: March 5, 2024
Assignee: Intel Corporation
Inventor: Deepak Rameshkumar Tanna
-
Patent number: 11899972
Abstract: A partition command from one of a plurality of write partition command queues or a plurality of read partition command queues is received. The received partition command is issued to a command processor of the sequencer component to be applied to one of the one or more memory devices. Responsive to receiving the partition command of the plurality of write partition command queues, whether a timeout threshold criterion pertaining to the plurality of read partition command queues is satisfied is determined. Responsive to determining that the timeout threshold criterion pertaining to the plurality of read partition command queues is not satisfied, whether a write threshold criterion pertaining to the plurality of write partition command queues is satisfied is determined.
Type: Grant
Filed: August 19, 2021
Date of Patent: February 13, 2024
Assignee: Micron Technology, Inc.
Inventors: Juane Li, Fangfang Zhu, Jason Duong, Chih-Kuo Kao, Jiangli Zhu
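A rough sketch of the two criteria, with hypothetical timeout and batch values; the exact definitions of the criteria are not given in the abstract:

```python
import time

# Writes keep being issued only while no read has waited past the timeout and
# the write queues hold enough work to justify staying in write mode.
READ_TIMEOUT_S = 0.002
WRITE_BATCH_MIN = 4

def read_timeout_satisfied(read_queues, now):
    oldest = min((cmd["enqueued"] for q in read_queues for cmd in q), default=now)
    return (now - oldest) >= READ_TIMEOUT_S

def write_threshold_satisfied(write_queues):
    return sum(len(q) for q in write_queues) >= WRITE_BATCH_MIN

reads = [[{"enqueued": time.monotonic() - 0.005}]]
writes = [[{"lba": i} for i in range(6)]]
if read_timeout_satisfied(reads, time.monotonic()):
    print("switch to servicing reads")   # a read has waited too long
elif write_threshold_satisfied(writes):
    print("keep issuing writes")
```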
-
Patent number: 11893251
Abstract: A non-transitory computer-readable medium is disclosed, the medium having instructions stored thereon that are executable by a computer system to perform operations that may include allocating a plurality of storage locations in a system memory of the computer system to a buffer. The operations may further include selecting a particular order for allocating the plurality of storage locations into a cache memory circuit. This particular order may increase a uniformity of cache miss rates in comparison to a linear order. The operations may also include caching subsets of the plurality of storage locations of the buffer using the particular order.
Type: Grant
Filed: August 31, 2021
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Rohit Natarajan, Jurgen M. Schulz, Christopher D. Shuler, Rohit K. Gupta, Thomas T. Zou, Srinivasa Rangan Sridharan
-
Patent number: 11875152
Abstract: A method for generating a thread queue includes obtaining, by a user space file system, central processing unit (CPU) socket data and, based on the CPU socket data, generating a plurality of thread handles for a plurality of cores, ordering the plurality of thread handles in the thread queue for a first core of the plurality of cores, and saving the thread queue to a region of shared memory.
Type: Grant
Filed: October 30, 2020
Date of Patent: January 16, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventor: Adrian Michaud
-
Patent number: 11868267
Abstract: A system includes a first memory component having a particular access size associated with performance of memory operations, a second memory component to store a logical to physical data structure whose entries map management segments to respective physical locations in the memory component, wherein each management segment corresponds to an aggregated plurality of logical access units having the particular access size, and a processing device, operably coupled to the memory component. The processing device can perform memory management operations on a per management segment basis by: for each respective management segment, tracking access requests to constituent access units corresponding to the respective management segment, and determining whether to perform a particular memory management operation on the respective management segment based on the tracking.
Type: Grant
Filed: March 30, 2022
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Edward C. McGlaughlin, Gary J. Lucas, Joseph M. Jeddeloh
-
Patent number: 11868273
Abstract: Embodiments are directed to memory protection with hidden inline metadata to indicate data type and capabilities. An embodiment of a processor includes a processor core and cache memory. The processor core is to implant hidden inline metadata in one or more cachelines for the cache memory, the hidden inline metadata hidden at a linear address level, hidden from software, the hidden inline metadata to indicate data type or capabilities for the associated data stored on the same cacheline.
Type: Grant
Filed: June 29, 2019
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventor: David M. Durham
-
Patent number: 11853569
Abstract: Various embodiments set forth techniques for cache warmup. The techniques include determining, by a node, identities of one or more target storage blocks of a plurality of storage blocks managed by a storage system, where the node previously cached metadata corresponding to the one or more target storage blocks; receiving the metadata corresponding to the one or more target storage blocks; and storing the metadata corresponding to the one or more target storage blocks in a cache memory of the node.
Type: Grant
Filed: April 22, 2021
Date of Patent: December 26, 2023
Assignee: NUTANIX, INC.
Inventors: Mohammad Mahmood, Aman Gupta, Gaurav Jain, Anoop Jawahar, Prateek Kajaria
-
Patent number: 11853618
Abstract: Techniques for RAID reconstruction involve: determining, from a task list, multiple stripes in a RAID that are involved in a to-be-processed task within a current task window, the task list including an external I/O request task and an internal reconstruction I/O request task, and each stripe including data on a first number of data disks and data on a second number of parity disks; reading data from the multiple stripes into a read buffer; and if data of the first number of data disks in a stripe among the multiple stripes has already been read into the read buffer, performing the internal reconstruction I/O request task on the stripe. Such a technique helps to increase the capacity and efficiency of the data storage system in reconstructing RAID stripes while coping with external I/O requests.
Type: Grant
Filed: November 17, 2021
Date of Patent: December 26, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Qian Wu, Bo Hu, Jing Ye
-
Patent number: 11853251
Abstract: Disclosed are techniques for chip-to-chip (C2C) serial communications, such as communications between chiplets on a multi-chip package. In some aspects, a method of on-die monitoring of C2C links comprises detecting a change of the C2C link from a first link state to a second link state and storing link state change information in an on-die first-in, first-out (FIFO) buffer. The link state change information indicates the first link state, the duration of time the C2C link was in the first link state, and the speed of the C2C link in the first link state. Upon detecting a request for link state change information, link state change information is retrieved from the FIFO buffer and transmitted serially to an output pin of the die, such as a general purpose input/output (GPIO) pin.
Type: Grant
Filed: May 4, 2022
Date of Patent: December 26, 2023
Assignee: QUALCOMM Incorporated
Inventors: Ramesh Krishnamurthy Madhira, Ibrahim Ouda, Kaushik Roychowdhury
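A minimal model of the on-die FIFO of link-state records; the field names, the FIFO depth, and the drain interface are illustrative only:

```python
from collections import deque

# Each record captures the state just left, how long the link stayed there,
# and the speed it ran at; a bounded FIFO models limited on-die storage.
class LinkStateMonitor:
    def __init__(self, depth=16):
        self.fifo = deque(maxlen=depth)

    def on_state_change(self, prev_state, duration_us, speed_gbps):
        self.fifo.append((prev_state, duration_us, speed_gbps))

    def drain(self):
        # Stands in for shifting records serially out of a debug/GPIO pin.
        while self.fifo:
            yield self.fifo.popleft()

mon = LinkStateMonitor()
mon.on_state_change("L0", 1200, 32)
mon.on_state_change("L1", 45, 16)
print(list(mon.drain()))
```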
-
Patent number: 11853586
Abstract: Techniques are disclosed herein for improved copy data management functionality in storage systems. For example, a method receives copy usage data for one or more data copies associated with a storage array, wherein the copy usage data is indicative of a usage associated with each of the one or more data copies, and updates the one or more data copies with one or more usage tags based on the received copy usage data. Further, the method may then scan the one or more usage tags associated with each of the one or more data copies, select one or more storage tiers for at least a portion of the one or more data copies based on the scanning of the one or more usage tags, and cause at least a portion of the one or more data copies to be stored in the selected one or more storage tiers.
Type: Grant
Filed: October 20, 2020
Date of Patent: December 26, 2023
Assignee: EMC IP Holding Company LLC
Inventor: Sunil Kumar
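A toy sketch of tag-driven tier selection; the tag names, the tiers, and the "hottest tier wins" rule are assumptions, not the patented policy:

```python
# Copies are tagged by how they are used; each tag maps to a storage tier.
TAG_TO_TIER = {
    "test-dev": "capacity_tier",
    "analytics": "performance_tier",
    "archive": "archive_tier",
}

def select_tier(copy_usage_tags):
    # Pick the "hottest" tier implied by any tag; default to capacity.
    order = ["performance_tier", "capacity_tier", "archive_tier"]
    tiers = {TAG_TO_TIER.get(t, "capacity_tier") for t in copy_usage_tags}
    return next(t for t in order if t in tiers) if tiers else "capacity_tier"

print(select_tier({"analytics", "archive"}))  # performance_tier
```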
-
Patent number: 11842051
Abstract: Techniques are provided for implementing intelligent defragmentation in a storage system. A storage control system manages a logical address space of a storage volume. The logical address space is partitioned into a plurality of extents, wherein each extent comprises a contiguous block of logical addresses of the logical address space. The storage control system monitors input/output (I/O) operations for logical addresses associated with the extents, and estimates fragmentation levels of the extents based on metadata associated with the monitored I/O operations. The storage control system identifies one or more extents as candidates for defragmentation based at least on the estimated fragmentation levels of the extents.
Type: Grant
Filed: January 25, 2022
Date of Patent: December 12, 2023
Assignee: Dell Products L.P.
Inventors: Michal Yarimi, Itay Keller
-
Patent number: 11843745
Abstract: There is provided an information processing apparatus that enables readout of data compressed in a mount format. An information processing apparatus includes a mount unit configured to mount compressed data, a decompression unit configured to decompress a compressed file having access information to access the data mounted by the mount unit, and a readout unit configured to read out the mounted data by reading out the file decompressed by the decompression unit.
Type: Grant
Filed: November 2, 2021
Date of Patent: December 12, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Yohei Shogaki
-
Patent number: 11836374
Abstract: A storage system uses blocks of memory that are sized larger than a size of a zone. This means that the storage system stores multiple zones in a given block. Storing zones with different zone properties in a given block can be problematic, so the storage system obtains zone property information for each zone and stores zones with similar zone properties in a given block.
Type: Grant
Filed: July 8, 2022
Date of Patent: December 5, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Rotem Sela, Einav Zilberstein, Asher Druck
-
Patent number: 11822487
Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB. The unified TLB is configured to determine an order in which to perform a TLB lookup for at least a subset of page sizes of the plurality of page sizes based on the page size residency metadata.
Type: Grant
Filed: December 1, 2021
Date of Patent: November 21, 2023
Assignee: Ampere Computing LLC
Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
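One plausible way to turn the residency metadata into a lookup order, sketched with made-up counters (MRU size first, then by resident-entry count); the actual ordering policy is not specified in the abstract:

```python
# Per-page-size residency counters plus an MRU hint decide the order in which
# the unified TLB probes each page size.
residency = {"4K": 120, "64K": 8, "2M": 30, "1G": 0}
mru_size = "2M"

def lookup_order(counters, mru):
    # Probe the most-recently-used size first, then the rest by how many
    # entries of that size are currently resident.
    rest = sorted((s for s in counters if s != mru),
                  key=lambda s: counters[s], reverse=True)
    return [mru] + rest

print(lookup_order(residency, mru_size))  # ['2M', '4K', '64K', '1G']
```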
-
Patent number: 11822481
Abstract: A semiconductor device includes: a first cache that includes a first memory and rewrite flags that indicate whether rewriting has been performed for each piece of data held in the first memory; and a second cache that includes a second memory and a third memory that has a lower writing speed than the second memory, stores data evicted from the first cache in the second memory when a rewrite flag corresponding to the evicted data indicates a rewrite state, and stores data evicted from the first cache in the third memory when a rewrite flag corresponding to the evicted data indicates a non-rewrite state.
Type: Grant
Filed: July 12, 2022
Date of Patent: November 21, 2023
Assignee: FUJITSU LIMITED
Inventors: Shiho Nakahara, Takahide Yoshikawa
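The eviction routing by rewrite flag reduces to a small sketch; class and field names are illustrative:

```python
# The second-level cache has a fast memory and a slower, lower-write-speed
# memory; rewritten (dirty) evictions go to the fast one, clean evictions to
# the slow one.
class SecondLevelCache:
    def __init__(self):
        self.fast_memory = {}   # second memory: higher write speed
        self.slow_memory = {}   # third memory: lower write speed

    def accept_eviction(self, tag, data, rewrite_flag):
        target = self.fast_memory if rewrite_flag else self.slow_memory
        target[tag] = data

l2 = SecondLevelCache()
l2.accept_eviction(0x40, b"dirty line", rewrite_flag=True)
l2.accept_eviction(0x80, b"clean line", rewrite_flag=False)
print(len(l2.fast_memory), len(l2.slow_memory))  # 1 1
```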
-
Patent number: 11803471
Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
Type: Grant
Filed: August 22, 2022
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
-
Patent number: 11789655
Abstract: A memory controller includes a command queue that receives and stores decoded memory commands and information related thereto, including information indicating a type, a priority, an age, and a region of a memory system for a corresponding decoded memory command, and an arbiter coupled to the command queue that picks selected decoded memory commands among the decoded memory commands from the command queue for dispatch to the memory system by comparing the priority and the age for decoded memory commands having a first type. The arbiter detects when the command queue receives a decoded memory command of a second type opposite to said first type that accesses a first memory region of the memory system, and in response performs at least one pre-work action that reduces a latency of the decoded memory command of the second type.
Type: Grant
Filed: September 30, 2021
Date of Patent: October 17, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Guanhao Shen, Ravindra Nath Bhargava
-
Patent number: 11782640
Abstract: A memory controller includes a command queue that receives and stores decoded memory commands and information related thereto, including information indicating a type, a priority, an age, and a region of a memory system for a corresponding decoded memory command, and an arbiter coupled to the command queue that picks selected decoded memory commands among the decoded memory commands from the command queue for dispatch to the memory system by comparing the priority and the age for decoded memory commands having a first type. The arbiter detects when the command queue receives a decoded memory command of a second type opposite to said first type that accesses a first memory region of the memory system, and in response elevates at least one of the priority and the age of a decoded command of the first type that accesses the first memory region already stored in the command queue.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 10, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Guanhao Shen, Ravindra Nath Bhargava
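A minimal sketch of the pick rule and the elevation on an opposite-type arrival; the tuple scoring and the +1 priority bump are assumptions, since the abstract does not give the exact policy:

```python
# Same-type commands are picked by priority then age; when an opposite-type
# command targeting the same region arrives, pending same-region commands of
# the first type get a priority bump so the region can be turned around sooner.
def pick(commands):
    return max(commands, key=lambda c: (c["priority"], c["age"]))

def on_opposite_type_arrival(commands, region):
    for c in commands:
        if c["region"] == region:
            c["priority"] += 1   # elevation policy is an assumption

queue = [
    {"id": 1, "type": "read", "priority": 1, "age": 5, "region": 0},
    {"id": 2, "type": "read", "priority": 2, "age": 1, "region": 1},
]
on_opposite_type_arrival(queue, region=0)   # a write to region 0 was enqueued
print(pick(queue)["id"])  # 1: elevated to priority 2 and older than command 2
```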
-
Patent number: 11768605
Abstract: Handling I/O operations between a storage system and a host includes initiating a direct data transfer for each of the I/O operations that initially excludes other processes from using a CPU of the host, setting a first timer for each of the direct data transfers, converting at least some of the direct transfers to semi-synchronous I/O operations that release the CPU for use by other processes and transfer data directly between the storage system and the host in response to the first timer expiring prior to completion of a corresponding one of the direct data transfers, and setting a second timer that corresponds to an expected completion of the semi-synchronous I/O operation. The direct data transfers may exchange data between the host and cache memory of the storage system. The direct data transfers may be performed using a high speed connection between the storage system and the host.
Type: Grant
Filed: April 20, 2021
Date of Patent: September 26, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Douglas E. LeCrone, Paul A. Linstead
-
Patent number: 11755246
Abstract: A data processor includes a staging buffer, a command queue, a picker, and an arbiter. The staging buffer receives and stores first memory access requests. The command queue stores second memory access requests, each indicating one of a plurality of ranks of a memory system. The picker picks among the first memory access requests in the staging buffer and provides selected ones of the first memory access requests to the command queue. The arbiter selects among the second memory access requests from the command queue based on at least a preference for accesses to a current rank of the memory system. The picker picks accesses to the current rank among the first memory access requests of the staging buffer and provides the selected ones of the first memory access requests to the command queue.
Type: Grant
Filed: June 24, 2021
Date of Patent: September 12, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Guanhao Shen, Ravindra Nath Bhargava
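A rough sketch of the picker's preference for the currently open rank, with a hypothetical per-pass budget:

```python
from collections import deque

# The picker moves staging-buffer requests that target the currently open
# rank into the command queue ahead of other requests.
def pick_for_current_rank(staging, command_queue, current_rank, budget=2):
    moved = 0
    for req in list(staging):
        if moved == budget:
            break
        if req["rank"] == current_rank:
            staging.remove(req)
            command_queue.append(req)
            moved += 1
    return moved

staging = deque([{"id": i, "rank": i % 2} for i in range(4)])
cq = deque()
pick_for_current_rank(staging, cq, current_rank=1)
print([r["id"] for r in cq])  # [1, 3]
```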
-
Patent number: 11755219
Abstract: A method, computer system, and a computer program product for block prediction are provided. A computer receives a first retrieval request for retrieving data from storage blocks. The computer performs a cosine similarity comparison of the first retrieval request to prior data retrievals. The computer selects a matching data retrieval of the prior data retrievals. The matching data retrieval has a closest match to the first retrieval request based on the cosine similarity comparison. The computer identifies another storage block from the matching data retrieval as a predicted block for the first retrieval request. The computer transmits a prefetch request to prefetch data from the predicted block.
Type: Grant
Filed: May 26, 2022
Date of Patent: September 12, 2023
Assignee: International Business Machines Corporation
Inventors: Ramakrishna Vadla, Ranjith Rajagopalan Nair, Amey Gokhale, Archana Chinnaiah, Shubham Darokar
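The prediction step can be sketched with toy feature vectors; how real retrieval requests are vectorized is not specified in the abstract:

```python
import math

# A retrieval request is represented as a feature vector, compared with prior
# retrievals by cosine similarity, and the best match contributes a predicted
# block to prefetch.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

history = [
    {"vector": [1, 0, 1, 0], "blocks": [10, 11, 12]},
    {"vector": [0, 1, 0, 1], "blocks": [40, 41]},
]

request = {"vector": [1, 0, 1, 1], "blocks": [10, 11]}
best = max(history, key=lambda h: cosine(request["vector"], h["vector"]))
predicted = [b for b in best["blocks"] if b not in request["blocks"]]
print(predicted)  # [12] -> issue a prefetch request for block 12
```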
-
Patent number: 11741009
Abstract: A cache may include multiple request handling pipes, each of which may further include multiple request buffers, for storing device requests from one or more processors to one or more devices. Some of the device requests may need to be sent to the devices according to an order. For a given one of such device requests, the cache may select a request handling pipe, based on an address indicated by the device request, and select a request buffer, based on the available entries of the request buffers of the selected request handling pipe, to store the device request. The cache may further use first-level and second-level token stores to track and maintain the device requests in order when transmitting the device requests to the devices.
Type: Grant
Filed: November 15, 2021
Date of Patent: August 29, 2023
Assignee: Apple Inc.
Inventors: Sandeep Gupta, Brian P Lilly, Krishna C Potnuru
-
Patent number: 11742004
Abstract: A method of operating a memory comprising a plurality of memory planes is disclosed. Each memory plane includes at least one corresponding memory array. The method includes, for each memory plane of the plurality of memory planes, generating (i) a corresponding plane ready (PRDY) signal indicating a busy or a ready state of the corresponding memory plane, and (ii) a corresponding plane array ready (PARDY) signal indicating a busy or a ready state of the corresponding memory array of the corresponding memory plane, such that a plurality of PRDY signals and a plurality of PARDY signals are generated corresponding to the plurality of memory planes. Execution of a memory command for a memory plane of the plurality of memory planes is selectively allowed or denied, based on status of one or more of the plurality of PRDY signals and the plurality of PARDY signals.
Type: Grant
Filed: November 24, 2021
Date of Patent: August 29, 2023
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventors: Shuo-Nan Hung, Nai-Ping Kuo, Chien-Hsin Liu
-
Patent number: 11726713
Abstract: Storage devices are often configured to receive and process commands from a host-computing device. These commands can vary in size and priority, with larger sizes of command data being processed by storage devices more frequently. As these sizes increase, more situations occur in which newly received high priority commands are ready for processing but must wait for the current data associated with a normal priority command to be fetched and/or processed. Traditionally, the high priority command must wait, no matter how long, until the currently underway normal priority command is fetched and/or completed. However, methods and systems described herein allow for the interruption of normal priority data fetching prior to completion. In this way, lower latencies may be achieved as high priority commands are not required to wait for processing. The previously fetched data may be dumped and re-fetched again or may be stored until normal operations can resume.
Type: Grant
Filed: June 25, 2021
Date of Patent: August 15, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Srinivasa Rao Paidi, Kapil Sundrani
-
Patent number: 11726867
Abstract: A variety of applications can include use of parity groups in a memory system with the parity groups arranged for data protection of the memory system. Each parity group can be structured with multiple data pages in which to write data and a parity page in which to write parity data generated from the data written in the multiple data pages. Each data page of a parity group can have storage capacity to include metadata of data written to the data page. Information can be added to the metadata of a data page with the information identifying an asynchronous power loss status of data pages that precede the data page in an order of writing data to the data pages of the parity group. The information can be used in re-construction of data in the parity group following an uncorrectable error correction code error in writing to the parity group.
Type: Grant
Filed: May 11, 2022
Date of Patent: August 15, 2023
Assignee: Micron Technology, Inc.
Inventors: Harish Reddy Singidi, Kishore Kumar Muchherla, Xiangang Luo, Vamsi Pavan Rayaprolu, Ashutosh Malshe
-
Patent number: 11714754
Abstract: An apparatus includes a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.
Type: Grant
Filed: August 30, 2021
Date of Patent: August 1, 2023
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria
-
Patent number: 11681440
Abstract: The present disclosure includes apparatuses and methods for parallel writing to multiple memory device locations. An example apparatus comprises a memory device. The memory device includes an array of memory cells and sensing circuitry coupled to the array. The sensing circuitry includes a sense amplifier and a compute component configured to implement logical operations. A memory controller in the memory device is configured to receive a block of resolved instructions and/or constant data from the host. The memory controller is configured to write the resolved instructions and/or constant data in parallel to a plurality of locations in the memory device.
Type: Grant
Filed: March 8, 2021
Date of Patent: June 20, 2023
Assignee: Micron Technology, Inc.
Inventors: Jason T. Zawodny, Glen E. Hush, Troy A. Manning, Timothy P. Finkbeiner