Patents Examined by Ilwoo Park
-
Patent number: 12210748
Abstract: Systems and methods are provided for delivering a consistent experience for users of cloud-based block storage volumes. While cloud storage aims to remove hardware considerations from an end user's experience, block storage performance can nevertheless vary according to the underlying hardware used to support a volume or the specific network location of that hardware. Embodiments of the present disclosure address that inconsistent performance by associating a volume with a performance profile that sets a target latency for the volume. A storage client can then monitor observed latency for the volume and inject synthetic latency into input/output operations for the volume, as calculated via a proportional-integral-derivative algorithm, such that the observed latency matches the target within the performance profile. This enables the cloud provider to vary physical hardware or network configurations without affecting block storage performance from the point of view of an end user.
Type: Grant
Filed: December 16, 2022
Date of Patent: January 28, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Mark Robinson, Valentin-Gabriel Priescu, Farhan Tanvir Ali, Marc Stephen Olson
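To make the control loop concrete, here is a minimal sketch (not the patented implementation) of how a storage client might use a PID controller to compute synthetic latency so that observed I/O latency converges toward a target from a performance profile. All names and gain constants (`target_ms`, `kp`, `ki`, `kd`) are assumptions chosen for illustration.

```python
class LatencyShaper:
    """Toy PID controller that computes synthetic delay so observed
    I/O latency converges toward a target latency (values are illustrative)."""

    def __init__(self, target_ms, kp=0.5, ki=0.1, kd=0.05):
        self.target_ms = target_ms
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.injected_ms = 0.0

    def update(self, observed_ms):
        # Error is how far observed latency falls short of the target;
        # a positive error means more artificial delay should be injected.
        error = self.target_ms - observed_ms
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Synthetic latency can only add delay, never remove real latency.
        self.injected_ms = max(0.0, self.injected_ms + adjustment)
        return self.injected_ms


shaper = LatencyShaper(target_ms=2.0)
for observed in (0.8, 0.9, 1.1, 1.0):      # fast hardware: below-target latency
    delay = shaper.update(observed)
    print(f"observed={observed:.1f} ms -> inject {delay:.2f} ms")
```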
-
Patent number: 12204750
Abstract: The present disclosure describes techniques of metadata management for transparent block-level compression. A first area may be created in a backend solid state drive. The first area may comprise a plurality of entries. The plurality of entries may be indexed by addresses of a plurality of blocks of uncompressed data. Each of the plurality of entries comprises a first part configured to store metadata and a second part configured to store compressed data. Each of the plurality of blocks of uncompressed data may be compressed individually to generate a plurality of compressed blocks. Metadata and at least a portion of compressed data associated with each of the plurality of compressed blocks may be stored in one of the plurality of entries based on an address of a corresponding block of uncompressed data. A second area may be created in the backend solid state drive for storing the rest of the compressed data.
Type: Grant
Filed: September 26, 2022
Date of Patent: January 21, 2025
Assignee: Lemon Inc.
Inventors: Ping Zhou, Chaohong Hu, Kan Frankie Fan, Fei Liu, Longxiao Li, Hui Zhang
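The layout described above can be sketched briefly. The following toy model, assuming an entry size of `ENTRY_DATA_BYTES` and using zlib as a stand-in compressor (both assumptions, not the patented design), keeps a first area of per-block entries indexed by uncompressed block address plus a second overflow area for compressed data that does not fit in the entry.

```python
import zlib
from dataclasses import dataclass, field

ENTRY_DATA_BYTES = 512          # assumed room for compressed data inside an entry

@dataclass
class Entry:
    metadata: dict = field(default_factory=dict)   # e.g. compressed length, overflow flag
    inline_data: bytes = b""                       # first part of the compressed block

class CompressedStore:
    """Toy model of the described layout: a first area of per-block entries
    indexed by uncompressed block address, and a second overflow area."""
    def __init__(self):
        self.entries = {}          # first area: block address -> Entry
        self.overflow = {}         # second area: block address -> remaining bytes

    def write_block(self, block_addr, raw):
        compressed = zlib.compress(raw)
        head, tail = compressed[:ENTRY_DATA_BYTES], compressed[ENTRY_DATA_BYTES:]
        self.entries[block_addr] = Entry(
            metadata={"clen": len(compressed), "has_overflow": bool(tail)},
            inline_data=head,
        )
        if tail:
            self.overflow[block_addr] = tail

    def read_block(self, block_addr):
        entry = self.entries[block_addr]
        data = entry.inline_data + self.overflow.get(block_addr, b"")
        return zlib.decompress(data)

store = CompressedStore()
store.write_block(0x10, b"A" * 4096)
assert store.read_block(0x10) == b"A" * 4096
```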
-
Patent number: 12204785
Abstract: There is provided a data processing apparatus in which decode circuitry receives a memory copy instruction containing an indication of a source area of memory, an indication of a destination area of memory, and an indication of a remaining copy length. In response to receiving the memory copy instruction, the decode circuitry generates at least one active memory copy operation or a null memory copy operation. The active memory copy operation causes one or more execution units to perform a memory copy from part of the source area of memory to part of the destination area of memory, and the null memory copy operation leaves the destination area of memory unmodified.
Type: Grant
Filed: July 22, 2022
Date of Patent: January 21, 2025
Assignee: Arm Limited
Inventors: Yasuo Ishii, Steven Daniel Maclean, Nicholas Andrew Plante, Muhammad Umar Farooq, Michael Brian Schinzler, Nicholas Todd Humphries, Glen Andrew Harris
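A software analogue of the decode step may help: given a remaining copy length, emit either an active copy operation over a chunk or a null operation that leaves the destination untouched. The chunk size, state tuple, and function names below are assumptions for illustration, not Arm's microarchitecture.

```python
memory = bytearray(64)
memory[0:5] = b"hello"

def decode_memcpy(src, dst, remaining, chunk=4):
    """Return a callable micro-op: a null op if nothing remains to copy,
    otherwise an active op copying up to `chunk` bytes (assumed granule)."""
    if remaining == 0:
        return lambda: (src, dst, 0)                       # null op: memory untouched
    n = min(chunk, remaining)
    def active_op():
        memory[dst:dst + n] = memory[src:src + n]
        return (src + n, dst + n, remaining - n)           # updated state for the next issue
    return active_op

src, dst, remaining = 0, 16, 5
while True:
    op = decode_memcpy(src, dst, remaining)
    src, dst, remaining = op()
    if remaining == 0:
        break
print(bytes(memory[16:21]))    # b'hello'
```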
-
Patent number: 12197375
Abstract: One or more aspects of the present disclosure relate to establishing and using a hybrid synchronous/asynchronous communication layer for input/output (IO) messages to a storage array. In embodiments, an input/output (IO) message can be modified into first and second IO portions. In addition, a network communications layer can be established to include synchronous and asynchronous channels. Further, the first IO portion can be transmitted over the synchronous channel, and the second IO portion can be transmitted over the asynchronous channel.
Type: Grant
Filed: February 2, 2023
Date of Patent: January 14, 2025
Assignee: Dell Products L.P.
Inventors: Paul A. Linstead, Doug E. Lecrone
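A minimal sketch of the split-and-dispatch idea follows, assuming the first portion is a small latency-sensitive header and the second is the bulk payload; the split point, channel placeholders, and function names are all assumptions rather than Dell's protocol.

```python
import asyncio

def split_io(message: bytes, sync_bytes: int = 8):
    """Split an IO message into a small synchronous portion (assumed header)
    and an asynchronous portion carrying the rest of the payload."""
    return message[:sync_bytes], message[sync_bytes:]

def send_sync(portion: bytes):
    # Placeholder for a blocking write on the synchronous channel.
    print(f"sync channel  : {portion!r}")

async def send_async(portion: bytes):
    # Placeholder for a non-blocking write on the asynchronous channel.
    await asyncio.sleep(0)          # yield to the event loop
    print(f"async channel : {portion!r}")

async def submit(message: bytes):
    first, second = split_io(message)
    send_sync(first)                       # latency-sensitive portion
    await send_async(second)               # bulk portion, completes out of band

asyncio.run(submit(b"WRITE vol7 offset=4096 payload..."))
```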
-
Patent number: 12197352
Abstract: An operating method of an electronic device which includes a processor and a memory, the method including: accessing, using the processor, the memory without control of an external host device in a first bias mode; sending, from the processor, information of the memory to the external host device when the first bias mode ends; and accessing, using the processor, the memory under control of the external host device in a second bias mode.
Type: Grant
Filed: July 19, 2022
Date of Patent: January 14, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Insoon Jo
-
Patent number: 12197349
Abstract: Apparatuses and methods for providing and interpreting command packets for direct control of memory channels are disclosed herein. An example apparatus includes flash memories configured into channels and a controller coupled to the flash memories. The controller receives packets, interprets the packets based at least on a first protocol, and determines whether any packets are linked based on a link identifier included in a block of each packet. The controller arranges the subset of linked packets in order based on an index included in the block of each packet of the subset. A target flash memory and a target channel are determined by the controller based on flash memory and channel identifiers included in the block of each packet of the subset of packets.
Type: Grant
Filed: October 25, 2021
Date of Patent: January 14, 2025
Assignee: Micron Technology, Inc.
Inventor: Jeffrey McVay
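The grouping-and-ordering step can be illustrated with a short sketch. The packet fields (`link_id`, `index`, `channel`, `die`) are assumed names modeling the link identifier, index, and flash/channel identifiers carried in each packet's block; this is not the packet format defined by the patent.

```python
from collections import defaultdict

# Each packet's control block is modeled as a dict; field names are assumptions.
packets = [
    {"link_id": 7, "index": 2, "channel": 1, "die": 0, "payload": b"C"},
    {"link_id": 7, "index": 0, "channel": 1, "die": 0, "payload": b"A"},
    {"link_id": None, "index": 0, "channel": 3, "die": 2, "payload": b"X"},
    {"link_id": 7, "index": 1, "channel": 1, "die": 0, "payload": b"B"},
]

def arrange(packets):
    """Group linked packets by link identifier and order each group by index."""
    groups = defaultdict(list)
    for pkt in packets:
        groups[pkt["link_id"]].append(pkt)
    for group in groups.values():
        group.sort(key=lambda p: p["index"])
    return groups

for link_id, group in arrange(packets).items():
    for pkt in group:
        print(f"link={link_id} idx={pkt['index']} -> channel {pkt['channel']}, "
              f"die {pkt['die']}, payload {pkt['payload']!r}")
```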
-
Patent number: 12182201
Abstract: A graph data storage method for a non-uniform memory access (NUMA) processing system is provided. The processing system includes at least one computing device, each computing device corresponding to multiple memories, and each memory corresponding to multiple processors. The method includes: performing three-level partitioning on graph data to obtain multiple third-level partitions based on a communication mode among computing device(s), memories, and processors; and separately storing graph data of the multiple third-level partitions in NUMA nodes corresponding to the processors. A graph data storage system and an electronic device are further provided.
Type: Grant
Filed: January 27, 2021
Date of Patent: December 31, 2024
Assignee: ZHEJIANG TMALL TECHNOLOGY CO., LTD.
Inventors: Wenfei Fan, Wenyuan Yu, Jingbo Xu, Xiaojian Luo
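A toy placement routine may clarify the three-level idea: each vertex is mapped to a computing device, then to a NUMA memory within that device, then to a processor attached to that memory. Real partitioners use graph structure and communication cost; the hashing below is only a stand-in, and all parameter names are assumptions.

```python
def three_level_partition(vertices, num_devices, memories_per_device, procs_per_memory):
    """Toy three-level partitioning: vertex -> device -> NUMA memory -> processor.
    Hashing stands in for a structure-aware partitioner."""
    placement = {}
    for v in vertices:
        device = hash(v) % num_devices
        memory = (hash(v) // num_devices) % memories_per_device
        proc = (hash(v) // (num_devices * memories_per_device)) % procs_per_memory
        placement[v] = (device, memory, proc)
    return placement

placement = three_level_partition(range(10), num_devices=2,
                                  memories_per_device=2, procs_per_memory=4)
for vertex, (device, memory, proc) in placement.items():
    print(f"vertex {vertex} -> device {device}, NUMA node {memory}, processor {proc}")
```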
-
Patent number: 12159059
Abstract: Methods, systems, and devices for commands to support adaptive memory systems are described. A memory system may be configured to receive a command to perform an operation on an address of a memory system, the command including an indication of a count of program/erase cycles associated with the address; determine whether the count of program/erase cycles associated with the address satisfies a threshold; adjust a trim parameter for operating the memory system based at least in part on determining that the indication of the count of program/erase cycles satisfies the threshold; and perform the operation associated with the command using the adjusted trim parameter.
Type: Grant
Filed: August 10, 2022
Date of Patent: December 3, 2024
Assignee: Micron Technology, Inc.
Inventor: Deping He
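The threshold check and trim adjustment reduce to a small dispatch, sketched below with a made-up threshold and made-up trim values; the real parameters and their semantics are device-specific and not taken from the patent.

```python
PE_THRESHOLD = 3000          # assumed wear threshold
DEFAULT_TRIM = {"program_voltage": 1.00, "erase_time_us": 100}
ADJUSTED_TRIM = {"program_voltage": 1.05, "erase_time_us": 120}

def handle_command(address, pe_cycles, operation):
    """Pick trim parameters based on the P/E-cycle count carried by the command,
    then perform the operation with them (values here are illustrative only)."""
    trim = ADJUSTED_TRIM if pe_cycles >= PE_THRESHOLD else DEFAULT_TRIM
    print(f"{operation} @ 0x{address:08x} with trim {trim} "
          f"({pe_cycles} P/E cycles)")

handle_command(0x0000F000, pe_cycles=1200, operation="program")
handle_command(0x0000F000, pe_cycles=4200, operation="program")
```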
-
Patent number: 12147364
Abstract: An electronic system includes an auxiliary processor. The auxiliary processor includes a remapping device which receives data through direct memory access (DMA), a register unit which stores the data, and processing logic which transmits operating status information to the remapping device. The remapping device remaps position information indicating where the data is stored in the register unit on the basis of the operating status information.
Type: Grant
Filed: May 19, 2022
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jin Woo Hwang, Hoon Sung Lee
-
Patent number: 12141333
Abstract: In at least some embodiments, a system comprises a processor and a direct memory access (DMA) subsystem coupled to the processor. The system further comprises a component coupled to the DMA subsystem via an interconnect employing security rules, wherein, if the component requests a DMA channel, the DMA subsystem restricts usage of the DMA channel based on the security rules.
Type: Grant
Filed: August 21, 2017
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventor: Gregory R. Conti
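One way to picture rule-based channel restriction is a per-component table of allowed channels and address regions, as in the sketch below. The rule table, component names, and address ranges are invented for illustration and do not reflect the TI implementation.

```python
# Assumed rule table: which initiators may use which DMA channels, and where.
SECURITY_RULES = {
    "crypto_engine":  {"channels": {0, 1}, "regions": [(0x8000_0000, 0x8100_0000)]},
    "usb_controller": {"channels": {2},    "regions": [(0x2000_0000, 0x2001_0000)]},
}

def grant_dma(component, channel, src, length):
    """Restrict DMA channel usage according to per-component security rules."""
    rules = SECURITY_RULES.get(component)
    if rules is None or channel not in rules["channels"]:
        return False                                   # channel not permitted
    end = src + length
    return any(lo <= src and end <= hi for lo, hi in rules["regions"])

print(grant_dma("usb_controller", 2, 0x2000_0000, 0x100))   # True
print(grant_dma("usb_controller", 0, 0x2000_0000, 0x100))   # False: wrong channel
print(grant_dma("crypto_engine", 1, 0x2000_0000, 0x100))    # False: region denied
```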
-
Patent number: 12141085
Abstract: A transmitter includes a pull-down circuit coupled between an output of the transmitter and a first rail, a first pull-up circuit coupled between a second rail and the output of the transmitter, and a second pull-up circuit coupled between the second rail and the output of the transmitter. The transmitter also includes a control circuit coupled to a control input of the first pull-up circuit and a control input of the second pull-up circuit. The control circuit is configured to output a first control signal to the control input of the first pull-up circuit, wherein the first control signal controls a drive strength of the first pull-up circuit. The control circuit is also configured to output a second control signal to the control input of the second pull-up circuit, wherein the second control signal controls a drive strength of the second pull-up circuit.
Type: Grant
Filed: December 14, 2022
Date of Patent: November 12, 2024
Assignee: QUALCOMM Incorporated
Inventors: Changkyo Lee, Ashwin Sethuram
-
Patent number: 12135658
Abstract: A bus architecture is disclosed that provides for transaction queue reallocation on the modules communicating using the bus. A module can implement a transaction request queue by virtue of digital electronic circuitry, e.g., hardware or software or a combination of both. Some bus clogging issues that affect conventional systems can be circumvented by combining an out-of-order system bus protocol with a transaction request replay mechanism. Modules can evict less urgent transactions from transaction request queues to make room to insert more urgent transactions. Master modules can dynamically update a quality of service (QoS) value for a transaction while the transaction is still pending.
Type: Grant
Filed: December 14, 2021
Date of Patent: November 5, 2024
Assignee: ATMEL CORPORATION
Inventors: Franck Lunadier, Vincent Debout
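The evict-and-replay idea can be sketched with a bounded priority queue: when a more urgent request arrives and the queue is full, the least urgent pending transaction is evicted and marked for replay. Treating a larger QoS value as more urgent, and the field layout used here, are assumptions.

```python
import heapq

class TransactionQueue:
    """Bounded request queue that evicts the least urgent pending transaction
    (lowest QoS value) to make room for a more urgent one; evicted requests
    are set aside for replay. Layout and QoS semantics are assumptions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                     # (qos, txn_id); lowest QoS at the top
        self.replay_list = []              # evicted transactions to be replayed

    def insert(self, qos, txn_id):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (qos, txn_id))
            return True
        lowest_qos, lowest_id = self.heap[0]
        if qos > lowest_qos:
            heapq.heapreplace(self.heap, (qos, txn_id))
            self.replay_list.append((lowest_qos, lowest_id))   # evicted: replay later
            return True
        return False                       # new request is not more urgent

q = TransactionQueue(capacity=2)
q.insert(1, "txn-A"); q.insert(2, "txn-B")
q.insert(5, "txn-C")                       # evicts txn-A for replay
print(sorted(q.heap), q.replay_list)
```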
-
Patent number: 12135575
Abstract: An AFSM core includes a destination state-cell generating a destination state-signal, and a source state-cell generating a source state-signal and causing transition of the source state-signal in response to an acknowledgement indicating transition of the destination state-signal. The acknowledgment is communicated through a delay. A state-overlap occurs between transition of the destination state-signal and transition of the source state-signal. An output-net includes a balanced logic-tree receiving inputs, including the destination state-signal, from the core, and an additional logic-tree cascaded with the balanced logic-tree to form an unbalanced logic-tree, so that an input to the additional logic-tree is provided by output from the balanced logic-tree and another input receives the source state-signal. Tree propagation time occurs between receipt of a transition in the destination state-signal by the balanced logic-tree and a resulting transition of the output from the balanced logic-tree.
Type: Grant
Filed: November 28, 2022
Date of Patent: November 5, 2024
Assignee: STMicroelectronics International N.V.
Inventor: Roberta Priolo
-
Patent number: 12135665
Abstract: A device for a vehicle may include a first wireline interface configured to receive a first data stream from a first sensor having a first sensor type for perceiving a surrounding of the vehicle, the first data stream including raw sensor data detected by the first sensor; a second wireline interface configured to receive a second data stream from a second sensor having a second sensor type for perceiving the surrounding of the vehicle, the second data stream including raw sensor data detected by the second sensor; one or more processors configured to generate a coded packet including the received first data stream and the received second data stream by employing vector packet coding on the first data stream and the second data stream; and an output wireline interface configured to transmit the generated coded packet to one or more target units of the vehicle.
Type: Grant
Filed: December 21, 2020
Date of Patent: November 5, 2024
Assignee: Intel Corporation
Inventors: Hassnaa Moustafa, Rony Ferzli, Rita Chattopadhyay
-
Patent number: 12131031
Abstract: Systems and methods for automated tuning of Quality of Service (QoS) settings of volumes in a distributed storage system are provided. According to one embodiment, one or more characteristics of a workload of a client to which a storage node of multiple storage nodes of the distributed storage system is exposed are monitored. After a determination has been made that a characteristic meets or exceeds a threshold, (i) information regarding multiple QoS settings assigned to a volume of the storage node utilized by the client is obtained, (ii) a new value of a burst IOPS setting of the multiple QoS settings is calculated by increasing a current value of the burst IOPS setting by a factor dependent upon a first and a second QoS setting of the multiple QoS settings, and (iii) the new value of the burst IOPS setting is assigned to the volume for the client.
Type: Grant
Filed: July 3, 2023
Date of Patent: October 29, 2024
Assignee: NetApp, Inc.
Inventors: Austino Longo, Tyler W. Cady
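As a rough illustration of steps (i)-(iii), the sketch below raises a burst IOPS setting by a factor derived from two other QoS settings once a workload characteristic crosses a threshold. The specific factor formula, the latency-based trigger, and the cap are assumptions; the patent does not specify them here.

```python
def tune_burst_iops(observed_latency_ms, latency_threshold_ms,
                    min_iops, max_iops, burst_iops, factor_cap=2.0):
    """If the monitored characteristic crosses its threshold, raise the burst
    IOPS setting by a factor derived from two other QoS settings.
    The formula is an assumption for illustration only."""
    if observed_latency_ms < latency_threshold_ms:
        return burst_iops                              # nothing to do
    factor = min(factor_cap, max_iops / max(min_iops, 1))
    return int(burst_iops * factor)

qos = {"min_iops": 1000, "max_iops": 1500, "burst_iops": 2000}
new_burst = tune_burst_iops(observed_latency_ms=12, latency_threshold_ms=10,
                            min_iops=qos["min_iops"], max_iops=qos["max_iops"],
                            burst_iops=qos["burst_iops"])
print(f"burst IOPS: {qos['burst_iops']} -> {new_burst}")
```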
-
Patent number: 12124401
Abstract: A data communication apparatus comprises a line driver configured to couple the data communication apparatus to a 1-wire serial bus; and a controller configured to: transmit a plurality of synchronization pulses over the 1-wire serial bus after a sequence start condition (SSC) has been transmitted over the 1-wire serial bus, the plurality of synchronization pulses being configured to synchronize one or more receiving devices coupled to the 1-wire serial bus to an untransmitted transmit clock signal; initiate an interrupt handling procedure when the plurality of synchronization pulses is encoded with a first value; and initiate a read transaction or a write transaction with at least one of the one or more receiving devices coupled to the 1-wire serial bus when the plurality of synchronization pulses is encoded with a second value.
Type: Grant
Filed: January 17, 2023
Date of Patent: October 22, 2024
Assignee: QUALCOMM Incorporated
Inventors: Lalan Jee Mishra, Umesh Srikantiah, Francesco Gatta, Richard Dominic Wietfeldt
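The controller's branch on the encoded value amounts to a simple dispatch, sketched below. The two bit values and the function name are placeholders; the actual pulse encoding on the 1-wire bus is defined by the patent, not by this sketch.

```python
INTERRUPT_HANDLING = 0b0     # assumed encoding of the first value
DATA_TRANSACTION   = 0b1     # assumed encoding of the second value

def handle_synchronization(encoded_value, transaction=None):
    """After the sequence start condition, dispatch on the value encoded in
    the synchronization pulses: interrupt handling vs. a read/write transaction."""
    if encoded_value == INTERRUPT_HANDLING:
        return "interrupt handling procedure initiated"
    if encoded_value == DATA_TRANSACTION:
        return f"{transaction} transaction initiated with selected device"
    raise ValueError("unknown synchronization encoding")

print(handle_synchronization(INTERRUPT_HANDLING))
print(handle_synchronization(DATA_TRANSACTION, transaction="read"))
```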
-
Patent number: 12111764
Abstract: Systems, apparatuses, and methods related to memory systems and operation are described. A memory system may be coupled to a processor, which includes a memory controller. The memory controller may determine whether targeting of first data and second data by the processor to perform an operation results in processor-side cache misses. When targeting of the first data and the second data results in processor-side cache misses, the memory controller may determine a single memory access request that requests return of both the first data and the second data and instruct the processor to output the single memory access request to a memory system via one or more data buses coupled between the processor and the memory system to enable processing circuitry implemented in the processor to perform the operation based at least in part on the first data and the second data when returned from the memory system.
Type: Grant
Filed: February 15, 2022
Date of Patent: October 8, 2024
Assignee: Micron Technology, Inc.
Inventor: Harold Robert George Trout
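The decision logic can be pictured as: issue one combined request only when both targets miss in the processor-side cache, otherwise fall back to individual requests. The request dictionaries and the toy cache below are assumptions used purely to show the branch structure.

```python
cache = {"0x100": b"cached"}       # toy processor-side cache

def build_requests(addr_a, addr_b):
    """Issue one combined memory access request only when both targets miss
    in the processor-side cache; otherwise fall back to individual requests."""
    miss_a, miss_b = addr_a not in cache, addr_b not in cache
    if miss_a and miss_b:
        return [{"type": "combined", "addresses": [addr_a, addr_b]}]
    requests = []
    if miss_a:
        requests.append({"type": "single", "addresses": [addr_a]})
    if miss_b:
        requests.append({"type": "single", "addresses": [addr_b]})
    return requests                # may be empty if both hit

print(build_requests("0x200", "0x300"))   # both miss -> one combined request
print(build_requests("0x100", "0x300"))   # one hit   -> one single request
```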
-
Patent number: 12105575
Abstract: Example implementations relate to executing a workload in a computing system including processing devices, memory devices, and a circuit switch. An example includes identifying first and second instruction-level portions to be consecutively executed by the computing system; determining a first subset of processing devices and a first subset of memory devices to be used to execute the first instruction-level portion; controlling the circuit switch to interconnect the first subset of processing devices and the first subset of memory devices during execution of the first instruction-level portion; determining a second subset of the processing devices and a second subset of the memory devices to be used to execute the second instruction-level portion; and controlling the circuit switch to interconnect the second subset of processing devices and the second subset of memory devices during execution of the second instruction-level portion.
Type: Grant
Filed: October 19, 2022
Date of Patent: October 1, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Terrel Morris
-
Patent number: 12099453
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Grant
Filed: March 30, 2022
Date of Patent: September 24, 2024
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
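The array-partitioning step can be illustrated on the host side: split a dense matrix into sub-arrays, one per processing tile, and let each "program tile" compute only on its own sub-array. The 2x2 tile grid and the per-tile sum are arbitrary choices for the sketch, not NVIDIA's programming model.

```python
import numpy as np

def partition_into_tiles(matrix, grid_rows, grid_cols):
    """Split a dense matrix into sub-arrays, one per processing tile, mirroring
    the idea that each tile works out of its vertically stacked local memory."""
    row_chunks = np.array_split(matrix, grid_rows, axis=0)
    return [np.array_split(chunk, grid_cols, axis=1) for chunk in row_chunks]

matrix = np.arange(64, dtype=np.float32).reshape(8, 8)
tiles = partition_into_tiles(matrix, grid_rows=2, grid_cols=2)

# Each "program tile" runs on its processing tile against its local sub-array.
partial_sums = [[tile.sum() for tile in row] for row in tiles]
print(partial_sums)                 # per-tile results computed from local data
```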
-
Patent number: 12079154
Abstract: A storage engine has a pair of compute nodes, each compute node having a separate PCIe root complex and attached memory. The PCIe root complexes are interconnected by multiple Non-Transparent Bridge (NTB) links. The NTB resources are unequally shared, such that host IO devices are required to use a first subset of the NTB links to implement memory access operations on the memory of the peer compute node, whereas storage software memory access operations are able to be implemented on all of the NTB links. An NTB link arbitration system arbitrates usage of the first and second subsets of NTB links by the storage software, to distribute subsets of the storage software memory access operations on peer memory to the first and second subsets of NTB links, while causing all host IO device memory access operations on peer memory to be implemented on the first subset of NTB links.
Type: Grant
Filed: January 10, 2023
Date of Patent: September 3, 2024
Assignee: Dell Products, L.P.
Inventors: Jonathan Krasner, Ro Monserrat, Jerome Cartmell, Thomas Mackintosh
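The unequal sharing can be pictured as a simple arbiter: host IO requests are confined to the first subset of links, while storage software requests spread across every link. The round-robin policy, link names, and requester labels below are assumptions standing in for the patent's arbitration scheme.

```python
import itertools

HOST_IO_LINKS = ["ntb0", "ntb1"]                 # first subset: host IO must use these
ALL_LINKS = ["ntb0", "ntb1", "ntb2", "ntb3"]     # storage software may use any link

_host_rr = itertools.cycle(HOST_IO_LINKS)
_storage_rr = itertools.cycle(ALL_LINKS)

def pick_link(requester):
    """Arbitrate NTB link usage: host IO devices are confined to the first
    subset, storage software is spread across every link (round robin here
    stands in for the real arbitration policy)."""
    return next(_host_rr) if requester == "host_io" else next(_storage_rr)

for requester in ["host_io", "storage", "storage", "host_io", "storage"]:
    print(f"{requester:8s} -> {pick_link(requester)}")
```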