Patents Examined by Yong J. Choe
-
Patent number: 12293092
Abstract: A method and apparatus of managing memory includes storing a first memory page at a shared memory location in response to the first memory page including data shared between a first virtual machine and a second virtual machine. A second memory page is stored at a memory location unique to the first virtual machine in response to the second memory page including data unique to the first virtual machine. The first memory page is accessed by the first virtual machine and the second virtual machine, and the second memory page is accessed by the first virtual machine and not the second virtual machine.
Type: Grant
Filed: December 16, 2022
Date of Patent: May 6, 2025
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Lu Lu, Anthony Asaro, Yinan Jiang
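A minimal sketch of the idea described in the abstract, not the patented implementation: a hypothetical page manager that places a page in a shared slot when it is shared between two virtual machines and in a VM-unique slot otherwise. The names PageManager and map_page are illustrative assumptions.

```python
class PageManager:
    def __init__(self):
        self.shared = {}   # content hash -> shared physical slot
        self.private = {}  # (vm_id, virtual page) -> VM-unique physical slot
        self.next_slot = 0

    def _alloc(self):
        slot = self.next_slot
        self.next_slot += 1
        return slot

    def map_page(self, vm_id, vpage, content, shared_with_other_vm):
        """Store a page once if shared, or privately if unique to one VM."""
        if shared_with_other_vm:
            key = hash(content)
            if key not in self.shared:
                self.shared[key] = self._alloc()
            return self.shared[key]            # both VMs map to the same slot
        slot = self._alloc()
        self.private[(vm_id, vpage)] = slot    # only this VM can reach it
        return slot

mgr = PageManager()
s1 = mgr.map_page("vm1", 0, b"kernel text", shared_with_other_vm=True)
s2 = mgr.map_page("vm2", 0, b"kernel text", shared_with_other_vm=True)
p1 = mgr.map_page("vm1", 1, b"vm1 heap", shared_with_other_vm=False)
assert s1 == s2 and p1 != s1
```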
-
Patent number: 12277334
Abstract: A data storage device includes storage media and control circuitry and is configured to enable the creation of partitions with different performance levels. The storage media includes a first set and a second set of memory blocks having different performance levels. The control circuitry is configured to: in response to a request from a host system, provide performance data from the first set of memory blocks and the second set of memory blocks to the host system. The control circuitry is further configured to: receive partition settings from the host system, the partition settings creating a first partition including at least part of the first set of memory blocks and a second partition including at least part of the second set of memory blocks, wherein the first partition has a better performance level than the second partition; and save the partition settings to the storage media.
Type: Grant
Filed: August 11, 2023
Date of Patent: April 15, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Nitin Jain, Ronak Jain, Matthew Klapman, Ramanathan Muthiah, Taninder Singh Sijher
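A hedged sketch of how a host might turn the per-block performance data reported by such a device into partition settings. The threshold, field names, and create_partitions helper are assumptions for illustration only.

```python
def create_partitions(block_performance, threshold_mbps):
    """Split blocks into a higher-performance and a lower-performance partition."""
    fast = [b for b, perf in block_performance.items() if perf >= threshold_mbps]
    slow = [b for b, perf in block_performance.items() if perf < threshold_mbps]
    return {"partition_0": {"blocks": fast, "class": "high"},
            "partition_1": {"blocks": slow, "class": "standard"}}

# Hypothetical per-block throughput (MB/s) returned by the device.
perf = {"blk0": 1200, "blk1": 300, "blk2": 1100, "blk3": 280}
settings = create_partitions(perf, threshold_mbps=1000)
print(settings)   # settings the host would send back to be saved on the media
```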
-
Patent number: 12271273
Abstract: An embodiment maps identifying information of a remote registry into a database within a local inventory at a local registry hub. An embodiment selects at least one remote registry from an index maintained in the local inventory in accordance with a policy received at a scheduler from an external client of the local registry hub. An embodiment selects a locally stored image in accordance with a policy received from an external client of the local registry hub. An embodiment uploads replicas of the selected image via one or more registry agents, each registry agent transmitting to its corresponding remote registry, transmitting constituent layers of the replica across multiple remote registries simultaneously such that a subset of the layers constituting the image are uploaded to each remote registry. An embodiment stores metadata for the uploaded image in a cache within a local metadata store.
Type: Grant
Filed: October 6, 2023
Date of Patent: April 8, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Guangya Liu, Hai Hui Wang, Peng Li, Xiang Zhen Gan, Ying Mo
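An illustrative sketch of the layer-distribution step only, with assumed names rather than IBM's implementation: an image's layers are spread over several remote registries, pushed in parallel, and the resulting placement is recorded as metadata.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_layer(registry, layer):
    # Stand-in for a registry agent pushing one layer to its remote registry.
    return registry, layer

def replicate(image_layers, remote_registries):
    """Round-robin the layers over the registries and push them concurrently."""
    assignments = [(remote_registries[i % len(remote_registries)], layer)
                   for i, layer in enumerate(image_layers)]
    metadata = {}
    with ThreadPoolExecutor() as pool:
        for registry, layer in pool.map(lambda a: upload_layer(*a), assignments):
            metadata.setdefault(registry, []).append(layer)
    return metadata   # what a local metadata store might cache per registry

layers = ["sha256:aa", "sha256:bb", "sha256:cc", "sha256:dd"]
print(replicate(layers, ["registry-east", "registry-west"]))
```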
-
Patent number: 12271593
Abstract: A memory device includes a plurality of memory cells. Each memory cell stores a plurality of signal levels representing a plurality of values corresponding to a respective plurality of bits, bits in corresponding respective positions of significance across the plurality of memory cells constituting respective memory pages of the memory device. The memory device also includes decoding circuitry to decode each bit value of one of the respective memory pages using bit values read from at least one other one of the respective memory pages, adjacent to the one of the respective memory pages. The plurality of signal levels may represent the plurality of values according to a Gray code. The decoding circuitry may be configured to compare each signal level to a set of voltage thresholds, and to decode a subset of the plurality of signal levels using fewer than all voltage thresholds in the set of voltage thresholds.
Type: Grant
Filed: April 28, 2023
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Nirmal V. Shende, Nedeljko Varnica, Mats Oberg
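A worked example of the Gray-code idea only, not the patented circuit: with 2 bits per cell, the four levels carry Gray codes, so knowing a cell's bit from the adjacent page lets the other page be decoded with a single threshold comparison instead of all three. The threshold voltages below are assumed values.

```python
THRESHOLDS = [1.0, 2.0, 3.0]             # assumed voltages between levels 0|1, 1|2, 2|3
GRAY = [n ^ (n >> 1) for n in range(4)]  # level -> 2-bit Gray value: 00, 01, 11, 10

def msb(level): return (GRAY[level] >> 1) & 1
def lsb(level): return GRAY[level] & 1

def read_msb(voltage):
    # The MSB flips only once across the levels, so one comparison suffices.
    return 0 if voltage < THRESHOLDS[1] else 1

def read_lsb(voltage, msb_bit):
    # Given the MSB from the adjacent page, pick the single relevant threshold.
    if msb_bit == 0:
        return 0 if voltage < THRESHOLDS[0] else 1   # level 0 -> 0, level 1 -> 1
    return 1 if voltage < THRESHOLDS[2] else 0       # level 2 -> 1, level 3 -> 0

cell_voltages = [0.5, 1.5, 2.5, 3.5]     # one cell programmed to each level
for level, v in enumerate(cell_voltages):
    m = read_msb(v)
    assert (m, read_lsb(v, m)) == (msb(level), lsb(level))
```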
-
Patent number: 12265726
Abstract: A user can select a capacity setting for a transitional partition that determines the allocation between a low-density partition and a high-density partition in the transitional partition. The transitional partition can dynamically change among multiple settings having different capacities for the low-density partition. If the current setting of the transitional partition does not efficiently utilize the available storage space based on the user's preferences for storing different types of data in the low-density partition and the high-density partition, then the user can choose to change the transitional partition to a different setting that better suits the individual user's storage allocation preferences. Therefore, valuable storage space will not be under-utilized but instead will be repurposed for more efficient use by converting a low-density partition to a high-density partition, and vice versa.
Type: Grant
Filed: November 15, 2022
Date of Patent: April 1, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mai Ghaly, Thomas Fahrig
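A back-of-the-envelope sketch of why the capacity setting matters: usable space depends on how many blocks are allocated to the low-density region versus the high-density region. The bits-per-cell figures (1 vs. 4) and block size are illustrative assumptions, not values from the patent.

```python
def transitional_capacity_gib(total_blocks, low_density_blocks,
                              cells_per_block=2**28, low_bits=1, high_bits=4):
    """Capacity of a transitional partition for a given low/high-density split."""
    high_density_blocks = total_blocks - low_density_blocks
    bits = (low_density_blocks * cells_per_block * low_bits +
            high_density_blocks * cells_per_block * high_bits)
    return bits / 8 / 2**30

for low in (0, 64, 128, 256):   # different user-selected settings
    print(f"{low:>3} low-density blocks -> "
          f"{transitional_capacity_gib(256, low):.0f} GiB usable")
```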
-
Patent number: 12248692
Abstract: Selective packing of small block write operations is implemented prior to compression, to improve compression efficiency and hence reduce bandwidth requirements of a Remote Data Replication (RDR) facility. Compression characteristics of write IO operations are forecast, and write IO operations with similar forecast compression characteristics are pooled. Write IO operations are also grouped according to extent, device, and storage group. Write operations from a given compression pool are then preferentially selected from the extent-level grouping, next from the device-level grouping, and then from the SG-level grouping, to create an IO package. The IO package is then compressed and transmitted on the RDR facility. By creating an IO package prior to compression, it is possible to achieve greater compression than would be possible if each individual write IO operation were to be individually compressed to thereby reduce network bandwidth of the RDR facility.
Type: Grant
Filed: November 6, 2023
Date of Patent: March 11, 2025
Assignee: Dell Products, L.P.
Inventors: Sandeep Chandrashekhara, Mohammed Asher, Ramesh Doddaiah, Aamir Mohammed Vt
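A minimal sketch of the packing idea, not Dell's implementation: small writes with similar forecast compressibility are pooled, preferentially grouped by extent, then compressed together so the compressor sees more context than a single small write. The forecast heuristic and package size are assumptions.

```python
import zlib
from collections import defaultdict

def forecast_bucket(data):
    # Crude stand-in for a compressibility forecast: sample-compress a prefix.
    ratio = len(zlib.compress(data[:256])) / max(len(data[:256]), 1)
    return round(ratio, 1)

def build_packages(writes, package_size=8):
    """writes: list of (extent_id, payload). Returns compressed IO packages."""
    pools = defaultdict(list)
    for extent, payload in writes:
        pools[forecast_bucket(payload)].append((extent, payload))
    packages = []
    for pool in pools.values():
        pool.sort(key=lambda w: w[0])            # prefer packing by extent
        for i in range(0, len(pool), package_size):
            blob = b"".join(p for _, p in pool[i:i + package_size])
            packages.append(zlib.compress(blob))  # one compressed package
    return packages

writes = [(1, b"abab" * 128), (1, b"cdcd" * 128), (2, bytes(range(256)) * 2)]
pkgs = build_packages(writes)
print(len(pkgs), [len(p) for p in pkgs])
```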
-
Patent number: 12242380
Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of regions. The plurality of processing regions can be interleaved between the plurality of regions of the first memory. The processing regions can include a plurality of compute cores. The second memory can be coupled to the plurality of processing regions. The control logic can configure data flow between compute cores of one or more of the processing regions and corresponding adjacent regions of the first memory. The control logic can also configure data flow between the second memory and the compute cores of one or more of the processing regions. The control logic can also configure data flow between compute cores within one or more respective ones of the processing regions.
Type: Grant
Filed: September 12, 2022
Date of Patent: March 4, 2025
Assignee: MemryX Incorporated
Inventors: Mohammed Zidan, Jacob Botimer, Timothy Wesley, Chester Liu, Zhengya Zhang, Wei Lu
-
Patent number: 12242730
Abstract: A data arrangement method based on file system, a memory storage device and a memory control circuit unit are disclosed. The method includes: analyzing a file system stored in a system region to obtain a plurality of first logical units to which a first file belongs and first distribution information of a plurality of first physical units in a storage region, wherein the first physical units are mapped by the first logical units; determining whether to activate a data arrangement operation on the first file according to the first distribution information; after the data arrangement operation on the first file is activated, reading first data belonging to the first file from the first physical units; and writing, sequentially, the read first data to at least one second physical unit in the storage region.
Type: Grant
Filed: March 24, 2023
Date of Patent: March 4, 2025
Assignee: Hefei Core Storage Electronic Limited
Inventors: Chih-Ling Wang, Yin Ping Gao, Qi-Ao Zhu, Kuai Cao, Dong Sheng Rao
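A hedged sketch of the decision and rewrite steps: measure how scattered a file's physical units are, and if fragmentation exceeds a threshold, copy the data into consecutive physical units. The mapping representation, threshold, and function names are assumptions for illustration.

```python
def should_rearrange(physical_units, max_fragments=4):
    """Count runs of consecutive physical units; many runs means fragmented."""
    runs = 1 + sum(1 for a, b in zip(physical_units, physical_units[1:])
                   if b != a + 1)
    return runs > max_fragments

def rearrange(storage, physical_units, free_units):
    """Copy the file's data, in order, into consecutive free physical units."""
    data = [storage[u] for u in physical_units]
    new_units = free_units[:len(data)]
    for unit, chunk in zip(new_units, data):
        storage[unit] = chunk
    return new_units            # new, sequential mapping for the file

file_units = [3, 17, 42, 85, 90, 7]                  # scattered placement
storage = {u: f"chunk{u}" for u in file_units}
if should_rearrange(file_units):
    print(rearrange(storage, file_units, list(range(100, 120))))
```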
-
Patent number: 12242742
Abstract: Embodiments of the disclosure provides a method, apparatus, device, and storage medium for storing data in a digital assistant. The method includes: receiving a configuration request for one or more types of data to be stored in the digital assistant; in response to the configuration request, obtaining configuration information of respective types of data among the one or more types of data via one or more entries on a first user interface, the one or more entries corresponding to the one or more types of data, wherein the one or more types of data are to be extracted and stored, based on the configuration information in an interaction between the digital assistant and a user, for a subsequent interaction between the digital assistant and the user; and creating the digital assistant based at least on the configuration information.
Type: Grant
Filed: April 11, 2024
Date of Patent: March 4, 2025
Assignee: Beijing Zitiao Network Technology Co., Ltd.
Inventor: Ren Zhou
-
Patent number: 12235778
Abstract: An artificial neural network memory system includes at least one processor configured to generate a data access request corresponding to an artificial neural network operation; and at least one artificial neural network memory controller configured to sequentially record the data access request to generate an artificial neural network data locality pattern of the artificial neural network operation and generate an advance data access request which predicts a next data access request of the data access request generated by the at least one processor based on the artificial neural network data locality pattern.
Type: Grant
Filed: December 3, 2020
Date of Patent: February 25, 2025
Assignee: DEEPX CO., LTD.
Inventor: Lok Won Kim
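An illustrative sketch of the controller behavior described, not DEEPX's design: record the sequence of data access requests during one pass of the network, and once the pattern repeats, issue an advance request predicting the next address in the pattern. The class and method names are assumptions.

```python
class ANNMemoryController:
    def __init__(self):
        self.pattern = []        # data-locality pattern learned on the first pass
        self.learned = False
        self.pos = 0

    def on_access(self, address):
        if not self.learned:
            if self.pattern and address == self.pattern[0]:
                self.learned = True          # the pattern has wrapped around
            else:
                self.pattern.append(address)
                return None
        self.pos = (self.pattern.index(address) + 1) % len(self.pattern)
        return self.pattern[self.pos]        # the "advance data access request"

ctrl = ANNMemoryController()
trace = [0x100, 0x200, 0x300] * 3            # repeating per-inference pattern
for addr in trace:
    prefetch = ctrl.on_access(addr)
    if prefetch is not None:
        print(f"access {addr:#x} -> prefetch {prefetch:#x}")
```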
-
Patent number: 12235734
Abstract: A system can maintain first data as files and second data as objects. The system can receive a request from a remote computer to convert a file into an object. The system can receive, from a file storage protocol mount point associated with the remote computer, first metadata for the file. The system can receive, from an object storage protocol client associated with the remote computer, second metadata for an object storage bucket of the second data, wherein data of the file is to be stored as the object in the object storage bucket. The system can copy the data of the file into the object in the object storage bucket, based on the first metadata for the file, based on the second metadata for the object storage bucket, and independently of transferring the data of the file to the object storage bucket via the remote computer.
Type: Grant
Filed: July 21, 2023
Date of Patent: February 25, 2025
Assignee: DELL PRODUCTS L.P.
Inventors: Narayan Behera, Deepak Ratnaparkhi, Sameer Mohod, Anurag Chandra
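A hedged sketch of the data path only: the system copies a file's bytes into an object in a bucket using metadata gathered from the file side and the object side, without routing the bytes through the requesting remote computer. All structures and names here are illustrative assumptions.

```python
def convert_file_to_object(file_store, object_store, file_meta, bucket_meta):
    """Server-side copy of file data into an object; the remote computer only
    supplies metadata, not the data itself."""
    path = file_meta["path"]
    bucket = bucket_meta["bucket"]
    key = file_meta.get("object_key", path.rsplit("/", 1)[-1])
    object_store.setdefault(bucket, {})[key] = file_store[path]
    return {"bucket": bucket, "key": key, "size": len(file_store[path])}

file_store = {"/exports/data/report.csv": b"a,b\n1,2\n"}   # file-side data
object_store = {"analytics": {}}                            # object-side buckets
print(convert_file_to_object(file_store, object_store,
                             {"path": "/exports/data/report.csv"},
                             {"bucket": "analytics"}))
```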
-
Patent number: 12229406
Abstract: Methods, systems, and devices for speed bins to support memory compatibility are described. A host device may read a value of a register including serial presence detect data of a memory module. The serial presence detect data may be indicative of a timing constraint for operating the memory module at a first clock rate, where the timing constraint and the first clock rate may be associated with a first speed bin. The host device may select, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, where the host device may support operations according to a set of timing constraints that includes a set of values. The timing constraint may be selected from a subset of the set of timing constraints, where the subset may be exclusive of at least one of the set of values.
Type: Grant
Filed: December 20, 2023
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Eric V. Pohlmann, Neal J. Koyle
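A hedged sketch of the selection step only: the host reads a timing constraint from the module's serial presence detect data and picks the fastest clock rate whose host-side timing constraint still honors it. The bin table, values, and function name are illustrative assumptions, not the patent's figures.

```python
HOST_SPEED_BINS = [          # (clock rate in MT/s, timing constraint the host can honor, ns)
    (6400, 13.75),
    (5600, 13.75),
    (4800, 14.17),
]

def select_speed_bin(spd_timing_ns):
    """Pick the fastest bin whose host timing meets the module's constraint."""
    for rate, supported_ns in HOST_SPEED_BINS:       # fastest first
        if supported_ns >= spd_timing_ns:
            return rate
    raise ValueError("no compatible speed bin")

print(select_speed_bin(13.75))   # -> 6400
print(select_speed_bin(14.0))    # -> 4800, the slower but compatible bin
```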
-
Patent number: 12229404
Abstract: A memory system includes a nonvolatile memory including a plurality of blocks as data erase units, a measuring unit which measures an erase time at which data of each block is erased, and a block controller which writes data supplied from at least an exterior into a first block which is set in a free state and whose erase time is oldest.
Type: Grant
Filed: December 7, 2023
Date of Patent: February 18, 2025
Assignee: Kioxia Corporation
Inventors: Kazuya Kitsunai, Shinichi Kanno, Hirokuni Yano, Toshikatsu Hida, Junji Yano
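A minimal sketch of the selection rule with assumed data structures: among blocks in the free state, pick the one whose recorded erase time is oldest and write incoming data there, so recently erased blocks are not immediately reused.

```python
blocks = [
    {"id": 0, "free": True,  "erase_time": 100.0},
    {"id": 1, "free": True,  "erase_time":  42.0},   # oldest free block
    {"id": 2, "free": False, "erase_time":  10.0},
]

def pick_write_block(blocks):
    """Select the free block with the oldest erase time."""
    free = [b for b in blocks if b["free"]]
    return min(free, key=lambda b: b["erase_time"])

def write(blocks, data):
    blk = pick_write_block(blocks)
    blk["data"], blk["free"] = data, False
    return blk["id"]

print(write(blocks, b"payload"))   # -> 1
```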
-
Patent number: 12229428
Abstract: Providing persistent storage to transient cloud computing services, including: creating a cloud computing instance, wherein the cloud computing instance is created on on-premises cloud infrastructure; and storing, in non-volatile storage in a storage system that is communicatively coupled to the on-premises cloud infrastructure, data associated with the cloud computing instance.
Type: Grant
Filed: August 9, 2023
Date of Patent: February 18, 2025
Assignee: PURE STORAGE, INC.
Inventors: Emily Potyraj, Joshua Robinson, Brian Carpenter
-
Patent number: 12223183
Abstract: Optimizing copy operations in a storage array, includes combining, in dependence upon a metadata optimization policy, a plurality of copy operations into a single copy operation and splitting the single copy operation into an optimized set of executable copy operations that copy data based on memory alignment.
Type: Grant
Filed: December 4, 2023
Date of Patent: February 11, 2025
Assignee: PURE STORAGE, INC.
Inventors: Christopher Golden, Scott Smith, Luke Paulsen, David Grunwald, Jianting Cao
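A hedged sketch of the two steps named in the abstract: merge adjacent copy requests into one logical copy, then split it into alignment-bounded chunks. The 4 KiB alignment and the (src, dst, length) tuple layout are illustrative assumptions.

```python
ALIGN = 4096

def combine(copies):
    """copies: sorted (src, dst, length) tuples; merge contiguous ranges."""
    merged = [list(copies[0])]
    for src, dst, length in copies[1:]:
        last = merged[-1]
        if src == last[0] + last[2] and dst == last[1] + last[2]:
            last[2] += length                      # extend the previous copy
        else:
            merged.append([src, dst, length])
    return [tuple(c) for c in merged]

def split_aligned(src, dst, length):
    """Split one copy into executable chunks that respect alignment boundaries."""
    ops, off = [], 0
    while off < length:
        step = min(ALIGN - (src + off) % ALIGN, length - off)
        ops.append((src + off, dst + off, step))
        off += step
    return ops

combined = combine([(3000, 8192, 1000), (4000, 9192, 1000), (9000, 20000, 100)])
print(combined)                    # [(3000, 8192, 2000), (9000, 20000, 100)]
print(split_aligned(*combined[0])) # chunks cut at the 4096-byte boundary
```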
-
Patent number: 12216923
Abstract: The present application provides a computer system, a memory expansion device and a method for use in the computer system. The computer system includes multiple hosts and multiple memory expansion devices; the memory expansion devices correspond to the hosts in a one-to-one manner. Each host includes a CPU and a memory; each memory expansion device includes a first interface and multiple second interfaces. The first interface is configured to allow each memory expansion device to communicate with the corresponding CPU via a first coherence interconnection protocol, and the second interface is configured to allow each memory expansion device to communicate with a portion of memory expansion devices via a second coherence interconnection protocol. Any two memory expansion devices communicate with each other via at least two different paths, and the number of memory expansion devices that at least one of the two paths passes through is not more than one.
Type: Grant
Filed: December 12, 2022
Date of Patent: February 4, 2025
Assignee: ALIBABA (CHINA) CO., LTD.
Inventors: Yijin Guan, Tianchan Guan, Dimin Niu, Hongzhong Zheng
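A hedged sketch checking the hop-bound part of the stated connectivity property: for every pair of expansion devices, there should be at least one path that passes through no more than one intermediate device. The four-device ring below is an illustrative topology, not the one claimed in the patent.

```python
from itertools import combinations

links = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}   # ring of 4 devices

def short_paths(a, b, links):
    """Paths from a to b with at most one intermediate device."""
    paths = [[a, b]] if b in links[a] else []
    paths += [[a, m, b] for m in links[a] if b in links[m] and m not in (a, b)]
    return paths

for a, b in combinations(links, 2):
    paths = short_paths(a, b, links)
    assert len(paths) >= 1, f"no short path between devices {a} and {b}"
    print(a, b, paths)
```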
-
Patent number: 12210773
Abstract: A method of operating a storage device including first and second memory devices and a memory controller, which are connected to a single channel, the method including: transmitting first data output from the first memory device to the memory controller through a data signal line in the single channel; and transmitting a command to the second memory device through the data signal line while the memory controller receives the first data, wherein a voltage level of the data signal line is based on the command and the first data of the first memory device is loaded on the data signal line, and the first data and the command are transmitted in both directions of the data signal line.
Type: Grant
Filed: November 17, 2021
Date of Patent: January 28, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Youngmin Jo, Tongsung Kim, Chiweon Yoon, Seonkyoo Lee, Byunghoon Jeong
-
Patent number: 12204751
Abstract: In a memory system, reference voltage training per path provides the capability to train receiver and transmitter reference voltages to optimal values based on selected feedback per path from the memory device. Training receiver reference voltages to an optimal receiver reference voltage per path includes programming dedicated mode registers that enable a local receiver voltage reference adjuster circuit to adjust the receiver reference voltage per path to the optimal receiver reference voltage per path. Transmitter reference voltage training includes the capability to also train an optimal input timing delay for an optimal transmitter reference voltage. Reference voltage training can be performed by a host component and/or a test system having access to the selected feedback per path of the memory device undergoing training.
Type: Grant
Filed: June 25, 2021
Date of Patent: January 21, 2025
Assignee: Intel Corporation
Inventors: Arvind Kumar, Dean-Dexter R. Eugenio, John R. Goles, Santhosh Muskula
-
Patent number: 12204791
Abstract: Systems and methods are described for using a Deep Reinforcement Learning (DRL) agent to automatically tune Quality of Service (QoS) settings of a distributed storage system (DSS). According to one embodiment, a DRL agent is trained in a simulated environment to select QoS settings (e.g., a value of one or more of a minimum IOPS parameter, a maximum IOPS parameter, and a burst IOPS parameter). The training may involve placing the DRL agent into every feasible state representing combinations of QoS settings, workload conditions, and system metrics for a period of time for multiple iterations, and rewarding the DRL agent for selecting QoS settings that minimize an objective function based on a selected measure of system load. The trained DRL agent may then be deployed to one or more DSSs to constantly update QoS settings so as to minimize the selected measure of system load.
Type: Grant
Filed: August 2, 2023
Date of Patent: January 21, 2025
Assignee: NetApp, Inc.
Inventor: Tyler W. Cady
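A hedged sketch of the feedback loop only, not NetApp's trained agent: candidate QoS settings (min/max/burst IOPS) are evaluated against a simulated workload, and settings that minimize a stand-in system-load objective are reinforced. A tabular epsilon-greedy learner stands in for the deep RL policy; the action set and load function are assumptions.

```python
import random

ACTIONS = [(100, 1000, 1500), (200, 2000, 3000), (500, 5000, 8000)]  # (min, max, burst) IOPS

def system_load(min_iops, max_iops, burst_iops, workload_iops):
    # Toy stand-in for the selected measure of system load to be minimized.
    overload = max(0, workload_iops - max_iops) / workload_iops
    waste = max(0, max_iops - workload_iops) / max_iops
    return overload + 0.2 * waste

estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for step in range(2000):
    workload = random.randint(1500, 2500)                # simulated workload
    action = (random.choice(ACTIONS) if random.random() < 0.1
              else min(estimates, key=estimates.get))    # lowest estimated load
    counts[action] += 1
    load = system_load(*action, workload)
    estimates[action] += (load - estimates[action]) / counts[action]

print(min(estimates, key=estimates.get))   # QoS settings the learner converged on
```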
-
Patent number: 12204447
Abstract: A memory processing unit (MPU) configuration method can include mapping operations of one or more neural network models to sets of cores in a plurality of processing regions. In addition, dataflow of the one or more neural network models can be mapped to the sets of cores in the plurality of processing regions. Furthermore, configuration information can be generated based on the mapping of the operations of the one or more neural network models to the set of cores in the plurality of processing regions and the mapping of dataflow of the one or more neural network models to the sets of cores in the plurality of processing regions. The method can be implemented by generating an initial graph from a neural network model. A mapping graph can then be generated from the final graph. One or more configuration files can then be generated from the mapping graph.
Type: Grant
Filed: September 12, 2022
Date of Patent: January 21, 2025
Assignee: MemryX Incorporated
Inventors: Mohammed Zidan, Jacob Botimer, Timothy Wesley, Chester Liu, Wei Lu
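An illustrative sketch of the mapping step only, with assumed structures rather than MemryX's tooling: each layer of a small model graph is assigned to cores in one of several processing regions, and a configuration dict describing the resulting inter-region dataflow is emitted.

```python
def map_model_to_cores(layers, regions, cores_per_region):
    """Assign layers to processing regions and record the dataflow between them."""
    config = {"assignments": [], "dataflow": []}
    for i, layer in enumerate(layers):
        region = i % regions
        cores = list(range(cores_per_region))      # all cores in that region
        config["assignments"].append(
            {"layer": layer["name"], "region": region, "cores": cores})
        if i > 0:
            prev = config["assignments"][i - 1]["region"]
            config["dataflow"].append({"from_region": prev, "to_region": region})
    return config

model = [{"name": "conv1"}, {"name": "conv2"}, {"name": "fc"}]
print(map_model_to_cores(model, regions=2, cores_per_region=4))
```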