Patents Examined by Masud K Khan
-
Patent number: 11836053
Abstract: Example implementations relate to metadata operations in a storage system. An example storage system includes a machine-readable storage storing instructions executable by a processor to determine to generate a synthetic full backup based on data stream representations of a plurality of data streams. The instructions are also executable to, in response to a determination to generate the synthetic full backup, create a logical group including the data stream representations. The instructions are also executable to specify a cache resource allocation for the logical group, and generate the synthetic full backup from the data stream representations using an amount of a cache resource limited by the cache resource allocation for the logical group.
Type: Grant
Filed: September 27, 2021
Date of Patent: December 5, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: David Malcolm Falkinder, Richard Phillip Mayo, Peter Thomas Camble
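The per-group cache cap described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the `LogicalGroup` class, the FIFO eviction, and all names are assumptions made for the example.

```python
# Illustrative sketch: build a synthetic full backup from a logical group of
# data stream representations while never holding more cache entries than the
# group's cache resource allocation allows (simple FIFO eviction).

class LogicalGroup:
    def __init__(self, stream_reprs, cache_limit):
        self.stream_reprs = stream_reprs   # list of (stream_id, chunk list)
        self.cache_limit = cache_limit     # cache resource allocation (entries)

def synthetic_full_backup(group):
    """Merge the group's stream representations, capping cache usage."""
    cache = {}        # simulated cache resource (insertion-ordered dict)
    backup = []
    evictions = 0
    for stream_id, chunks in group.stream_reprs:
        for chunk in chunks:
            if chunk not in cache:
                if len(cache) >= group.cache_limit:
                    cache.pop(next(iter(cache)))   # evict oldest entry
                    evictions += 1
                cache[chunk] = True
            backup.append(chunk)
    return backup, evictions

group = LogicalGroup([("s1", ["a", "b"]), ("s2", ["b", "c", "d"])], cache_limit=2)
backup, evictions = synthetic_full_backup(group)
```

With a limit of 2 entries, the five chunk accesses above force two evictions while the backup is assembled.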
-
Patent number: 11836091
Abstract: A processor supports secure memory access in a virtualized computing environment by employing requestor identifiers at bus devices (such as a graphics processing unit) to identify the virtual machine associated with each memory access request. The virtualized computing environment uses the requestor identifiers to control access to different regions of system memory, ensuring that each VM accesses only those regions of memory that the VM is allowed to access. The virtualized computing environment thereby supports efficient memory access by the bus devices while ensuring that the different regions of memory are protected from unauthorized access.
Type: Grant
Filed: October 31, 2018
Date of Patent: December 5, 2023
Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
Inventors: Anthony Asaro, Jeffrey G. Cheng, Anirudh R. Acharya
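The requestor-identifier check can be sketched like this. The mapping tables, IDs, and address ranges are invented for illustration; real hardware performs this lookup in an IOMMU-style translation path, not in software.

```python
# Illustrative sketch: a bus device tags each memory request with a requestor
# ID; the ID maps to a VM, and each VM may only access its assigned regions.

VM_FOR_REQUESTOR = {0x10: "vm_a", 0x11: "vm_b"}      # requestor ID -> VM
ALLOWED_REGIONS = {"vm_a": [(0x0000, 0x0FFF)],       # VM -> address ranges
                   "vm_b": [(0x1000, 0x1FFF)]}

def check_access(requestor_id, address):
    """Return True only if the requesting VM owns a region containing address."""
    vm = VM_FOR_REQUESTOR.get(requestor_id)
    if vm is None:
        return False                                 # unknown device: deny
    return any(lo <= address <= hi for lo, hi in ALLOWED_REGIONS[vm])
```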
-
Patent number: 11836077
Abstract: Methods, systems, and devices for dynamically tuning host performance booster thresholds are described. A memory system may include a set of memory devices and an interface configured to communicate commands with a host system coupled with the memory system. The interface may communicate commands to the memory system according to a first command mode associated with a logical address space including a plurality of regions, and communicate commands according to a second command mode associated with physical memory addresses. The memory system may further include a controller that may determine a region activated for the second command mode, receive a first plurality of commands, and determine, upon deactivating the region, a first threshold based on a first quantity of read commands serviced according to the second command mode. The controller may activate the region for the second command mode based on a second quantity of read commands received exceeding the first threshold.
Type: Grant
Filed: September 1, 2020
Date of Patent: December 5, 2023
Assignee: Micron Technology, Inc.
Inventor: Yanhua Bi
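The threshold tuning described above can be sketched as a small state machine. This is a hedged simplification with invented names: when a region is deactivated, the count of reads it serviced in the physical-address mode becomes its threshold, and the region is re-activated once newer reads exceed that threshold.

```python
# Illustrative sketch of dynamically tuned activation thresholds for a region.

class RegionTracker:
    def __init__(self):
        self.active = True
        self.serviced = 0     # reads serviced while active (second command mode)
        self.threshold = None
        self.pending = 0      # reads seen while the region is deactivated

    def read(self):
        if self.active:
            self.serviced += 1
        else:
            self.pending += 1
            if self.threshold is not None and self.pending > self.threshold:
                self.active = True       # re-activate for the second mode
                self.pending = 0

    def deactivate(self):
        self.active = False
        self.threshold = self.serviced   # threshold derived from serviced reads
        self.serviced = 0
```

A region that serviced few reads before deactivation gets a low threshold, so it re-activates quickly if demand returns.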
-
Patent number: 11822479
Abstract: Techniques for performing cache operations are provided. The techniques include recording an indication that providing exclusive access of a first cache line to a first processor is deemed problematic; detecting speculative execution of a store instruction by the first processor to the first cache line; and in response to the detecting, refusing to provide exclusive access of the first cache line to the first processor, based on the indication.
Type: Grant
Filed: October 29, 2021
Date of Patent: November 21, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
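The refusal logic can be sketched in a few lines. This is a software analogy with invented names; in hardware this decision would live in the coherence directory or cache controller.

```python
# Illustrative sketch: a record of (core, line) pairs for which granting
# exclusive access was flagged problematic; a speculative store from that
# core to that line is then refused exclusive access.

problematic = set()                  # flagged (core_id, line_addr) pairs

def mark_problematic(core, line):
    problematic.add((core, line))

def request_exclusive(core, line, speculative):
    """Return True if exclusive access is granted."""
    if speculative and (core, line) in problematic:
        return False                 # refuse: grant was deemed problematic
    return True
```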
-
Patent number: 11809317
Abstract: A memory controlling device configured to connect to a memory module including a resistance switching memory cell array, which is partitioned into a plurality of partitions including a first partition and a second partition, is provided. A first controlling module accesses the memory module. A second controlling module determines, when an incoming request is a read request, whether there is a conflict for the first partition targeted by the read request, instructs the first controlling module to read target data of the read request from the memory module when a write to the second partition is in progress, and suspends the read request when a write to the first partition is in progress.
Type: Grant
Filed: February 24, 2022
Date of Patent: November 7, 2023
Assignees: MemRay Corporation, Yonsei University, University-Industry Foundation (UIF)
Inventors: Myoungsoo Jung, Gyuyoung Park, Miryeong Kwon
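The conflict check reduces to a simple rule, sketched below with invented names: a read proceeds if any in-flight write is to a different partition, and is suspended if a write targets the same partition.

```python
# Illustrative sketch of the read/write partition-conflict decision.

writes_in_progress = set()           # partitions currently being written

def handle_read(partition):
    """Return 'read' if the request can proceed, 'suspend' on a conflict."""
    if partition in writes_in_progress:
        return "suspend"             # write to the same partition: conflict
    return "read"                    # write elsewhere (or none): safe to read
```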
-
Patent number: 11809299
Abstract: An information handling system includes a storage system and a remote processing system. The storage system includes a storage array and a local storage usage predictor. The local storage usage predictor receives usage information from the storage array, and predicts a first usage prediction for the storage array based upon the usage information. The remote processing system includes a remote storage usage predictor remote from the storage system. The remote storage usage predictor receives the usage information and predicts a second usage prediction for the storage array based upon the usage information.
Type: Grant
Filed: May 26, 2021
Date of Patent: November 7, 2023
Assignee: Dell Products L.P.
Inventors: Cherry Changyue Dai, Arthur Fangbin Zhou
-
Patent number: 11803473
Abstract: Systems and techniques dynamically select a policy that determines whether copies of shared cache lines in a processor core complex are to be stored and maintained in a level 3 (L3) cache of the processor core complex, based on one or more cache line sharing parameters or on a counter that tracks L3 cache misses and cache-to-cache (C2C) transfers in the processor core complex, according to various embodiments. Shared cache lines are shared between processor cores or between threads. By comparing either the cache line sharing parameters or the counter to corresponding thresholds, a policy is set which defines whether copies of shared cache lines are to be retained in the L3 cache.
Type: Grant
Filed: November 8, 2021
Date of Patent: October 31, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kelley, Paul Moyer
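The counter-based variant can be sketched as follows. The counter direction and threshold value are assumptions for illustration: C2C transfers suggest that keeping shared lines in L3 would have helped, while L3 misses argue for reclaiming that capacity.

```python
# Illustrative sketch: a counter tracks C2C transfers vs. L3 misses and flips
# the retention policy for shared cache lines when it crosses a threshold.

class SharingPolicy:
    def __init__(self, threshold=4):
        self.counter = 0
        self.threshold = threshold

    def on_c2c_transfer(self):
        self.counter += 1            # sharing traffic: retention would help

    def on_l3_miss(self):
        self.counter = max(0, self.counter - 1)   # capacity pressure

    def retain_shared_in_l3(self):
        return self.counter >= self.threshold
```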
-
Patent number: 11783226
Abstract: Systems, computer-implemented methods, and computer program products to facilitate model transfer learning across evolving processes are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a condition definition component that defines one or more conditions associated with use of a model trained on first traces of a first process to make a prediction on one or more second traces of a second process. The computer executable components can further comprise a guardrail component that determines whether to use the model to make the prediction.
Type: Grant
Filed: June 25, 2020
Date of Patent: October 10, 2023
Assignee: International Business Machines Corporation
Inventors: Evelyn Duesterwald, Vatche Isahagian, Vinod Muthusamy
-
Patent number: 11782834
Abstract: In a network-on-chip (NoC) interconnect connected to one or more agents with multiple input ports, one or more switches are provided with a round robin arbiter constructed to use representations of the input ports and, in some embodiments, the current round robin state, as thermometer codes. By using thermometer code to represent port information, the correspondence between the current input and the current state to be granted can be rapidly determined through simple two-step AND and XOR operations. With such a simple logical procedure, the number of steps to make the determination, and therefore the energy required, can be reduced by log2(n) steps, or up to 43%. Using thermometer code reduces the number of computations required. Hence, the number of logic circuit elements required to carry out the calculation is reduced, shrinking the floorplan area needed for the arbiter.
Type: Grant
Filed: March 11, 2022
Date of Patent: October 10, 2023
Inventor: Boon Chuan
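A thermometer-coded round robin grant can be sketched in software. This is a simplification, not the patented circuit: the state is modeled as a thermometer mask of ports at or after the next-in-turn port, requests are ANDed with that mask, and the lowest set bit is isolated via `x & -x` (equivalent to `x ^ (x & (x - 1))`, which is where an XOR formulation appears).

```python
# Illustrative sketch: one round of round-robin arbitration with the round
# robin state kept as a thermometer code (a contiguous run of 1 bits).

def rr_grant(req, thermo_mask, nports):
    """Return (grant_bitmask, new_thermo_mask) for one arbitration round."""
    in_turn = req & thermo_mask          # requesters at or after the pointer
    candidates = in_turn if in_turn else req   # wrap around if none in turn
    if not candidates:
        return 0, thermo_mask            # no requests: state unchanged
    grant = candidates & -candidates     # isolate lowest-numbered candidate
    # Next round's thermometer: 1 bits strictly above the granted port.
    new_mask = (~((grant << 1) - 1)) & ((1 << nports) - 1)
    return grant, new_mask
```

With ports 1 and 3 requesting continuously on a 4-port switch, the grants alternate 1, 3, 1, ... as the thermometer mask advances and wraps.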
-
Patent number: 11782835
Abstract: Disclosed herein is a heterogeneous system based on unified virtual memory. The heterogeneous system based on unified virtual memory may include a host for compiling a kernel program, which is source code of a user application, in a binary form and delivering the compiled kernel program to a heterogeneous system architecture device; the heterogeneous system architecture device for processing operation of the kernel program delivered from the host in parallel using two or more different types of processing elements; and unified virtual memory shared between the host and the heterogeneous system architecture device.
Type: Grant
Filed: November 29, 2021
Date of Patent: October 10, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Joo-Hyun Lee, Young-Su Kwon, Jin-Ho Han
-
Patent number: 11782629
Abstract: A data processing method includes: in response to determining that data corresponding to received data to be written exists, determining a first data identifier of the data corresponding to the data to be written, where the first data identifier is used to obtain a first storage area corresponding to the data; generating a second data identifier of the data to be written; writing the data to be written into a second storage area; in response to receiving a data rollback instruction, obtaining a target data identifier corresponding to the data rollback instruction; and determining a target storage area based on the target data identifier to obtain rollback data from the target storage area. The second data identifier is different from the first data identifier, and the second data identifier corresponds to the second storage area, which is different from the first storage area.
Type: Grant
Filed: September 14, 2021
Date of Patent: October 10, 2023
Assignee: LENOVO (BEIJING) LIMITED
Inventors: Xianwu Sun, Quan Wang, Shengyu Zhang
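The identifier scheme can be sketched as a copy-on-write style map. Everything here is invented for illustration: each write receives a fresh data identifier pointing to its own storage area, so rollback is just a lookup by the target identifier.

```python
# Illustrative sketch: data identifiers map to distinct storage areas, so an
# older version can be recovered by its identifier after newer writes.

storage = {}          # data identifier -> storage area contents
_next_id = [0]

def write(data):
    """Write data under a new identifier; return that identifier."""
    _next_id[0] += 1
    data_id = _next_id[0]
    storage[data_id] = data          # a distinct storage area per identifier
    return data_id

def rollback(target_id):
    """Return the rollback data stored under target_id."""
    return storage[target_id]
```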
-
Patent number: 11775186
Abstract: Dynamic configuration of storage volumes based on data usage metrics is provided. Data may be initially written to a relatively low-throughput storage volume that is managed according to a usage metric. For example, a burst balance metric may be monitored and, if it falls below a threshold or reduces at a rate exceeding a threshold, the system can dynamically change to writing data to a higher-throughput data storage volume. After a period of time and/or if performance criteria are satisfied, the system can dynamically change to writing data to a lower-throughput data storage volume.
Type: Grant
Filed: August 4, 2021
Date of Patent: October 3, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Ophir Setter, Yoram Cohen, Sigal Weiner
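The switching rule can be sketched as a pure decision function. The threshold values and names below are assumptions for the example, not values from the patent.

```python
# Illustrative sketch: route writes to the low-throughput volume until its
# burst balance drops below a floor or is draining too fast.

def choose_volume(burst_balance, drain_rate,
                  balance_floor=20.0, max_drain_rate=5.0):
    """Return which volume class should receive new writes."""
    if burst_balance < balance_floor or drain_rate > max_drain_rate:
        return "high-throughput"
    return "low-throughput"
```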
-
Patent number: 11775211
Abstract: The present technology relates to an electronic device. A memory controller according to the present technology may include a host interface controller, a plurality of buffers, and a memory operation controller. The host interface controller may sequentially generate a plurality of commands based on a request received from a host. The plurality of buffers may store the plurality of commands according to command attributes. The memory operation controller may compare a sequence number of a target command stored in a target buffer among the plurality of buffers with a sequence number of a standby command stored in the remaining buffers, and may determine a process of the target command and a process of the standby command based on the comparison, wherein a buffer satisfying a flush condition among the plurality of buffers is selected as the target buffer.
Type: Grant
Filed: February 22, 2021
Date of Patent: October 3, 2023
Assignee: SK hynix Inc.
Inventor: Hye Mi Kang
-
Patent number: 11768779
Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
Type: Grant
Filed: December 16, 2019
Date of Patent: September 26, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Jieming Yin, Yasuko Eckert, Subhash Sethumurugan
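A victim-selection step based on access-type priority can be sketched as below. The access types and their demand-hit rates are invented profiling results used purely for illustration.

```python
# Illustrative sketch: each cache line remembers the access type that last
# touched it; on eviction, drop the line whose last access type produced the
# fewest demand hits during profiling.

DEMAND_HIT_RATE = {"demand_load": 0.7, "demand_store": 0.5, "prefetch": 0.1}

def choose_victim(lines):
    """lines: {line_addr: last_access_type}; return the address to evict."""
    return min(lines, key=lambda addr: DEMAND_HIT_RATE[lines[addr]])
```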
-
Patent number: 11768635
Abstract: Scaling storage resources in a storage volume, including: monitoring a usage of a volume in a storage pool that includes one or more cloud-based storage systems; determining that the usage of the volume exceeds a threshold usage; and based on the determination, expanding the resources that are included in the storage pool for servicing the volume, including: instantiating one or more new virtual drives that are included in the one or more cloud-based storage systems; and adding the one or more new virtual drives to the storage pool.
Type: Grant
Filed: April 25, 2022
Date of Patent: September 26, 2023
Assignee: PURE STORAGE, INC.
Inventors: Taher Vohra, Par Botes, Naveen Neelakantam, Ivan Jibaja
-
Patent number: 11755494
Abstract: Techniques for performing cache operations are provided. The techniques include, for a memory access class, detecting a threshold number of instances in which cache lines in an exclusive state in a cache are changed to an invalid state or a shared state without being in a modified state; in response to the detecting, treating first coherence state agnostic requests for cache lines for the memory access class as requests for cache lines in a shared state; detecting a reset event for the memory access class; and in response to detecting the reset event, treating second coherence state agnostic requests for cache lines for the memory access class as coherence state agnostic requests.
Type: Grant
Filed: October 29, 2021
Date of Patent: September 12, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
-
Patent number: 11747987
Abstract: An electronic device includes a data storage device and a host device. The host device is coupled to the data storage device via a predetermined interface and includes a processor. The processor dynamically adjusts a data transfer speed of the predetermined interface according to a data processing speed required by data to be read from or written to the data storage device.
Type: Grant
Filed: January 11, 2018
Date of Patent: September 5, 2023
Assignee: Silicon Motion, Inc.
Inventors: Fu-Jen Shih, Chia-Ching Huang
-
Patent number: 11748018
Abstract: Embodiments include systems and methods for mass data optimization. Embodiments include: receiving, from a user server, user data that is continuously collected locally by the user server, and storing the user data on a storage device of a storage module; deduplicating the user data on the storage device, performed by the storage module; compressing the user data on the storage device, performed by the storage module; transparently intercepting the user data by a data intercept module; rerouting the transparently intercepted data to a data communication optimization module for optimization and intelligent routing; optimizing communication by the data communication optimization module so that the intercepted user data is configured differently for data communication to a remote centralized datacenter or server; and transmitting the differently configured data to a centralized datacenter or server.
Type: Grant
Filed: September 30, 2021
Date of Patent: September 5, 2023
Inventor: Roux Visser
-
Patent number: 11748266
Abstract: Embodiments are for special tracking pool enhancement for core L1 address invalidates. An invalidate request is designated to fill an entry in a queue in a local cache of a processor core, the queue including a first allocation associated with processing any type of invalidate request and a second allocation associated with processing an invalidate request not requiring a response in order for a controller to be made available, the entry being in the second allocation. Responsive to designating the invalidate request to fill the entry in the queue in the local cache, a state of the controller that made the invalidate request is changed to available based at least in part on the entry being in the second allocation.
Type: Grant
Filed: March 4, 2022
Date of Patent: September 5, 2023
Assignee: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Gregory William Alexander, Richard Joseph Branciforte, Aaron Tsai, Markus Kaltenbach
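The two-allocation queue can be sketched as follows. Slot counts and names are hypothetical: a no-response invalidate that lands in the second allocation frees its controller immediately, while anything placed in the general allocation keeps the controller busy until the response arrives.

```python
# Illustrative sketch: an invalidate queue with a general allocation and a
# second allocation reserved for invalidates that require no response.

class InvalidateQueue:
    def __init__(self, general_slots, no_response_slots):
        self.general_free = general_slots
        self.no_resp_free = no_response_slots

    def enqueue(self, needs_response):
        """Return the requesting controller's state, or None if the queue is full."""
        if not needs_response and self.no_resp_free > 0:
            self.no_resp_free -= 1
            return "available"       # second allocation: controller freed now
        if self.general_free > 0:
            self.general_free -= 1
            return "busy"            # general allocation: controller waits
        return None                  # no entry available
```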
-
Patent number: 11733903
Abstract: Data units can be relocated in scale-out storage systems. For example, a computing device can receive, at a first node of a scale-out storage system, a request for a data unit. The first node can include a metadata entry associated with the data unit. The computing device can determine, based on the metadata entry, that a second node of the scale-out storage system includes the data unit. The computing device can determine, from the metadata entry, that a number of versions of the data unit in the scale-out storage system meets or exceeds a threshold. The computing device can output a command to cause the data unit to be relocated to the first node with the metadata entry.
Type: Grant
Filed: June 8, 2021
Date of Patent: August 22, 2023
Assignee: RED HAT, INC.
Inventors: Joshua Durgin, Gabriel Zvi BenHanokh
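The relocation decision reduces to a threshold test on the metadata entry, sketched below. The threshold value and metadata fields are invented for the example.

```python
# Illustrative sketch: the node holding the metadata entry decides whether to
# pull the data unit local based on how many versions of it exist.

VERSION_THRESHOLD = 3

def handle_request(metadata):
    """metadata: {'holder': node_name, 'versions': int}; return the action."""
    if metadata["versions"] >= VERSION_THRESHOLD:
        return "relocate"            # co-locate the data with its metadata entry
    return "forward"                 # just forward the read to the holder node
```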