Patents Examined by Sheng-Jen Tsai
-
Patent number: 10649667
Abstract: A system and method for managing garbage collection in Solid State Drives (SSDs) in a Redundant Array of Independent Disks (RAID) configuration, using a RAID controller, is described. A control logic can control read and write requests for the SSDs in the RAID configuration. A selection logic can select an SSD for garbage collection. Setup logic can instruct the selected SSD to enter a garbage collection setup phase. An execute logic can instruct the selected SSD to enter and exit the garbage collection execute phase.
Type: Grant
Filed: September 22, 2017
Date of Patent: May 12, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Oscar Pinto, Sreenivas Krishnan
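The claimed control flow can be sketched roughly as follows: the controller selects one SSD at a time for garbage collection so the remaining drives keep serving I/O. All names (`Ssd`, `RaidController`) and the selection heuristic are illustrative assumptions, not taken from the patent.

```python
class Ssd:
    def __init__(self, name, free_blocks):
        self.name = name
        self.free_blocks = free_blocks
        self.state = "serving"      # serving | gc-setup | gc-execute

    def enter_gc_setup(self):
        self.state = "gc-setup"     # drive prepares internal GC bookkeeping

    def enter_gc_execute(self):
        self.state = "gc-execute"

    def exit_gc_execute(self):
        self.state = "serving"
        self.free_blocks += 10      # GC reclaimed some blocks (illustrative)


class RaidController:
    def __init__(self, ssds):
        self.ssds = ssds

    def select_for_gc(self):
        # Selection logic (assumed): pick the drive with the fewest free blocks.
        return min(self.ssds, key=lambda s: s.free_blocks)

    def run_gc_cycle(self):
        target = self.select_for_gc()
        target.enter_gc_setup()     # setup logic
        target.enter_gc_execute()   # execute logic
        target.exit_gc_execute()
        return target
```

Here a single controller-driven cycle walks one drive through setup and execute phases while the others stay in the serving state.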
-
Patent number: 10642523
Abstract: A method and apparatus for updating data in a memory for electrical compensation. The method comprises: when a master chip receives a power-off signal, writing the serial number of the block being updated, or a predetermined value, into a nonvolatile memory. In the apparatus, only a nonvolatile memory need be provided external to the master chip, to store at power-off the serial number (i.e., the sequence) of the block currently being updated. Upon the next power-on, it is determined from the serial number of the block which rows lost their data during the previous power-off, and data of adjacent rows is then used to replace the data of those rows. The operation is therefore simple and efficient, so the time for updating the data is short, without affecting the memory's lifespan.
Type: Grant
Filed: May 17, 2017
Date of Patent: May 5, 2020
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Song Meng, Fei Yang, Danna Song
-
Patent number: 10628330
Abstract: A method is described for enabling inter-process communication between a first application and a second application, the first application running within a first virtual machine (VM) in a host and the second application running within a second VM in the host. The method includes receiving a request to attach a shared region of memory to a memory allocation, identifying a list of one or more physical memory pages defining the shared region that corresponds to the handle, and mapping guest memory pages corresponding to the allocation to the physical memory pages. The request may be received by a framework from the second application and includes a handle that uniquely identifies the shared region of memory, as well as an identification of at least one guest memory page corresponding to the memory allocation.
Type: Grant
Filed: April 6, 2018
Date of Patent: April 21, 2020
Assignee: VMware, Inc.
Inventors: Gustav Seth Wibling, Jagannath Gopal Krishnan
-
Patent number: 10621095
Abstract: Processing of prefetched data based on cache residency. Data to be used in future processing is prefetched. A block of data being prefetched is selected for processing, and a check is made as to whether the block of data is resident in a selected cache (e.g., L1 cache). If the block of data is resident in the selected cache, it is processed; otherwise, processing is bypassed until a later time when it is resident in the selected cache.
Type: Grant
Filed: July 20, 2016
Date of Patent: April 14, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael K. Gschwind, Timothy J. Slegel
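The residency check above can be illustrated with a toy model: process a prefetched block only if it is already in the (simulated) selected cache, otherwise defer it for a later pass. The cache representation and function name are assumptions for the sketch, not the patented implementation.

```python
def process_ready_blocks(blocks, selected_cache):
    """Split prefetched blocks into those processed now (cache-resident)
    and those deferred (not yet resident)."""
    processed, deferred = [], []
    for blk in blocks:
        if blk in selected_cache:
            processed.append(blk)   # resident in selected cache: process now
        else:
            deferred.append(blk)    # not resident: bypass until later
    return processed, deferred
```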
-
Patent number: 10621049
Abstract: Disclosed are systems and methods for generating consistent backups. A central coordinator informs each node storing a partition of the time to perform a backup. At the designated time, each node blocks updates for a corresponding time interval measured by its local clock. Each node performs the backup operation according to its own local clock. Consistent backups may be generated in spite of clock skew between the local clocks, as long as the time interval is at least as long as the maximum local clock skew among the nodes performing the backup. In some systems, the maximum local clock skew may be reduced, for example by a round-trip update latency from a client.
Type: Grant
Filed: March 12, 2018
Date of Patent: April 14, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Tate Andrew Certain, Akshat Vig, Douglas Brian Terry
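The consistency argument can be checked with a small model: each node blocks updates over an interval anchored at the agreed backup time on its own (skewed) clock, and consistency requires that all blocked windows share a common instant. The offsets and interval below are invented values for illustration.

```python
def windows_overlap(skews, block_interval):
    """skews: each node's clock offset from true time (seconds).
    Node i blocks the true-time interval [t - skew_i, t - skew_i + block_interval).
    Returns True if all nodes' blocked windows share a common instant
    (taking the agreed backup time t as the origin)."""
    starts = [-s for s in skews]
    ends = [-s + block_interval for s in skews]
    return max(starts) < min(ends)
```

With node offsets of 0.0, +0.3, and -0.2 seconds, the maximum pairwise skew is 0.5 s, so a 0.6 s blocking interval leaves a common instant while a 0.4 s interval does not.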
-
Patent number: 10613790
Abstract: A technique for performing storage tiering accesses allocation metadata in a data storage system and applies that allocation metadata when relocating data from a selected extent to a target extent. The selected extent includes a range of contiguous blocks. The allocation metadata may designate each of these blocks as either allocated or free. When relocating data from the selected extent to the target extent, the technique copies data of the selected extent on a per-block basis, checking whether each block is allocated or free before copying it to the target extent.
Type: Grant
Filed: December 30, 2016
Date of Patent: April 7, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Philippe Armangau, Feng Zhang, Xianlong Liu, Chang Yong Yu, Ruiling Dou
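The per-block check can be sketched minimally: copy only blocks the allocation metadata marks as allocated, skipping free ones. The list-based data structures here are hypothetical stand-ins for the extent and its bitmap.

```python
def relocate_extent(source_blocks, allocated, target_blocks):
    """Copy allocated blocks from source to target, skipping free blocks.
    allocated[i] is True if block i is allocated, False if free.
    Returns the number of blocks copied."""
    copied = 0
    for i, is_alloc in enumerate(allocated):
        if is_alloc:
            target_blocks[i] = source_blocks[i]
            copied += 1
    return copied
```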
-
Patent number: 10579265
Abstract: A digital asset management application stores a mixture of original and lossy assets in a user's local storage resource. This mixture is dynamically adjusted in response to usage patterns and availability of local storage. If local storage is limited, the number of original assets stored locally is reduced. If local storage resources are critically low, lossy assets are purged. A user's interactions with his/her digital assets are monitored, and assets that are perceived to be less important to a user are purged from local storage before assets that are perceived to be more important. An asset may be deemed to be "important" based on any number of relevant criteria, such as user selection, user rating, or user interaction. Coordinating asset allocation between local and cloud-based storage resources requires little or no active management on behalf of the user, thus transparently providing the user with the benefits of cloud storage.
Type: Grant
Filed: October 18, 2017
Date of Patent: March 3, 2020
Assignee: Adobe Inc.
Inventors: Jeffrey S. Andrew, Harrison W. Liu, Edward C. Wright
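The purge policy reads as a least-important-first eviction when free space drops below a threshold. A hedged sketch, where the importance scores, sizes, and watermark are all invented for illustration:

```python
def purge_assets(assets, free_bytes, low_watermark):
    """assets: list of (name, size_bytes, importance_score).
    Evict least-important assets until free space reaches the watermark.
    Returns (names_purged, resulting_free_bytes)."""
    purged = []
    for name, size, _ in sorted(assets, key=lambda a: a[2]):
        if free_bytes >= low_watermark:
            break                   # enough space reclaimed; stop evicting
        free_bytes += size          # evicting frees the asset's local space
        purged.append(name)
    return purged, free_bytes
```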
-
Patent number: 10572183
Abstract: A data processing system includes a memory and a data processor. The data processor is connected to the memory and adapted to access the memory in response to scheduled memory access requests. The data processor has power management logic that, in response to detecting a memory power state change, determines whether to retrain or suppress retraining of at least one parameter related to accessing the memory based on an operating state of the memory. The power management logic further determines a retraining interval for retraining the at least one parameter related to accessing the memory, and initiates a retraining operation in response to the memory power state change based on the operating state of the memory being outside of a predetermined threshold.
Type: Grant
Filed: October 18, 2017
Date of Patent: February 25, 2020
Assignee: Advanced Micro Devices, Inc.
Inventors: Sonu Arora, Guhan Krishnan, Kevin Brandl
-
Patent number: 10552334
Abstract: A method and system acquires cache line data associated with a load from respective hierarchical cache data storage components. As a part of the method and system, a store queue is accessed for one or more portions of a cache line associated with the load, and, if the one or more portions of the cache line are held in the store queue, the one or more portions of the cache line are stored in a load queue location associated with the load. The load is completed if the one or more portions of the cache line stored in the load queue location include all portions of the cache line associated with the load.
Type: Grant
Filed: March 24, 2017
Date of Patent: February 4, 2020
Assignee: INTEL CORPORATION
Inventors: Karthikeyan Avudaiyappan, Paul G. Chan
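The store-queue check can be illustrated with a toy model: before going to the cache hierarchy, look in the store queue for bytes of the requested line; any bytes found are placed in the load-queue entry, and the load can complete once every byte of the line is covered. The dict-based structures are invented for this sketch.

```python
def gather_from_store_queue(store_queue, line_size):
    """store_queue: dict mapping byte offset within the line -> byte value.
    Returns (held_bytes, load_complete): the portions of the line found in
    the store queue, and whether they cover the entire line."""
    held = {off: b for off, b in store_queue.items() if 0 <= off < line_size}
    return held, len(held) == line_size
```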
-
Patent number: 10481812
Abstract: A storage device includes a connection unit to which a first external device is to be connected, a first non-volatile memory in which content items are stored with associated content IDs, a first controller configured to access the content items stored in the first non-volatile memory, an antenna, a second non-volatile memory in which permission information is stored, and a second controller configured to update the permission information based on update information received from a second external device through the antenna. The update information is contained in radio waves transmitted by the second external device, and the radio waves cause the antenna to generate the power necessary to operate the second non-volatile memory and the second controller. In response to a read command from the first external device, the first controller performs a read of one of the content items based on the updated permission information.
Type: Grant
Filed: February 22, 2017
Date of Patent: November 19, 2019
Assignee: Toshiba Memory Corporation
Inventors: Shigeto Endo, Michio Ido, Keisuke Sato, Masaomi Teranishi
-
Patent number: 10481816
Abstract: Apparatus, systems, methods, and computer program products for providing dynamically assignable data latches are disclosed. A non-volatile memory die includes a non-volatile memory medium. A plurality of sets of data latches of the non-volatile memory die are configured to facilitate transmission of data to and from the non-volatile memory medium, and each of the sets of data latches is associated with a different identifier. An on-die controller is in communication with the sets of data latches. The on-die controller is configured to receive a first command for a first memory operation comprising a selected identifier, and to execute the first memory operation on the non-volatile memory medium using the set of data latches associated with the selected identifier.
Type: Grant
Filed: October 18, 2017
Date of Patent: November 19, 2019
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Mark Shlick, Hadas Oshinsky, Amir Shaharabany, Yoav Markus
-
Patent number: 10459792
Abstract: A method for a dispersed storage network begins by receiving one or more revisions of a data object for storage within a time frame. For each revision, the method facilitates storage of the revision in a selected primary storage target, where at least some encoded data slices of each set of a plurality of sets of encoded data slices are stored in the selected primary storage target. For each revision, the method then facilitates subsequent storage of the remaining encoded data slices of each set that were not stored in the selected primary storage target: it determines to store the remaining encoded data slices in another storage target, identifies the most recently stored revision of the data object, and facilitates storage of the remaining encoded data slices of that revision in the other storage target.
Type: Grant
Filed: February 23, 2018
Date of Patent: October 29, 2019
Assignee: PURE STORAGE, INC.
Inventor: Jason K. Resch
-
Patent number: 10452539
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for performing actions during simulation of an application interacting with a hybrid memory system. The actions include providing a first range of virtual addresses corresponding to a first type of memory in the hybrid memory system, and a second range of virtual addresses corresponding to a second type of memory in the hybrid memory system; receiving a data packet that is to be stored in the hybrid memory system; determining a virtual address assigned to the data packet, the virtual address being provided in cache block metadata associated with the data packet; and storing the data packet in one of the first type of memory and the second type of memory in the hybrid memory system based on the virtual address, the first range of virtual addresses, and the second range of virtual addresses.
Type: Grant
Filed: July 19, 2016
Date of Patent: October 22, 2019
Assignee: SAP SE
Inventor: Ahmad Hassan
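The routing decision reduces to a range check on the packet's virtual address. A minimal sketch, where the two ranges and the memory-type labels are made-up values standing in for the configured first and second address ranges:

```python
# Assumed example ranges for the two memory types of the hybrid system.
FIRST_TYPE_RANGE = range(0x0000, 0x8000)     # e.g., DRAM
SECOND_TYPE_RANGE = range(0x8000, 0x10000)   # e.g., NVM

def place_packet(vaddr):
    """Choose the memory type for a packet based on its virtual address."""
    if vaddr in FIRST_TYPE_RANGE:
        return "first-type"
    if vaddr in SECOND_TYPE_RANGE:
        return "second-type"
    raise ValueError("address outside both configured ranges")
```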
-
Patent number: 10430340
Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
Type: Grant
Filed: March 23, 2017
Date of Patent: October 1, 2019
Assignee: ARM Finance Overseas Limited
Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Patent number: 10423343
Abstract: An information processing device includes a main memory having a non-volatile memory and a volatile memory with higher access speed than the non-volatile memory, the volatile memory storing data of the non-volatile memory; a processor that issues read, write, and snapshot requests; and a memory controller. In response to the read request, the memory controller reads data from the volatile memory. In response to the write request, it writes the write data to the volatile memory and also writes a write history sequentially to the non-volatile memory. In response to the snapshot request, it performs snapshot processing that records in the non-volatile memory the write position of the write history up to the time of the snapshot. After the snapshot processing, it performs data restoration processing that writes the written data at the write position in the write history to the non-volatile memory.
Type: Grant
Filed: June 8, 2017
Date of Patent: September 24, 2019
Assignee: FUJITSU LIMITED
Inventor: Mitsuru Sato
-
Patent number: 10417134
Abstract: A cache memory may be configured to store a plurality of lines, where each line includes data and metadata. A circuit may be configured to determine a respective number of edges associated with each vertex of a plurality of vertices included in a graph data structure, and sort the graph data structure using the respective number of edges. The circuit may be further configured to determine a reuse value for a particular vertex of the plurality of vertices using a respective address associated with the particular vertex in the sorted graph, and store data and metadata associated with the particular vertex in a particular line of the plurality of lines in the cache memory.
Type: Grant
Filed: February 23, 2017
Date of Patent: September 17, 2019
Assignee: Oracle International Corporation
Inventors: Priyank Faldu, Jeffrey Diamond, Avadh Patel
-
Patent number: 10402321
Abstract: Provided are a computer program product, system, and method for determining the location for volumes of data being initially stored within a storage space, regardless of the physical location of the data. The storage space includes stripes composed of volumes, which can be logically represented as a utilization histogram of stripe locations offset from one another. Sometimes the stripes are fully allocated with one large volume, and sometimes partially allocated with multiple, arbitrary-sized smaller volumes. When multiple smaller volumes do not utilize all of the available stripe space, gaps form. To minimize the creation of such gaps, when a volume of data is initially stored, a start location to place the volume of data is selected by using selection criteria as guidance.
Type: Grant
Filed: April 25, 2018
Date of Patent: September 3, 2019
Assignee: International Business Machines Corporation
Inventor: Michael Keller
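One plausible selection criterion (an assumption for illustration, not necessarily the patent's) is best fit: place the new volume in the smallest free gap that can hold it, which tends to preserve large gaps for future large volumes.

```python
def choose_start(free_gaps, size):
    """free_gaps: list of (start_offset, length) tuples describing free
    space in the stripes. Returns the start offset of the smallest gap
    that fits the volume, or None if no gap is large enough."""
    fitting = [g for g in free_gaps if g[1] >= size]
    if not fitting:
        return None
    return min(fitting, key=lambda g: g[1])[0]   # best fit: smallest gap
```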
-
Patent number: 10380031
Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU), including: receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision, a translation pipeline in the MMU, including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
Type: Grant
Filed: November 27, 2017
Date of Patent: August 13, 2019
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
-
Patent number: 10346300
Abstract: In one embodiment, a processor comprises: at least one core formed on a die to execute instructions; a first memory controller to interface with an in-package memory; a second memory controller to interface with a platform memory to couple to the processor; and the in-package memory located within a package of the processor, where the in-package memory is to be identified as a more distant memory with respect to the at least one core than the platform memory. Other embodiments are described and claimed.
Type: Grant
Filed: June 21, 2017
Date of Patent: July 9, 2019
Assignee: Intel Corporation
Inventors: Avinash Sodani, Robert J. Kyanko, Richard J. Greco, Andreas Kleen, Milind B. Girkar, Christopher M. Cantalupo
-
Patent number: 10339063
Abstract: A processor includes an operations scheduler to schedule execution of operations at, for example, a set of execution units or a cache of the processor. The operations scheduler periodically adds sets of operations to a tracking array, and further identifies when an operation in the tracked set is blocked from execution scheduling in response to, for example, identifying that the operation is dependent on another operation that has not completed execution. The processor further includes a counter that is adjusted each time an operation in the tracking array is blocked from execution, and is reset each time an operation in the tracking array is executed. When the value of the counter exceeds a threshold, the operations scheduler prioritizes the remaining tracked operations for execution scheduling.
Type: Grant
Filed: July 19, 2016
Date of Patent: July 2, 2019
Assignee: Advanced Micro Devices, Inc.
Inventors: Paul James Moyer, Richard Martin Born
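The escalation mechanism described above can be sketched as a small state machine: a counter climbs while tracked operations sit blocked, resets on any progress, and crossing the threshold flips the scheduler into a prioritized mode. The class, threshold handling, and method names are illustrative assumptions.

```python
class TrackedScheduler:
    def __init__(self, threshold):
        self.threshold = threshold
        self.counter = 0
        self.prioritized = False

    def on_blocked(self):
        """Called each time a tracked operation is blocked from scheduling."""
        self.counter += 1
        if self.counter > self.threshold:
            self.prioritized = True   # prioritize remaining tracked operations

    def on_executed(self):
        """Called each time a tracked operation actually executes."""
        self.counter = 0              # progress was made: reset and stand down
        self.prioritized = False
```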