Patents Examined by Hong C Kim
  • Patent number: 10795597
    Abstract: Methods, systems, and apparatuses are described for provisioning storage devices. An example method includes specifying a logical zone granularity for logical space associated with a disk drive. The method further includes provisioning a zone of a physical space of the disk drive based at least in part on the specified logical zone granularity. The method also includes storing compressed data in the zone in accordance with the provisioning.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: October 6, 2020
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventor: Timothy R. Feldman
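    Illustrative sketch (not the patented implementation): a minimal Python model of the flow this abstract describes — provisioning a physical zone from a specified logical zone granularity and storing compressed data in it. The DiskDrive class, the 256 KB granularity, and the zlib codec are assumptions for illustration only.

      import zlib

      LOGICAL_ZONE_GRANULARITY = 256 * 1024  # assumed logical zone size in bytes

      class DiskDrive:
          """Toy model: physical space is a flat bytearray carved into zones."""
          def __init__(self, physical_bytes):
              self.physical = bytearray(physical_bytes)
              self.next_free = 0
              self.zones = {}  # zone id -> (offset, length)

          def provision_zone(self, zone_id, logical_granularity):
              # Provision a physical zone sized from the logical zone granularity.
              offset = self.next_free
              self.zones[zone_id] = (offset, logical_granularity)
              self.next_free += logical_granularity
              return offset

          def write_compressed(self, zone_id, data):
              # Store compressed data in the zone in accordance with the provisioning.
              offset, length = self.zones[zone_id]
              payload = zlib.compress(data)
              if len(payload) > length:
                  raise ValueError("compressed data exceeds provisioned zone")
              self.physical[offset:offset + len(payload)] = payload
              return len(payload)

      drive = DiskDrive(physical_bytes=4 * 1024 * 1024)
      drive.provision_zone(zone_id=0, logical_granularity=LOGICAL_ZONE_GRANULARITY)
      stored = drive.write_compressed(0, b"example data " * 1000)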
  • Patent number: 10789005
    Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: September 29, 2020
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
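    Illustrative sketch (not the patented implementation): a minimal Python split of cold data so that the smaller head stays on the faster first device and the larger tail moves to the slower second device, as the abstract requires. The 10% split ratio and the function names are assumptions.

      def migrate_cold_data(data: bytes, head_fraction: float = 0.1):
          """Split cold data: the smaller first portion remains on fast storage,
          the larger second portion moves to slow storage. The 10% ratio is an
          assumption, not taken from the patent."""
          split = int(len(data) * head_fraction)
          first_portion = data[:split]      # remains on the faster first device
          second_portion = data[split:]     # moved to the slower second device
          return first_portion, second_portion

      fast_tier, slow_tier = {}, {}

      def handle_data(key, data, is_cold):
          if is_cold:
              head, tail = migrate_cold_data(data)
              fast_tier[key] = head
              slow_tier[key] = tail
          else:
              fast_tier[key] = data

      handle_data("photo-2017", b"x" * 10_000, is_cold=True)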
  • Patent number: 10783091
    Abstract: The present disclosure concerns a memory access control system comprising: a processing device capable of operating in a plurality of operating modes, and of accessing a memory using a plurality of address aliases; and a verification circuit configured: to receive, in relation with a first read operation of a first memory location in the memory, an indication of a first of said plurality of address aliases associated with the first read operation; to verify that a current operating mode of the processing device permits the processing device to access the memory using the first address alias; to receive, during the first read operation, a first marker stored at the first memory location; and to verify, based on the first marker and on the first address alias, that the processing device is permitted to access the first memory location.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: September 22, 2020
    Assignee: STMICROELECTRONICS (ROUSSET) SAS
    Inventor: Fabrice Romain
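    Illustrative sketch (not the patented implementation): the abstract's two checks modeled in Python — the current operating mode must permit the address alias used for the read, and the marker stored at the accessed location must also permit that alias. The table contents and marker encodings are assumptions.

      # Hypothetical tables: which aliases each operating mode may use, and which
      # aliases each stored marker value allows.
      mode_allowed_aliases = {"secure": {0, 1}, "normal": {1}}
      marker_allowed_aliases = {0b01: {1}, 0b10: {0}, 0b11: {0, 1}}

      def verify_read(current_mode, alias, marker):
          """Verify a first read operation: check the alias against the current
          operating mode, then against the marker read from the location."""
          if alias not in mode_allowed_aliases.get(current_mode, set()):
              return "fault: mode may not use this alias"
          if alias not in marker_allowed_aliases.get(marker, set()):
              return "fault: location not accessible through this alias"
          return "access permitted"

      print(verify_read("normal", alias=1, marker=0b01))  # access permitted
      print(verify_read("normal", alias=0, marker=0b11))  # fault: mode may not use this alias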
  • Patent number: 10783069
    Abstract: A data storage device is utilized for storing a plurality of data, wherein the data storage device includes a memory and a controller. The memory includes a plurality of blocks, and each of the blocks includes a plurality of physical pages. The controller is coupled to the memory, maps the logical pages to the physical pages of the memory, and performs a leaping linear search for the logical pages. The controller searches the Nth logical page of the logical pages according to a predetermined value N, where N is a positive integer greater than 1. When the Nth logical page is a currently-used logical page, the controller incrementally decreases the predetermined value N to keep searching the logical pages until a non-currently-used logical page is detected.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: September 22, 2020
    Assignee: SILICON MOTION, INC.
    Inventor: Chiu-Han Chang
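    Illustrative sketch (not the patented implementation): one reading of the leaping linear search in Python — probe starting at the Nth logical page and, whenever the probed page is currently used, shrink the leap and continue until an unused page is found. The exact leap/decrement schedule is an assumption; the abstract only states that N is decreased when a used page is hit.

      def leaping_linear_search(used, n):
          # Start at the Nth logical page and leap forward; whenever the probed
          # page is currently used, decrease the leap by one (never below 1) and
          # continue until a non-currently-used page is found.
          i = n
          step = n
          while i < len(used):
              if not used[i]:
                  return i
              step = max(1, step - 1)
              i += step
          return None

      pages = [True, True, True, True, False, True, False, True]
      print(leaping_linear_search(pages, n=2))  # probes 2, 3, 4 -> returns 4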
  • Patent number: 10776021
    Abstract: A system includes a memory, a processor, a hypervisor, and a guest supervisor. The hypervisor is configured to allocate a memory page for each page table of a set of page tables and map each memory page at the same address in each page table. The memory pages store an identification value identifying the respective page table. The guest supervisor is configured to receive control from an application operating on a first page table; retrieve a first identification value associated with the first page table; store the first identification value in guest memory; switch, at a first time, from the first page table to a second page table of the set of page tables; retrieve the first identification value stored in the guest memory; and switch, at a second time, control back to the application.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: September 15, 2020
    Assignee: RED HAT, INC.
    Inventor: Michael Tsirkin
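    Illustrative sketch (not the patented implementation): a Python model of the trick in this abstract — every page table maps the same address to a page holding that table's identification value, so the guest supervisor can read the ID on entry, stash it, and use it later to switch back to the application's table. Names such as ID_ADDRESS and the string IDs are assumptions.

      # Hypothetical model: each page table maps the same well-known address to a
      # page that stores that table's identification value.
      ID_ADDRESS = 0x1000
      page_tables = {
          "pt-app":   {ID_ADDRESS: "pt-app"},
          "pt-guest": {ID_ADDRESS: "pt-guest"},
      }
      current = {"table": "pt-app"}
      guest_memory = {}

      def read(addr):
          return page_tables[current["table"]][addr]

      def enter_guest_supervisor():
          """On entry from the application: read the ID of the page table we are
          on (works because every table maps ID_ADDRESS), store it in guest
          memory, then switch to the supervisor's own table."""
          guest_memory["saved_table_id"] = read(ID_ADDRESS)
          current["table"] = "pt-guest"

      def return_to_application():
          """Later: retrieve the saved ID and switch back to the application."""
          current["table"] = guest_memory["saved_table_id"]

      enter_guest_supervisor()
      return_to_application()
      print(current["table"])  # pt-app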
  • Patent number: 10761775
    Abstract: According to some example embodiments, a method includes receiving a first command from a host device; determining if the first command is part of an association group of commands by determining that a first value of a first parameter of the first command in an association context table entry is greater than zero, the first parameter including a total number of commands in the association group of commands; determining a first value of a second parameter of the first command, the second parameter including a tag value identifying the association group of commands; decrementing the first value of the first parameter of the first command in the association context table entry; determining if the first value of the first parameter in the association context table entry is zero; and executing an action indicated in a third parameter of the first command.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: September 1, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ramdas P. Kachare, Oscar P. Pinto, Xuebin Yao, Wentao Wu, Stephen G. Fischer, Fred Worley
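    Illustrative sketch (not the patented implementation): a Python model of the association context table — look up the group by its tag, decrement the outstanding-command count, and execute the command's action when the count reaches zero. The field names (total_commands, action) and the notion of an action string are assumptions.

      # Hypothetical association-context table keyed by the group tag.
      association_table = {
          7: {"total_commands": 3},
      }

      def handle_command(tag, action_param):
          """Process one command that may belong to an association group: if the
          remaining count for its tag is above zero, decrement it, and when it
          reaches zero execute the action carried by the command."""
          entry = association_table.get(tag)
          if entry is None or entry["total_commands"] <= 0:
              return "not part of an association group"
          entry["total_commands"] -= 1
          if entry["total_commands"] == 0:
              return f"group {tag} complete, executing action: {action_param}"
          return f"group {tag} has {entry['total_commands']} command(s) outstanding"

      for _ in range(3):
          print(handle_command(tag=7, action_param="signal_completion"))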
  • Patent number: 10761988
    Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: September 1, 2020
    Assignee: Arm Limited
    Inventors: Radhika Sanjeev Jagtap, Nikos Nikoleris, Andreas Lars Sandberg, Stephan Diestelhorst
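    Illustrative sketch (not the patented implementation): a Python mapping in which consecutive cache set indices fall in the same unit (e.g. a DRAM row) while the ways of one set are spread across different units, matching the abstract's description. The geometry constants are assumptions.

      ROWS = 8           # "units" with locality-dependent latency (e.g. DRAM rows)
      SETS_PER_ROW = 4   # consecutive set indices share one unit
      WAYS = 2           # the ways of one set sit in different units

      def locations_for_index(set_index):
          """Map a cache set index to its data-array locations: consecutive
          indices fall in the same unit, while locations for the same index are
          in different units."""
          locs = []
          for way in range(WAYS):
              row = (set_index // SETS_PER_ROW + way * (ROWS // WAYS)) % ROWS
              col = set_index % SETS_PER_ROW
              locs.append((row, col))
          return locs

      print(locations_for_index(0))  # [(0, 0), (4, 0)] -> ways in rows 0 and 4
      print(locations_for_index(1))  # [(0, 1), (4, 1)] -> same rows as index 0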
  • Patent number: 10754567
    Abstract: In one embodiment, when a secondary application on an electronic device is selected for deactivation, the memory associated with the application can be gathered, compacted, and compressed into a memory freezer file. The memory freezer file can be stored in non-volatile memory with a reduced storage footprint compared to memory stored in a conventional swap file. When the selected application is to be reactivated, the compressed memory in the memory freezer file can be quickly restored to process memory.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: August 25, 2020
    Assignee: Apple Inc.
    Inventors: Andrew D. Myrick, Lionel D. Desai, Joseph Sokol, Jr.
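    Illustrative sketch (not the patented implementation): a Python freeze/thaw pair that gathers an application's pages, compacts them, and compresses them into a single blob standing in for the memory freezer file. Modeling compaction as dropping all-zero pages, and using zlib, are assumptions.

      import zlib

      def freeze(process_pages):
          """Gather, compact, and compress an application's pages into one
          'freezer' blob. Dropping empty pages models compaction here."""
          compacted = b"".join(p for p in process_pages if p.strip(b"\x00"))
          return zlib.compress(compacted)

      def thaw(freezer_blob, page_size=4096):
          """Decompress the freezer blob back into page-sized chunks."""
          raw = zlib.decompress(freezer_blob)
          return [raw[i:i + page_size] for i in range(0, len(raw), page_size)]

      pages = [b"A" * 4096, b"\x00" * 4096, b"B" * 4096]
      blob = freeze(pages)
      print(len(blob), "bytes on flash vs", 2 * 4096, "bytes of live data")
      restored = thaw(blob)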
  • Patent number: 10757452
    Abstract: An adaptive stream segment prefetcher changes the number of segments it prefetches following a client-requested segment of the same stream based on conditions associated with that stream at prefetch time. The adaptive prefetcher increases or decreases the number of segments to prefetch for a particular stream based on the number of active or concurrent clients requesting that particular stream, based on the playback duration of the particular stream by one or more clients, or some combination of both. The adaptive prefetcher continuously monitors the conditions associated with the stream such that the number of segments prefetched at a first time is greater or less than the number of segments prefetched at a later second time.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: August 25, 2020
    Assignee: Verizon Digital Media Services Inc.
    Inventor: Ravikiran Patil
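    Illustrative sketch (not the patented implementation): a Python prefetch-depth function that grows with the number of concurrent clients and with playback duration, re-evaluated at each prefetch time. The exact scaling (one extra segment per 5 clients and per 30 s of playback) is an assumption, not the patent's formula.

      def segments_to_prefetch(active_clients, playback_seconds,
                               base=2, max_prefetch=10):
          """Adaptive prefetch depth for one stream, recomputed at prefetch time
          so the depth can rise or fall between a first and a later second time."""
          depth = base + active_clients // 5 + int(playback_seconds) // 30
          return min(depth, max_prefetch)

      print(segments_to_prefetch(active_clients=3,  playback_seconds=10))   # 2
      print(segments_to_prefetch(active_clients=40, playback_seconds=300))  # 10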
  • Patent number: 10732903
    Abstract: A sub-LUN ownership mapping for multiple storage controllers of a first storage array is generated. The sub-LUN ownership mapping indicates ownership of sub-LUNs by the multiple storage controllers of the first storage array. The sub-LUN ownership mapping is transmitted to a storage controller of a second storage array. A request to align sub-LUN ownership is sent to the storage controller of the second storage array. Ownership is aligned for one or more sub-LUNs for multiple storage controllers of the second storage array.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: August 4, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ayman Abouelwafa, Sheridan Clark Kooyers
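    Illustrative sketch (not the patented implementation): a Python alignment step in which the second array builds its sub-LUN ownership to mirror the mapping received from the first array. The controller names and the one-to-one pairing of controllers across arrays are assumptions.

      # Hypothetical sub-LUN ownership map for the first array's two controllers.
      first_array_ownership = {0: "ctrl-A", 1: "ctrl-A", 2: "ctrl-B", 3: "ctrl-B"}

      def align_ownership(peer_mapping, second_array_controllers=("ctrl-C", "ctrl-D")):
          """Align the second array's sub-LUN ownership with the transmitted
          mapping: each sub-LUN owned by the first array's first controller goes
          to the second array's first controller, and so on."""
          first_controllers = sorted(set(peer_mapping.values()))
          pairing = dict(zip(first_controllers, second_array_controllers))
          return {sub_lun: pairing[owner] for sub_lun, owner in peer_mapping.items()}

      print(align_ownership(first_array_ownership))
      # {0: 'ctrl-C', 1: 'ctrl-C', 2: 'ctrl-D', 3: 'ctrl-D'}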
  • Patent number: 10725915
    Abstract: Disclosed herein are methods, systems, and processes to provide coherency across disjoint caches in clustered environments. It is determined whether a data object is owned by an owner node, where the owner node is one of multiple nodes of a cluster. If the owner node for the data object is identified by the determining, a request for the data object is sent to the owner node. However, if the owner node for the data object is not identified by the determining, a node in the cluster is selected as the owner node, and the request for the data object is sent to that owner node.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: July 28, 2020
    Assignee: Veritas Technologies LLC
    Inventors: Bhushan Jagtap, Mark Hemment, Anindya Banerjee, Ranjit Noronha, Jitendra Patidar, Kundan Kumar, Sneha Pawar
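    Illustrative sketch (not the patented implementation): routing a data-object request to its owner node, selecting an owner first if none is known. Picking the owner by hashing over the cluster membership is an assumed selection policy; the abstract does not name one.

      owners = {}          # data object id -> owner node
      cluster_nodes = ["node-a", "node-b", "node-c"]

      def request_object(obj_id, requesting_node):
          """Send the request to the owner node; if no owner is identified,
          select one and record it before sending."""
          owner = owners.get(obj_id)
          if owner is None:
              owner = cluster_nodes[hash(obj_id) % len(cluster_nodes)]
              owners[obj_id] = owner
          return f"{requesting_node} sends request for {obj_id} to owner {owner}"

      print(request_object("inode:42", "node-b"))
      print(request_object("inode:42", "node-c"))  # same owner on the second request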
  • Patent number: 10719254
    Abstract: A data storage device includes a memory device and a controller. The memory device includes multiple memory blocks. The memory blocks include single-level cell blocks and multiple-level cell blocks. The controller is coupled to the memory device. When the controller executes a predetermined procedure to write data stored in the single-level cell blocks into the multiple-level cell blocks, the controller is configured to determine whether a valid page count corresponding to each single-level cell block is greater than a threshold, and when the valid page count corresponding to more than one single-level cell block is greater than the threshold, the controller is configured to execute a first merge procedure to directly write the data stored in the single-level cell blocks with the valid page count greater than the threshold into one or more of the multiple-level cell blocks.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: July 21, 2020
    Assignee: Silicon Motion, Inc.
    Inventors: Wen-Sheng Lin, Yu-Da Chen
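    Illustrative sketch (not the patented implementation): the first merge procedure in Python — when more than one single-level cell block has a valid page count above the threshold, write those blocks' data directly into multiple-level cell blocks. The block layout and packing are simplifying assumptions.

      def first_merge_procedure(slc_blocks, valid_page_threshold):
          """Return the data to write straight into MLC blocks when more than one
          SLC block exceeds the valid page count threshold; otherwise defer to the
          device's other merge path (returned as None here)."""
          hot = [b for b in slc_blocks if b["valid_pages"] > valid_page_threshold]
          if len(hot) <= 1:
              return None
          return [b["data"] for b in hot]

      slc = [
          {"valid_pages": 120, "data": b"block0"},
          {"valid_pages": 10,  "data": b"block1"},
          {"valid_pages": 200, "data": b"block2"},
      ]
      print(first_merge_procedure(slc, valid_page_threshold=100))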
  • Patent number: 10719441
    Abstract: An electronic device handles memory access requests for data in a memory. The electronic device includes a memory controller for the memory, a last-level cache memory, a request generator, and a predictor. The predictor determines a likelihood that a cache memory access request for data at a given address will hit in the last-level cache memory. Based on the likelihood, the predictor determines: whether a memory access request is to be sent by the request generator to the memory controller for the data in parallel with the cache memory access request being resolved in the last-level cache memory, and, when the memory access request is to be sent, a type of memory access request that is to be sent. When the memory access request is to be sent, the predictor causes the request generator to send a memory request of the type to the memory controller.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: July 21, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Jieming Yin, Yasuko Eckert, Matthew R. Poremba, Steven E. Raasch, Doug Hunt
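    Illustrative sketch (not the patented implementation): deciding, from a predicted last-level-cache hit likelihood, whether to send a memory request in parallel with the cache lookup and of which type. The two thresholds and the request type names are assumptions about how the likelihood might be bucketed.

      def plan_parallel_memory_request(hit_likelihood,
                                       low_threshold=0.2, high_threshold=0.6):
          """Return the type of memory request to send in parallel with the
          last-level cache access, or None if no parallel request is sent."""
          if hit_likelihood < low_threshold:
              return "full_read"       # almost certainly a miss: fetch from memory now
          if hit_likelihood < high_threshold:
              return "prefetch_hint"   # uncertain: lighter-weight request
          return None                  # likely hit: no parallel memory request

      for p in (0.05, 0.4, 0.9):
          print(p, "->", plan_parallel_memory_request(p))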
  • Patent number: 10705927
    Abstract: According to examples, a system may include an upstream volume controller having: a processor and a non-transitory machine-readable storage medium. The storage medium may include instructions executable by the processor to freeze an upstream volume, the upstream volume being in a replication set with a downstream volume, receive a snapshot creation request, create a snapshot of the upstream volume, and send one of a snapshot permit message or a snapshot abort message to a downstream volume processor. The instructions may also be executable by the processor to unfreeze the upstream volume responsive to at least one of the sending of the one of the snapshot permit message or the snapshot abort message or expiration of a timeout corresponding to a maximum time period during which the upstream volume is to remain frozen.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: July 7, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Praveen Killamsetti, Tomasz Barszczak, Monil Mukesh Sanghavi
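    Illustrative sketch (not the patented implementation): the upstream controller's freeze / snapshot / notify / unfreeze sequence in Python, with the freeze bounded by a timeout. The callables and the synchronous structure are assumptions standing in for the controller's internal operations.

      import time

      def coordinate_snapshot(create_snapshot, notify_downstream, timeout_s=5.0):
          """Freeze the upstream volume, create the snapshot, send a permit or
          abort message downstream, and unfreeze after the notification or when
          the maximum freeze period has expired."""
          frozen_at = time.monotonic()
          print("upstream volume frozen")
          try:
              ok = create_snapshot()
              notify_downstream("snapshot_permit" if ok else "snapshot_abort")
          finally:
              waited = time.monotonic() - frozen_at
              if waited > timeout_s:
                  print(f"freeze timeout ({timeout_s}s) expired before notification")
              print("upstream volume unfrozen")

      coordinate_snapshot(create_snapshot=lambda: True,
                          notify_downstream=lambda msg: print("downstream gets:", msg))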
  • Patent number: 10664407
    Abstract: A set of data entries is transferred via a memory mapped interface from an external peripheral device to a processor device and is stored in a shared memory region. Based on a first pointer to the shared memory region, a first process executed by the processor device processes a first group of the data entries. Based on a second pointer to the shared memory region, a second process executed by the processor device processes a second group of the data entries. The second process indicates the second pointer to the first process. The first process indicates a lower one of the first pointer and the second pointer to the peripheral device.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: May 26, 2020
    Assignee: Intel Corporation
    Inventors: Anant Raj Gupta, Ingo Volkening, Jun Ye Zhou
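    Illustrative sketch (not the patented implementation): the two-pointer handoff from this abstract in Python — the first process consumes one group of entries, the second process consumes a later group and reports its pointer to the first, and the first process reports the lower of the two pointers back to the peripheral, since only entries below both pointers are safe to reclaim. The split point is an assumption.

      def run_pipeline(entries, first_count):
          """Return the pointer the first process reports to the peripheral
          device after both processes have handled their groups of entries."""
          first_ptr = first_count              # first process finished [0, first_ptr)
          second_ptr = len(entries)            # second process finished [first_ptr, second_ptr)
          reported_to_first = second_ptr       # second process -> first process
          reported_to_device = min(first_ptr, reported_to_first)
          return reported_to_device

      entries = list(range(16))                # stand-in for the memory-mapped region
      print(run_pipeline(entries, first_count=4))   # the peripheral learns pointer 4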
  • Patent number: 10642755
    Abstract: Provided are a computer program product, system, and method for invoking demote threads on processors to demote tracks from a cache. A plurality of demote ready lists indicate tracks eligible to demote from the cache. In response to determining that a number of free cache segments in the cache is below a free cache segment threshold, a determination is made of a number of demote threads to invoke on processors based on the number of free cache segments and the free cache segment threshold. The determined number of demote threads are invoked to demote tracks in the cache indicated in the demote ready lists, wherein each invoked demote thread processes one of the demote ready lists to select tracks to demote from the cache to free cache segments in the cache.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: May 5, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
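    Illustrative sketch (not the patented implementation): computing how many demote threads to invoke from the free cache segment count and the free segment threshold. The linear scaling with the shortfall is an assumption; the abstract only says the count is based on those two quantities.

      def demote_threads_to_invoke(free_segments, free_threshold, max_threads=8):
          """No demote threads while the free segment count is at or above the
          threshold; otherwise scale the thread count with the shortfall."""
          if free_segments >= free_threshold:
              return 0
          shortfall = (free_threshold - free_segments) / free_threshold
          return max(1, min(max_threads, round(shortfall * max_threads)))

      for free in (1000, 700, 300, 50):
          print(free, "->", demote_threads_to_invoke(free, free_threshold=800))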
  • Patent number: 10628317
    Abstract: Systems and methods are disclosed herein for caching data in a virtual storage environment. An exemplary method comprises monitoring, by a hardware processor, operations on a virtual storage device; identifying, by the hardware processor, transitions between blocks of the virtual storage device that have the operations performed thereon; determining, by the hardware processor, a relationship between each of the blocks based on the identified transitions; clustering the blocks into groups of related blocks based on the relationship; and applying, by the hardware processor, one of a plurality of different caching policies to the blocks in each of the groups based on the clustering.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: April 21, 2020
    Assignee: PARALLELS INTERNATIONAL GMBH
    Inventors: Anton Zelenov, Nikolay Dobrovolskiy, Serguei Beloussov
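    Illustrative sketch (not the patented implementation): counting transitions between virtual-disk blocks and grouping blocks whose transitions are frequent, so each group could then be given its own caching policy. The patent does not name a clustering algorithm; the connected-components pass over frequent edges below is an assumption.

      from collections import defaultdict

      transitions = defaultdict(int)   # (block_a, block_b) -> observed transition count

      def record_access(prev_block, block):
          """Monitor virtual-disk operations by counting block-to-block transitions."""
          if prev_block is not None:
              transitions[(prev_block, block)] += 1

      def cluster_blocks(min_count=2):
          """Group blocks connected by frequent transitions (simple union-find)."""
          parent = {}
          def find(x):
              parent.setdefault(x, x)
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          for (a, b), count in transitions.items():
              if count >= min_count:
                  parent[find(a)] = find(b)
          groups = defaultdict(set)
          for block in parent:
              groups[find(block)].add(block)
          return list(groups.values())

      # Each resulting group could then be assigned a different caching policy.
      for prev, cur in [(1, 2), (1, 2), (2, 3), (2, 3), (7, 8), (7, 8)]:
          record_access(prev, cur)
      print(cluster_blocks())   # [{1, 2, 3}, {7, 8}]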
  • Patent number: 10628312
    Abstract: A data processing system including a cache operably coupled to an interconnect and a cache controller. The cache is accessible by each bus initiator of a plurality of bus initiators. The cache includes a plurality of entries. Each entry includes a status field having coherency bits. When an entry of the plurality of entries is in a first protocol mode, the cache controller uses the coherency bits of the entry in implementing a first cache coherency protocol for data of the entry. When the entry is in a second protocol mode, the cache controller uses the coherency bits of the entry in implementing a second cache coherency protocol. The second cache coherency protocol is utilized in implementing a paced data transfer operation between a first bus initiator of the plurality of bus initiators and a second bus initiator of the plurality of bus initiators using the cache entry.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: April 21, 2020
    Assignee: NXP USA, Inc.
    Inventors: Paul Kimelman, Brian Christopher Kahne, Ehud Kalekin
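    Illustrative sketch (not the patented implementation): a cache entry whose coherency bits are interpreted under whichever protocol mode the entry is in — a conventional coherency protocol in the first mode, a paced producer/consumer transfer protocol in the second. The state encodings are assumptions.

      class CacheEntry:
          def __init__(self):
              self.protocol_mode = 1   # 1 = standard coherency, 2 = paced transfer
              self.coherency_bits = 0b00

      def handle_access(entry, initiator):
          """Interpret the same coherency bits under the entry's current
          protocol mode."""
          if entry.protocol_mode == 1:
              states = {0b00: "Invalid", 0b01: "Shared", 0b10: "Exclusive", 0b11: "Modified"}
          else:
              states = {0b00: "Empty", 0b01: "Produced", 0b10: "Consumed", 0b11: "Reserved"}
          return f"{initiator} sees entry state {states[entry.coherency_bits]}"

      e = CacheEntry()
      print(handle_access(e, "initiator-0"))
      e.protocol_mode, e.coherency_bits = 2, 0b01
      print(handle_access(e, "initiator-1"))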
  • Patent number: 10620856
    Abstract: Example methods are provided for a first host to perform input/output (I/O) fencing in a shared virtual storage environment. One example method may comprise determining that it is required to fence off a second node from a first virtual disk, and obtaining persistent reservation information associated with the first virtual disk. The persistent reservation information may include a first key associated with a first path between a first node and the first virtual disk, and a second key associated with a second path between the second node and the first virtual disk. The method may also comprise identifying the second key associated with the second path; and blocking I/O access by the second node to the first virtual disk using the second key associated with the second path, thereby fencing off the second node from the first virtual disk.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: April 14, 2020
    Assignee: VMWARE, INC.
    Inventors: Rahul Dev, Gautham Swamy
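    Illustrative sketch (not the patented implementation): a Python model of fencing by key — the persistent-reservation information ties each registered key to the path it was registered on, so blocking the key for the second node's path blocks that node's I/O to the virtual disk. The table contents and key names are hypothetical.

      # Hypothetical persistent-reservation information for one virtual disk:
      # each registered key is tied to the (node, disk) path it was registered on.
      pr_info = {
          ("node-1", "vdisk-A"): "key-0xAAAA",
          ("node-2", "vdisk-A"): "key-0xBBBB",
      }
      blocked_keys = set()

      def fence_off(node, disk):
          """Identify the key for the node's path to the disk and block I/O
          carrying that key."""
          key = pr_info[(node, disk)]
          blocked_keys.add(key)
          return key

      def submit_io(node, disk):
          key = pr_info[(node, disk)]
          return "rejected" if key in blocked_keys else "accepted"

      fence_off("node-2", "vdisk-A")
      print(submit_io("node-1", "vdisk-A"))  # accepted
      print(submit_io("node-2", "vdisk-A"))  # rejected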
  • Patent number: 10613975
    Abstract: A memory device and a dynamic garbage collection method thereof are provided. The method includes receiving a minimum operating speed; ascertaining a reference valid page count (VPC) ratio using a maximum operating speed, the minimum operating speed, and a garbage collection speed, the reference VPC ratio being ascertained by the following Formula 1; and determining whether to perform a garbage collection using the reference VPC ratio and a current average VPC ratio. Formula 1: Vr = Gp*(Jp - Mp) / (Jp*Mp + Gp*(Jp - Mp)), where Vr is the reference VPC ratio, Gp is the garbage collection speed, Jp is the maximum operating speed, and Mp is the minimum operating speed.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: April 7, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang Won Jung, Young Pil Song, Ju-Young Lee, Jun Ho Ahn, Bum Hee Lee, Sung-Hyun Cho, In Tae Hwang
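    Illustrative sketch: the abstract's Formula 1 as a small Python function, plus a garbage-collection decision built on it. The direction of the comparison against the current average VPC ratio and the example speed values are assumptions; the abstract only says the decision uses both ratios.

      def reference_vpc_ratio(gc_speed, max_speed, min_speed):
          """Formula 1: Vr = Gp*(Jp - Mp) / (Jp*Mp + Gp*(Jp - Mp)), with Gp the
          garbage collection speed, Jp the maximum operating speed, and Mp the
          minimum operating speed."""
          gp, jp, mp = gc_speed, max_speed, min_speed
          return gp * (jp - mp) / (jp * mp + gp * (jp - mp))

      def should_collect(current_avg_vpc_ratio, gc_speed, max_speed, min_speed):
          """Perform garbage collection when the current average VPC ratio is at
          or below the reference VPC ratio (assumed comparison direction)."""
          return current_avg_vpc_ratio <= reference_vpc_ratio(gc_speed, max_speed, min_speed)

      # Example numbers (illustrative, not from the patent):
      print(reference_vpc_ratio(gc_speed=100, max_speed=500, min_speed=200))
      print(should_collect(0.25, gc_speed=100, max_speed=500, min_speed=200))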