Patents Examined by Jane Wei
  • Patent number: 10248559
    Abstract: The present disclosure provides a weighting-type data relocation control device for controlling data relocation of a non-volatile memory which includes used blocks and unused blocks. Each used block is associated with a first parameter and a second parameter. The control device executes the following steps: multiplying the first and second parameters by first and second weightings respectively to obtain a priority index, in which at least one of the parameters and/or at least one of the weightings relates to a thermal detection result; comparing the priority index with at least a threshold to obtain a comparison result; and if the comparison result corresponding to a used storage block of the used blocks reaches a predetermined threshold, transferring valid data of the used storage block to one of the unused blocks.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: April 2, 2019
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Yen-Chung Chen, Chih-Ching Chien, Fu-Hsin Chen
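    Code sketch: The relocation decision above reduces to a weighted sum compared against a threshold. The short Python sketch below illustrates that flow; the block fields (erase_count, read_count), the weight values, and the way the thermal reading scales one weighting are hypothetical, since the abstract only requires that at least one parameter or weighting relate to a thermal detection result.

      THRESHOLD = 100.0

      def priority_index(param1, param2, w1, w2):
          # Weighted sum of the two per-block parameters.
          return param1 * w1 + param2 * w2

      def relocate_if_needed(used_blocks, unused_blocks, temperature_c):
          # Hypothetical choice: tie the second weighting to the thermal reading.
          w1, w2 = 1.0, 0.5 * (temperature_c / 25.0)
          for block in used_blocks:
              index = priority_index(block["erase_count"], block["read_count"], w1, w2)
              if index >= THRESHOLD and unused_blocks:
                  target = unused_blocks.pop()
                  target["data"] = block.pop("data", None)   # move the valid data
                  print(f"relocated valid data of block {block['id']} to block {target['id']}")

      used = [{"id": 0, "erase_count": 90, "read_count": 40, "data": b"x"},
              {"id": 1, "erase_count": 10, "read_count": 5, "data": b"y"}]
      free = [{"id": 7}, {"id": 8}]
      relocate_if_needed(used, free, temperature_c=60)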
  • Patent number: 10209889
    Abstract: A server logical partition (LPAR) of a virtualized computer includes shared memory regions (SMRs). The SMRs include pages of the server LPAR memory to share with client LPARs. A hypervisor utilizes an export vector to associate logical pages of the server LPAR with SMRs. The hypervisor further utilizes a reference array to associate SMRs with client LPARs that have mapped at least one physical memory page of the SMR from a logical page of the client LPAR memory. In processing an operation to unmap one or more shared physical pages from one or more LPARs, the hypervisor uses the export vector and reference array to determine which LPARs have had a mapping to the physical pages.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ramanjaneya S. Burugula, Niteesh K. Dubey, Joefon Jann, Pratap C. Pattnaik, Hao Yu
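    Code sketch: A minimal model of the bookkeeping described above: an export vector associating server-LPAR logical pages with SMRs, and a reference array associating SMRs with the client LPARs that have mapped pages from them. The dict/set layouts and names below are assumptions; the abstract does not specify the data layout.

      class HypervisorBookkeeping:
          def __init__(self, num_logical_pages):
              self.export_vector = [None] * num_logical_pages  # logical page -> SMR id
              self.reference_array = {}                        # SMR id -> {client LPAR ids}

          def export_page(self, logical_page, smr_id):
              self.export_vector[logical_page] = smr_id
              self.reference_array.setdefault(smr_id, set())

          def record_client_mapping(self, smr_id, client_lpar):
              self.reference_array[smr_id].add(client_lpar)

          def lpars_to_unmap(self, logical_pages):
              # For an unmap of shared pages, find every client LPAR that may hold
              # a mapping by following page -> SMR -> referencing LPARs.
              affected = set()
              for page in logical_pages:
                  smr = self.export_vector[page]
                  if smr is not None:
                      affected |= self.reference_array.get(smr, set())
              return affected

      hv = HypervisorBookkeeping(num_logical_pages=16)
      hv.export_page(3, smr_id="SMR-A")
      hv.record_client_mapping("SMR-A", client_lpar="client-1")
      print(hv.lpars_to_unmap([3]))   # {'client-1'}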
  • Patent number: 10203884
    Abstract: A disclosed example of using an erase-suspend feature with a memory device includes sending, by a memory host controller, an erase-suspend enable setting and an erase segment duration value to the memory device. The erase-suspend enable setting is to cause the memory device to perform an erase operation as a plurality of erase segments and to suspend the erase operation between the erase segments. The erase segment duration value is to specify a length of time for the erase segments. The memory host controller initiates an erase operation to be performed at the memory device. When the erase operation is suspended, the memory host controller initiates a second memory operation to be performed at the memory device. After the memory host controller determines that the second memory operation is complete, the memory host controller initiates resumption of the erase operation.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: February 12, 2019
    Assignee: Intel Corporation
    Inventors: Aliasgar S. Madraswala, Yogesh B. Wakchaure, Camila Jaramillo, Trupti Bemalkhedkar
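    Code sketch: An illustrative host-controller flow for the erase-suspend scheme above: the erase runs as fixed-duration segments, the host slips another operation in while the erase is suspended between segments, then resumes. The device model, segment count, and timings are hypothetical.

      class MemoryDevice:
          def __init__(self):
              self.remaining_segments = 0

          def configure(self, erase_suspend_enabled, segment_duration_us):
              self.erase_suspend_enabled = erase_suspend_enabled
              self.segment_duration_us = segment_duration_us

          def start_erase(self, total_segments):
              self.remaining_segments = total_segments

          def run_one_segment(self):
              self.remaining_segments -= 1
              return self.remaining_segments == 0   # True when the erase finishes

          def read(self, address):
              return f"data@{address:#x}"

      def host_erase_with_suspend(device, pending_reads):
          device.configure(erase_suspend_enabled=True, segment_duration_us=500)
          device.start_erase(total_segments=4)
          while True:
              if device.run_one_segment():
                  break
              # Erase is suspended between segments: service a queued operation,
              # then loop around to resume the next erase segment.
              if pending_reads:
                  print(device.read(pending_reads.pop(0)))

      host_erase_with_suspend(MemoryDevice(), pending_reads=[0x10, 0x20])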
  • Patent number: 10191849
    Abstract: A cache is sized using an ordered data structure having data elements that represent different target locations of input-output operations (IOs), and are sorted according to an access recency parameter. The cache sizing method includes continually updating the ordered data structure to arrange the data elements in the order of the access recency parameter as new IOs are issued, and setting a size of the cache based on the access recency parameters of the data elements in the ordered data structure. The ordered data structure includes a plurality of ranked ring buffers, each having a pointer that indicates a start position of the ring buffer. The updating of the ordered data structure in response to a new IO includes updating one position in at least one ring buffer and at least one pointer.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: January 29, 2019
    Assignee: VMware, Inc.
    Inventors: Jorge Guerra Delgado, Wenguang Wang
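    Code sketch: A conceptual stand-in for the recency-based cache sizing above. The patent maintains the recency ordering with ranked ring buffers so each new IO touches only one buffer position and one pointer; this simplified sketch uses a plain move-to-front list instead and picks a cache size that would cover most observed reuse distances. The coverage fraction and the list-based bookkeeping are assumptions.

      def size_cache(io_targets, coverage=0.9):
          recency = []        # most recently accessed target first
          distances = []      # reuse distance of each repeated access
          for target in io_targets:
              if target in recency:
                  distances.append(recency.index(target))
                  recency.remove(target)
              recency.insert(0, target)          # move/insert at the front
          if not distances:
              return len(recency)
          distances.sort()
          # Smallest size that would have satisfied `coverage` of repeated accesses.
          return distances[int(coverage * (len(distances) - 1))] + 1

      ios = [1, 2, 3, 1, 2, 4, 1, 5, 2, 3]
      print("suggested cache size (blocks):", size_cache(ios))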
  • Patent number: 10176115
    Abstract: A server LPAR operating in a virtualized computer shares pages with client LPARs using a shared memory region (SMR). A virtualization function of the computer receives a get-page-ID request associated with a client LPAR to identify a physical page corresponding to a shared page included in the SMR. The virtualization function requests the server LPAR to provide an identity of the physical page. The virtualization function receives a page-ID response comprising the identity of a server LPAR logical page that corresponds to the physical page. The virtualization function determines a physical page identity and communicates the physical page identity to the client LPAR. The virtualization function receives a page-ID enter request and enters an identity of the physical page into a translation element of the computer to associate a client LPAR logical page with the physical page.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: January 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ramanjaneya S. Burugula, Niteesh K. Dubey, Joefon Jann, Pratap C. Pattnaik, Hao Yu
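    Code sketch: The handshake above, modeled as a few method calls: the virtualization function asks the server LPAR which logical page backs a shared page, resolves it to a physical page, and enters the mapping into a translation table on behalf of the client LPAR. Table contents, page numbers, and class names are hypothetical.

      class ServerLPAR:
          def __init__(self, smr):
              self.smr = smr                     # shared page -> server logical page

          def identify_logical_page(self, shared_page):
              return self.smr[shared_page]

      class VirtualizationFunction:
          def __init__(self, server_page_table):
              self.server_page_table = server_page_table   # server logical page -> physical page
              self.translation_table = {}                  # (client LPAR, client logical page) -> physical page

          def get_page_id(self, server_lpar, shared_page):
              # Ask the server LPAR which of its logical pages backs the shared page.
              server_logical = server_lpar.identify_logical_page(shared_page)
              return self.server_page_table[server_logical]

          def enter_page_id(self, client_lpar_id, client_logical_page, physical_page):
              self.translation_table[(client_lpar_id, client_logical_page)] = physical_page

      vf = VirtualizationFunction(server_page_table={0x40: 0x9A0})
      server = ServerLPAR(smr={"shared-page-7": 0x40})
      phys = vf.get_page_id(server, "shared-page-7")
      vf.enter_page_id("client-2", client_logical_page=0x10, physical_page=phys)
      print(hex(phys), vf.translation_table)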
  • Patent number: 10169235
    Abstract: In an embodiment, an apparatus includes control circuitry and a memory configured to store a plurality of access instructions. The control circuitry is configured to determine an availability of a resource associated with a given access instruction of the plurality of access instructions. The associated resource is included in a plurality of resources. The control circuitry is also configured to determine a priority level of the given access instruction in response to a determination that the associated resource is unavailable. The control circuitry is further configured to add the given access instruction to a subset of the plurality of access instructions in response to a determination that the priority level is greater than a respective priority level of each access instruction in the subset. The control circuitry is also configured to remove the given access instruction from the subset in response to a determination that the associated resource is available.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: January 1, 2019
    Assignee: Apple Inc.
    Inventors: Bikram Saha, Harshavardhan Kaushikkar, Wolfgang H. Klingauf
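    Code sketch: A small model of the retained-instruction subset above: an access instruction whose resource is busy joins the subset only if its priority exceeds that of every instruction already there, and it leaves the subset once its resource becomes available. The instruction/resource representations are assumptions.

      class AccessScheduler:
          def __init__(self, resource_available):
              self.resource_available = dict(resource_available)  # resource -> bool
              self.subset = []                                     # high-priority waiters

          def issue(self, instruction):
              res = instruction["resource"]
              if self.resource_available.get(res, False):
                  return "dispatched"
              # Resource busy: keep only if strictly higher priority than the whole subset.
              if all(instruction["priority"] > w["priority"] for w in self.subset):
                  self.subset.append(instruction)
                  return "held in subset"
              return "stalled"

          def resource_now_available(self, resource):
              self.resource_available[resource] = True
              self.subset = [w for w in self.subset if w["resource"] != resource]

      sched = AccessScheduler({"bank0": False, "bank1": True})
      print(sched.issue({"id": 1, "resource": "bank0", "priority": 3}))  # held in subset
      print(sched.issue({"id": 2, "resource": "bank0", "priority": 1}))  # stalled
      sched.resource_now_available("bank0")
      print(sched.subset)  # []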
  • Patent number: 10146476
    Abstract: According to one embodiment, a wireless communication device includes a first interface, a first memory, a wireless antenna, a second memory and a second interface. The first interface is capable of electrically connecting to a first host device. The first interface communicates with the first host device in accordance with an SD interface. The first memory includes a nonvolatile memory which operates based on power supplied through the first interface from the first host device. The wireless antenna generates power based on a radio wave from a second host device. The second memory is capable of operating based on power generated by the wireless antenna. The second memory has a memory capacity lower than that of the first memory. The second interface is capable of operating based on power generated by the wireless antenna. The second interface is connected to the second memory and the first interface.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: December 4, 2018
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Keisuke Sato
  • Patent number: 10120581
    Abstract: Aspects for generating compressed data streams with lookback pre-fetch instructions are disclosed. A data compression system is provided and configured to receive and compress an uncompressed data stream as part of a lookback-based compression scheme. The data compression system determines if a current data block was previously compressed. If so, the data compression system is configured to insert a lookback instruction corresponding to the current data block into the compressed data stream. Each lookback instruction includes a lookback buffer index that points to an entry in a lookback buffer where decompressed data corresponding to the data block will be stored during a separate decompression scheme. Once the data blocks have been compressed, the data compression system is configured to move a lookback buffer index of each lookback instruction in the compressed data stream into a lookback pre-fetch instruction located earlier than the corresponding lookback instruction in the compressed data stream.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: November 6, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Amin Ansari, Vito Remo Bica, Jinxia Bai
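    Code sketch: A toy illustration of the lookback pre-fetch idea above: repeated blocks become lookback instructions carrying a lookback buffer index, and each index is additionally hoisted into a pre-fetch instruction placed a few positions earlier in the compressed stream. The stream encoding, the buffer-slot assignment for literals, and the pre-fetch distance are hypothetical.

      def compress(blocks, prefetch_distance=2):
          seen, stream = {}, []
          for block in blocks:
              if block in seen:
                  stream.append(("LOOKBACK", seen[block]))
              else:
                  seen[block] = len(seen)          # slot the decompressor will fill
                  stream.append(("LITERAL", block))
          # Decide where each lookback index should be pre-fetched (a few
          # instructions earlier), then emit the pre-fetches inline in one pass.
          prefetch_at = {}
          for pos, (op, arg) in enumerate(stream):
              if op == "LOOKBACK":
                  prefetch_at.setdefault(max(0, pos - prefetch_distance), []).append(arg)
          out = []
          for pos, instr in enumerate(stream):
              out.extend(("PREFETCH", idx) for idx in prefetch_at.get(pos, []))
              out.append(instr)
          return out

      print(compress([b"aa", b"bb", b"aa", b"cc", b"bb"]))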
  • Patent number: 10114755
    Abstract: A system, method, and computer program product for warming a cache for a task launch is described. The method includes the steps of receiving a task data structure that defines a processing task, extracting information stored in a cache warming field of the task data structure, and, prior to executing the processing task, generating a cache warming instruction that is configured to load one or more entries of a cache storage with data fetched from a memory.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: October 30, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Scott Ricketts, Nicholas Wang, Shirish Gadre, Gentaro Hirota, Robert Ohannessian, Jr.
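    Code sketch: The cache-warming flow above in miniature: a task descriptor carries a warming field, and before the task body runs the scheduler turns that field into loads that pre-fill cache lines from memory. The field encoding (start address, line count), line size, and class names are assumptions.

      CACHE_LINE = 64

      class Cache:
          def __init__(self):
              self.lines = {}

          def load(self, memory, address):
              base = address - address % CACHE_LINE
              self.lines[base] = memory[base:base + CACHE_LINE]

      def warm_cache_then_run(task, memory, cache):
          # Extract the warming field and issue the generated warming loads.
          start, num_lines = task["cache_warming"]
          for i in range(num_lines):
              cache.load(memory, start + i * CACHE_LINE)
          task["body"](cache)                    # only now execute the task

      memory = bytes(range(256)) * 16
      cache = Cache()
      task = {"cache_warming": (128, 2),
              "body": lambda c: print("warmed lines:", sorted(c.lines))}
      warm_cache_then_run(task, memory, cache)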
  • Patent number: 10048865
    Abstract: Embodiments are directed to dynamically changing the size of a partition in a storage device and to transferring storage space between partitions in a storage device. A computer system identifies portions of free space on a storage device, where the storage device has at least one partition whose offset and length are stored in a partition table. The computer system determines where the identified free space is located relative to other storage locations on the storage device. The computer system further determines that the partition is to be dynamically resized to a new size which is specified by one or more offset and length values, and based on where the identified free space is located, dynamically transforms the partition into a logical partition, and resizes the logical partition, the logical partition's offset and length values being updated in the partition table to include the new specified offset and length values.
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: August 14, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Karan Mehra, Shi Cong
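    Code sketch: A simplified model of the resize above: each partition's offset and length live in a table, free runs on the disk are identified, and a partition grows only when a free run starts exactly where it ends, by rewriting its table entry. The disk size, table layout, and adjacency rule are hypothetical.

      DISK_SECTORS = 1_000_000

      def free_runs(partitions):
          # Return (offset, length) runs not covered by any partition.
          used = sorted((p["offset"], p["offset"] + p["length"]) for p in partitions)
          runs, cursor = [], 0
          for start, end in used:
              if start > cursor:
                  runs.append((cursor, start - cursor))
              cursor = max(cursor, end)
          if cursor < DISK_SECTORS:
              runs.append((cursor, DISK_SECTORS - cursor))
          return runs

      def grow_partition(partitions, name, extra_sectors):
          part = next(p for p in partitions if p["name"] == name)
          end = part["offset"] + part["length"]
          # Only grow if a free run starts exactly where the partition ends.
          for off, length in free_runs(partitions):
              if off == end and length >= extra_sectors:
                  part["length"] += extra_sectors      # update the table entry
                  return True
          return False

      table = [{"name": "C", "offset": 2048, "length": 400_000},
               {"name": "D", "offset": 500_000, "length": 300_000}]
      print(grow_partition(table, "C", 50_000), table[0])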
  • Patent number: 10031690
    Abstract: The system, process, and methods herein describe a mechanism for creating an initial backup snapshot on deduplicated storage. Initialization IOs may be transmitted to the deduplicated storage, and those initialization IOs may be synthesized into a snapshot. Application IOs may also be transmitted in case the source-side data changes while the backup is synthesized.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: July 24, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Anestis Panidis, Assaf Natanzon, Saar Cohen
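    Code sketch: A minimal picture of the snapshot synthesis above: initialization IOs stream the existing source blocks to the deduplicated storage, and application IOs captured during the copy are applied as well so source-side changes made mid-backup are not lost. The dict-keyed-by-offset storage model is an assumption.

      def synthesize_initial_snapshot(source_blocks, application_ios):
          dedup_store = {}
          # Initialization IOs: walk the source and send every block.
          for offset, block in enumerate(source_blocks):
              dedup_store[offset] = block
          # Application IOs: source-side writes that happened during the backup.
          for offset, block in application_ios:
              dedup_store[offset] = block
          return dict(sorted(dedup_store.items()))     # the synthesized snapshot

      source_volume = [b"A", b"B", b"C", b"D"]
      writes_during_backup = [(1, b"B2")]
      print(synthesize_initial_snapshot(source_volume, writes_during_backup))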
  • Patent number: 9977737
    Abstract: Disclosed are a method, and a system embodying the method, for memory address alignment, comprising: configuring one or more naturally aligned buffer structures; providing a return address pointer in a buffer of one of the one or more naturally aligned buffer structures; determining a configuration of the one of the one or more naturally aligned buffer structures; applying modulo arithmetic to the return address and at least one parameter of the determined configuration; and providing a stacked address pointer determined in accordance with the applied modulo arithmetic.
    Type: Grant
    Filed: December 25, 2013
    Date of Patent: May 22, 2018
    Assignee: Cavium, Inc.
    Inventors: Wilson Parkhurst Snyder, II, Anna Karen Kujtkowski
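    Code sketch: The modulo step above exploits natural alignment: if a buffer's size is a power of two and the buffer starts at a multiple of that size, then the return address modulo the buffer size is its offset inside the buffer, and subtracting that offset recovers the buffer's base (the "stacked address pointer"). The buffer size and addresses below are hypothetical.

      def stacked_address(return_address, buffer_size):
          if buffer_size & (buffer_size - 1):
              raise ValueError("naturally aligned buffers must be power-of-two sized")
          # addr mod size gives the offset inside the buffer; subtracting it
          # yields the aligned base address of the buffer the pointer falls in.
          return return_address - (return_address % buffer_size)

      config = {"buffer_size": 0x1000}           # one parameter of the determined configuration
      ret_ptr = 0x7F3A2C48
      print(hex(stacked_address(ret_ptr, config["buffer_size"])))   # 0x7f3a2000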
  • Patent number: 9940069
    Abstract: A method, article of manufacture, apparatus, and system for a paging cache are disclosed. The backup cache may be broken into pages, and a subset of these pages may be memory resident. The pages may be sequentially loaded into memory to improve cache performance.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: April 10, 2018
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Scott C. Auchmoody, Orit Levin-Michael, Scott H. Ogata
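    Code sketch: A tiny model of the paged backup cache above: the cache is split into pages, only a bounded subset stays memory resident, and pages are brought in sequentially as a lookup walks through them. The resident limit and the oldest-loaded-first eviction choice are assumptions.

      from collections import OrderedDict

      class PagedCache:
          def __init__(self, pages_on_disk, resident_limit=2):
              self.pages_on_disk = pages_on_disk      # page number -> dict of entries
              self.resident = OrderedDict()           # memory-resident subset
              self.resident_limit = resident_limit

          def _load(self, page_no):
              if page_no not in self.resident:
                  if len(self.resident) >= self.resident_limit:
                      self.resident.popitem(last=False)       # evict the oldest-loaded page
                  self.resident[page_no] = self.pages_on_disk[page_no]

          def lookup(self, key):
              # Walk pages sequentially, loading each into memory before probing it.
              for page_no in sorted(self.pages_on_disk):
                  self._load(page_no)
                  if key in self.resident[page_no]:
                      return self.resident[page_no][key]
              return None

      disk = {0: {"a": 1}, 1: {"b": 2}, 2: {"c": 3}}
      cache = PagedCache(disk)
      print(cache.lookup("c"), list(cache.resident))   # 3 [1, 2]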
  • Patent number: 9851902
    Abstract: To produce output from a memory block, a first index is used to access a pointer, a mode select and a function select from a first memory. The pointer, the mode select and the function select are used to produce a second index. The pointer is used to produce the second index when the mode select is a first value. A function is used to produce the second index when the mode select is a second value. The function select identifies a function to be used to produce the second index. The second index is used to access output from a second memory.
    Type: Grant
    Filed: October 9, 2014
    Date of Patent: December 26, 2017
    Assignee: Memobit Technologies AB
    Inventors: Pär S Westlund, Lars-Olof B Svensson
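    Code sketch: The two-stage lookup above, in a few lines: a first index selects a pointer, a mode select, and a function select from a first memory; the second index is either the pointer itself (one mode value) or the result of the selected function (the other), and it then reads the second memory. The concrete functions, and the assumption that they operate on the pointer, are illustrative only.

      FUNCTIONS = [
          lambda p: p + 1,            # function select 0
          lambda p: (p * 2) % 8,      # function select 1
      ]

      first_memory = {
          # first index -> (pointer, mode_select, function_select)
          0: (5, 0, 0),   # mode 0: second index is the pointer itself
          1: (3, 1, 1),   # mode 1: second index comes from FUNCTIONS[1]
      }
      second_memory = ["out%d" % i for i in range(8)]

      def lookup(first_index):
          pointer, mode, func_sel = first_memory[first_index]
          second_index = pointer if mode == 0 else FUNCTIONS[func_sel](pointer)
          return second_memory[second_index]

      print(lookup(0), lookup(1))    # out5 out6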
  • Patent number: 9846580
    Abstract: An arithmetic processing device includes arithmetic cores, each of which comprises: an instruction controller configured to request processing corresponding to an instruction; a memory configured to store lock information indicating that a locking target address is locked, the locking target address, and priority information of the instruction; and a cache controller configured to, when storing data of a first address in a cache memory to execute a first instruction including locking of the first address from the instruction controller, suppress updating of the memory if the lock information is stored in the memory and a priority of the priority information is higher than a first priority of the first instruction.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: December 19, 2017
    Assignee: FUJITSU LIMITED
    Inventors: Kenji Fujisawa, Yuji Shirahige
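    Code sketch: The priority-gated lock bookkeeping above, sketched in Python: the lock memory records a locked address and the priority of the instruction that locked it, and a new locking instruction only overwrites that record if the existing lock's priority is not higher. The numeric priority encoding (higher number = higher priority) is an assumption.

      class LockMemory:
          def __init__(self):
              self.locked, self.address, self.priority = False, None, None

          def try_update(self, address, priority):
              # Suppress the update when the existing lock has strictly higher priority.
              if self.locked and self.priority > priority:
                  return False
              self.locked, self.address, self.priority = True, address, priority
              return True

      class CacheController:
          def __init__(self):
              self.lock_memory = LockMemory()
              self.cache = {}

          def execute_locking_load(self, address, data, priority):
              self.cache[address] = data                       # fill the cache line
              updated = self.lock_memory.try_update(address, priority)
              print(f"load {hex(address)}: lock memory "
                    f"{'updated' if updated else 'update suppressed'}")

      cc = CacheController()
      cc.execute_locking_load(0x100, b"hi", priority=5)
      cc.execute_locking_load(0x200, b"lo", priority=2)   # suppressed (5 > 2)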
  • Patent number: 9804983
    Abstract: A controlling method, a connector, and a memory storage device are provided. The controlling method includes the following steps. A connection between the memory storage device and a host system is established. A first command is received from the host system and stored into a command queue. The command queue includes at least one second command after the first command is stored into the command queue. Whether a command number of the second commands is greater than a threshold is determined. The threshold is greater than 1. If the command number is greater than the threshold, a using right of the connection is obtained and a second command is executed by the memory storage device. If the command number is not greater than the threshold, the memory storage device waits for a command from the host system. The using right of the connection belongs to the host system. Thereby, system efficiency is improved.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 31, 2017
    Assignee: PHISON ELECTRONICS CORP.
    Inventors: Ming-Hui Tseng, Kian-Fui Seng
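    Code sketch: The queue-depth handoff above in miniature: commands from the host accumulate in a queue, and only when the queued count exceeds a threshold (greater than 1) does the device claim the connection's using right and drain the queue; otherwise it keeps waiting on the host. The threshold value and command format are hypothetical.

      THRESHOLD = 3

      class MemoryStorageDevice:
          def __init__(self):
              self.command_queue = []
              self.owns_connection = False

          def receive_command(self, command):
              self.command_queue.append(command)
              if len(self.command_queue) > THRESHOLD:
                  self.owns_connection = True            # take the using right
                  self.drain_queue()
              # else: the using right stays with the host; wait for more commands

          def drain_queue(self):
              while self.command_queue:
                  print("executing", self.command_queue.pop(0))
              self.owns_connection = False               # hand the connection back

      dev = MemoryStorageDevice()
      for i in range(5):
          dev.receive_command(f"WRITE #{i}")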
  • Patent number: 9798574
    Abstract: Examples are disclosed for composing memory resources across devices. In some examples, memory resources associated with executing one or more applications by circuitry at two separate devices may be composed across the two devices. The circuitry may be capable of executing the one or more applications using a two-level memory (2LM) architecture including a near memory and a far memory. In some examples, the near memory may include near memories separately located at the two devices and a far memory located at one of the two devices. The far memory may be used to migrate one or more copies of memory content between the separately located near memories in a manner transparent to an operating system for the first device or the second device. Other examples are described and claimed.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: October 24, 2017
    Assignee: INTEL CORPORATION
    Inventors: Neven M Abou Gazala, Paul S. Diefenbaugh, Nithyananda S. Jeganathan, Eugene Gorbatov
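    Code sketch: A very small picture of the two-device, two-level-memory arrangement above: each device keeps its own near memory, one device hosts the far memory, and content moves between the two near memories by staging a copy through the far memory rather than device to device. The class names and dict-based memories are assumptions.

      class Device:
          def __init__(self, name):
              self.name = name
              self.near = {}          # near memory local to this device

      far_memory = {}                 # far memory hosted on one of the two devices

      def migrate(page_id, src, dst):
          # Stage a copy through the far memory, transparently to the OS.
          far_memory[page_id] = src.near[page_id]
          dst.near[page_id] = far_memory[page_id]

      phone, dock = Device("phone"), Device("dock")
      phone.near["page-3"] = b"app state"
      migrate("page-3", phone, dock)
      print(dock.near, far_memory)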
  • Patent number: 9798493
    Abstract: An interface receives a command corresponding to a non-volatile memory. The interface determines whether a bypass mode is enabled and whether the command is a medium-access command. A primary processing node processes the command in response to determining at least one of the following: that the bypass mode is disabled or that the command is not a medium-access command. A secondary processing node processes the command, in response to determining that the bypass mode is enabled and that the command is a medium-access command.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Shawn P. Authement, Christopher M. Dennett, Gowrisankar Radhakrishnan, Donald J. Ziebarth
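    Code sketch: The routing rule above is a simple conjunction: a command goes to the secondary processing node only when bypass mode is enabled and the command is a medium-access command; every other case goes to the primary node. The command classification below is hypothetical.

      MEDIUM_ACCESS = {"READ", "WRITE"}       # commands that touch the storage medium

      def route(command, bypass_enabled):
          is_medium_access = command["op"] in MEDIUM_ACCESS
          if bypass_enabled and is_medium_access:
              return "secondary node"
          return "primary node"

      print(route({"op": "READ", "lba": 10}, bypass_enabled=True))    # secondary node
      print(route({"op": "READ", "lba": 10}, bypass_enabled=False))   # primary node
      print(route({"op": "IDENTIFY"}, bypass_enabled=True))           # primary node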
  • Patent number: 9753853
    Abstract: Methods and systems for managing caching mechanisms in storage systems are provided where a global cache management function manages multiple independent cache pools and a global cache pool. As an example, the method includes: splitting a cache storage into a plurality of independently operating cache pools, each cache pool comprising storage space for storing a plurality of cache blocks for storing data related to an input/output (“I/O”) request and metadata associated with each cache pool; receiving the I/O request for writing a data; operating a hash function on the I/O request to assign the I/O request to one of the plurality of cache pools; and writing the data of the I/O request to one or more of the cache blocks associated with the assigned cache pool. In an aspect, this allows efficient I/O processing across multiple processors simultaneously.
    Type: Grant
    Filed: October 9, 2014
    Date of Patent: September 5, 2017
    Assignee: NETAPP, INC.
    Inventors: Arindam Banerjee, Donald R. Humlicek
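    Code sketch: The hashed pool assignment above: the cache is split into independently operating pools, and a hash of the I/O request picks the pool whose blocks the write lands in, so different processors can work different pools in parallel. Hashing on the target LBA, the pool count, and the data layout are assumptions.

      import hashlib

      NUM_POOLS = 4

      class CachePool:
          def __init__(self):
              self.blocks = {}        # LBA -> cached data (plus per-pool metadata)

      pools = [CachePool() for _ in range(NUM_POOLS)]

      def assign_pool(io_request):
          digest = hashlib.sha256(str(io_request["lba"]).encode()).digest()
          return digest[0] % NUM_POOLS

      def handle_write(io_request):
          pool_id = assign_pool(io_request)
          pools[pool_id].blocks[io_request["lba"]] = io_request["data"]
          return pool_id

      for lba in (11, 42, 77, 11):
          print("LBA", lba, "-> pool", handle_write({"lba": lba, "data": b"x"}))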
  • Patent number: 9747028
    Abstract: Techniques for addressing performance degradation when a computer system is in a memory-constrained state while running one or more applications are described herein. While executing, a computer system may monitor system memory and record one or more tracked sample types, and may use the collection of tracked sample types to aggregate memory-size equivalents for the tracked sample types and calculate a simulated memory pressure. The simulated memory pressure may be applied to the system by allocating memory from a memory manager, thereby reducing memory pressure.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: August 29, 2017
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Nicholas Alexander Allen
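    Code sketch: The simulated-pressure loop above, reduced to a few lines: tracked samples are mapped to memory-size equivalents, summed into a simulated pressure figure, and that much memory is then actually allocated so the system feels the pressure. The sample types, their size equivalents, and the allocation mechanism (a held bytearray) are hypothetical.

      SIZE_EQUIVALENTS = {            # bytes attributed to each tracked sample type
          "page_fault": 4096,
          "cache_miss": 512,
      }

      class PressureSimulator:
          def __init__(self):
              self.samples = []
              self.ballast = []       # allocations held to apply the pressure

          def record(self, sample_type):
              self.samples.append(sample_type)

          def simulated_pressure_bytes(self):
              return sum(SIZE_EQUIVALENTS.get(s, 0) for s in self.samples)

          def apply_pressure(self):
              target = self.simulated_pressure_bytes()
              self.ballast.append(bytearray(target))    # allocate from the heap
              return target

      sim = PressureSimulator()
      for s in ("page_fault", "cache_miss", "page_fault"):
          sim.record(s)
      print("applied", sim.apply_pressure(), "bytes of simulated pressure")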