Patents Examined by Hiep Nguyen
  • Patent number: 9996299
    Abstract: A data storage device may be configured to write first data to a first set of storage elements of a non-volatile memory and to write second data to a second set of storage elements of the non-volatile memory. The first data may be processed by a data shaping operation, and the second data may not be processed by the data shaping operation. The data storage device may be further configured to read a representation of the second data from the second set of storage elements and to determine a block health metric of a portion of the non-volatile memory based on the representation of the second data. The portion may include the first set of storage elements and the second set of storage elements. As an illustrative, non-limiting example, the portion may be a first block of the non-volatile memory.
    Type: Grant
    Filed: October 27, 2015
    Date of Patent: June 12, 2018
    Assignee: Western Digital Technologies, Inc.
    Inventors: Nian Niles Yang, Idan Alrod
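The following is a minimal Python sketch of the block-health idea described in the entry above, not the patented implementation: shaped data and unshaped reference data are written to different sets of storage elements, and the health metric is estimated from the bit errors observed when the unshaped data is read back. The XOR-based "shaping", the noise model, and all names are illustrative assumptions.

```python
import random

def shape(data: bytes, key: int = 0xA5) -> bytes:
    """Illustrative stand-in for a data shaping operation (simple XOR whitening)."""
    return bytes(b ^ key for b in data)

def read_with_noise(stored: bytes, raw_bit_error_rate: float) -> bytes:
    """Model reading a representation of the data back from the memory."""
    out = bytearray(stored)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < raw_bit_error_rate:
                out[i] ^= 1 << bit
    return bytes(out)

def block_health_metric(written: bytes, read_back: bytes) -> float:
    """Health metric: fraction of bits that survived in the unshaped reference data."""
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(written, read_back))
    return 1.0 - flipped / (8 * len(written))

# First data is shaped before being written; second data is written as-is.
first_data = shape(bytes(range(64)))      # -> first set of storage elements
second_data = bytes(64)                   # -> second set (unshaped reference pattern)

read_back = read_with_noise(second_data, raw_bit_error_rate=0.002)
print(f"block health: {block_health_metric(second_data, read_back):.4f}")
```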
  • Patent number: 9996284
    Abstract: A memory system has a multi-channel volatile memory subsystem that is coupled to a non-volatile (NV) memory subsystem to provide independent, configurable backup of data. The volatile memory subsystem has one or more main memory modules that use a form of volatile memory such as DRAM, for which the NV subsystem provides selective persistent backup. The main memory modules are dual in-line memory modules (DIMMs) using DDR SDRAM memory devices. The non-volatile memory subsystem (NV backup) includes an NV controller and non-volatile memory (NVM). The NV backup can also include a memory cache to aid with handling and storage of data. In certain embodiments, the NV controller and the non-volatile memory are coupled to the one or more DIMM channels of the main memory via associated signal lines. Such signal lines can be, for example, traces on a motherboard, and may include one or more signal buses for conveying data, address, and/or control signals.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: June 12, 2018
    Assignee: Netlist, Inc.
    Inventor: Hyun Lee
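The entry above describes a hardware architecture rather than an algorithm, but the backup flow can be hedged into a toy software model: volatile DIMM contents selected for persistence are staged through an NV controller's small cache and flushed into non-volatile memory. The class, the cache size, and the selective range are assumptions, not the patent's interfaces.

```python
class NVBackup:
    """Toy model of an NV controller with a small staging cache and an NVM store."""

    def __init__(self, cache_size: int = 4):
        self.cache = []            # staging cache for in-flight data
        self.cache_size = cache_size
        self.nvm = {}              # persistent backing store (address -> data)

    def backup(self, address: int, data: bytes) -> None:
        self.cache.append((address, data))
        if len(self.cache) >= self.cache_size:
            self.flush()

    def flush(self) -> None:
        for address, data in self.cache:
            self.nvm[address] = data
        self.cache.clear()


# Volatile DIMM contents modeled as a dict; only "persistent" ranges are backed up.
dimm = {addr: bytes([addr % 256] * 8) for addr in range(16)}
persistent_range = range(0, 8)             # selective backup, not the whole DIMM

nv = NVBackup()
for addr in persistent_range:              # e.g. triggered by a power-loss signal
    nv.backup(addr, dimm[addr])
nv.flush()
print(f"backed up {len(nv.nvm)} of {len(dimm)} DIMM entries")
```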
  • Patent number: 9990400
    Abstract: Techniques are disclosed relating to an in-memory cache. In some embodiments, in response to determining that data for a requested entry is not present in the cache (e.g., because it has been evicted), a computing system is configured to invoke cached program code associated with the entry. In some embodiments, the computing system is configured to provide data generated by the program code in response to requests that indicate the entry. In some embodiments, the computing system is configured to store the generated data in the cache. In various embodiments, this may avoid cache misses and provide configurability in responding to requests to access the cache.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: June 5, 2018
    Assignee: salesforce.com, inc.
    Inventors: Barathkumar Sundaravaradan, Christopher James Wall, Lawrence Thomas Lopez, Paul Sydell, Sreeram Duvur, Vijayanth Devadhar
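A minimal sketch of the cache behavior described above, under the assumption that the "cached program code" can be modeled as a plain Python callable registered per entry: on a miss (for example after eviction), the associated code is invoked, its output answers the request, and the generated data is stored back into the cache. The class and method names are illustrative.

```python
from typing import Any, Callable, Dict

class CodeBackedCache:
    """Cache whose entries fall back to associated program code on a miss."""

    def __init__(self):
        self._data: Dict[str, Any] = {}
        self._code: Dict[str, Callable[[], Any]] = {}

    def register(self, key: str, generator: Callable[[], Any]) -> None:
        """Associate program code with an entry."""
        self._code[key] = generator

    def put(self, key: str, value: Any) -> None:
        self._data[key] = value

    def get(self, key: str) -> Any:
        if key in self._data:                 # ordinary hit
            return self._data[key]
        if key in self._code:                 # entry evicted or never stored:
            value = self._code[key]()         # invoke the cached program code,
            self._data[key] = value           # store the generated data,
            return value                      # and answer the request with it
        raise KeyError(key)


cache = CodeBackedCache()
cache.register("user:42:profile", lambda: {"id": 42, "name": "Ada"})
print(cache.get("user:42:profile"))   # miss -> generated, then cached
print(cache.get("user:42:profile"))   # hit
```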
  • Patent number: 9984002
    Abstract: Techniques are disclosed relating to an in-memory, software-managed cache configured to store web application data. In some embodiments, operations to cache data specify a visibility parameter for the data, among a plurality of namespaces. In some embodiments, requests to access cached data are checked, based on a request's namespace and the visibility parameter for the cached data, to determine whether they are allowed to proceed. In some embodiments, this may facilitate caching data using shared computing systems and data structures while maintaining configurable privacy for cached data.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: May 29, 2018
    Assignee: salesforce.com, inc.
    Inventors: Barathkumar Sundaravaradan, Christopher James Wall, Lawrence Thomas Lopez, Paul Sydell, Sreeram Duvur, Vijayanth Devadhar
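A hedged sketch of the namespace-visibility check described above: each cached entry records the namespace that stored it and a visibility parameter, and a read is allowed only if the requesting namespace passes the check. The two visibility values and all identifiers are assumptions for illustration.

```python
from typing import Any, Dict, Tuple

class NamespacedCache:
    """Shared cache in which each entry records its namespace and a visibility setting."""

    def __init__(self):
        # key -> (value, owning namespace, visibility: "private" or "shared")
        self._entries: Dict[str, Tuple[Any, str, str]] = {}

    def put(self, key: str, value: Any, namespace: str, visibility: str = "private") -> None:
        self._entries[key] = (value, namespace, visibility)

    def get(self, key: str, requesting_namespace: str) -> Any:
        value, owner, visibility = self._entries[key]
        # The access check: private entries are visible only inside their own namespace.
        if visibility == "private" and owner != requesting_namespace:
            raise PermissionError(f"{requesting_namespace!r} may not read {key!r}")
        return value


cache = NamespacedCache()
cache.put("session", "abc123", namespace="org-a", visibility="private")
cache.put("schema_version", 7, namespace="org-a", visibility="shared")

print(cache.get("schema_version", requesting_namespace="org-b"))   # allowed
try:
    cache.get("session", requesting_namespace="org-b")              # blocked
except PermissionError as err:
    print("denied:", err)
```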
  • Patent number: 9977744
    Abstract: A memory system includes a memory device of lower read operation speed; a memory cache of higher read operation speed; and a controller suitable for: setting, as a prefetch pattern, one of the access patterns to the memory device, each pattern defined by a pair of former and latter addresses provided to the memory system within a set input time interval; performing a prefetch operation of caching data corresponding to the latter address from the memory device to the memory cache according to the prefetch pattern; and reading the cached data from the memory cache in response to a read command provided with the latter address of the prefetch pattern.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: May 22, 2018
    Assignee: SK Hynix Inc.
    Inventor: Hae-Gi Choi
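A rough Python model of the prefetch scheme above: the controller records (former, latter) address pairs that arrive within the set input time interval, treats the most frequent pair as the prefetch pattern, and caches the latter address's data whenever the former address is read. The frequency-based pattern selection and the class names are assumptions.

```python
from collections import Counter

class PrefetchingController:
    """Learns (former, latter) address pairs seen within a time window and prefetches."""

    def __init__(self, window: float = 0.010):
        self.window = window          # set input time interval, in seconds
        self.pairs = Counter()        # observed access patterns
        self.pattern = None           # the chosen prefetch pattern (former, latter)
        self.cache = {}               # fast memory cache
        self.last = None              # (address, timestamp) of the previous access

    def _slow_read(self, address: int) -> bytes:
        return bytes([address % 256] * 4)     # stand-in for the slow memory device

    def read(self, address: int, now: float) -> bytes:
        # Learn: record the pair if the two accesses fall inside the time interval.
        if self.last and now - self.last[1] <= self.window:
            self.pairs[(self.last[0], address)] += 1
            self.pattern = self.pairs.most_common(1)[0][0]
        self.last = (address, now)

        data = self.cache.pop(address, None)   # served from the cache if prefetched
        if data is None:
            data = self._slow_read(address)

        # Prefetch: if this address is the pattern's former address, cache the latter.
        if self.pattern and self.pattern[0] == address:
            self.cache[self.pattern[1]] = self._slow_read(self.pattern[1])
        return data


ctrl = PrefetchingController()
for t in range(3):                             # repeated 0x100 -> 0x200 access pattern
    ctrl.read(0x100, now=t + 0.000)
    ctrl.read(0x200, now=t + 0.005)
print("prefetch pattern:", ctrl.pattern)
```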
  • Patent number: 9972361
    Abstract: Various embodiments for audibly mapping computing components in a computer storage system, by a processor device, are provided. In one embodiment, a method comprises creating a detectible audible pattern using an actuator arm and head assembly of a hard disk drive operating in the computer storage system for physically mapping the hard disk drive to a logical location within the computer storage system.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: May 15, 2018
    Assignee: International Business Machines Corporation
    Inventors: Itzhack Goldberg, Harry McGregor, Christopher B. Moore, Neil Sondhi
  • Patent number: 9965393
    Abstract: A multi-core processor providing heterogeneous processor cores and a shared cache is presented.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: May 8, 2018
    Assignee: Intel Corporation
    Inventors: Frank T. Hady, Mason Cabot, Mark B. Rosenbluth, John Beck
  • Patent number: 9959064
    Abstract: A primary write request that is to modify a primary portion of primary data stored in a primary storage node is received. The primary write request is to be replicated to create a current secondary write request. The current secondary write request is to modify a current secondary portion of secondary data that is stored in a secondary storage node. A current data range of the current secondary portion is determined. A determination is made of whether a previous secondary write request is in process of modifying a previous data range that at least partially overlaps with a current data range of the current secondary portion. Execution of the primary write request is suspended, until the previous secondary write request has completed updating the secondary storage node.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: May 1, 2018
    Assignee: NetApp, Inc.
    Inventors: Manoj V. Sundararajan, Ching-Yuk Paul Ngan, Yuedong Mu, Susan M. Coatney
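A small threading sketch of the overlap rule described above: in-flight secondary write ranges are tracked, and a new write whose data range overlaps a previous in-flight range is suspended until that earlier write completes. The range representation and class name are assumptions.

```python
import threading

class ReplicationTracker:
    """Suspends a new secondary write while an overlapping earlier write is in flight."""

    def __init__(self):
        self._in_flight = set()                 # set of (start, end) byte ranges
        self._cv = threading.Condition()

    @staticmethod
    def _overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def begin_write(self, start: int, end: int) -> None:
        with self._cv:
            # Suspend until no previous secondary write overlaps this data range.
            while any(self._overlaps((start, end), r) for r in self._in_flight):
                self._cv.wait()
            self._in_flight.add((start, end))

    def complete_write(self, start: int, end: int) -> None:
        with self._cv:
            self._in_flight.discard((start, end))
            self._cv.notify_all()


tracker = ReplicationTracker()
tracker.begin_write(0, 4096)                    # earlier replicated write in progress

def later_overlapping_write():
    tracker.begin_write(2048, 8192)             # overlaps, so this blocks...
    print("overlapping write admitted")
    tracker.complete_write(2048, 8192)

t = threading.Thread(target=later_overlapping_write)
t.start()
tracker.complete_write(0, 4096)                 # ...until the earlier write completes
t.join()
```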
  • Patent number: 9952790
    Abstract: In one embodiment, a method includes receiving, at a first host, a security profile related to a first data socket descriptor indicating risk to data security of a second host. The method also includes, in response to the risk indicated by the security profile, performing by the first host, at least one action selected from a group of actions. The group of actions includes a cache flush on a cache of the first host according to a cache flush policy, cache locking on data stored in the cache of the first host, data redaction on data of a payload prior to being sent by the first host, memory locking of data stored in an in-memory database of the first host, and encryption of data stored in the in-memory database of the first host or encryption of selected data fields of a payload prior to being sent from the first host.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: April 24, 2018
    Assignee: AVOCADO SYSTEMS INC.
    Inventor: Keshav Govind Kamble
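A hedged sketch of the action dispatch described above: the risk indicated by a received security profile selects one or more actions from the listed group (cache flush, cache locking, data redaction, memory locking, encryption). The risk levels, thresholds, and placeholder action bodies are assumptions, not the patent's policy.

```python
from dataclasses import dataclass, field

@dataclass
class HostState:
    cache: dict = field(default_factory=dict)
    cache_locked: bool = False
    memory_locked: bool = False

def apply_security_profile(host: HostState, risk: str, payload: dict) -> dict:
    """Pick actions from the group named in the abstract based on the indicated risk."""
    if risk == "high":
        host.cache.clear()                        # cache flush per a flush policy
        host.cache_locked = True                  # cache locking
        host.memory_locked = True                 # memory locking of in-memory data
        payload = {k: "ENCRYPTED(...)" for k in payload}   # encrypt fields (placeholder)
    elif risk == "medium":
        payload = {k: ("<redacted>" if k in {"ssn", "card"} else v)
                   for k, v in payload.items()}   # data redaction before sending
    return payload                                # payload as it would be sent to the peer


host = HostState(cache={"token": "abc"})
print(apply_security_profile(host, "medium", {"ssn": "123-45-6789", "note": "ok"}))
print(apply_security_profile(host, "high", {"note": "ok"}), host)
```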
  • Patent number: 9952753
    Abstract: Techniques for predicting which content items a user finds important and sending those items to a cache on the user's device at times when doing so will not drain resources or result in expensive data rates. A ranking function is applied that examines recency and other content metadata associated with the user's content items stored in a synchronized content management system. The techniques also determine how much of a ranked list of content items to cache and decide when is a good time to send content items to the local cache.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: April 24, 2018
    Assignee: Dropbox, Inc.
    Inventors: Daniel Kluesing, Rasmus Andersson
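A minimal sketch of the three steps in the abstract above: rank content items by recency metadata, decide how much of the ranked list fits a cache budget, and gate the transfer on conditions under which syncing is cheap. The half-life scoring, the budget, and the Wi-Fi/charging condition are illustrative assumptions.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size: int                 # bytes
    last_opened: float        # epoch seconds

def rank(items, now=None):
    """Recency-based ranking function (half-life scoring is an illustrative choice)."""
    now = now or time.time()
    half_life = 7 * 24 * 3600
    return sorted(items,
                  key=lambda it: math.exp(-(now - it.last_opened) / half_life),
                  reverse=True)

def select_for_cache(items, budget_bytes):
    """Decide how much of the ranked list to cache: greedily fill the device budget."""
    chosen, used = [], 0
    for it in rank(items):
        if used + it.size <= budget_bytes:
            chosen.append(it)
            used += it.size
    return chosen

def good_time_to_sync(on_wifi: bool, charging: bool) -> bool:
    """Only send when it will not drain resources or hit expensive data rates."""
    return on_wifi and charging


now = time.time()
items = [Item("report.pdf", 4_000_000, now - 3600),
         Item("old-photos.zip", 90_000_000, now - 90 * 24 * 3600),
         Item("notes.txt", 10_000, now - 600)]
if good_time_to_sync(on_wifi=True, charging=True):
    print([it.name for it in select_for_cache(items, budget_bytes=10_000_000)])
```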
  • Patent number: 9946646
    Abstract: A system for managing cache utilization includes a processor core, a lower-level cache, and a higher-level cache. In response to activating the higher-level cache, the system counts lower-level cache victims evicted from the lower-level cache. While a count of the lower-level cache victims is not greater than a threshold number, the system transfers each lower-level cache victim to a system memory without storing the lower-level cache victim to the higher-level cache. When the count of the lower-level cache victims is greater than the threshold number, the system writes each lower-level cache victim to the higher-level cache. In this manner, if the higher-level cache is deactivated before the threshold number of lower-level cache victims is reached, the higher-level cache is empty and thus may be deactivated without flushing.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: April 17, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: William L. Walker
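A compact model of the victim-counting policy above: after the higher-level cache is activated, lower-level victims are counted; while the count does not exceed the threshold they bypass to system memory, and only afterwards are they written into the higher-level cache, so an early deactivation finds the cache empty and needs no flush. Names and the threshold value are assumptions.

```python
class VictimFilter:
    """Routes lower-level cache victims to memory until a threshold count is exceeded."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.victim_count = 0
        self.higher_level_cache = {}            # e.g. an L3 just activated (empty)
        self.system_memory = {}

    def on_victim(self, address: int, line: bytes) -> None:
        self.victim_count += 1
        if self.victim_count <= self.threshold:
            self.system_memory[address] = line  # bypass: higher-level cache stays empty
        else:
            self.higher_level_cache[address] = line

    def can_deactivate_without_flush(self) -> bool:
        """True if the higher-level cache is still empty and can simply be dropped."""
        return not self.higher_level_cache


vf = VictimFilter(threshold=3)
for addr in range(3):
    vf.on_victim(addr, b"line")
print("deactivate without flush:", vf.can_deactivate_without_flush())   # True
vf.on_victim(99, b"line")                       # fourth victim crosses the threshold
print("deactivate without flush:", vf.can_deactivate_without_flush())   # False
```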
  • Patent number: 9947377
    Abstract: Providing memory training of dynamic random access memory (DRAM) systems using port-to-port loopbacks, and related methods, systems, and apparatuses are disclosed. In one aspect, a first port within a DRAM system is coupled to a second port via a loopback connection. A signal is sent to the first port from a System-on-Chip (SoC), and passed to the second port through the loopback connection. The signal is then returned to the SoC, where it may be examined by a closed-loop engine of the SoC. A result corresponding to a hardware parameter may be recorded, and the process may be repeated until an optimal result for the hardware parameter is achieved at the closed-loop engine. By using a port-to-port loopback configuration, the DRAM system parameters regarding timing, power, and other parameters associated with the DRAM system may be trained more quickly and with lower boot memory usage.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: April 17, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Vaishnav Srinivas, Michael Joseph Brunolli, Dexter Tamio Chun, David Ian West
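A hedged sketch of the closed-loop training sweep described above: for each candidate value of a hardware parameter, a test pattern is sent out one port, returned through the loopback, compared against what was sent, and the setting with the fewest bit errors is kept. The delay parameter, the error model, and the pattern are assumptions.

```python
import random

def loopback_send_receive(pattern: int, delay_setting: int) -> int:
    """Stand-in for SoC -> first port -> loopback -> second port -> SoC; errors drop
    off near an (assumed) ideal delay setting of 8."""
    bit_errors = 0
    error_prob = min(1.0, abs(delay_setting - 8) / 10)
    for bit in range(32):
        if random.random() < error_prob:
            bit_errors |= 1 << bit
    return pattern ^ bit_errors

def train_parameter(candidates) -> int:
    """Closed-loop engine: try each setting, record the result, keep the best one."""
    pattern = 0xA5A5A5A5
    best_setting, best_errors = None, None
    for setting in candidates:
        received = loopback_send_receive(pattern, setting)
        errors = bin(pattern ^ received).count("1")
        if best_errors is None or errors < best_errors:
            best_setting, best_errors = setting, errors
    return best_setting


random.seed(0)
print("trained delay setting:", train_parameter(range(16)))
```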
  • Patent number: 9898212
    Abstract: Some of the embodiments of the present disclosure provide a method for programming a flash memory having a plurality of memory blocks, wherein each memory block of the plurality of memory blocks is either a single-level cell (SLC) memory block or a multi-level cell (MLC) memory block, the method comprising assigning a weighting factor to each memory block of the plurality of memory blocks based on whether the memory block is an SLC memory block or an MLC memory block, tracking a number of write-erase cycles for each memory block, and selecting one or more memory blocks for writing data based at least in part on the weighting factor and the tracked number of write-erase cycles of each memory block of the plurality of memory blocks. Other embodiments are also described and claimed.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: February 20, 2018
    Assignee: Marvell International Ltd.
    Inventors: Joseph Sheredy, Lau Nguyen
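A short sketch of the weighted selection described above: each block carries a weighting factor based on whether it is SLC or MLC, write-erase cycles are tracked per block, and the next write goes to the block with the lowest weighted wear. The specific weight values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    cell_type: str            # "SLC" or "MLC"
    erase_cycles: int = 0

# Weighting factor assigned per cell type (illustrative values; MLC wears out sooner,
# so each of its write-erase cycles is counted more heavily).
WEIGHT = {"SLC": 1.0, "MLC": 10.0}

def weighted_wear(block: Block) -> float:
    return WEIGHT[block.cell_type] * block.erase_cycles

def select_block_for_write(blocks) -> Block:
    """Pick the block with the lowest weighted wear, per the tracked erase cycles."""
    return min(blocks, key=weighted_wear)


blocks = [Block(0, "SLC", erase_cycles=500),
          Block(1, "MLC", erase_cycles=40),
          Block(2, "MLC", erase_cycles=70)]
chosen = select_block_for_write(blocks)
print("write to block", chosen.block_id)       # block 1: 10*40 < 10*70 and < 1*500
```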
  • Patent number: 9891837
    Abstract: According to one embodiment, a memory system includes a memory and a memory controller. The memory includes a first buffer and a memory cell array. The memory controller includes a second buffer for receiving first data from a host. The memory controller transfers the first data to the first buffer without accumulating a predetermined size of the first data in the second buffer. The memory controller creates second data in the first buffer and programs the second data created in the first buffer into the memory cell array. The second data is formed of a plurality of third data, where the third data is the first data as received by the memory from the memory controller. The size of the second data is equal to the size of a unit of programming into the memory cell array.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: February 13, 2018
    Assignee: Toshiba Memory Corporation
    Inventors: Yoshihisa Kojima, Tatsuhiro Suzumura, Tokumasa Hara, Hiroyuki Moro, Yohei Hasegawa, Yoshiki Saito
  • Patent number: 9886361
    Abstract: A method for defragmenting volumes in a mirrored system. The method includes determining that a defragmentation process has been performed on a second server. The method further includes storing a before and after mapping of a second set of tracks, wherein the before and after mapping includes information identifying at least one track of the second set of tracks, a corresponding first location of the respective track before the performing of the defragmentation process, and a second location of the respective track after the performing of the defragmentation process. The method further includes sending an indication to a first server to relocate at least one track of a first set of tracks on the first server from a first location on the first server to a second location on the first server according to the stored before and after mapping of the corresponding second set of tracks on the second server.
    Type: Grant
    Filed: October 4, 2016
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Nikhil Khandelwal, Gregory E. McBride, David C. Reed, Richard A. Welp
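A toy model of the mirrored-defragmentation flow above: the second server records a before/after mapping of the tracks it moved, and the first server relocates its mirrored tracks according to that mapping so the two copies stay aligned. The dict-based volume representation is an assumption.

```python
def defragment(volume: dict) -> dict:
    """Compact tracks toward the start of the volume and return the before/after mapping."""
    mapping = {}
    compacted = {}
    for new_loc, old_loc in enumerate(sorted(volume)):
        mapping[old_loc] = new_loc               # track's old location -> new location
        compacted[new_loc] = volume[old_loc]
    volume.clear()
    volume.update(compacted)
    return mapping

def relocate_mirror(mirror_volume: dict, mapping: dict) -> None:
    """First server: move each mirrored track per the mapping from the second server."""
    relocated = {mapping[old_loc]: data for old_loc, data in mirror_volume.items()}
    mirror_volume.clear()
    mirror_volume.update(relocated)


second_server = {3: b"track-A", 9: b"track-B", 17: b"track-C"}   # fragmented volume
first_server = dict(second_server)                                # its mirror

before_after = defragment(second_server)
relocate_mirror(first_server, before_after)
print(first_server == second_server)             # mirrors stay in step: True
```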
  • Patent number: 9886388
    Abstract: Methods, systems, computer-readable media, and apparatuses may provide management of virtual memory. For instance, aspects described herein relate to dynamic generation of nodes in a binary search tree in response to a write command, with each node representing a different memory range in the virtual system disk. Each node may be associated with a different record in a global linked list ordered by offset; each record includes pointers to locations where blocks are stored in a virtual cache and offsets of locations where blocks are stored in a virtual overflow disk. Aspects described herein also relate to reading blocks from a virtual system memory to service a read command without storing the blocks in the virtual cache.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: February 6, 2018
    Assignee: Citrix Systems, Inc.
    Inventor: Ajai Kumar Bassi
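A hedged sketch of the write-path bookkeeping described above, with a sorted list plus bisect standing in for the binary search tree: each write dynamically creates a node for its memory range, and the node's record notes whether the blocks sit in the virtual cache (a pointer) or in the overflow disk (an offset). The record fields and class names are assumptions.

```python
import bisect
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    offset: int               # start of the memory range this node covers
    length: int
    location: str             # "cache" or "overflow"
    where: int                # cache slot pointer, or offset in the overflow disk

class RangeIndex:
    """Stand-in for the binary search tree: a list of node keys kept ordered by offset."""

    def __init__(self):
        self._offsets = []                     # sorted node keys
        self._records = {}                     # offset -> Record

    def on_write(self, offset: int, length: int, location: str, where: int) -> None:
        """Dynamically create a node for the written memory range."""
        if offset not in self._records:
            bisect.insort(self._offsets, offset)
        self._records[offset] = Record(offset, length, location, where)

    def lookup(self, offset: int) -> Optional[Record]:
        """Find the node whose range contains the given offset, if any."""
        i = bisect.bisect_right(self._offsets, offset) - 1
        if i >= 0:
            rec = self._records[self._offsets[i]]
            if rec.offset <= offset < rec.offset + rec.length:
                return rec
        return None


index = RangeIndex()
index.on_write(0, 4096, location="cache", where=17)            # held in the virtual cache
index.on_write(8192, 4096, location="overflow", where=1_048_576)
print(index.lookup(8200))     # Record(offset=8192, ..., location='overflow', ...)
print(index.lookup(5000))     # None: range never written
```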
  • Patent number: 9880941
    Abstract: The present disclosure relates to sharing a context on a coherent hardware accelerator among multiple processes. According to one embodiment, in response to a first process requesting to create a shared memory space, a system creates a shared hardware context on the coherent hardware accelerator and binds the first process and the shared memory space to the hardware context. In response to the first process spawning one or more second processes, the system binds the one or more second processes to the shared memory space and the hardware context. Subsequently, the system performs one or more operations initiated by the first process or one of the one or more second processes on the coherent hardware accelerator according to the bound hardware context.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: January 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Bruce Mealey, Mark D. Rogers
  • Patent number: 9881013
    Abstract: A system, apparatus, method, or computer program product for restricting file access is disclosed, wherein a set of file write access commands is determined from data stored within a storage medium. The set of file write access commands applies to the entire storage medium. Any matching file write access command provided to the file system for that storage medium results in an error message. Other file write access commands are, however, passed on to a device driver for the storage medium and are implemented. In this way, commands such as file delete and file overwrite can be disabled for an entire storage medium.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: January 30, 2018
    Assignee: KOM Software Inc.
    Inventor: Kamel Shaath
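A minimal sketch of the filtering idea above: a per-medium set of disabled file write access commands sits in front of the device driver, returning an error for matching commands (such as delete or overwrite) and passing everything else through. In the abstract that set is determined from data stored on the medium itself; here it is simply supplied, and all names are assumptions.

```python
class WriteFilter:
    """Blocks a configured set of file write access commands for an entire storage medium."""

    def __init__(self, device_driver, disabled_commands):
        self._driver = device_driver
        # In the abstract this set is determined from data stored within the medium;
        # for the sketch it is passed in directly.
        self._disabled = set(disabled_commands)

    def issue(self, command: str, path: str, *args):
        if command in self._disabled:
            raise PermissionError(f"{command} is disabled for this storage medium")
        return self._driver(command, path, *args)   # pass through to the device driver


def fake_driver(command, path, *args):
    return f"driver executed {command} on {path}"

fs = WriteFilter(fake_driver, disabled_commands={"delete", "overwrite"})
print(fs.issue("create", "/archive/report.txt"))
try:
    fs.issue("delete", "/archive/report.txt")
except PermissionError as err:
    print("error:", err)
```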
  • Patent number: 9858009
    Abstract: Data that is initially stored in Single Level Cell (SLC) blocks is subsequently copied (folded) to a Multi Level Cell (MLC) block, where the data is stored in MLC format. The data is copied in a minimum unit of a fold-set. The MLC block includes a plurality of separately-selectable sets of NAND strings, and data of an individual fold-set is copied exclusively to two or more word lines of an individual separately-selectable set of NAND strings in the MLC block.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: January 2, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: Abhijeet Bhalerao, Mrinal Kochar, Dennis S. Ea, Mikhail Palityka, Aaron Lee, Yew Yin Ng, Ivan Baran
  • Patent number: 9858187
    Abstract: Techniques are disclosed relating to an in-memory cache for web application data. In some embodiments, received transactions include multiple operations, including one or more cache operations to access the in-memory cache. In some embodiments, transactions are performed atomically. In some embodiments, data for the one or more cache operations is stored locally in memory by an application server outside of the in-memory cache until the transaction is successfully completed. This may improve performance and facilitate atomicity, in some embodiments.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: January 2, 2018
    Assignee: salesforce.com, inc.
    Inventors: Barathkumar Sundaravaradan, Christopher James Wall, Lawrence Thomas Lopez, Paul Sydell, Sreeram Duvur, Vijayanth Devadhar
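A small sketch of the transactional behavior described above: cache operations issued inside a transaction are buffered locally by the application server and applied to the shared in-memory cache only when the transaction commits, so a failed transaction leaves the cache untouched. The class and method names are assumptions.

```python
class TransactionalCacheClient:
    """Stages cache writes locally and applies them to the shared cache only on commit."""

    def __init__(self, shared_cache: dict):
        self._shared = shared_cache
        self._staged = {}                        # local, per-transaction buffer

    def put(self, key, value):
        self._staged[key] = value                # not visible to other requests yet

    def get(self, key):
        if key in self._staged:                  # read-your-own-writes inside the transaction
            return self._staged[key]
        return self._shared.get(key)

    def commit(self):
        self._shared.update(self._staged)        # all staged operations land together
        self._staged.clear()

    def rollback(self):
        self._staged.clear()                     # failed transaction leaves the cache untouched


shared = {}
txn = TransactionalCacheClient(shared)
txn.put("order:1", {"status": "placed"})
print("visible before commit:", "order:1" in shared)   # False
txn.commit()
print("visible after commit:", shared["order:1"])
```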