Patents Examined by Hamdy S. Ahmed
  • Patent number: 9417802
    Abstract: A method may include link training a plurality of back-side lanes coupling a plurality of memory chips of a memory module to a plurality of data buffers of the memory module. The method may also include link training a plurality of front-side lanes coupling the plurality of data buffers to a memory controller. The method may further include determining after link training of the back-side and front-side lanes whether signal integrity of data communicated over the front-side lanes exceeds one or more thresholds. The method may additionally include responsive to determining that the signal integrity of data communicated over one or more of the front-side lanes fails to exceed the one or more thresholds, modifying timing of data communicated over one or more of the back-side and front-side lanes in order to improve signal integrity of the one or more of the front-side lanes failing to exceed the thresholds.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: August 16, 2016
    Assignee: Dell Products L.P.
    Inventors: Stuart Allen Berke, Bhyrav M. Mutnury, Vadhiraj Sankaranarayanan
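A minimal Python sketch of the retraining flow described in the abstract above. The threshold, step size, and the `measure_margin`/`adjust_delay` callbacks are illustrative assumptions, not anything specified by the patent.

```python
# Hypothetical sketch: adjust back-side/front-side timing when front-side
# signal integrity falls below a threshold after initial link training.

def retrain_module(front_lanes, back_lanes, measure_margin, adjust_delay,
                   threshold=0.3, step=1, max_iters=16):
    """measure_margin(lane) -> float and adjust_delay(lane, step) are
    platform-specific callbacks (assumed here for illustration)."""
    for _ in range(max_iters):
        failing = [lane for lane in front_lanes
                   if measure_margin(lane) <= threshold]
        if not failing:
            return True                      # all front-side lanes pass
        for lane in failing:                 # nudge timing on each failing
            adjust_delay(lane, step)         # front-side lane ...
        for lane in back_lanes:              # ... and on the back-side lanes
            adjust_delay(lane, step)         # feeding the same data buffers
    return False                             # could not meet the threshold
```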
  • Patent number: 9411529
    Abstract: The present disclosure includes methods and apparatuses for mapping between program states and data patterns. One method includes mapping a data pattern to a number of program state combinations L corresponding to a group of memory cells configured to store a fractional number of data units per cell. The mapping can be based, at least partially, on a recursive expression performed in a number of operations, the number of operations based on a number of memory cells N within the group of memory cells and the number of program state combinations L.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: August 9, 2016
    Assignee: Micron Technology, Inc.
    Inventor: Chandra C. Varanasi
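One way to picture the mapping above is a recursive decomposition of an integer data pattern into per-cell program states. The uniform `states_per_cell` and this particular recursion are assumptions for illustration; the patent's recursive expression may differ.

```python
# Hypothetical sketch: map an integer "data pattern" onto per-cell program
# states for a group of N cells via a simple recursive mixed-radix split.

def map_pattern(data, states_per_cell, n_cells):
    """Return a tuple of n_cells program states encoding `data`."""
    total = states_per_cell ** n_cells        # number of combinations L
    assert 0 <= data < total, "data pattern out of range"
    if n_cells == 1:
        return (data,)
    # peel off the state of the first cell, recurse on the remainder
    sub = states_per_cell ** (n_cells - 1)
    return (data // sub,) + map_pattern(data % sub, states_per_cell,
                                        n_cells - 1)

# Example: 3 cells with 3 states each store log2(27) ≈ 4.75 bits per group,
# i.e. a fractional number of data units per cell.
print(map_pattern(17, states_per_cell=3, n_cells=3))   # (1, 2, 2)
```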
  • Patent number: 9390012
    Abstract: A multi-core processor system includes a processor configured to establish coherency of shared data values stored in a cache memory accessed by multiple cores; detect a first thread executed by a first core among the cores; identify, upon detecting the first thread, a second thread under execution by a second core among the cores other than the first core; determine whether shared data commonly accessed by the first thread and the second thread is present; and stop establishment of coherency for a first cache memory corresponding to the first core and a second cache memory corresponding to the second core, upon determining that no commonly accessed shared data is present.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: July 12, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Takahisa Suzuki, Koichiro Yamashita, Hiromasa Yamauchi, Koji Kurihara
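A rough Python sketch, under assumed thread and shared-data bookkeeping, of switching coherency off between two cores whose threads share no data.

```python
# Hypothetical sketch: stop coherency traffic between two cores' caches when
# the threads they run access no common shared data.

class CoherencyManager:
    def __init__(self):
        self.running = {}            # core id -> set of shared-data ids
        self.coherency_off = set()   # frozensets of core-id pairs

    def on_thread_scheduled(self, core, shared_data_ids):
        self.running[core] = set(shared_data_ids)
        for other, other_ids in self.running.items():
            if other == core:
                continue
            pair = frozenset((core, other))
            if self.running[core] & other_ids:
                self.coherency_off.discard(pair)   # shared data: keep snooping
            else:
                self.coherency_off.add(pair)       # no shared data: stop it

mgr = CoherencyManager()
mgr.on_thread_scheduled(0, {"a", "b"})
mgr.on_thread_scheduled(1, {"c"})
print(mgr.coherency_off)   # {frozenset({0, 1})}
```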
  • Patent number: 9378142
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: June 28, 2016
    Assignee: Intel Corporation
    Inventors: Raj K. Ramanujan, Rajat Agarwal, Glenn J. Hinton
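A toy Python model of the two near-memory modes described above; the direct-mapped cache policy and the sizes are illustrative assumptions.

```python
# Hypothetical sketch of the two near-memory modes: a cache for far memory,
# or the low part of a single flat system address space.

class TwoLevelMemory:
    def __init__(self, near_size, far_size, mode="cache"):
        self.near_size, self.far_size, self.mode = near_size, far_size, mode

    def locate(self, addr):
        if self.mode == "flat":
            # near memory covers [0, near_size), far memory covers the rest
            if addr < self.near_size:
                return ("near", addr)
            return ("far", addr - self.near_size)
        # cache mode: near memory acts as a direct-mapped cache over far memory
        return ("near-cache-set", addr % self.near_size)

mem = TwoLevelMemory(near_size=4096, far_size=1 << 20, mode="flat")
print(mem.locate(10000))   # ('far', 5904)
```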
  • Patent number: 9373362
    Abstract: In accordance with the present disclosure, a system and method are herein disclosed for managing memory defects in an information handling system. In an information handling system, a first quantity of memory, such as RAM, may contain defective memory elements. A second quantity of memory is physically coupled to the first quantity of memory and is used to store a memory defect map containing information regarding the location of defective memory elements in the first quantity of memory. The memory defect map may then be referenced by the BIOS or the operating system to preclude use of regions of memory containing defective memory elements.
    Type: Grant
    Filed: August 14, 2007
    Date of Patent: June 21, 2016
    Assignee: DELL PRODUCTS L.P.
    Inventors: Mukund P. Khatri, Jimmy D. Pike, Forrest E. Norrod, Barry S. Travis
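A small sketch, assuming a page-granular defect map, of how the BIOS or operating system might exclude regions listed in the map.

```python
# Hypothetical sketch: a defect map kept in a small side memory is consulted
# at boot to exclude pages that contain known-bad cells.

PAGE_SIZE = 4096

def usable_pages(total_pages, defect_addresses):
    """Return the page numbers that contain no known-defective addresses."""
    bad_pages = {addr // PAGE_SIZE for addr in defect_addresses}
    return [p for p in range(total_pages) if p not in bad_pages]

# e.g. two defective cells falling in pages 0 and 3 of an 8-page region
print(usable_pages(8, defect_addresses=[100, 12288]))   # [1, 2, 4, 5, 6, 7]
```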
  • Patent number: 9367480
    Abstract: In one embodiment, a computing system includes a cache having one or more memories and a cache manager. The cache manager is able to receive a request to write data to a first portion of the cache, write the data to the first portion of the cache, update a first map corresponding to the first portion of the cache, receive a request to read data from the first portion of the cache, read data, according to the first map, from a storage communicatively linked to the computing system, and update a second map corresponding to the first portion of the cache. The cache manager may also be able to write data to the storage according to the first map.
    Type: Grant
    Filed: August 7, 2012
    Date of Patent: June 14, 2016
    Assignee: Dell Products L.P.
    Inventors: Scott David Peterson, Christopher August Shaffer, Phillip E. Krueger
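One plausible reading of the two maps, sketched in Python: the first map tracks blocks written into the cache, the second tracks blocks populated from backing storage. The names and the dict-backed storage are assumptions, not the patent's design.

```python
# Hypothetical sketch: a cache in front of a slower store, with one map for
# blocks written into the cache and a second for blocks filled from the store.

class CacheManager:
    def __init__(self, storage):
        self.storage = storage      # dict: block id -> data (assumed backing store)
        self.cache = {}
        self.dirty_map = set()      # "first map": written, not yet flushed
        self.valid_map = set()      # "second map": populated from storage

    def write(self, block, data):
        self.cache[block] = data
        self.dirty_map.add(block)

    def read(self, block):
        if block not in self.cache:              # miss: fill from storage and
            self.cache[block] = self.storage[block]
            self.valid_map.add(block)            # record it in the second map
        return self.cache[block]

    def flush(self):
        for block in self.dirty_map:             # write back per the first map
            self.storage[block] = self.cache[block]
        self.dirty_map.clear()
```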
  • Patent number: 9361222
    Abstract: Systems, methods and/or devices are used to enable storage drive life estimation. In one aspect, the method includes (1) determining two or more age criteria of a storage drive, and (2) determining a drive age of the storage drive in accordance with the two or more age criteria of the storage drive.
    Type: Grant
    Filed: July 17, 2014
    Date of Patent: June 7, 2016
    Assignee: SMART STORAGE SYSTEMS, INC.
    Inventors: James Fitzpatrick, Mark Dancho, James M. Higgins, James M. Kresse
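The multi-criteria age estimate can be sketched as normalizing each criterion against a rated limit and taking the worst case; the specific criteria and limits below are illustrative assumptions.

```python
# Hypothetical sketch: combine two or more age criteria into one drive-age
# estimate expressed as a fraction of rated life.

def drive_age(criteria):
    """criteria: list of (current_value, rated_limit) pairs.
    Returns age as a fraction of rated life: 0.0 (new) .. 1.0+ (worn out)."""
    return max(value / limit for value, limit in criteria)

age = drive_age([(1200, 3000),      # program/erase cycles used vs. rated
                 (9000, 40000)])    # power-on hours vs. rated
print(f"{age:.0%} of rated life")   # 40% of rated life
```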
  • Patent number: 9343177
    Abstract: An electronic apparatus includes a controlled device with a plurality of control registers. A data bus is coupled between the controlled device and a processor, and an interface is configured to receive a plurality of portions of data read from or to be written to the plurality of control registers. The electronic apparatus also includes a correlation circuit configured to associate at least some of the plurality of portions of data with respective physical addresses of the plurality of control registers based on the respective positions of those portions of data within the plurality of portions.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: May 17, 2016
    Assignee: Apple Inc.
    Inventor: Michael Ross Malone
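A minimal sketch of position-based correlation: each portion of a bus transfer is mapped to a register address from its index alone, so the transfer need not carry an address per value. The base address and stride are assumptions.

```python
# Hypothetical sketch: correlate a burst of values with control-register
# addresses purely by their position in the burst.

def correlate(portions, base_addr=0x4000, stride=4):
    """Map each data portion to a register address based on its position."""
    return {base_addr + i * stride: value
            for i, value in enumerate(portions)}

print(correlate([0x11, 0x22, 0x33]))
# {16384: 17, 16388: 34, 16392: 51}
```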
  • Patent number: 9342458
    Abstract: System and method for operating a solid state memory containing a memory space. The present invention provides a computerized system that includes a solid state memory having a memory space; a controller adapted to use a first portion of the memory space as a cache; and a garbage collector adapted to use a second portion of the memory space to collect garbage in the solid state memory. The controller is adapted to change a size of at least one of the first portion and the second portion of the memory space during operation of the solid state memory.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: May 17, 2016
    Assignee: International Business Machines Corporation
    Inventors: Xiao-Yu Hu, Nikolas Ioannou, Ioannis Koltsidas
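A toy Python sketch of resizing the cache and garbage-collection portions during operation; the trigger condition and step size are assumptions.

```python
# Hypothetical sketch: shift capacity between the cache portion and the
# garbage-collection portion of a solid state memory at run time.

class SolidStateMemory:
    def __init__(self, total_blocks, cache_blocks):
        self.total = total_blocks
        self.cache_portion = cache_blocks
        self.gc_portion = total_blocks - cache_blocks

    def rebalance(self, free_gc_blocks, step=8, low_water=16):
        """Give blocks back to GC when it runs short, and vice versa."""
        if free_gc_blocks < low_water and self.cache_portion > step:
            self.cache_portion -= step       # shrink the cache ...
            self.gc_portion += step          # ... grow the GC space
        elif free_gc_blocks > 4 * low_water:
            self.cache_portion += step       # ample GC headroom: grow the cache
            self.gc_portion -= step

ssm = SolidStateMemory(total_blocks=1024, cache_blocks=768)
ssm.rebalance(free_gc_blocks=10)
print(ssm.cache_portion, ssm.gc_portion)   # 760 264
```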
  • Patent number: 9330000
    Abstract: Cache optimization. Cache access rates for tenants sharing the same cache are monitored to determine an expected cache usage. Factors related to cache efficiency or performance dictate occupancy constraints. A request to increase cache space allocated to a first tenant is received. If there is a second cache tenant for which reducing its cache size by the requested amount will not violate the occupancy constraints for the second cache tenant, its cache is decreased by the requested amount and allocated to satisfy the request. Otherwise, the first cache size is increased by allocating the amount of data storage space to the first cache tenant without deallocating the same amount of data storage space allocated to another cache tenant from among the plurality of cache tenants.
    Type: Grant
    Filed: November 17, 2015
    Date of Patent: May 3, 2016
    Assignee: International Business Machines Corporation
    Inventors: Gregory Chockler, Guy Laden, Benjamin M. Parees, Ymir Vigfusson
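The allocation decision above can be sketched as: find a donor tenant whose occupancy floor tolerates the reduction, otherwise grow the total allocation. The data layout below is an assumption.

```python
# Hypothetical sketch: grow one tenant's cache either by taking space from a
# donor tenant (if its occupancy constraint allows) or by adding new space.

def grow_tenant(sizes, min_sizes, tenant, amount):
    """sizes / min_sizes: dicts of tenant -> cache size / occupancy floor."""
    for other, size in sizes.items():
        if other != tenant and size - amount >= min_sizes[other]:
            sizes[other] -= amount          # take space from a donor tenant
            sizes[tenant] += amount
            return "reallocated"
    sizes[tenant] += amount                 # no donor: grow total cache space
    return "grew total"

sizes = {"A": 100, "B": 60}
print(grow_tenant(sizes, {"A": 80, "B": 50}, "B", 30), sizes)
# grew total {'A': 100, 'B': 90}   (shrinking A by 30 would violate its floor)
```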
  • Patent number: 9330012
    Abstract: Cache optimization. Cache access rates for tenants sharing the same cache are monitored to determine an expected cache usage. Factors related to cache efficiency or performance dictate occupancy constraints. A request to increase cache space allocated to a first tenant is received. If there is a second cache tenant for which reducing its cache size by the requested amount will not violate the occupancy constraints for the second cache tenant, its cache is decreased by the requested amount and allocated to satisfy the request. Otherwise, the first cache size is increased by allocating the amount of data storage space to the first cache tenant without deallocating the same amount of data storage space allocated to another cache tenant from among the plurality of cache tenants.
    Type: Grant
    Filed: November 9, 2015
    Date of Patent: May 3, 2016
    Assignee: International Business Machines Corporation
    Inventors: Gregory Chockler, Guy Laden, Benjamin M. Parees, Ymir Vigfusson
  • Patent number: 9329998
    Abstract: An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is an issuance source of the memory access request according to a result of the arbitration made by the arbitration unit to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process relating to the memory access request in a case where power of the at least one cache memory is turned off in the processor before the holding time expires.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: May 3, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Nobuyuki Koike, Toshihiro Miyamoto
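A rough sketch of the timing-adjustment behavior: hold a granted access for a preset time, releasing the hold early if the cache powers off first. The polling approach and time source are assumptions.

```python
# Hypothetical sketch: hold a memory access for a preset time, but cancel the
# hold as soon as the processor powers down its cache.

import time

def issue_with_hold(perform_access, cache_powered_off, hold_time=0.01):
    """perform_access() does the access; cache_powered_off() -> bool."""
    deadline = time.monotonic() + hold_time
    while time.monotonic() < deadline:
        if cache_powered_off():          # cache is off: no snoop is needed,
            break                        # so cancel the hold early
        time.sleep(0.001)
    perform_access()                     # proceed once the hold ends either way

issue_with_hold(lambda: print("access performed"),
                cache_powered_off=lambda: False)
```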
  • Patent number: 9330753
    Abstract: Method and apparatus for sanitizing a memory using bit-inverted data. In accordance with various embodiments, a memory location is sanitized by sequential steps of reading a bit value stored in a selected memory cell of the memory, inverting the bit value, and writing the inverted bit value back to the selected memory cell. The memory cell may be erased between the reading and writing steps, as well as after the writing step. Random bit values may be generated and stored to the memory cell, and run-length limited constraints can be used to force bit-inversions.
    Type: Grant
    Filed: November 29, 2010
    Date of Patent: May 3, 2016
    Assignee: Seagate Technology LLC
    Inventors: Laszlo Hars, Donald Preston Matthews
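A minimal sketch of bit-inversion sanitizing over a byte-addressable model of the memory; the erase step between read and write is omitted, and the optional random pass is illustrative.

```python
# Hypothetical sketch: sanitize a region by reading each cell, inverting the
# bits, and writing the inverted value back; optionally overwrite with
# random values afterwards.

import secrets

def sanitize(cells: bytearray, randomize: bool = False) -> None:
    for i in range(len(cells)):
        cells[i] ^= 0xFF                      # write back the bit-inverse
    if randomize:                             # optional second pass with
        for i in range(len(cells)):           # random data
            cells[i] = secrets.randbits(8)

region = bytearray(b"\x00\xAA\xFF")
sanitize(region)
print(region.hex())   # ff5500
```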
  • Patent number: 9317434
    Abstract: Responsive to selecting a particular queue, from among at least two queues, in which to place an incoming event within a particular entry from among multiple entries of the particular queue that are ordered upon arrival and each comprise a separate collision vector, a memory address for the incoming event is compared with each queued memory address for each queued event in the other entries in the at least one other queue. Responsive to the memory address for the incoming event matching at least one particular queued memory address for at least one particular queued event in the at least one other queue, at least one particular bit is set in a particular collision vector for the particular entry, in at least one bit position from among the bits corresponding with at least one row entry position of the at least one particular queued memory address within the other entries.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: April 19, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert M. Dinkjian, Robert S. Horton, Michael Y. Lee, Bill N. On
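The collision-vector update can be sketched as follows, assuming a fixed queue depth and a simple list model of the other queue.

```python
# Hypothetical sketch: when an event is enqueued, compare its address with
# the addresses queued in the other queue and set a bit in the new entry's
# collision vector for every matching row.

DEPTH = 4

def make_entry(addr, other_queue):
    """other_queue: list of up to DEPTH queued addresses (None = empty row)."""
    collision = [0] * DEPTH
    for row, queued_addr in enumerate(other_queue):
        if queued_addr is not None and queued_addr == addr:
            collision[row] = 1    # dependency on that row of the other queue
    return {"addr": addr, "collision": collision}

other = [0x100, None, 0x200, 0x100]
print(make_entry(0x100, other)["collision"])   # [1, 0, 0, 1]
```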
  • Patent number: 9311251
    Abstract: Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: April 12, 2016
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
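A toy sketch of the replacement rule implied above: sticky lines are only reclaimable by the group ID that allocated them. The victim search is an assumption.

```python
# Hypothetical sketch: allocate lines in a shared system cache, honoring a
# per-line sticky state tied to the allocating group ID.

class SystemCache:
    def __init__(self, num_lines):
        self.lines = [None] * num_lines     # each entry: dict or None

    def allocate(self, addr, group_id, sticky=False):
        new_line = {"addr": addr, "group": group_id, "sticky": sticky}
        for i, line in enumerate(self.lines):
            if line is None:                # use a free line first
                self.lines[i] = new_line
                return i
        for i, line in enumerate(self.lines):
            # only non-sticky lines, or the requester's own lines, are victims
            if not line["sticky"] or line["group"] == group_id:
                self.lines[i] = new_line
                return i
        return None                         # every line is sticky to other groups
```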
  • Patent number: 9313271
    Abstract: Described herein is a system and method for maintaining cache coherency. The system and method may maintain coherency for a cache memory that is coupled to a plurality of primary storage devices. The system and method may write data to the cache memory and associate the data with a cache generation identification (ID). A different cache generation ID may be associated with each new set of data that is written to the cache memory. The cache generation ID may be written to the primary storage devices. A backup restore operation may be performed on one of the primary storage devices and a backup restore notification may be received. In response to the notification, the system and method may compare the cache generation ID with the generation ID stored on the restored primary storage device and invalidate data stored on the cache memory for the restored primary storage device.
    Type: Grant
    Filed: May 5, 2015
    Date of Patent: April 12, 2016
    Assignee: NetApp, Inc.
    Inventors: Narayan Venkat, David Lively, Kenny Speer
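One way to sketch the generation-ID check, assuming a per-device generation counter persisted alongside the device's state; the actual scheme may track generations differently.

```python
# Hypothetical sketch: cached writes carry a generation ID also written to the
# backing device; after a backup restore, a mismatch invalidates cached data.

class CoherentCache:
    def __init__(self):
        self.generations = {}      # device id -> latest generation ID
        self.entries = {}          # device id -> {block: data}

    def write(self, device, block, data, device_state):
        gen = self.generations.get(device, 0) + 1
        self.generations[device] = gen
        self.entries.setdefault(device, {})[block] = data
        device_state["cache_generation"] = gen       # persist the ID on the device

    def on_restore_notification(self, device, device_state):
        # a restored device may carry an older generation ID than the cache
        if device_state.get("cache_generation") != self.generations.get(device):
            self.entries.pop(device, None)           # cached data is stale: drop it
```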
  • Patent number: 9304927
    Abstract: The disclosed embodiments relate to a method for dynamically changing a prefetching configuration in a computer system, wherein the prefetching configuration specifies how to change an ahead distance that specifies how many references ahead to prefetch for each stream. During operation of the computer system, the method keeps track of one or more stream lengths, wherein a stream is a sequence of memory references with a constant stride. Next, the method dynamically changes the prefetching configuration for the computer system based on observed stream lengths in a most-recent window of time.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: April 5, 2016
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Suryanarayana Murthy Durbhakula, Yuan C. Chou
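A small sketch of tuning the ahead distance from recently observed stream lengths; the window size and thresholds are assumptions.

```python
# Hypothetical sketch: track lengths of recently completed constant-stride
# streams and pick a larger ahead distance when streams tend to be long.

from collections import deque

class PrefetchTuner:
    def __init__(self, window=32):
        self.recent = deque(maxlen=window)   # most-recent stream lengths
        self.ahead = 4                       # references to prefetch ahead

    def stream_ended(self, length):
        self.recent.append(length)
        avg = sum(self.recent) / len(self.recent)
        self.ahead = 16 if avg > 64 else 8 if avg > 16 else 2

tuner = PrefetchTuner()
for n in (100, 120, 90):
    tuner.stream_ended(n)
print(tuner.ahead)   # 16
```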
  • Patent number: 9292435
    Abstract: A memory device includes a data block storing first data, and a log block storing second data that is an updated value of the first data. A spare area of the log block stores a first mapping table including mapping information between the first data and the second data.
    Type: Grant
    Filed: August 5, 2009
    Date of Patent: March 22, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Don Lee, Gyu Sang Choi, Min Young Son, Choong Hun Lee
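A toy page-level model of the data block, log block, and spare-area mapping table; the layout is an illustrative assumption.

```python
# Hypothetical sketch: updates to a data block are appended to a log block,
# and a mapping table (kept in the log block's spare area) records which data
# page each log page supersedes.

class LogBlockDevice:
    def __init__(self, data_pages):
        self.data_block = list(data_pages)   # original ("first") data
        self.log_block = []                  # updated ("second") data
        self.mapping = {}                    # spare area: data page -> log page

    def update(self, page, new_value):
        self.mapping[page] = len(self.log_block)   # remember where it went
        self.log_block.append(new_value)

    def read(self, page):
        if page in self.mapping:                   # prefer the updated value
            return self.log_block[self.mapping[page]]
        return self.data_block[page]

dev = LogBlockDevice(["a0", "a1", "a2"])
dev.update(1, "a1'")
print(dev.read(1), dev.read(2))   # a1' a2
```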
  • Patent number: 9280456
    Abstract: The present disclosure includes methods and apparatuses for mapping between program states and data patterns. One method includes mapping a data pattern to a number of program state combinations L corresponding to a group of memory cells configured to store a fractional number of data units per cell. The mapping can be based, at least partially, on a recursive expression performed in a number of operations, the number of operations based on a number of memory cells N within the group of memory cells and the number of program state combinations L.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: March 8, 2016
    Assignee: Micron Technology, Inc.
    Inventor: Chandra C. Varanasi
  • Patent number: 9235443
    Abstract: Systems and methods for cache optimization are provided. The method comprises monitoring cache access rates for a plurality of cache tenants sharing the same cache mechanism having an amount of data storage space, wherein a first cache tenant having a first cache size is allocated a first cache space within the data storage space, and wherein a second cache tenant having a second cache size is allocated a second cache space within the data storage space. The method further comprises determining cache profiles for at least the first cache tenant and the second cache tenant according to data collected during the monitoring; analyzing the cache profiles for the plurality of cache tenants to determine an expected cache usage model for the cache mechanism; and analyzing the cache usage model and factors related to cache efficiency or performance for the one or more cache tenants to dictate one or more occupancy constraints.
    Type: Grant
    Filed: May 21, 2012
    Date of Patent: January 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Gregory Chockler, Guy Laden, Benjamin M. Parees, Ymir Vigfusson
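The monitoring side can be sketched as turning per-tenant access counts into occupancy floors; the floor formula below is an assumption, not the patent's usage model.

```python
# Hypothetical sketch: derive per-tenant occupancy constraints from access
# rates observed over a monitoring window.

def occupancy_constraints(access_counts, total_space, floor_fraction=0.5):
    """access_counts: tenant -> observed accesses in the monitoring window."""
    total = sum(access_counts.values())
    return {tenant: int(floor_fraction * total_space * count / total)
            for tenant, count in access_counts.items()}

print(occupancy_constraints({"A": 750, "B": 250}, total_space=1000))
# {'A': 375, 'B': 125}
```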