Patents Examined by Jared Rutz
-
Patent number: 9336090
Abstract: A storage apparatus, in response to a write command specifying a write destination among multiple virtual areas, allocates a free real area, of multiple real areas based on storage devices, to the write-destination virtual area to which the write destination belongs, and writes write-target data conforming to the write command to the allocated real area. Where a first write command has been received subsequent to a snapshot acquisition time point, the storage apparatus erases the allocation of a first real area to a first virtual area to which the write destination specified in the first write command belongs, allocates the first real area to a free second virtual area to which no real area has been allocated, allocates a free second real area to the first virtual area, and writes write-target data conforming to the first write command to the second real area.
Type: Grant
Filed: October 10, 2012
Date of Patent: May 10, 2016
Assignee: Hitachi, Ltd.
Inventors: Yudai Takayama, Yuko Matsui
-
Patent number: 9336855
Abstract: Methods and devices for refreshing a dynamic memory device (e.g., DRAM) to eliminate unnecessary page refresh operations. A value in a lookup table for a page may indicate whether the valid data present in the page is all zeros. When the page contains valid data of all zeros, the lookup table value may be set so that refresh, memory read, write, and clear accesses of the page are inhibited and the known valid value is returned. A second lookup table may contain a second value indicating whether the page has been accessed by a page read or write during the page refresh interval. A page refresh of a page address, performed by issuing an ACT-PRE (activate-precharge) command pair, may be carried out according to the page refresh interval when the second value indicates that no page access has occurred.
Type: Grant
Filed: May 14, 2013
Date of Patent: May 10, 2016
Assignee: QUALCOMM Incorporated
Inventors: Haw-Jing Lo, Dexter Chun
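The zero-page refresh-skipping scheme described above can be sketched in Python. This is an illustrative model only, not the patented hardware design; all names (`RefreshController`, `zero_page`, `accessed`) are invented for the example.

```python
# Illustrative model: a refresh controller that skips refresh for pages
# flagged as all-zero in a lookup table, and for pages already touched by
# a read or write during the current refresh interval.

class RefreshController:
    def __init__(self, num_pages):
        self.zero_page = [False] * num_pages  # table 1: page holds all zeros
        self.accessed = [False] * num_pages   # table 2: accessed this interval

    def write(self, page, data):
        self.zero_page[page] = all(b == 0 for b in data)
        self.accessed[page] = True

    def read(self, page, size):
        self.accessed[page] = True
        if self.zero_page[page]:
            return bytes(size)  # return the known value without touching DRAM

    def refresh_interval(self):
        """Return the pages that actually need an ACT-PRE refresh pair."""
        need = [p for p in range(len(self.zero_page))
                if not self.zero_page[p] and not self.accessed[p]]
        self.accessed = [False] * len(self.accessed)  # start a new interval
        return need
```

An all-zero page is never refreshed (its value is synthesized on read), and a page read or written within the interval is skipped because the access itself restored the cell charge.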
-
Patent number: 9329998
Abstract: An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is an issuance source of the memory access request according to a result of the arbitration made by the arbitration unit to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process relating to the memory access request in a case where power of the at least one cache memory is turned off in the processor before the holding time expires.
Type: Grant
Filed: February 19, 2014
Date of Patent: May 3, 2016
Assignee: FUJITSU LIMITED
Inventors: Nobuyuki Koike, Toshihiro Miyamoto
-
Patent number: 9329978
Abstract: The present disclosure describes methods, systems, and computer program products for measuring strength of a unit test. One computer-implemented method includes receiving software unit source code associated with a unit test, analyzing a line of the software unit source code for removability, initiating, by operation of a computer, modification of the software unit source code to remove the line of the software unit source code and create a modified software unit, initiating execution of the modified software unit using the unit test, determining success or failure of a unit test execution, and analyzing a next line of the software unit source code for removability.
Type: Grant
Filed: August 20, 2013
Date of Patent: May 3, 2016
Assignee: SAP Portals Israel Ltd
Inventor: Yotam Kadishay
-
Patent number: 9330753
Abstract: Method and apparatus for sanitizing a memory using bit-inverted data. In accordance with various embodiments, a memory location is sanitized by sequential steps of reading a bit value stored in a selected memory cell of the memory, inverting the bit value, and writing the inverted bit value back to the selected memory cell. The memory cell may be erased between the reading and writing steps, as well as after the writing step. Random bit values may be generated and stored to the memory cell, and run-length limited constraints can be used to force bit-inversions.
Type: Grant
Filed: November 29, 2010
Date of Patent: May 3, 2016
Assignee: Seagate Technology LLC
Inventors: Laszlo Hars, Donald Preston Matthews
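The core read-invert-write step can be shown in a minimal Python sketch. This models only the bit-inversion pass over a buffer; the erase steps and run-length-limited constraints from the abstract are omitted, and the function name is invented.

```python
# Minimal sketch of the read-invert-write sanitization pass: read each byte,
# invert its bits, and write the inverted value back in place.

def sanitize_invert(buf):
    for i in range(len(buf)):
        buf[i] = buf[i] ^ 0xFF  # bit-invert and write back
    return buf
```

Overwriting each cell with its complement guarantees every bit physically flips, which plain zero-fill or random-fill passes cannot promise for cells that already hold the overwrite value.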
-
Patent number: 9330012
Abstract: Cache optimization. Cache access rates for tenants sharing the same cache are monitored to determine an expected cache usage. Factors related to cache efficiency or performance dictate occupancy constraints. A request to increase cache space allocated to a first tenant is received. If there is a second cache tenant for which reducing its cache size by the requested amount will not violate the occupancy constraints for the second cache tenant, its cache is decreased by the requested amount and allocated to satisfy the request. Otherwise, the first cache size is increased by allocating the amount of data storage space to the first cache tenant without deallocating the same amount of data storage space allocated to another cache tenant from among the plurality of cache tenants.
Type: Grant
Filed: November 9, 2015
Date of Patent: May 3, 2016
Assignee: International Business Machines Corporation
Inventors: Gregory Chockler, Guy Laden, Benjamin M. Parees, Ymir Vigfusson
-
Patent number: 9330000
Abstract: Cache optimization. Cache access rates for tenants sharing the same cache are monitored to determine an expected cache usage. Factors related to cache efficiency or performance dictate occupancy constraints. A request to increase cache space allocated to a first tenant is received. If there is a second cache tenant for which reducing its cache size by the requested amount will not violate the occupancy constraints for the second cache tenant, its cache is decreased by the requested amount and allocated to satisfy the request. Otherwise, the first cache size is increased by allocating the amount of data storage space to the first cache tenant without deallocating the same amount of data storage space allocated to another cache tenant from among the plurality of cache tenants.
Type: Grant
Filed: November 17, 2015
Date of Patent: May 3, 2016
Assignee: International Business Machines Corporation
Inventors: Gregory Chockler, Guy Laden, Benjamin M. Parees, Ymir Vigfusson
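The reallocation decision in the two cache-tenant abstracts above can be modeled in a short Python sketch. This is an assumption-laden toy, not IBM's implementation; `grow_tenant` and the dict-based bookkeeping are invented for illustration.

```python
# Toy sketch of the reallocation rule: shrink a donor tenant only if that
# stays within its occupancy constraint; otherwise grow total cache space
# for the requesting tenant without deallocating from anyone.

def grow_tenant(sizes, min_sizes, tenant, amount):
    """sizes / min_sizes: dicts of tenant -> cache size / minimum occupancy."""
    for donor, size in sizes.items():
        if donor != tenant and size - amount >= min_sizes[donor]:
            sizes[donor] -= amount   # take the space from a safe donor
            sizes[tenant] += amount
            return sizes
    sizes[tenant] += amount          # no safe donor: allocate new space
    return sizes
```

The occupancy constraint acts as a floor per tenant, so a busy tenant can never be squeezed below the size its access rate justifies.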
-
Patent number: 9329834
Abstract: An electronic apparatus may be provided that includes a processor to perform operations, and a memory subsystem including a plurality of parallel memory banks to store a two-dimensional (2D) array of data using a shifted scheme. Each memory bank may include at least two elements per bank word.
Type: Grant
Filed: December 28, 2012
Date of Patent: May 3, 2016
Assignee: Intel Corporation
Inventors: Radomir Jakovljevic, Aleksandar Beric, Edwin Van Dalen, Dragan Milicev
-
Patent number: 9330004
Abstract: The present invention provides a data processing method based on a cache node group for data caching, where each cache node in the group includes a local replacement-allowable data storage space for storing data accessed by a local client and a collaborative replacement-allowable data storage space for storing data content accessed by a non-local client. By using the data processing method to process data content stored in the local replacement-allowable data storage space and the collaborative replacement-allowable data storage space of the cache node, the clients can obtain data more accurately and directly during access to the cache node, thereby meeting different requirements for local optimization of the cache node.
Type: Grant
Filed: December 2, 2013
Date of Patent: May 3, 2016
Assignee: Huawei Technologies Co., Ltd.
Inventor: Youshui Long
-
Patent number: 9323661
Abstract: A memory system has a storage unit having two or more parallel read/write processing elements and non-volatile data recording areas for a logical block divided into a plurality of logical pages, and a control unit that generates log information for each unit of data written into the recording areas, determines for each logical page a log information recording area from a group of recording areas of the logical page, and controls the parallel operation elements to write the log information generated for a logical page into the log information recording area of the logical page and the data of the logical page into the other recording areas of the group of recording areas of the logical page.
Type: Grant
Filed: February 27, 2013
Date of Patent: April 26, 2016
Assignee: Kabushiki Kaisha Toshiba
Inventors: Akinori Harasawa, Yoko Masuo
-
Patent number: 9323678
Abstract: In one embodiment, the present invention includes a method for identifying a memory request corresponding to a load instruction as a critical transaction if an instruction pointer of the load instruction is present in a critical instruction table associated with a processor core, sending the memory request to a system agent of the processor with a critical indicator to identify the memory request as a critical transaction, and prioritizing the memory request ahead of other pending transactions responsive to the critical indicator. Other embodiments are described and claimed.
Type: Grant
Filed: December 30, 2011
Date of Patent: April 26, 2016
Assignee: Intel Corporation
Inventors: Amit Kumar, Sreenivas Subramoney
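The critical-transaction prioritization above can be illustrated with a small priority queue in Python. This is a software analogy for a hardware mechanism; `RequestQueue` and its fields are invented names, and real system agents would implement this in arbitration logic, not a heap.

```python
# Illustrative analogy: tag a memory request as critical when the load's
# instruction pointer appears in a critical-instruction table, and serve
# critical requests ahead of ordinary pending ones (FIFO within a class).
import heapq

class RequestQueue:
    def __init__(self, critical_ips):
        self.critical_ips = set(critical_ips)
        self.heap = []
        self.seq = 0  # tiebreaker: preserves arrival order within a priority

    def submit(self, ip, addr):
        critical = ip in self.critical_ips        # table lookup sets the indicator
        heapq.heappush(self.heap, (0 if critical else 1, self.seq, addr))
        self.seq += 1

    def next_request(self):
        return heapq.heappop(self.heap)[2]        # lowest tuple = highest priority
```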
-
Patent number: 9317435
Abstract: Described herein is a system and method for an efficient cache warm-up. The system and method may copy data blocks from a primary storage device to a cache memory device. The system and method may identify a subset of data blocks stored on the primary storage device as candidate data blocks for copying to the cache memory device during a cache warm-up period. A cost effectiveness for copying the candidate data blocks to the cache memory device may be determined. In some embodiments, the cost effectiveness may be calculated based on one or more latency values associated with the primary storage device and the cache memory device. The candidate data blocks may be copied to the cache memory device based on the cost effectiveness.
Type: Grant
Filed: December 18, 2012
Date of Patent: April 19, 2016
Assignee: NetApp, Inc.
Inventors: Lakshmi Narayanan Bairavasundaram, Gokul Soundararajan, Mark Walter Storer, Yiying Zhang
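One plausible reading of the latency-based cost-effectiveness test is sketched below. The abstract does not give the formula, so this specific break-even comparison is an assumption; `worth_copying`, `warmup`, and all parameters are invented for the example.

```python
# Assumed sketch: copy a candidate block during warm-up only when the latency
# it is expected to save over its predicted future hits outweighs the cost
# of copying it into the cache in the first place.

def worth_copying(expected_hits, disk_latency_us, cache_latency_us, copy_cost_us):
    saved = expected_hits * (disk_latency_us - cache_latency_us)
    return saved > copy_cost_us

def warmup(candidates, disk_lat, cache_lat, copy_cost):
    """candidates: dict of block -> expected future hit count."""
    return [b for b, hits in candidates.items()
            if worth_copying(hits, disk_lat, cache_lat, copy_cost)]
```

Under this model, rarely re-read blocks are left on the primary device because copying them would cost more than the hits ever pay back.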
-
Managing out-of-order memory command execution from multiple queues while maintaining data coherency
Patent number: 9317434
Abstract: Responsive to selecting a particular queue from among at least two queues to place an incoming event into within a particular entry from among multiple entries ordered upon arrival of the particular queue each comprising a separate collision vector, a memory address for the incoming event is compared with each queued memory address for each queued event in the other entries in the at least one other queue. Responsive to the memory address for the incoming event matching at least one particular queued memory address for at least one particular queued event in the at least one other queue, at least one particular bit is set in a particular collision vector for the particular entry in at least one bit position from among the bits corresponding with at least one row entry position of the at least one particular queued memory address within the other entries.
Type: Grant
Filed: August 3, 2015
Date of Patent: April 19, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Robert M. Dinkjian, Robert S. Horton, Michael Y. Lee, Bill N. On
-
Patent number: 9317209
Abstract: The invention is directed towards a system and method that utilizes external memory devices to cache sectors from a rotating storage device (e.g., a hard drive) to improve system performance. When an external memory device (EMD) is plugged into the computing device or onto a network in which the computing device is connected, the system recognizes the EMD and populates the EMD with disk sectors. The system routes I/O read requests directed to the disk sector to the EMD cache instead of the actual disk sector. The use of EMDs increases performance and productivity on the computing device systems for a fraction of the cost of adding memory to the computing device.
Type: Grant
Filed: October 31, 2014
Date of Patent: April 19, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alexander Kirshenbaum, Cenk Ergan, Michael R. Fortin, Robert L. Reinauer
-
Patent number: 9317423
Abstract: The first storage apparatus provides a primary logical volume, and the second storage apparatus provides a secondary logical volume. When the first storage apparatus receives a write command to the primary logical volume, a package processor in a flash package allocates a first physical area in the flash memory chip to a first cache logical area for the write data and stores the write data in the allocated first physical area. When the package processor receives a journal data creation command from the processor, it allocates the first physical area to a second journal area for journal data without storing journal data corresponding to the write data.
Type: Grant
Filed: January 7, 2013
Date of Patent: April 19, 2016
Assignee: HITACHI, LTD.
Inventors: Kohei Tatara, Akira Yamamoto, Junji Ogawa
-
Patent number: 9313271
Abstract: Described herein is a system and method for maintaining cache coherency. The system and method may maintain coherency for a cache memory that is coupled to a plurality of primary storage devices. The system and method may write data to the cache memory and associate the data with a cache generation identification (ID). A different cache generation ID may be associated with each new set of data that is written to the cache memory. The cache generation ID may be written to the primary storage devices. A backup restore operation may be performed on one of the primary storage devices and a backup restore notification may be received. In response to the notification, the system and method may compare the cache generation ID with the generation ID stored on the restored primary storage device and invalidate data stored on the cache memory for the restored primary storage device.
Type: Grant
Filed: May 5, 2015
Date of Patent: April 12, 2016
Assignee: NetApp, Inc.
Inventors: Narayan Venkat, David Lively, Kenny Speer
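The generation-ID check described above can be sketched in Python. This is a simplified model of the idea, not NetApp's implementation; `CoherentCache` and its per-device bookkeeping are invented, and a real system would persist the ID on the storage device itself.

```python
# Sketch of generation-ID cache coherency: each write bumps a generation ID
# that is also recorded on the primary device. After a backup restore, a
# mismatch between the device's ID and the cache's ID proves the device was
# rolled back, so the cached data for that device is invalidated.

class CoherentCache:
    def __init__(self):
        self.gen = {}   # device -> current generation ID
        self.data = {}  # device -> cached blocks

    def write(self, device, blocks):
        self.gen[device] = self.gen.get(device, 0) + 1
        self.data[device] = blocks
        return self.gen[device]  # caller persists this ID on the device

    def on_restore(self, device, restored_gen):
        """Called on a backup-restore notification; restored_gen comes from the device."""
        if restored_gen != self.gen.get(device):
            self.data.pop(device, None)  # device rolled back: cache is stale
```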
-
Patent number: 9311007
Abstract: An integrated circuit has registers which it can place in a low power condition in which their state is lost; a power domain capable of reading the registers, the current operating mode of the domain being dependent on the state of the registers; a memory; and a configuration controller for configuring the registers. The configuration controller has access to a set of mappings. Each mapping indicates for bits represented in the memory the state of other bits storable in the registers. The configuration controller is configured to perform a register configuration operation by reading bits from the memory and populating the registers with a corresponding bit state.
Type: Grant
Filed: December 23, 2013
Date of Patent: April 12, 2016
Assignee: QUALCOMM TECHNOLOGIES INTERNATIONAL, LTD.
Inventor: Paul Simon Hoayun
-
Patent number: 9311164
Abstract: A system and method for ballooning with assigned devices includes inflating a memory balloon, determining whether a first memory page is locked based on information associated with the first memory page, when the first memory page is locked unlocking the first memory page and removing first memory addresses associated with the first memory page from management by an input/output memory management unit (IOMMU), and reallocating the first memory page. The first memory page is associated with a first assigned device.
Type: Grant
Filed: February 14, 2013
Date of Patent: April 12, 2016
Assignee: RED HAT ISRAEL, LTD.
Inventors: Paolo Bonzini, Michael Tsirkin
-
Patent number: 9311251
Abstract: Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs.
Type: Grant
Filed: August 27, 2012
Date of Patent: April 12, 2016
Assignee: Apple Inc.
Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
-
Patent number: 9304927
Abstract: The disclosed embodiments relate to a method for dynamically changing a prefetching configuration in a computer system, wherein the prefetching configuration specifies how to change an ahead distance that specifies how many references ahead to prefetch for each stream. During operation of the computer system, the method keeps track of one or more stream lengths, wherein a stream is a sequence of memory references with a constant stride. Next, the method dynamically changes the prefetching configuration for the computer system based on observed stream lengths in a most-recent window of time.
Type: Grant
Filed: August 27, 2012
Date of Patent: April 5, 2016
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Suryanarayana Murthy Durbhakula, Yuan C. Chou
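The ahead-distance adaptation described above can be sketched in Python. The abstract does not specify the tuning policy, so the averaging heuristic below is purely an assumption for illustration; `PrefetchTuner` and its constants are invented names.

```python
# Speculative sketch: adapt the prefetch ahead distance from the stream
# lengths observed in a most-recent window; longer constant-stride streams
# justify prefetching further ahead, short streams argue for a small distance.
from collections import deque

class PrefetchTuner:
    def __init__(self, window=8):
        self.lengths = deque(maxlen=window)  # most-recent stream lengths only

    def stream_ended(self, length):
        self.lengths.append(length)          # record a completed stream

    def ahead_distance(self):
        if not self.lengths:
            return 1                         # conservative default
        avg = sum(self.lengths) / len(self.lengths)
        return max(1, min(8, int(avg // 4))) # clamp to a plausible range
```

Because the window is bounded, an old phase of long streams stops inflating the distance once the workload shifts to short streams.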