Patents Examined by Jasmine Song
-
Patent number: 9946460
Abstract: A method for generating a virtual volume (VV) in a storage system architecture. The architecture comprises a host and one or more disk array subsystems. Each subsystem comprises a storage controller. One or more of the subsystems comprises a physical storage device (PSD) array. The method comprises the following steps: mapping the PSD array into a plurality of media extents (MEs), each of the MEs comprising a plurality of sections; providing a virtual pool (VP) to implement a section cross-referencing function, wherein a section index (SI) of each of the sections contained in the VP is defined by the VP to cross-reference VP sections to physical ME locations; providing a conversion method, procedure, or function for mapping VP capacity into a VV; and presenting the VV to the host. A storage subsystem and a storage system architecture performing the method are also provided.
Type: Grant
Filed: March 5, 2017
Date of Patent: April 17, 2018
Assignee: INFORTREND TECHNOLOGY, INC.
Inventors: Michael Gordon Schnapp, Ching-Hua Fang
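The section cross-referencing described in this abstract can be illustrated with a small sketch. All names (`VirtualPool`, `si_table`, the 4 KiB section size) are illustrative assumptions, not from the patent:

```python
# Sketch of a virtual pool whose section index (SI) cross-references
# virtual-volume sections to physical media-extent (ME) locations.

SECTION_SIZE = 4096  # bytes per section (illustrative)

class VirtualPool:
    def __init__(self):
        # si_table[SI] = (media_extent_id, section_number_within_extent)
        self.si_table = []

    def add_media_extent(self, me_id, num_sections):
        # Mapping a media extent contributes its sections to the pool.
        for sec in range(num_sections):
            self.si_table.append((me_id, sec))

    def resolve(self, vv_offset):
        # Translate a byte offset in the presented virtual volume (VV)
        # to a physical (media extent, section) location.
        si = vv_offset // SECTION_SIZE
        return self.si_table[si]

pool = VirtualPool()
pool.add_media_extent("ME0", 2)
pool.add_media_extent("ME1", 2)
print(pool.resolve(3 * SECTION_SIZE))  # fourth section lands in ME1
```

The host only ever sees the contiguous VV; the SI table absorbs the fact that the backing sections may be scattered across several media extents.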
-
Patent number: 9946478
Abstract: A memory managing method, a memory control circuit unit and a memory storage apparatus are provided. The method includes: setting a read-disturb threshold for each of a plurality of physical erasing units; adjusting the read-disturb threshold of a first physical erasing unit according to state information of a rewritable non-volatile memory module; and performing a read-disturb prevention operation according to the read-disturb threshold of the first physical erasing unit.
Type: Grant
Filed: May 4, 2016
Date of Patent: April 17, 2018
Assignee: PHISON ELECTRONICS CORP.
Inventor: Kok-Yong Tan
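The per-unit threshold logic can be sketched as a read counter compared against an adjustable limit. The threshold formula and the wear-based adjustment policy below are invented for illustration; the patent does not specify them:

```python
class EraseUnit:
    def __init__(self, threshold):
        self.read_count = 0
        self.threshold = threshold  # per-unit read-disturb threshold

def adjust_threshold(unit, wear_level):
    # Hypothetical policy: heavily worn flash disturbs sooner,
    # so the threshold is lowered as wear (state information) increases.
    unit.threshold = max(1000, 10000 - wear_level * 100)

def on_read(unit):
    unit.read_count += 1
    if unit.read_count >= unit.threshold:
        # Read-disturb prevention: e.g. relocate valid data, then reset.
        unit.read_count = 0
        return "prevention-triggered"
    return "ok"

unit = EraseUnit(threshold=3)
results = [on_read(unit) for _ in range(3)]
print(results)  # third read crosses the threshold
```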
-
Patent number: 9934157
Abstract: A system and methods for migrating a virtual machine (VM). In one embodiment, a hypervisor receives a request to migrate the contents of a memory of a source VM in a first physical memory area to a destination VM in a second physical memory area, where the first and second physical memory areas are disjoint. The hypervisor executes the destination VM in response to the request, and detects an access of a page of memory of the destination VM. The hypervisor determines, in view of a data structure maintained by a guest operating system executing in the destination VM, that a first page of a memory of the source VM in the first physical memory area is currently in use by the destination VM.
Type: Grant
Filed: November 25, 2015
Date of Patent: April 3, 2018
Assignee: Red Hat Israel, Ltd.
Inventors: Michael Tsirkin, David A. Gilbert
-
Patent number: 9916095
Abstract: Methods and systems are provided for fork-safe memory allocation from memory-mapped files. A child process may be provided a memory mapping at a same virtual address as a parent process, but the memory mapping may map the virtual address to a different location within a file than for the parent process.
Type: Grant
Filed: March 21, 2016
Date of Patent: March 13, 2018
Assignee: Kove IP, LLC
Inventors: Timothy A. Stabrawa, Andrew S. Poling, Zachary A. Cornelius, Jesse I. Taylor, John Overton
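The key idea, that the same virtual address is backed by different file offsets in parent and child, can be modeled abstractly. This is a conceptual sketch, not a real allocator; the class and its pid-keyed offset table are invented for illustration:

```python
# Conceptual model: after fork, the child keeps the same virtual address,
# but its mapping points at a different offset within the backing file,
# so parent and child writes do not collide.

class ForkSafeMapping:
    def __init__(self, page_size=4096):
        self.page_size = page_size
        self.next_offset = 0
        self.offsets = {}  # pid -> file offset backing the shared virtual address

    def map(self, pid):
        # Each process gets its own region of the file for the same address.
        self.offsets[pid] = self.next_offset
        self.next_offset += self.page_size
        return self.offsets[pid]

m = ForkSafeMapping()
parent_off = m.map(pid=100)
child_off = m.map(pid=101)   # "fork": same virtual address, fresh offset
assert parent_off != child_off
```

In a real implementation the remapping would happen at fork time (e.g. via the file-mapping machinery), but the invariant is the one asserted above: equal virtual addresses, distinct file locations.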
-
Patent number: 9898438
Abstract: A memory system includes a transmitter and a receiver. The transmitter is configured to transmit a data signal corresponding to a first symbol lock pattern and a data burst via an interface. The data burst includes a first data and a subsequent data. The receiver is configured to receive the data signal, to detect the first symbol lock pattern based on the received data signal, and to find the first data of the data burst according to the detected first symbol lock pattern.
Type: Grant
Filed: October 9, 2015
Date of Patent: February 20, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hye-Ran Kim, Tae-Young Oh
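The receiver-side behavior, locating the lock pattern and taking the byte after it as the start of the burst, can be sketched in a few lines. The pattern bytes and stream contents are made up for illustration:

```python
def find_first_data(stream: bytes, lock_pattern: bytes) -> int:
    # The receiver scans the incoming data signal for the symbol lock
    # pattern; the data burst begins immediately after it.
    idx = stream.find(lock_pattern)
    if idx < 0:
        raise ValueError("symbol lock pattern not found")
    return idx + len(lock_pattern)

stream = b"\x00\x00\xa5\x5a\x10\x20\x30"   # noise, lock pattern, burst
start = find_first_data(stream, b"\xa5\x5a")
assert stream[start] == 0x10  # first data of the burst
```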
-
Patent number: 9886210
Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric.
Type: Grant
Filed: May 31, 2016
Date of Patent: February 6, 2018
Assignee: ULTRATA, LLC
Inventors: Steven J. Frank, Larry Reback
-
Patent number: 9880738
Abstract: A storage controller configures a plurality of storage tiers. A sub-unit of a storage unit is maintained in a selected storage tier of the plurality of storage tiers, for at least a predetermined duration of time subsequent to an input/output (I/O) request for the sub-unit.
Type: Grant
Filed: February 10, 2016
Date of Patent: January 30, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bradley S. Powers, Gail A. Spear, Teena N. Werley
-
Patent number: 9881169
Abstract: A data processing system may have a strict separation of processor tasks and data categories, wherein processor tasks are separated into software loading and initialization (loading processor) and data processing (main processor), and data categories are separated into address data, instructions, internal function data, target data of the main processor, and target data of the loading processor. In this way, protection is provided against malware, irrespective of the transmission medium and of the type of malware, and also against future malware, without performance losses in the computer system.
Type: Grant
Filed: March 27, 2014
Date of Patent: January 30, 2018
Inventor: Friedhelm Becker
-
Patent number: 9880934
Abstract: Methods and systems are presented for allocating CPU cycles among processes in a storage system. One method includes operations for maintaining segments in a first memory, each segment including blocks, and for maintaining a block temperature for each block in a second memory. The first memory is a read-cache where one segment is written at a time, and each block is readable from the first memory without reading the corresponding complete segment. The block temperature is based on the frequency of access to the respective block, and a segment temperature is based on the block temperature of its blocks. Additionally, the segment with the lowest segment temperature is selected for eviction from the second memory, and blocks in the selected segment with a block temperature greater than a threshold temperature are identified. The selected segment is evicted, and a segment with the identified blocks is written to the first memory.
Type: Grant
Filed: September 2, 2016
Date of Patent: January 30, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Pradeep Shetty, Sandeep Karmarkar, Senthil Kumar Ramamoorthy, Umesh Maheshwari, Vanco Buca
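The eviction step can be sketched directly: pick the coldest segment, then salvage any blocks still above the threshold. Taking the segment temperature as the mean of its block temperatures is one plausible choice; the patent only says it is "based on" the block temperatures:

```python
def evict_coldest(segments, threshold):
    # segments: {segment_id: {block_id: temperature}}. Segment temperature
    # is modeled here as the mean of its block temperatures (an assumption).
    def seg_temp(seg_id):
        blocks = segments[seg_id]
        return sum(blocks.values()) / len(blocks)

    victim = min(segments, key=seg_temp)
    # Blocks still hotter than the threshold survive into a new segment.
    hot_blocks = {b: t for b, t in segments.pop(victim).items() if t > threshold}
    return victim, hot_blocks

segments = {"s1": {"a": 5, "b": 1}, "s2": {"c": 9, "d": 9}}
victim, hot = evict_coldest(segments, threshold=2)
print(victim, hot)  # the colder segment goes, its one hot block is kept
```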
-
Patent number: 9875046
Abstract: Data is relocated, based on an intelligent data placement algorithm, from a first storage location to a second storage location in a disk storage system. A data placement record is generated including a virtual disk location associated with the data, the second storage location, and a first sequence value. The first sequence value indicates relative sequence when compared to other sequence values. The data placement record is written to a first record location on a first tape cartridge loaded in a tape drive. The data placement records are used with data records to restore data to disk storage from tape backup.
Type: Grant
Filed: October 20, 2016
Date of Patent: January 23, 2018
Assignee: International Business Machines Corporation
Inventors: Joshua J. Crawford, Paul A. Jennas, II, Jason L. Peipelman, Matthew J. Ward
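The role of the sequence value during restore can be sketched as a last-writer-wins reduction over the records read back from tape. The tuple layout is invented for illustration:

```python
def restore_latest(records):
    # records: (virtual_disk_location, storage_location, sequence) tuples,
    # read back from tape in arbitrary order. For each virtual location,
    # the record with the highest sequence value reflects the final placement.
    latest = {}
    for vloc, sloc, seq in records:
        if vloc not in latest or seq > latest[vloc][1]:
            latest[vloc] = (sloc, seq)
    return {vloc: sloc for vloc, (sloc, _) in latest.items()}

recs = [("v1", "diskA", 1), ("v1", "diskB", 3), ("v2", "diskC", 2)]
print(restore_latest(recs))  # v1 was relocated to diskB after diskA
```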
-
Patent number: 9875163
Abstract: An operating state of each of a plurality of storage units of a storage system is periodically monitored, including a storage capacity, a throughput, and overlap of clients associated with the storage units. In response to a request to redistribute data from a first of the storage units to another storage unit, a cost factor for each of remaining storage units to relocate the data of the first storage unit to each of the remaining storage units is determined. A cost factor of each of the remaining storage units is determined based on at least one of the storage capacity, the throughput, or the overlap of clients of the storage unit. A second of the storage units having a lowest cost factor amongst the remaining storage units is selected. At least a portion of the data of the first storage unit is migrated to the second storage unit.
Type: Grant
Filed: August 9, 2016
Date of Patent: January 23, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Frederick Douglis, R. Hugo Patterson, Philip Shilane
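Selecting the lowest-cost target can be sketched as a weighted sum over the three monitored quantities. The normalization to [0, 1] and the weights are illustrative assumptions; the patent only requires the cost to be based on at least one of the factors:

```python
def pick_target(units, weights=(1.0, 1.0, 1.0)):
    # units: {name: (capacity_used, throughput_load, client_overlap)},
    # each normalized to [0, 1]. Lower cost = better migration target.
    wc, wt, wo = weights
    def cost(name):
        cap, thr, ovl = units[name]
        return wc * cap + wt * thr + wo * ovl
    return min(units, key=cost)

remaining = {"u2": (0.9, 0.8, 0.5), "u3": (0.2, 0.3, 0.1)}
print(pick_target(remaining))  # the lightly loaded unit wins
```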
-
Patent number: 9870814
Abstract: A detection circuit is provided for a particular group of memory cells in a memory device, where the detection circuit is to be updated in response to at least one access of data in at least one neighboring group of memory cells. The particular group of memory cells is refreshed in response to an indication from the detection circuit, where the indication indicates presence of potential disturbance of the particular group of memory cells.
Type: Grant
Filed: October 22, 2012
Date of Patent: January 16, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Darel N. Emmot
-
Patent number: 9864681
Abstract: Apparatus and method embodiments for dynamically allocating cache space in a multi-threaded execution environment are disclosed. In some embodiments, a processor includes a cache shared by each of a plurality of processor cores and/or each of a plurality of threads executing on the processor. The processor further includes a cache allocation circuit configured to dynamically allocate space in the cache provided to each of the plurality of processor cores based on their respective usage patterns. The cache allocation unit may track cache usage by each of the processor cores/threads using subsets of usage bits and counters configured to update states of the usage bits. The cache allocation circuit may track the usage of cache space by the processor cores/threads and may allocate more space to those that exhibit more usage of the cache.
Type: Grant
Filed: December 1, 2016
Date of Patent: January 9, 2018
Assignee: ADVANCED MICRO DEVICES, INC.
Inventor: William L. Walker
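"More space to those that exhibit more usage" suggests a proportional policy. The sketch below partitions cache ways in proportion to per-core usage counters; the proportional-share rule and the one-way minimum are assumptions, not details from the patent:

```python
def allocate_ways(usage_counters, total_ways):
    # Give each core a share of cache ways proportional to its observed
    # usage, with at least one way each (a simplifying assumption).
    total = sum(usage_counters.values()) or 1
    return {core: max(1, round(total_ways * count / total))
            for core, count in usage_counters.items()}

alloc = allocate_ways({"core0": 90, "core1": 10}, total_ways=8)
print(alloc)  # the busy core receives most of the ways
```

A hardware circuit would update the counters continuously and repartition periodically rather than on every access.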
-
Patent number: 9858205
Abstract: A system includes a cache and a cache-management component. The cache includes a plurality of cache lines that correspond to a plurality of device endpoints. The cache-management component is configured to receive a transfer request block (TRB) for data transfer involving a device endpoint. In response to a determination that the cache both (i) does not include a cache line assigned to the device endpoint and (ii) does not include an empty cache line, the cache-management component assigns, to the device endpoint, a last cache line that includes a most recently received TRB in the cache, and stores the received TRB to the last cache line.
Type: Grant
Filed: November 4, 2016
Date of Patent: January 2, 2018
Assignee: MARVELL WORLD TRADE LTD.
Inventors: Xingzhi Wen, Yu Hong, Hefei Zhu, Qunzhao Tian, Jeanne Q. Cai, Shaori Guo
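The reassignment rule, evicting the line that holds the most recently received TRB when no line matches and none is empty, can be modeled with a small class. The class and its bookkeeping are invented for illustration:

```python
class TrbCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}           # endpoint -> list of cached TRBs
        self.last_written = None  # endpoint whose line got the newest TRB

    def store(self, endpoint, trb):
        if endpoint not in self.lines:
            if len(self.lines) >= self.num_lines:
                # No line for this endpoint and no empty line: reassign the
                # line holding the most recently received TRB.
                self.lines.pop(self.last_written)
            self.lines[endpoint] = []
        self.lines[endpoint].append(trb)
        self.last_written = endpoint

cache = TrbCache(num_lines=2)
cache.store("ep1", "trb1")
cache.store("ep2", "trb2")
cache.store("ep3", "trb3")   # full: ep2's line (newest TRB) is reassigned
print(sorted(cache.lines))   # ep1 survives; ep2 was evicted for ep3
```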
-
Patent number: 9857985
Abstract: A mechanism for providing information about fragmentation of a file on a sequential access medium by a computer system is disclosed. An actual time for reading the file recorded on the sequential access medium is estimated based on a physical position of the file. A total length of the file on the sequential access medium is calculated based on a physical length of each data piece constituting the file. An expected time for reading the file, assuming that the file is rewritten contiguously, is estimated based on the total length of the file. Information about the fragmentation of the file is then provided based on the actual time and the expected time.
Type: Grant
Filed: November 30, 2015
Date of Patent: January 2, 2018
Assignee: International Business Machines Corporation
Inventors: Tohru Hasegawa, Hiroshi Itagaki, Sosuke Matsui, Shinsuke Mitsuma, Tsuyoshi Miyamura, Noriko Yamamoto
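The comparison of actual versus expected read time reduces to a simple ratio. The cost model below (constant read rate, one locate operation per piece) is a deliberate simplification of real tape timing:

```python
def fragmentation_ratio(piece_lengths, read_rate, seek_time):
    # piece_lengths: physical lengths of the data pieces making up the file.
    # Expected time assumes the file were rewritten contiguously; actual
    # time additionally charges one locate/seek per piece (an assumption).
    total_len = sum(piece_lengths)
    expected = total_len / read_rate
    actual = expected + seek_time * len(piece_lengths)
    return actual / expected

# A file split into 4 pieces versus the same data read contiguously:
ratio = fragmentation_ratio([100, 100, 100, 100], read_rate=100, seek_time=1.0)
print(ratio)  # > 1.0 signals fragmentation worth reporting
```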
-
Patent number: 9836229
Abstract: A N-way merge technique efficiently updates metadata in accordance with a N-way merge operation managed by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. The metadata is embodied as mappings from logical block addresses (LBAs) of a logical unit (LUN) accessible by a host to durable extent keys, and is organized as a multi-level dense tree. The mappings are organized such that a higher level of the dense tree contains more recent mappings than a next lower level, i.e., the level immediately below. The N-way merge operation is an efficient (i.e., optimized) way of updating the volume metadata mappings of the dense tree by merging the mapping content of all three levels in a single iteration, as opposed to merging the content of the first level with the content of the second level in a first iteration of a two-way merge operation and then merging the results of the first iteration with the content of the third level in a second iteration of the operation.
Type: Grant
Filed: November 18, 2014
Date of Patent: December 5, 2017
Assignee: NetApp, Inc.
Inventors: Janice D'Sa, Ling Zheng, Blake H. Lewis
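With LBA-to-extent-key mappings modeled as dictionaries, the single-pass merge is an overlay from the oldest level to the newest, so that the most recent mapping per LBA wins. The dictionary representation is a simplification; the dense tree stores sorted on-disk metadata pages:

```python
def n_way_merge(levels):
    # levels[0] is the highest (most recent) level of the dense tree.
    # Merging all levels in one iteration: apply oldest first, so newer
    # mappings for the same LBA overwrite older ones.
    merged = {}
    for level in reversed(levels):
        merged.update(level)
    return merged

level0 = {10: "key-new"}                     # most recent mappings
level1 = {10: "key-mid", 20: "key-b"}
level2 = {10: "key-old", 30: "key-c"}        # oldest mappings
print(n_way_merge([level0, level1, level2]))
```

This is exactly what distinguishes the N-way merge from two chained two-way merges: one pass over all levels instead of an intermediate merged result that must be merged again.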
-
Patent number: 9836277
Abstract: A Processing-In-Memory (PIM) model in which computations related to the POPCOUNT and logical bitwise operations are implemented within a memory module and not within a host Central Processing Unit (CPU). The in-memory executions thus eliminate the need to shift data from large bit vectors throughout the entire system. By off-loading the processing of these operations to the memory, the redundant data transfers over the memory-CPU interface are greatly reduced, thereby improving system performance and energy efficiency. A controller and a dedicated register in the logic die of the memory module operate to interface with the host and provide in-memory executions of popcounting and logical bitwise operations requested by the host. The PIM model of the present disclosure thus frees up the CPU for other tasks because many real-time analytics tasks can now be executed within a PIM-enabled memory itself. The memory module may be a Three Dimensional Stack (3DS) memory or any other semiconductor memory.
Type: Grant
Filed: April 15, 2015
Date of Patent: December 5, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Zvika Guz, Liang Yin
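The host/module division of labor can be shown with a toy model: the host issues a command and receives only the result, never the bit vector itself. The command names and class are illustrative, not the patent's interface:

```python
class PimModule:
    # Toy model of a PIM-enabled module: the "logic die" computes over the
    # resident bit vector and returns only a small result to the host,
    # avoiding bulk transfers over the memory-CPU interface.
    def __init__(self, bits: int):
        self.bits = bits  # large bit vector resident in the memory module

    def execute(self, op, operand=None):
        if op == "POPCOUNT":
            return bin(self.bits).count("1")
        if op == "AND":
            return self.bits & operand
        raise ValueError(f"unsupported in-memory op: {op}")

module = PimModule(0b1011_0110)
print(module.execute("POPCOUNT"))            # a single integer crosses the bus
print(bin(module.execute("AND", 0b1111_0000)))
```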
-
Patent number: 9830083
Abstract: A memory chip, a memory system, and a method of accessing the memory chip. The memory chip includes a substrate, a first storage unit, and a second storage unit. The first storage unit includes a plurality of first memory cells and may have a first storage capacity of 2^n. The plurality of first memory cells may be configured to activate in response to a first selection signal. The second storage unit includes a plurality of second memory cells and may have a second storage capacity of 2^(n+1). The plurality of second memory cells may be configured to activate in response to a second selection signal.
Type: Grant
Filed: September 18, 2015
Date of Patent: November 28, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chul-sung Park, Joo-sun Choi
-
Patent number: 9823841
Abstract: A definition is received of at least one data object and a compute object from a host at a storage compute device. A first key is associated with the at least one data object and a second key is associated with the compute object. A command is received from the host to perform a computation that links the first and second keys. The computation is defined by the compute object and acts on the data object. The computation is performed via the storage compute device using the compute object and the data object in response to the command.
Type: Grant
Filed: September 15, 2014
Date of Patent: November 21, 2017
Assignee: SEAGATE TECHNOLOGY LLC
Inventors: David Scott Ebsen, Ryan James Goss, Jeffrey L. Whaley, Dana Simonson
-
Patent number: 9824011
Abstract: A method and an apparatus for processing data and a computer system are provided. The method includes copying a shared virtual memory page to which a first process requests access into off-chip memory of a computing node, and using the shared virtual memory page copied into the off-chip memory as a working page of the first process; and before the first process performs a write operation on the working page, creating, in on-chip memory of the computing node, a backup page of the working page, so as to back up original data of the working page. Before a write operation is performed on a working page, page data is backed up in the on-chip memory, so as to ensure data consistency when multiple processes operate on a shared virtual memory page, while accessing off-chip memory as little as possible and improving program speed.
Type: Grant
Filed: October 12, 2015
Date of Patent: November 21, 2017
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Kingtin Lam, Jinghao Shi, Cho-li Wang, Wangbin Zhu
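The back-up-before-first-write discipline resembles copy-on-write and can be modeled with two page stores. Using dictionaries to stand in for on-chip and off-chip memory, and the `rollback` method, are illustrative assumptions:

```python
class SharedPages:
    # Toy model: before the first write to a working page, its original
    # data is backed up (dicts stand in for off-chip and on-chip memory).
    def __init__(self, pages):
        self.working = dict(pages)  # working pages in off-chip memory
        self.backup = {}            # backup pages in on-chip memory

    def write(self, page_id, data):
        if page_id not in self.backup:
            # First write to this page: preserve the original on-chip.
            self.backup[page_id] = self.working[page_id]
        self.working[page_id] = data

    def rollback(self, page_id):
        # Restore the original data if consistency requires undoing writes.
        self.working[page_id] = self.backup.pop(page_id)

mem = SharedPages({1: "original"})
mem.write(1, "modified")
mem.write(1, "modified-again")   # only the first write pays the backup cost
mem.rollback(1)
print(mem.working[1])
```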