Patents Examined by Craig Goldschmidt
  • Patent number: 8886884
    Abstract: The present invention provides a system for increasing the read and write speeds of a hybrid storage unit. The system includes a cache controller, connected to the hybrid storage unit and to a computer respectively, which stores forward and backward mapping tables each including a plurality of fields. The hybrid storage unit is composed of at least one regular storage unit (e.g., an HDD) having a plurality of regular sections corresponding to the forward fields, and at least one high-speed storage unit (e.g., an SSD), with higher read and write speeds than the regular storage unit, having a plurality of high-speed storage sections corresponding to the backward fields. The cache controller can make the high-speed storage section corresponding to each backward field correspond to the regular section corresponding to the forward field, thus allowing the computer to rapidly read data from and write data to the hybrid storage unit.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: November 11, 2014
    Assignee: Waremax Electronics Corp.
    Inventors: Yu-Ting Chiu, Chih-Liang Yen, Cheng-Wei Yang
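    A minimal Python sketch of the forward/backward mapping idea described in the abstract above, assuming a simple section-granular design; the class and method names are illustrative, not taken from the patent.
      # Pair HDD ("regular") sections with SSD ("high-speed") sections so that
      # sections with an SSD binding are read and written at SSD speed.
      class HybridCacheController:
          def __init__(self, num_ssd_sections):
              self.forward = {}       # HDD section -> SSD section (if bound)
              self.backward = {}      # SSD section -> HDD section it mirrors
              self.free_ssd = list(range(num_ssd_sections))

          def promote(self, hdd_section):
              """Bind an HDD section to a free SSD section for fast access."""
              if hdd_section in self.forward or not self.free_ssd:
                  return
              ssd_section = self.free_ssd.pop()
              self.forward[hdd_section] = ssd_section
              self.backward[ssd_section] = hdd_section

          def resolve(self, hdd_section):
              """Return ('ssd', n) when a fast copy exists, else ('hdd', n)."""
              if hdd_section in self.forward:
                  return ("ssd", self.forward[hdd_section])
              return ("hdd", hdd_section)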
  • Patent number: 8874844
    Abstract: A system and method for buffering intermediate data in a processing pipeline architecture stores the intermediate data in a shared cache that is coupled between one or more pipeline processing units and an external memory. The shared cache provides storage that is used by multiple pipeline processing units. The storage capacity of the shared cache is dynamically allocated to the different pipeline processing units as needed, to avoid stalling the upstream units, thereby improving overall system throughput.
    Type: Grant
    Filed: December 2, 2008
    Date of Patent: October 28, 2014
    Assignee: NVIDIA Corporation
    Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, James Roberts
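    A rough sketch of the dynamic-allocation behavior described above, assuming a line-granular shared cache; the pool interface is an assumption, not NVIDIA's design.
      # Capacity of one shared cache is handed out on demand to pipeline units,
      # so an upstream unit is not stalled while other units hold idle space.
      class SharedCache:
          def __init__(self, total_lines):
              self.free = total_lines
              self.allocated = {}                 # unit name -> lines held

          def request(self, unit, lines):
              """Grant as many lines as are free; the caller retries later for the rest."""
              granted = min(lines, self.free)
              self.free -= granted
              self.allocated[unit] = self.allocated.get(unit, 0) + granted
              return granted

          def release(self, unit, lines):
              """Return lines once a downstream unit has drained the buffered data."""
              lines = min(lines, self.allocated.get(unit, 0))
              self.allocated[unit] -= lines
              self.free += lines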
  • Patent number: 8862832
    Abstract: Described are techniques for processing a request to access global memory. For a first processor included on a first of a plurality of boards connected by a fabric, a logical address is determined for a global memory location in a system global memory. A first physical address for the logical address is determined. It is determined whether the first physical address is included in a first global partition of the first board. If so, first processing is performed including updating a memory map to map a window of the first processor's logical address space to a physical memory segment located within the first global partition. Otherwise, if the first physical address is included in a second of the plurality of global partitions physically located on one of the plurality of boards other than said first board, second processing is performed to issue the request over the fabric.
    Type: Grant
    Filed: March 29, 2010
    Date of Patent: October 14, 2014
    Assignee: EMC Corporation
    Inventors: Jerome Cartmell, Zhi-Gang Liu, Steven McClure, Alesia Tringale
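    A small sketch of the local-versus-fabric decision the abstract describes, assuming each board's global partition occupies a contiguous physical range; function and parameter names are hypothetical.
      def route_global_access(phys_addr, local_board_id, partition_ranges):
          """partition_ranges: board_id -> (start, end) of that board's global partition."""
          for board_id, (start, end) in partition_ranges.items():
              if start <= phys_addr < end:
                  if board_id == local_board_id:
                      # map a window of the processor's logical address space
                      # onto this local physical memory segment
                      return ("map_local_window", phys_addr - start)
                  # address lives in another board's partition: issue over the fabric
                  return ("send_over_fabric", board_id)
          raise ValueError("address not in any global partition")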
  • Patent number: 8862814
    Abstract: A method, an apparatus and an article of manufacture for placing at least one object in at least one cache of a set of cooperating caching nodes with limited inter-node communication bandwidth. The method includes transmitting information from the set of cooperating caching nodes regarding object accesses to a placement computation component, determining an object popularity distribution based on the object access information, and instructing the set of cooperating caching nodes as to at least one object to cache, the at least one node at which each object is to be cached, and the manner in which the at least one cached object is to be shared among the at least one caching node, based on the object popularity distribution and the cache and object sizes, such that a cumulative hit rate at the at least one cache is increased while a constraint on inter-node communication bandwidth is not violated.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: October 14, 2014
    Assignee: International Business Machines Corporation
    Inventors: Malolan Chetlur, Umamaheswari C. Devi, Shivkumar Kalyanaraman
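    A toy sketch in the spirit of the placement computation above: popular objects are placed first, and a bandwidth budget caps how much remote sharing is allowed. The greedy order and the bandwidth model are assumptions, not the patented algorithm.
      def place_objects(objects, node_capacity, bandwidth_budget):
          """objects: list of (name, size, popularity); node_capacity: node -> free space."""
          placement, bandwidth_used = {}, 0
          for name, size, popularity in sorted(objects, key=lambda o: o[2], reverse=True):
              for node, free in node_capacity.items():
                  if size <= free:
                      node_capacity[node] -= size
                      placement[name] = node          # cache locally at this node
                      break
              else:
                  # no room anywhere: share a remote copy if bandwidth allows
                  if bandwidth_used + popularity <= bandwidth_budget:
                      bandwidth_used += popularity
                      placement[name] = "remote-shared"
          return placement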
  • Patent number: 8862826
    Abstract: A method and an apparatus for increasing the capacity of a cache directory in multi-processor systems, the apparatus comprising a plurality of processor nodes, a plurality of cache memory nodes, and a plurality of main memory nodes.
    Type: Grant
    Filed: August 22, 2012
    Date of Patent: October 14, 2014
    Inventor: Conor Santifort
  • Patent number: 8850130
    Abstract: Disclosed is an improved approach for using advanced metadata to implement an architecture for managing I/O operations and storage devices for a virtualization environment. According to some embodiments, a Service VM is employed to control and manage any type of storage device, including directly attached storage in addition to networked and cloud storage. The advanced metadata is used to track data within the storage devices. A lock-free approach is implemented in some embodiments to access and modify the metadata.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: September 30, 2014
    Assignee: Nutanix, Inc.
    Inventors: Mohit Aron, Rishi Bhardwaj, Venkata Ranga Radhanikanth Guturi
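    A minimal sketch of a lock-free metadata update in the compare-and-swap style the abstract alludes to; the versioned-entry shape is an assumption, not Nutanix's metadata format.
      class MetadataEntry:
          def __init__(self, value):
              self.version, self.value = 0, value

          def compare_and_swap(self, expected_version, new_value):
              # stand-in for an atomic CAS supplied by the metadata store
              if self.version != expected_version:
                  return False
              self.version, self.value = expected_version + 1, new_value
              return True

      def lockfree_update(entry, transform, max_retries=100):
          """Re-read and retry instead of taking a lock."""
          for _ in range(max_retries):
              seen_version, seen_value = entry.version, entry.value
              if entry.compare_and_swap(seen_version, transform(seen_value)):
                  return True
          return False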
  • Patent number: 8838876
    Abstract: Solid state storage devices and methods for flash translation layers are disclosed. In one such translation layer, a sector indication is translated to a memory location by a parallel unit look-up table that is populated by memory device enumeration at initialization. Each table entry comprises the communication channel, chip enable, logical unit, and plane for each operating memory device found. When the sector indication is received, a modulo function operates on entries of the look-up table in order to determine the memory location associated with the sector indication.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: September 16, 2014
    Assignee: Micron Technology, Inc.
    Inventor: Troy Manning
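    A short sketch of the modulo-style translation described above, assuming one table entry per enumerated parallel unit; the field names are illustrative.
      from collections import namedtuple

      ParallelUnit = namedtuple("ParallelUnit", "channel chip_enable lun plane")

      def translate_sector(sector, lookup_table):
          """lookup_table: list of ParallelUnit entries built at initialization."""
          unit = lookup_table[sector % len(lookup_table)]   # which parallel unit
          stripe_index = sector // len(lookup_table)        # location within that unit
          return unit, stripe_index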
  • Patent number: 8832403
    Abstract: In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
    Type: Grant
    Filed: June 8, 2010
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventor: Martin Ohmacht
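    A simplified sketch of generation-based completion tracking as the abstract outlines it: requests are tagged with the current generation, and a generation is complete only when all of its requests have retired. The counter structure is an assumption.
      class GenerationTracker:
          def __init__(self):
              self.generation = 0
              self.in_flight = {0: 0}        # generation -> outstanding requests

          def issue(self):
              self.in_flight[self.generation] += 1
              return self.generation

          def retire(self, gen):
              self.in_flight[gen] -= 1

          def open_new_generation(self):
              self.generation += 1
              self.in_flight[self.generation] = 0

          def complete(self, gen):
              """True once no request tagged with generation `gen` is still in flight."""
              return self.in_flight.get(gen, 0) == 0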
  • Patent number: 8832368
    Abstract: A slice manager module, in the operating system of a storage server, manages the virtual slicing of a mass storage device. The slice manager module receives a notification that a mass storage device has been added to an array of mass storage devices coupled to the storage system. The slice manager module reads header information in the mass storage device to determine a format of the mass storage device. If the mass storage device has not been previously sliced, the slice manager module virtually slices the mass storage device into a plurality of slices, where virtually slicing the mass storage device includes specifying an offset in the mass storage device where each of the plurality of slices is located.
    Type: Grant
    Filed: February 18, 2010
    Date of Patent: September 9, 2014
    Assignee: NetApp, Inc.
    Inventors: Susan M. Coatney, Stephen H. Strange, Douglas W. Coatney, Atul Goel
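    A compact sketch of "virtual slicing" as described above: only a starting offset per slice is recorded, and a block address is the slice's base offset plus the block's position within the slice. Function names are hypothetical.
      def slice_device(device_size, slice_size):
          """Return the starting offset of each virtual slice."""
          return list(range(0, device_size, slice_size))

      def to_physical(slice_offsets, slice_index, block_within_slice):
          return slice_offsets[slice_index] + block_within_slice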
  • Patent number: 8832394
    Abstract: According to one embodiment, in response to a request to write a prime segment of a file system of a storage system having a plurality of storage units, one or more of the storage units are identified based on a prime segment write-map (PSWM). The PSWM includes information indicating the storage units to which a next prime should be written. The prime segment is then written to the one or more storage units identified from the PSWM, without writing the prime segment to the remainder of the storage units. The prime segment represents at least a portion of a prime that contains metadata representing a consistent point of data stored in the file system.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: September 9, 2014
    Assignee: EMC Corporation
    Inventors: Soumyadeb Mitra, Windsor W. Hsu
  • Patent number: 8825984
    Abstract: A technique for "zero copy" transitive communication of data between virtual address domains maintains a translation table hierarchy for each domain. The hierarchy of each domain includes a portion corresponding to every other domain in the system, where the portion for any particular domain begins at the same offset in the virtual address space of every domain. For each domain, there is a source hierarchy used only by the domain itself, which provides read/write access to the addresses in that domain; and a target hierarchy which provides read-only access to that domain, for use only when another domain is the target of inter-domain communication (IDC) from that domain. Only one instance of the target hierarchy of each domain is provided, for all other domains as targets of IDC from that domain. For further space savings, the source and target translation table hierarchies can be combined at all but the top hierarchy level.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: September 2, 2014
    Assignee: NetApp, Inc.
    Inventors: Kiran Srinivasan, Prashanth Radhakrishnan
  • Patent number: 8825940
    Abstract: Systems and methods for an architecture for optimizing execution of storage access commands are disclosed. The architecture enables a storage subsystem to execute storage access commands while satisfying one or more optimization criteria. The architecture thereby provides predictable execution times of storage access commands performed on a storage subsystem. In order to optimize execution of storage access commands, in one embodiment the host system sends a calibration request specifying a storage access command and an optimization criterion. In response to the calibration request, the storage subsystem determines the execution speeds of the storage access command within the non-volatile memory storage array and selects at least one region within the non-volatile memory storage array having an execution speed that satisfies the optimization criterion.
    Type: Grant
    Filed: December 2, 2008
    Date of Patent: September 2, 2014
    Assignee: SiliconSystems, Inc.
    Inventor: Mark S. Diggs
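    A rough sketch of the calibration flow above: time the specified storage access command against each region and keep the regions whose measured speed satisfies the criterion. The timing harness and the latency-based criterion are assumptions.
      import time

      def calibrate(regions, run_command, max_latency_s):
          """regions: iterable of region ids; run_command(region) executes the
          storage access command once against that region."""
          qualifying = []
          for region in regions:
              start = time.perf_counter()
              run_command(region)
              elapsed = time.perf_counter() - start
              if elapsed <= max_latency_s:          # the optimization criterion
                  qualifying.append((region, elapsed))
          return qualifying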
  • Patent number: 8799602
    Abstract: Aspects of the present invention relate to data migration and/or disaster recovery. One embodiment enables merging of bitmaps to allow for automation of the process of switching to a different target volume on the same storage subsystem without major interruption of data recovery capability and limited interruption of host I/O to the source volumes during the migration. In one approach, the migration of data onto a new target volume within the same storage subsystem as the original target volume is automated, without requiring the user to manually create or remove any new copy relationships.
    Type: Grant
    Filed: August 22, 2012
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: Amy N. Blea, David R. Blea, Gregory E. McBride, John J. Wolfgang
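    A minimal sketch of the bitmap merge at the heart of the abstract above: a track marked dirty in either bitmap must still be copied to the new target. Representing bitmaps as 0/1 lists is an assumption for illustration.
      def merge_bitmaps(old_target_bitmap, new_relationship_bitmap):
          """OR equal-length per-track bitmaps together."""
          return [a | b for a, b in zip(old_target_bitmap, new_relationship_bitmap)]

      # e.g. merge_bitmaps([1, 0, 0, 1], [0, 0, 1, 0]) -> [1, 0, 1, 1]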
  • Patent number: 8799609
    Abstract: A method, system and computer product for use in error handling, comprising: receiving, from a requester, a data storage configuration request comprising sub-tasks; determining, from a plurality of user levels, a first user level at which said data storage configuration request is made, each user level of said plurality of user levels being associated with a respective different level of abstraction with respect to processing performed in the data storage system for servicing the data storage configuration request; servicing said data storage configuration request; storing, in an error structure, the success of each sub-task of the data storage configuration request; based on the storing, recording in an error tree whether each sub-task of the data storage configuration request executed successfully; and, based on the first user level, displaying a report of the status of the data storage configuration request as recorded in the error tree.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: August 5, 2014
    Assignee: EMC Corporation
    Inventors: Andreas L. Bauer, Francis P. Litterio, Joseph Gugliemino
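    A small sketch of recording sub-task outcomes and filtering the report by user level, assuming a flat error tree and a two-level reporting policy; both are illustrative simplifications.
      def record(error_tree, subtask, succeeded):
          error_tree[subtask] = succeeded

      def report(error_tree, user_level):
          """Lower user levels see only the overall outcome; higher levels see every sub-task."""
          overall = all(error_tree.values())
          if user_level == 0:
              return {"request_succeeded": overall}
          return {"request_succeeded": overall, "subtasks": dict(error_tree)}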
  • Patent number: 8799595
    Abstract: Technologies for eliminating duplicate data provisions within a storage system supporting boot consolidation can efficiently identify duplicate data provisions within a data storage system and eliminate duplication by remapping duplicate provisions to point to the same physical storage space. Signatures of provisions within a storage system may be calculated and compared. Matching, or collisions, within the list of provision signatures can indicate candidate provisions for de-duplication. De-duplication territories may be provided as an indirect mapping mechanism in support of the remapping of duplicated provisions. Access statistics associated with provisions within a storage system may be collected. Access statistics can support the scheduling of de-duplication processes. Data de-duplication can support substantial storage space consolidation and significantly improve caching efficiency within a data storage system.
    Type: Grant
    Filed: August 28, 2008
    Date of Patent: August 5, 2014
    Assignee: American Megatrends, Inc.
    Inventors: Paresh Chatterjee, Ajit Narayanan, Sharon Enoch, Vijayarankan Muthirisavengopal
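    A minimal sketch of signature-based de-duplication as described above: hash every provision and remap a colliding provision to the first holder of that signature. A production system would also verify colliding provisions byte for byte; that step is omitted here.
      import hashlib

      def deduplicate(provisions):
          """provisions: provision id -> bytes. Returns id -> id it now maps to."""
          seen, remap = {}, {}
          for prov_id, data in provisions.items():
              signature = hashlib.sha256(data).hexdigest()
              if signature in seen:
                  remap[prov_id] = seen[signature]   # duplicate: point at the original
              else:
                  seen[signature] = prov_id
                  remap[prov_id] = prov_id
          return remap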
  • Patent number: 8793456
    Abstract: Aspects of the present invention relate to data migration and/or disaster recovery. One embodiment enables merging of bitmaps to allow for automation of the process of switching to a different target volume on the same storage subsystem without major interruption of data recovery capability and limited interruption of host I/O to the source volumes during the migration. In one approach, the migration of data onto a new target volume within the same storage subsystem as the original target volume is automated, without requiring the user to manually create or remove any new copy relationships.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: July 29, 2014
    Assignee: International Business Machines Corporation
    Inventors: Amy N. Blea, David R. Blea, Gregory E. McBride, John J. Wolfgang
  • Patent number: 8782334
    Abstract: A hybrid drive is disclosed comprising a head actuated over a disk comprising a plurality of data sectors. The hybrid drive further comprises a non-volatile semiconductor memory (NVSM) comprising a plurality of memory segments. A disk cache is defined comprising a first plurality of the data sectors, and a non-cache area of the disk is defined comprising a second plurality of the data sectors. When a write command is received from a host, data is written to the disk cache, and under certain conditions, the data is copied from the disk cache to the NVSM.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: July 15, 2014
    Assignee: Western Digital Technologies, Inc.
    Inventors: William B. Boyle, Curtis E. Stevens, Kenny T. Coker
  • Patent number: 8782374
    Abstract: Methods and apparatus for inclusion of TLB (translation look-aside buffer) in processor micro-op caches are disclosed. Some embodiments for inclusion of TLB entries have micro-op cache inclusion fields, which are set responsive to accessing the TLB entry. Inclusion logic may then flush the micro-op cache or portions of the micro-op cache and clear corresponding inclusion fields responsive to a replacement or invalidation of a TLB entry whenever its associated inclusion field had been set. Front-end processor state may also be cleared and instructions refetched when the replacement resulted from a TLB miss.
    Type: Grant
    Filed: December 2, 2008
    Date of Patent: July 15, 2014
    Assignee: Intel Corporation
    Inventors: Lihu Rappoport, Chen Koren, Franck Sala, Oded Lempel, Ido Ouziel, Ron Gabor, Gregory Pribush, Lior Libis
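    A simplified sketch of the inclusion-field bookkeeping described above: a TLB entry's inclusion field is set when micro-ops from its page enter the micro-op cache, and evicting such an entry flushes those micro-ops. The data-structure names are assumptions.
      class TlbEntry:
          def __init__(self, page):
              self.page, self.inclusion = page, False

      def fill_uop_cache(uop_cache, tlb_entry, uops):
          uop_cache[tlb_entry.page] = uops
          tlb_entry.inclusion = True              # micro-op cache now depends on this entry

      def evict_tlb_entry(uop_cache, tlb_entry):
          if tlb_entry.inclusion:
              uop_cache.pop(tlb_entry.page, None) # flush the dependent micro-op lines
              tlb_entry.inclusion = False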
  • Patent number: 8782323
    Abstract: A method for accessing data stored in a distributed storage system is provided. The method comprises determining whether a copy of first data is stored in a distributed cache system, where data in the distributed cache system is stored in free storage space of the distributed storage system; accessing the copy of the first data from the distributed cache system if the copy of the first data is stored in a first data storage medium at a first computing system in a network; and requesting a second computing system in the network to access the copy of the first data from the distributed cache system if the copy of the first data is stored in a second data storage medium at the second computing system. If the copy of the first data is not stored in the distributed cache system, the first data is accessed from the distributed storage system.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: July 15, 2014
    Assignee: International Business Machines Corporation
    Inventors: Alex Glikson, Shay Goikhman, Benny Rochwerger
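    A short sketch of the three-way lookup the abstract describes: the local distributed-cache slice first, then a peer that holds the cached copy, then the distributed storage system itself. The callback interfaces are assumptions.
      def read(key, local_cache, cache_owner_of, ask_peer, read_backing_store):
          if key in local_cache:
              return local_cache[key]             # copy cached at this node
          owner = cache_owner_of(key)
          if owner is not None:
              return ask_peer(owner, key)         # copy cached at another node
          return read_backing_store(key)          # not cached: go to the storage system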
  • Patent number: 8775767
    Abstract: A method, and associated system, for allocating memory to a first pipeline that includes a sequence of filters. Each filter is configured to execute a process specific to each filter, receive input data, and generate output data. The output data from each filter, except the last filter in the sequence, serves as the input data to the next filter in the sequence. An optimum memory capacity is allocated to the first pipeline if possible. Otherwise, a guaranteed memory bandwidth is allocated to the first pipeline if possible. Otherwise, extra memory currently allocated to a second pipeline is released immediately if the second pipeline is not currently performing processing, or released subsequently when the second pipeline completes the processing that is currently being performed, followed by allocating the extra memory to the first pipeline.
    Type: Grant
    Filed: August 11, 2011
    Date of Patent: July 8, 2014
    Assignee: International Business Machines Corporation
    Inventors: Atsushi Fukuda, Masakuni Okada, Kazuto Yamafuji, Takashi Yonezawa
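    A minimal sketch of the three-tier allocation policy described above: try the optimum capacity, fall back to the guaranteed figure, and otherwise reclaim extra memory from an idle second pipeline. The pool model is an assumption.
      def allocate(pool_free, optimum, guaranteed, second_pipeline_extra, second_idle):
          if pool_free >= optimum:
              return optimum                       # optimum capacity available
          if pool_free >= guaranteed:
              return guaranteed                    # guaranteed bandwidth still possible
          if second_idle:
              pool_free += second_pipeline_extra   # reclaim the second pipeline's extra memory
              if pool_free >= guaranteed:
                  return guaranteed
          return 0                                 # allocation deferred until memory frees up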