Patents Assigned to OCARINA NETWORKS
  • Publication number: 20110125722
    Abstract: Mechanisms are provided for performing efficient compression and deduplication of data segments. Compression algorithms are learning algorithms that perform better when data segments are large. Deduplication algorithms, however, perform better when data segments are small, as more duplicate small segments are likely to exist. As an optimizer is processing and storing data segments, the optimizer applies the same compression context to compress multiple individual deduplicated data segments as though they are one segment. By compressing deduplicated data segments together within the same context, data reduction can be improved for both deduplication and compression. Mechanisms are applied to compensate for possible performance degradation.
    Type: Application
    Filed: November 23, 2009
    Publication date: May 26, 2011
    Applicant: Ocarina Networks
    Inventors: Goutham Rao, Murali Bashyam, Vinod Jayaraman
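The idea of applying one compression context across many small deduplicated segments can be sketched in Python with the standard `zlib` module. This is an illustrative sketch, not the patented implementation: the segment contents are invented, and `zlib.compressobj` stands in for whatever compressor the optimizer actually uses. The point it demonstrates is that a single compressor sees all segments in one LZ77 window, so later segments can reference matches in earlier ones.

```python
import zlib

# Hypothetical small deduplicated segments with cross-segment redundancy.
segments = [b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 4,
            b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n" * 4,
            b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n" * 4]

# Naive approach: each segment is compressed in its own context,
# paying per-stream overhead and losing cross-segment matches.
individual = sum(len(zlib.compress(s)) for s in segments)

# Shared context: one compressor processes all segments as though
# they were one segment, so redundancy between segments is exploited.
shared = zlib.compressobj()
joint = sum(len(shared.compress(s)) for s in segments) + len(shared.flush())

print(individual, joint)  # joint is typically smaller
```

The trade-off the abstract hints at (performance compensation) follows from this design: reading one segment back may require decompressing its predecessors in the same context.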
  • Publication number: 20110125719
    Abstract: Mechanisms are provided for efficiently detecting segments for deduplication. Data is analyzed to determine file types and file components. File types such as images may have optimal data segment boundaries set at the file boundaries. Other file types such as container files are delayered to extract objects to set optimal data segment boundaries based on file type or based on the boundaries of the individual objects. Storage of unnecessary information is minimized in a deduplication dictionary while allowing for effective deduplication.
    Type: Application
    Filed: November 23, 2009
    Publication date: May 26, 2011
    Applicant: Ocarina Networks
    Inventor: Vinod Jayaraman
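Type-aware segment-boundary selection as described above can be sketched as follows. The extension table and the fixed-size fallback are assumptions for illustration; only the general strategy (whole-file segments for images, per-object segments for delayered containers) comes from the abstract. ZIP is used here as a stand-in container format since Python's `zipfile` can extract its embedded objects.

```python
import io
import os
import zipfile

IMAGE_EXTS = {".jpg", ".png", ".gif"}  # assumed type table, for illustration

def segment(name: str, data: bytes) -> list[bytes]:
    """Pick deduplication segment boundaries based on file type."""
    ext = os.path.splitext(name)[1].lower()
    if ext in IMAGE_EXTS:
        # Already-compressed images rarely share sub-file segments:
        # set the segment boundary at the file boundary.
        return [data]
    if ext == ".zip":
        # Container files are delayered: each embedded object becomes
        # its own segment, so identical members dedupe across archives.
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return [zf.read(n) for n in zf.namelist()]
    # Fallback for unknown types: fixed-size segments.
    return [data[i:i + 4096] for i in range(0, len(data), 4096)]
```

A deduplication dictionary built over these segments stores one entry per whole image or per container member instead of many entries for boundaries that could never match.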
  • Publication number: 20110082840
    Abstract: Mechanisms are provided for efficiently determining commonality in a deduplicated data set in a scalable manner regardless of the number of deduplicated files or the number of stored segments. Information is generated and maintained during deduplication to allow scalable and efficient determination of data segments shared in a particular file, other files sharing data segments included in a particular file, the number of files sharing a data segment, etc. Data need not be expanded or uncompressed. Deduplication processing can be validated and verified during commonality detection.
    Type: Application
    Filed: October 6, 2009
    Publication date: April 7, 2011
    Applicant: Ocarina Networks
    Inventor: Vinod Jayaraman
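The commonality queries listed in the abstract (segments in a file, other files sharing them, per-segment reference counts) reduce to two maps maintained during deduplication. A minimal in-memory sketch, with class and method names invented for illustration:

```python
from collections import defaultdict

class DedupIndex:
    """Maps maintained while deduplicating, so commonality queries never
    require expanding or uncompressing the stored data itself."""
    def __init__(self):
        self.file_to_segments = defaultdict(set)   # file -> segment ids
        self.segment_to_files = defaultdict(set)   # segment id -> files

    def record(self, filename, segment_id):
        self.file_to_segments[filename].add(segment_id)
        self.segment_to_files[segment_id].add(filename)

    def files_sharing(self, filename):
        """Other files that share at least one segment with `filename`."""
        out = set()
        for seg in self.file_to_segments[filename]:
            out |= self.segment_to_files[seg]
        out.discard(filename)
        return out

    def ref_count(self, segment_id):
        """Number of files referencing a given segment."""
        return len(self.segment_to_files[segment_id])
```

Because both maps are updated at deduplication time, query cost scales with the segments of the file being asked about, not with the total number of deduplicated files or stored segments.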
  • Publication number: 20110071989
    Abstract: A system provides file aware block level deduplication in a system having multiple clients connected to a storage subsystem over a network such as an Internet Protocol (IP) network. The system includes client components and storage subsystem components. Client components include a walker that traverses the namespace looking for files that meet the criteria for optimization, a file system daemon that rehydrates the files, and a filter driver that watches all operations going to the file system. Storage subsystem components include an optimizer resident on the nodes of the storage subsystem. The optimizer can use idle processor cycles to perform optimization. Sub-file compression can be performed at the storage subsystem.
    Type: Application
    Filed: August 17, 2010
    Publication date: March 24, 2011
    Applicant: OCARINA NETWORKS, INC.
    Inventors: Mike Wilson, Parthiban Munusamy, Carter George, Murali Bashyam, Vinod Jayaraman, Goutham Rao
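The client-side walker component can be sketched with `os.walk`. The selection criteria used here (minimum size and age) are assumptions for illustration; the abstract says only that the walker traverses the namespace looking for files that meet the criteria for optimization.

```python
import os
import time

def walk_candidates(root, min_size=64 * 1024, min_age_days=30):
    """Walker sketch: traverse the namespace and yield files that meet
    (assumed) optimization criteria -- large enough and cold enough that
    handing them to the storage-side optimizer is worthwhile."""
    cutoff = time.time() - min_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished mid-walk
            if st.st_size >= min_size and st.st_mtime <= cutoff:
                yield path
```

In the described system this walker runs on the client; the filter driver and file system daemon handle live access to files the optimizer has already transformed.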
  • Publication number: 20110066628
    Abstract: Mechanisms are provided for efficiently improving a dictionary used for data deduplication. Dictionaries are used to hold hash key and location pairs for deduplicated data. Strong hash keys prevent collisions but weak hash keys are more computation and storage efficient. Mechanisms are provided to use both a weak hash key and a strong hash key. Weak hash keys and corresponding location pairs are stored in an improved dictionary while strong hash keys are maintained with the deduplicated data itself. The need for having uniqueness from a strong hash function is balanced with the deduplication dictionary space savings from a weak hash function.
    Type: Application
    Filed: August 17, 2010
    Publication date: March 17, 2011
    Applicant: OCARINA NETWORKS, INC.
    Inventor: Vinod Jayaraman
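The weak/strong split can be sketched concretely: the dictionary holds only a cheap 4-byte weak key per location, while the expensive strong digest travels with the stored segment and is checked on a weak-key hit. Adler-32 and SHA-256 are stand-ins chosen for illustration; the abstract does not name specific hash functions.

```python
import hashlib
import zlib

class DedupDictionary:
    """Sketch of a dictionary keyed by a weak hash; the strong hash is
    stored alongside the deduplicated segment, not in the dictionary."""
    def __init__(self):
        self.index = {}   # weak key -> location (here: a list offset)
        self.store = []   # (strong_digest, segment_bytes)

    def put(self, segment: bytes) -> int:
        weak = zlib.adler32(segment)               # small, cheap key
        strong = hashlib.sha256(segment).digest()  # kept with the data
        loc = self.index.get(weak)
        if loc is not None and self.store[loc][0] == strong:
            return loc                             # verified duplicate
        # Weak miss, or a weak collision whose strong hash differs:
        # store a fresh copy together with its strong digest.
        self.store.append((strong, segment))
        loc = len(self.store) - 1
        self.index[weak] = loc
        return loc
```

The dictionary entry shrinks from a 32-byte digest to a 4-byte key, while the per-segment strong hash still guarantees that a weak-key collision never causes false deduplication.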
  • Patent number: 7864083
    Abstract: Embodiments described herein relate to compression and decompression of data consisting of a one dimensional time series of floating point numbers. A compressor may comprise a lossless stage and in some embodiments a lossy stage in addition to the lossless stage. The lossy stage quantizes the data by discarding some of the least significant bits as specified by the user. The lossless stage uses a context mixing algorithm with two bit-wise predictive models whose predictions are combined and fed to an arithmetic coder. One model is a direct context model using the most significant bits of prior numeric samples as context. The other model is the output of an adaptive filter, in which the approximate predicted numeric value is used as context to model the actual value. A corresponding decompressor uses the same lossless model with the arithmetic coder replaced by an arithmetic decoder.
    Type: Grant
    Filed: May 21, 2009
    Date of Patent: January 4, 2011
    Assignee: Ocarina Networks, Inc.
    Inventor: Matthew Mahoney
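The lossy stage of this patent, quantizing floating-point samples by discarding user-specified least significant bits, can be sketched directly with `struct`. The context-mixing lossless stage is far more involved and is not reproduced here; this shows only the quantization step.

```python
import struct

def quantize(samples, drop_bits):
    """Lossy stage sketch: zero the `drop_bits` least significant
    mantissa bits of each IEEE-754 double, trading precision for
    compressibility as specified by the user."""
    mask = ~((1 << drop_bits) - 1) & 0xFFFFFFFFFFFFFFFF
    out = []
    for x in samples:
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (q,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
        out.append(q)
    return out
```

Runs of zeroed low-order bits are highly predictable, which is what makes the downstream bit-wise predictive models and arithmetic coder effective on the quantized stream.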
  • Publication number: 20100094813
    Abstract: A data de-duplication system is used with network attached storage and serves to reduce data duplication and file storage costs. Techniques utilizing both symlinks and hardlinks ensure efficient file and data cleanup on deletion and avoid data loss in the event of crashes.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 15, 2010
    Applicant: OCARINA NETWORKS
    Inventors: Eric Brueggemann, Goutham Rao, Mark Taylor, Murali Bashyam, Hui Huang
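The hardlink side of this scheme can be sketched as content-addressed deduplication: identical files are collapsed onto one inode via `os.link`, with a link-then-rename sequence so a crash never leaves a file without its contents. The store layout and naming are assumptions for illustration; the symlink techniques and deletion bookkeeping the abstract mentions are elided.

```python
import hashlib
import os

def dedupe_into_store(path, store_dir):
    """Replace `path` with a hardlink to a content-addressed copy in
    `store_dir`; identical files end up sharing one inode."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    canon = os.path.join(store_dir, digest)
    if not os.path.exists(canon):
        os.link(path, canon)   # first copy of this content seeds the store
        return
    tmp = path + ".dedup-tmp"
    os.link(canon, tmp)        # link first, then rename: os.replace is
    os.replace(tmp, path)      # atomic, so a crash cannot lose the data
```

Deleting one user-visible file then only drops a link count; the content survives until the store's own reference is cleaned up.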
  • Publication number: 20100088277
    Abstract: Embodiments deploy delayering techniques so that the relationships between successive versions of a rich-media file become apparent. As a result, modified rich-media files present far smaller storage overhead as compared to traditional application-unaware snapshot and versioning implementations. Optimized file data is stored in suitcases. As a file is versioned, each new version of the file is placed in the same suitcase as the previous version, allowing embodiments to employ correlation techniques to enhance optimization savings.
    Type: Application
    Filed: October 7, 2009
    Publication date: April 8, 2010
    Applicant: OCARINA NETWORKS
    Inventors: Goutham Rao, Eric Brueggemann, Carter George
  • Publication number: 20090289819
    Abstract: Embodiments described herein relate to compression and decompression of data consisting of a one dimensional time series of floating point numbers. A compressor may comprise a lossless stage and in some embodiments a lossy stage in addition to the lossless stage. The lossy stage quantizes the data by discarding some of the least significant bits as specified by the user. The lossless stage uses a context mixing algorithm with two bit-wise predictive models whose predictions are combined and fed to an arithmetic coder. One model is a direct context model using the most significant bits of prior numeric samples as context. The other model is the output of an adaptive filter, in which the approximate predicted numeric value is used as context to model the actual value. A corresponding decompressor uses the same lossless model with the arithmetic coder replaced by an arithmetic decoder.
    Type: Application
    Filed: May 21, 2009
    Publication date: November 26, 2009
    Applicant: OCARINA NETWORKS
    Inventor: Matthew Mahoney
  • Publication number: 20090240718
    Abstract: Mechanisms are provided for optimizing file data compressed using deflate mechanisms such as the ZLIB Compressed Data Format Specification and the DEFLATE Compressed Data Format Specification. Deflate mechanisms output different deflate file data depending on specific file data parameters. An optimization tool decompresses the deflate file data and outputs an optimized data stream. When a client application attempts to access the deflate data, the tool deoptimizes the optimized data stream and applies the same deflate algorithm to generate deflate file data. Although the deflate algorithm is applied without using the file data parameters used to generate the original deflate file data, substitute deflate file data is produced.
    Type: Application
    Filed: March 21, 2008
    Publication date: September 24, 2009
    Applicant: OCARINA NETWORKS
    Inventors: Goutham Rao, Eric Brueggemann
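The core observation here, that deflate output can be regenerated on demand even without the original encoder's parameters, can be sketched with `zlib`. The optimized intermediate form is stubbed out as the raw decompressed bytes; a real optimizer would store something more compact. The key property shown is that the substitute deflate stream need not match the original byte-for-byte, only decompress to identical file data.

```python
import zlib

def optimize(deflate_data: bytes) -> bytes:
    """Decompress the vendor's deflate output so a stronger optimizer
    can store it (stubbed here as the raw decompressed bytes)."""
    return zlib.decompress(deflate_data)

def deoptimize(raw: bytes) -> bytes:
    """Re-apply deflate with fixed parameters on client access. The
    substitute stream may differ from the original deflate bytes, but
    it decompresses to identical file data, which is what clients read."""
    return zlib.compress(raw, 9)
```

This is why the abstract can promise transparency: any conforming inflater accepts the substitute stream, regardless of which parameters produced the original.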
  • Publication number: 20090216788
    Abstract: Mechanisms are provided for optimizing multiple files in an efficient format that allows maintenance of the original namespace. Multiple files and associated metadata are written to a suitcase file. The suitcase file includes index information for accessing compressed data associated with compacted files. A hardlink to the suitcase file includes an index number used to access the appropriate index information. A simulated link to a particular file maintains the name of the particular file prior to compaction.
    Type: Application
    Filed: February 27, 2008
    Publication date: August 27, 2009
    Applicant: OCARINA NETWORKS
    Inventors: Goutham Rao, Eric Brueggemann, Murali Bashyam, Carter George, Mark Taylor
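A suitcase file as described, compressed data for many compacted files plus index information keyed by an index number, can be modeled minimally in memory. The hardlink and simulated-link namespace machinery is elided; class and method names are invented for illustration.

```python
import zlib

class Suitcase:
    """Sketch of a suitcase: an append-only blob of compressed file
    data plus an index mapping index numbers to (offset, length). In
    the described system, a hardlink name carries the index number and
    a simulated link preserves the file's original name."""
    def __init__(self):
        self.blob = bytearray()
        self.index = {}           # index number -> (offset, length)

    def add(self, data: bytes) -> int:
        comp = zlib.compress(data)
        idx = len(self.index)
        self.index[idx] = (len(self.blob), len(comp))
        self.blob += comp
        return idx

    def get(self, idx: int) -> bytes:
        off, ln = self.index[idx]
        return zlib.decompress(bytes(self.blob[off:off + ln]))
```

Packing many small compacted files into one suitcase also amortizes per-file storage overhead, while the index keeps each file independently retrievable.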
  • Publication number: 20090216774
    Abstract: Mechanisms are provided for optimizing files while allowing application servers access to metadata associated with preoptimized versions of the files. During file optimization involving compression and/or compaction, file metadata changes. In order to allow file optimization in a manner transparent to application servers, the metadata associated with preoptimized versions of the files is maintained in a metadata database as well as in an optimized version of the files themselves.
    Type: Application
    Filed: February 27, 2008
    Publication date: August 27, 2009
    Applicant: OCARINA NETWORKS
    Inventors: Goutham Rao, Eric Brueggemann, Murali Bashyam, Carter George, Mark Taylor
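The metadata-preservation idea can be sketched as a shim that snapshots a file's attributes before optimization and serves that preoptimized view afterward. The attribute set (size, mtime) and the in-memory dict standing in for the metadata database are assumptions for illustration.

```python
import os

class MetadataShim:
    """Record each file's pre-optimization metadata so application
    servers still see the original attributes after the file has been
    compressed or compacted (an in-memory stand-in for the metadata
    database the abstract describes)."""
    def __init__(self):
        self.db = {}  # path -> (size, mtime)

    def snapshot(self, path):
        st = os.stat(path)
        self.db[path] = (st.st_size, st.st_mtime)

    def stat(self, path):
        """Serve the preoptimized view when available, else fall through."""
        if path in self.db:
            return self.db[path]
        st = os.stat(path)
        return (st.st_size, st.st_mtime)
```

Keeping a second copy of the metadata inside the optimized file itself, as the abstract notes, lets the database be rebuilt if it is lost.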