Patents by Inventor Joaquin J. Aviles
Joaquin J. Aviles has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9426247
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies based on analysis of client-filer transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications.
Type: Grant
Filed: June 23, 2014
Date of Patent: August 23, 2016
Assignee: NetApp, Inc.
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 9357030
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Grant
Filed: July 31, 2015
Date of Patent: May 31, 2016
Assignee: NetApp, Inc.
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Publication number: 20160036938
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Application
Filed: July 31, 2015
Publication date: February 4, 2016
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 9143566
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using spliced cache appliances with packet inspection intelligence. A cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Grant
Filed: January 16, 2008
Date of Patent: September 22, 2015
Assignee: NetApp, Inc.
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 9130968
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Grant
Filed: January 16, 2008
Date of Patent: September 8, 2015
Assignee: NetApp, Inc.
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Publication number: 20140310373
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies based on analysis of client-filer transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications.
Type: Application
Filed: June 23, 2014
Publication date: October 16, 2014
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 8805949
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies based on analysis of client-filer transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications.
Type: Grant
Filed: January 16, 2008
Date of Patent: August 12, 2014
Assignee: NetApp, Inc.
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 8103764
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a perfect hashing memory index technique to rapidly detect predetermined patterns in received packet payloads and retrieve matching patterns from memory by generating a data structure pointer and index offset to directly address the pattern in the datagram memory, thereby accelerating evaluation of the packet with the matching pattern by the host processor.
Type: Grant
Filed: October 14, 2008
Date of Patent: January 24, 2012
Assignee: CacheIQ, Inc.
Inventor: Joaquin J. Aviles
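The idea behind a perfect-hash memory index is that the pattern set is fixed and known in advance, so a hash function can be chosen that maps every pattern to a distinct table slot, making each lookup one hash plus one compare. A minimal sketch of that idea follows; the pattern set, table size, and use of CRC-32 as the hash family are illustrative assumptions, not details from the patent:

```python
# Sketch (illustrative, not the patented design): for a fixed pattern set,
# search for a CRC seed that maps every pattern to a distinct slot, so a
# payload chunk can be matched with one hash and one comparison.
import zlib

PATTERNS = [b"NFS", b"SMB", b"RPC", b"MOUNT"]  # hypothetical pattern set

def build_perfect_table(patterns, size=16):
    for seed in range(1 << 16):            # try seeds until collision-free
        slots = {}
        for p in patterns:
            h = zlib.crc32(p, seed) % size
            if h in slots:
                break                      # collision: discard this seed
            slots[h] = p
        else:
            return seed, slots             # every pattern got its own slot
    raise ValueError("no perfect seed found; grow the table")

seed, table = build_perfect_table(PATTERNS)

def match(chunk):
    # One hash computes the slot; a single compare confirms the match.
    return table.get(zlib.crc32(chunk, seed) % 16) == chunk

print(match(b"NFS"), match(b"XYZ"))
```

With 4 patterns in 16 slots, roughly two thirds of random seeds are already collision-free, so the seed search terminates almost immediately.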
-
Patent number: 7979671
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a dual hash technique to rapidly store and/or retrieve connection state information for cached connections in a plurality of index tables that are indexed by hashing network protocol address information with a pair of irreducible CRC hash algorithms to obtain an index to the memory location of the connection state information.
Type: Grant
Filed: July 28, 2008
Date of Patent: July 12, 2011
Assignee: CacheIQ, Inc.
Inventor: Joaquin J. Aviles
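A common way to realize a dual-CRC scheme like the one the abstract describes is to let one CRC of the flow's address information pick the table slot and store a second, independent CRC as a tag, so a hit can be confirmed without comparing the full addresses. The sketch below assumes that arrangement; the table size, the specific CRC functions (CRC-32 and CRC-16/CCITT from the standard library), and the flow-key encoding are all hypothetical:

```python
# Sketch (assumed arrangement, not the patented one): first CRC of the
# connection 4-tuple selects the index-table slot; a second CRC is kept
# as a verification tag alongside the connection state.
import zlib, binascii

TABLE_SIZE = 1024
index_table = [None] * TABLE_SIZE          # slot -> (tag, state)

def flow_key(src_ip, src_port, dst_ip, dst_port):
    return f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()

def store(state, *flow):
    key = flow_key(*flow)
    slot = zlib.crc32(key) % TABLE_SIZE    # first CRC: table index
    tag = binascii.crc_hqx(key, 0)         # second CRC: verification tag
    index_table[slot] = (tag, state)

def lookup(*flow):
    key = flow_key(*flow)
    entry = index_table[zlib.crc32(key) % TABLE_SIZE]
    if entry and entry[0] == binascii.crc_hqx(key, 0):
        return entry[1]                    # both CRCs agree: genuine hit
    return None

store({"cached": True}, "10.0.0.1", 2049, "10.0.0.2", 700)
print(lookup("10.0.0.1", 2049, "10.0.0.2", 700))
```

A false hit requires both independent CRCs to collide at once, which is why the second hash can stand in for a full address comparison in the fast path.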
-
Patent number: 7941591
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a multi-rank flash DIMM cache memory by pipelining multiple page write and page program operations to different flash memory ranks, thereby improving write speeds to the flash DIMM cache memory.
Type: Grant
Filed: July 28, 2008
Date of Patent: May 10, 2011
Assignee: CacheIQ, Inc.
Inventor: Joaquin J. Aviles
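The win from rank pipelining comes from overlapping the long page-program latency of one rank with buffer transfers to the others. The toy model below makes that concrete by computing completion times under assumed latencies; the 200 µs program time, 25 µs transfer time, and round-robin rank assignment are illustrative numbers, not figures from the patent:

```python
# Toy timing model (assumed latencies) of interleaving page programs
# across flash ranks: transfers into rank buffers are serial on the bus,
# but each rank's program operation runs in the background.
PROGRAM_US = 200      # assumed page-program latency per rank
XFER_US = 25          # assumed time to shift one page into a rank buffer
RANKS = 4

def finish_time(n_pages, ranks):
    rank_free = [0] * ranks        # when each rank finishes its program
    t = 0
    for i in range(n_pages):
        r = i % ranks                           # round-robin rank choice
        t = max(t, rank_free[r]) + XFER_US      # wait for bus and rank
        rank_free[r] = t + PROGRAM_US           # program proceeds in background
    return max(rank_free)

# 8 pages interleaved over 4 ranks versus serialized on a single rank
print(finish_time(8, RANKS), finish_time(8, 1))
```

Under these assumptions the interleaved case finishes in a fraction of the single-rank time, since up to four program operations are in flight at once.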
-
Publication number: 20100095064
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a perfect hashing memory index technique to rapidly detect predetermined patterns in received packet payloads and retrieve matching patterns from memory by generating a data structure pointer and index offset to directly address the pattern in the datagram memory, thereby accelerating evaluation of the packet with the matching pattern by the host processor.
Type: Application
Filed: October 14, 2008
Publication date: April 15, 2010
Inventor: Joaquin J. Aviles
-
Publication number: 20100023674
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a multi-rank flash DIMM cache memory by pipelining multiple page write and page program operations to different flash memory ranks, thereby improving write speeds to the flash DIMM cache memory.
Type: Application
Filed: July 28, 2008
Publication date: January 28, 2010
Inventor: Joaquin J. Aviles
-
Publication number: 20100023726
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a dual hash technique to rapidly store and/or retrieve connection state information for cached connections in a plurality of index tables that are indexed by hashing network protocol address information with a pair of irreducible CRC hash algorithms to obtain an index to the memory location of the connection state information.
Type: Application
Filed: July 28, 2008
Publication date: January 28, 2010
Inventor: Joaquin J. Aviles
-
Publication number: 20090182836
Abstract: A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies based on analysis of client-filer transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications.
Type: Application
Filed: January 16, 2008
Publication date: July 16, 2009
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Publication number: 20090182835
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using spliced cache appliances with packet inspection intelligence. A cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Application
Filed: January 16, 2008
Publication date: July 16, 2009
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Publication number: 20090182945
Abstract: A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
Type: Application
Filed: January 16, 2008
Publication date: July 16, 2009
Inventors: Joaquin J. Aviles, Mark U. Cree, Gregory A. Dahl
-
Patent number: 7134143
Abstract: A pattern matching engine supports high-speed (line rates of at least 2.4 Gbits per second) parallel pattern matching operations in an unanchored fashion. The engine is preferably implemented as a hardware device. A shift register serially receives a string of data stream bytes which are partitioned into a plurality of multi-byte overlapping adjacent stream chunks. Library patterns of bytes to be searched for are similarly partitioned into multi-byte overlapping adjacent table chunks for storage in a look-up table. The plurality of multi-byte overlapping adjacent stream chunks are applied by the register in parallel to the look-up table, with a result being returned which is indicative of whether each stream chunk matches one of the look-up table stored table chunks. The results of the parallel look-up operation are then logically combined to make a match determination.
Type: Grant
Filed: February 4, 2003
Date of Patent: November 7, 2006
Inventors: Gerald S. Stellenberg, Joaquin J. Aviles
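The chunk-and-combine scheme the abstract describes can be illustrated in software: both the library patterns and the stream are cut into overlapping fixed-width chunks, each stream chunk is tested against the chunk table (the step the hardware does in parallel), and the per-chunk results are logically ANDed over a pattern-sized window. A minimal sketch follows; the 4-byte chunk width and the example patterns are assumptions for illustration:

```python
# Sketch of unanchored chunked matching (software stand-in for the
# hardware's parallel table look-ups): a candidate match is declared
# only when a pattern-sized run of consecutive stream chunks all hit
# the chunk table.
CHUNK = 4   # assumed chunk width

def chunks(data, width=CHUNK):
    # overlapping adjacent chunks: data[0:4], data[1:5], ...
    return [data[i:i + width] for i in range(len(data) - width + 1)]

library = [b"attack-sig", b"evil.exe"]            # hypothetical patterns
table = {c for p in library for c in chunks(p)}   # look-up table of chunks

def match_positions(stream, pattern_len):
    hits = [c in table for c in chunks(stream)]   # the parallel look-up step
    span = pattern_len - CHUNK + 1                # chunks covering one pattern
    # logically combine: AND the hit bits over each pattern-sized window
    return [i for i in range(len(hits) - span + 1) if all(hits[i:i + span])]

print(match_positions(b"xxattack-sigyy", len(b"attack-sig")))
```

Because matching is decided per window of chunk hits rather than at a fixed offset, the search is unanchored: the pattern is found wherever it starts in the stream. Chunk collisions between different library patterns can produce false candidates, which a full compare would then reject.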
-
Publication number: 20040151382
Abstract: A pattern matching engine supports high-speed (line rates of at least 2.4 Gbits per second) parallel pattern matching operations in an unanchored fashion. The engine is preferably implemented as a hardware device. A shift register serially receives a string of data stream bytes which are partitioned into a plurality of multi-byte overlapping adjacent stream chunks. Library patterns of bytes to be searched for are similarly partitioned into multi-byte overlapping adjacent table chunks for storage in a look-up table. The plurality of multi-byte overlapping adjacent stream chunks are applied by the register in parallel to the look-up table, with a result being returned which is indicative of whether each stream chunk matches one of the look-up table stored table chunks. The results of the parallel look-up operation are then logically combined to make a match determination.
Type: Application
Filed: February 4, 2003
Publication date: August 5, 2004
Applicant: TippingPoint Technologies, Inc.
Inventors: Gerald S. Stellenberg, Joaquin J. Aviles
-
Patent number: 6161161
Abstract: A system and method for coupling a local bus to a PCI bus are provided. The system comprises a local bus interface for receiving signals from a microprocessor through a local address/data bus and a local control bus. The system further comprises a bus translator coupled to the local bus interface. The bus translator determines if the microprocessor will access a target peripheral PCI device coupled to the local address/data bus. A PCI control bus interface is coupled to the bus translator and signals the target PCI peripheral device via a PCI control bus, such that the local address/data bus and the PCI control bus define a PCI bus.
Type: Grant
Filed: January 8, 1999
Date of Patent: December 12, 2000
Assignee: Cisco Technology, Inc.
Inventors: Craig D. Botkin, Joaquin J. Aviles, Ronald E. Battles