Patents by Inventor Ashok Anand

Ashok Anand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9083708
    Abstract: An end host redundancy elimination system and method to provide redundancy elimination as an end system service. Embodiments of the system and method use optimization techniques that reduce server central processing unit (CPU) load and memory footprint as compared to existing approaches. For server storage, embodiments of the system and method use a suite of highly-optimized data structures for managing metadata and cached payloads. An optimized asymmetric max-match technique exploits the inherent structure in data maintained at the server and client and ensures that client processing load is negligible. A load-adaptive fingerprinting technique is used that is much faster than current fingerprinting techniques while still delivering similar compression. Load-adaptive means that embodiments of the fingerprinting technique can adapt CPU usage depending on server load.
    Type: Grant
    Filed: May 17, 2010
    Date of Patent: July 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ramachandran Ramjee, Bhavish Aggarwal, Pushkar Chitnis, George Varghese, Ashok Anand, Chitra Muthukrishnan, Athula Balachandran
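    Sketch (Python): a minimal, illustrative take on the load-adaptive fingerprinting idea in the abstract above. The stride rule, window size, and use of zlib.adler32 are assumptions for illustration, not the patented algorithm.
      import zlib

      def sample_fingerprints(payload: bytes, load: float, window: int = 32):
          """Return (offset, fingerprint) pairs; `load` in [0, 1] is current server CPU load."""
          stride = 1 + int(load * 7)            # examine fewer positions as load rises
          mask = (1 << 5) - 1                   # keep roughly 1/32 of examined positions
          marks = []
          for i in range(0, max(len(payload) - window, 0), stride):
              fp = zlib.adler32(payload[i:i + window])
              if fp & mask == 0:                # content-based sampling condition
                  marks.append((i, fp))
          return marks

      # Under high load, far fewer windows are hashed for the same data.
      data = b"example payload " * 256
      print(len(sample_fingerprints(data, load=0.0)), len(sample_fingerprints(data, load=0.9)))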
  • Publication number: 20150138968
    Abstract: A capability is provided for scaling Redundancy Elimination (RE) middleboxes. The RE middleboxes include an RE encoding middlebox and an RE decoding middlebox. The RE middleboxes may employ max-match-based RE techniques or chunk-match-based RE techniques. The RE middleboxes may utilize Distributed Hash Tables (DHTs) to maintain their respective content stores. The RE middleboxes may be scaled for use with cloud applications (e.g., for use in transfer of data between a customer network and a cloud site, for use in transfer of data between two cloud sites, or the like).
    Type: Application
    Filed: November 12, 2014
    Publication date: May 21, 2015
    Applicant: ALCATEL LUCENT
    Inventors: Mansoor A. Alicherry, Ashok Anand, Shoban Preeth Chandrabose
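    Sketch (Python): an illustrative way a chunk-match content store could be partitioned across several RE middlebox instances with a consistent-hash ring, in the spirit of the DHT approach above. The node names, virtual-node count, and SHA-1 hashing are assumptions, not details from the application.
      import bisect, hashlib

      class ChunkStoreRing:
          def __init__(self, nodes, vnodes=64):
              points = sorted((self._h(f"{n}#{i}".encode()), n) for n in nodes for i in range(vnodes))
              self._keys = [k for k, _ in points]
              self._nodes = [n for _, n in points]

          @staticmethod
          def _h(data: bytes) -> int:
              return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")

          def node_for_chunk(self, chunk: bytes) -> str:
              """Pick the instance whose partition of the store holds this chunk's fingerprint."""
              idx = bisect.bisect(self._keys, self._h(chunk)) % len(self._keys)
              return self._nodes[idx]

      ring = ChunkStoreRing(["re-node-1", "re-node-2", "re-node-3"])
      print(ring.node_for_chunk(b"some packet chunk"))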
  • Patent number: 9002990
    Abstract: Processing a purge request is disclosed. The purge request is received. An availability state for each content distribution node in a group of content distribution nodes is tracked. Based on the purge request, one or more purge instructions are generated for one or more available state content distribution nodes of the group. Based on the purge request, one or more queued purge instructions are queued for one or more unavailable state content distribution nodes of the group. It is determined that the one or more available state content distribution nodes of the group have completed processing the one or more purge instructions generated for the one or more available state content distribution nodes. Based at least in part on the queuing of the one or more queued purge instructions for the one or more unavailable state nodes, an indication that the purge request has been completed is authorized.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: April 7, 2015
    Assignee: Instart Logic, Inc.
    Inventors: Ashok Anand, Manjunath Bharadwaj Subramanya
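    Sketch (Python): a minimal model of the purge flow described above, under assumed names. Available nodes receive a purge instruction immediately, unavailable nodes get a queued instruction for later replay, and the request can then be reported complete; the synchronous dispatch callback is a simplification of what would be asynchronous in practice.
      from collections import defaultdict

      class PurgeCoordinator:
          def __init__(self, node_states):
              self.node_states = dict(node_states)   # node -> "available" / "unavailable"
              self.queued = defaultdict(list)        # node -> purge instructions to replay

          def handle_purge(self, url, send_instruction) -> bool:
              for node, state in self.node_states.items():
                  if state == "available":
                      send_instruction(node, url)    # dispatch now; assumed to block until acked
                  else:
                      self.queued[node].append(url)  # applied when the node becomes available
              return True                            # safe to report the purge as completed

      coord = PurgeCoordinator({"edge-a": "available", "edge-b": "unavailable"})
      coord.handle_purge("/logo.png", lambda node, url: print(f"purge {url} on {node}"))
      print(dict(coord.queued))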
  • Patent number: 8959481
    Abstract: Techniques for co-relating at least one of a functional design and at least one implementation artifact of a solution with at least one infrastructure component of a target deployment environment for the solution are provided. The techniques include obtaining at least one of a functional design and at least one implementation artifact of a solution, obtaining at least one infrastructure component of a target deployment environment for the solution, and co-relating at least one of a functional design and at least one implementation artifact of a solution with at least one infrastructure component of a target deployment environment for the solution, wherein co-relating comprises discovering at least one system level dependency among the at least one of a functional design and at least one implementation artifact and the at least one infrastructure component.
    Type: Grant
    Filed: April 30, 2009
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ashok Anand, Dipayan Gangopadhyay, Manish Gupta, Manish Sethi
  • Patent number: 8891520
    Abstract: A capability is provided for scaling Redundancy Elimination (RE) middleboxes. The RE middleboxes include an RE encoding middlebox and an RE decoding middlebox. The RE middleboxes may employ max-match-based RE techniques or chunk-match-based RE techniques. The RE middleboxes may utilize Distributed Hash Tables (DHTs) to maintain their respective content stores. The RE middleboxes may be scaled for use with cloud applications (e.g., for use in transfer of data between a customer network and a cloud site, for use in transfer of data between two cloud sites, or the like).
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: November 18, 2014
    Assignee: Alcatel Lucent
    Inventors: Mansoor A. Alicherry, Ashok Anand, Shoban Preeth Chandrabose
  • Publication number: 20140195658
    Abstract: A redundancy elimination (RE) capability is provided. The RE capability enables dynamic control over use of RE within a network. The dynamic control over use of RE within a network may include initial selection of the network locations at which RE is performed, dynamic modification of the network locations at which RE is performed, or the like. The dynamic control over use of RE within a network may include dynamic control over packet cache sizes of packet caches at the network locations at which RE is performed. The dynamic control over use of RE within a network may include determining RE component selection information for a set of nodes of the network and selecting a set of RE components for the set of nodes, from a set of available RE components of the network, based on the RE component selection information.
    Type: Application
    Filed: January 9, 2013
    Publication date: July 10, 2014
    Inventors: Krishna P. Puttaswamy Naga, Ashok Anand
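    Sketch (Python): one illustrative reading of RE component selection, scoring candidate nodes by observed redundancy and spare CPU and sizing their packet caches in proportion. The scoring rule, field names, and cache budget are assumptions, not details from the application.
      def select_re_nodes(stats, budget_mb=1024, max_nodes=2):
          """stats: {node: {"redundancy": fraction of repeated bytes, "cpu_free": fraction}}."""
          scored = sorted(stats.items(),
                          key=lambda kv: kv[1]["redundancy"] * kv[1]["cpu_free"],
                          reverse=True)
          chosen = scored[:max_nodes]
          total = sum(s["redundancy"] for _, s in chosen) or 1.0
          return {node: {"enable_re": True,
                         "cache_mb": round(budget_mb * s["redundancy"] / total)}
                  for node, s in chosen}

      print(select_re_nodes({
          "r1": {"redundancy": 0.45, "cpu_free": 0.30},
          "r2": {"redundancy": 0.20, "cpu_free": 0.80},
          "r3": {"redundancy": 0.05, "cpu_free": 0.90},
      }))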
  • Publication number: 20140195720
    Abstract: Aspects of the present invention provide high-performance indexing for data-intensive systems in which “slicing” is used to organize indexing data on an SSD such that related entries are located together. Slicing enables combining multiple reads into a single “slice read” of related items, offering high read performance. Small in-memory indexes, such as hash tables, bloom filters or LSH tables, may be used as buffers for insert operations to resolve slow random writes on the SSD. When full, these buffers are written to the SSD. The internal architecture of the SSD may also be leveraged to achieve higher performance via parallelism. Such parallelism may occur at the channel-level, the package-level, the die-level and/or the plane-level. Consequently, memory and compute resources are freed for use by higher layer applications, and better performance may be achieved.
    Type: Application
    Filed: January 9, 2013
    Publication date: July 10, 2014
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Srinivasa Akella, Ashok Anand, Aaron Gember
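    Sketch (Python): a minimal model, under assumed names, of the buffering idea described above. Inserts land in a small in-memory table and are written out in slices so that a later lookup needs a single slice read; the hash-based grouping and the in-memory dict standing in for the SSD are simplifications, not the actual data layout.
      from collections import defaultdict

      class SlicedIndex:
          def __init__(self, num_slices=16, buffer_limit=4):
              self.num_slices = num_slices
              self.buffer = defaultdict(dict)   # slice id -> {key: value}, held in memory
              self.flash = defaultdict(dict)    # slice id -> {key: value}, stands in for the SSD
              self.buffer_limit = buffer_limit

          def _slice(self, key) -> int:
              return hash(key) % self.num_slices    # stand-in for grouping related entries

          def put(self, key, value):
              s = self._slice(key)
              self.buffer[s][key] = value
              if len(self.buffer[s]) >= self.buffer_limit:
                  self.flash[s].update(self.buffer.pop(s))   # one sequential "slice write"

          def get(self, key):
              s = self._slice(key)
              if key in self.buffer[s]:
                  return self.buffer[s][key]
              return self.flash[s].get(key)     # one "slice read" fetches related entries together

      idx = SlicedIndex()
      for i in range(10):
          idx.put(f"k{i}", i)
      print(idx.get("k3"))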
  • Publication number: 20140164618
    Abstract: Various embodiments provide a method and apparatus for dynamically allocating resources to processes by using unified resources. In particular, a superVM allows a process from an application to utilize resources (e.g., CPU, memory, and storage) from other VMs. Advantageously, sharing the resources of VMs that are operating below capacity increases cost efficiency, and providing resources to VMs that require additional resources, without the overhead of spawning new VMs, increases application performance. Moreover, legacy applications may utilize resources from multiple VMs without modification.
    Type: Application
    Filed: December 10, 2012
    Publication date: June 12, 2014
    Applicant: ALCATEL-LUCENT
    Inventors: Mansoor A. Alicherry, Ashok Anand, Shoban Preeth Chandrabose
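    Sketch (Python): an illustrative allocator in the spirit of the superVM idea above, satisfying a process's extra memory demand from sibling VMs running below capacity instead of spawning a new VM. The VM names, memory figures, and least-loaded-first policy are assumptions.
      def borrow_resources(demand_mb, vms):
          """vms: {name: {"mem_total": MB, "mem_used": MB}}; returns {vm: MB borrowed} or None."""
          grants, remaining = {}, demand_mb
          for name, vm in sorted(vms.items(), key=lambda kv: kv[1]["mem_used"] / kv[1]["mem_total"]):
              take = min(vm["mem_total"] - vm["mem_used"], remaining)
              if take > 0:
                  grants[name] = take
                  remaining -= take
              if remaining == 0:
                  break
          return grants if remaining == 0 else None   # None: not enough spare capacity

      print(borrow_resources(3000, {
          "vm-a": {"mem_total": 8192, "mem_used": 7000},
          "vm-b": {"mem_total": 8192, "mem_used": 2000},
      }))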
  • Publication number: 20140068602
    Abstract: A virtual network virtual machine may be implemented on a cloud computing facility to control communication among virtual machines executing applications and virtual machines executing middlebox functions. This virtual network virtual machine may provide for automatic scaling of middleboxes according to a heuristic algorithm that monitors the effect of each middlebox on network performance as application virtual machines are scaled. The virtual network virtual machine may also locate virtual machines in actual hardware to further optimize performance.
    Type: Application
    Filed: September 4, 2012
    Publication date: March 6, 2014
    Inventors: Aaron Robert Gember, Robert Daniel Grandl, Theophilus Aderemi Benson, Ashok Anand, Srinivasa Aditya Akella
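    Sketch (Python): a minimal stand-in for the scaling heuristic described above: a middlebox tier that looks like the bottleneck gains an instance, and one that is clearly idle loses an instance. The metric names and thresholds are invented for illustration and are not the patented heuristic.
      def plan_scaling(middleboxes):
          """middleboxes: {name: {"instances": n, "utilization": 0..1, "added_latency_ms": float}}."""
          plan = {}
          for name, m in middleboxes.items():
              if m["utilization"] > 0.8 or m["added_latency_ms"] > 5.0:
                  plan[name] = m["instances"] + 1       # scale out the bottleneck tier
              elif m["utilization"] < 0.2 and m["instances"] > 1:
                  plan[name] = m["instances"] - 1       # scale in when idle
              else:
                  plan[name] = m["instances"]
          return plan

      print(plan_scaling({
          "firewall": {"instances": 2, "utilization": 0.9, "added_latency_ms": 7.0},
          "ids":      {"instances": 3, "utilization": 0.1, "added_latency_ms": 0.5},
      }))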
  • Publication number: 20140052973
    Abstract: Various embodiments provide a method and apparatus of providing an RE-aware technique for placing slots based on redundancy across and within slot communication pairs.
    Type: Application
    Filed: August 14, 2012
    Publication date: February 20, 2014
    Applicants: Alcatel-Lucent India Limited, Alcatel-Lucent USA Inc.
    Inventors: Krishna P. Puttaswamy Naga, Ashok Anand
  • Publication number: 20140003421
    Abstract: A capability is provided for scaling Redundancy Elimination (RE) middleboxes. The RE middleboxes include an RE encoding middlebox and an RE decoding middlebox. The RE middleboxes may employ max-match-based RE techniques or chunk-match-based RE techniques. The RE middleboxes may utilize Distributed Hash Tables (DHTs) to maintain their respective content stores. The RE middleboxes may be scaled for use with cloud applications (e.g., for use in transfer of data between a customer network and a cloud site, for use in transfer of data between two cloud sites, or the like).
    Type: Application
    Filed: June 28, 2012
    Publication date: January 2, 2014
    Inventors: Mansoor A. Alicherry, Ashok Anand, Shoban Preeth Chandrabose
  • Patent number: 8509237
    Abstract: A network employing redundancy-aware hardware may actively allocate decompression tasks among different devices along a single path to improve data throughput. The allocation can be performed by a hash or similar process operating on a header of the packets to distribute caching according to predefined ranges of hash values without significant additional communication overhead. Decompression of packets may be similarly distributed by marking shim values to match the earlier caching of antecedent packets. Nodes may use coordinated cache sizes and organizations to eliminate the need for separate cache protocol communications.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: August 13, 2013
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Srinivasa Aditya Akella, Ashok Anand, Vyas Sekar
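    Sketch (Python): an illustrative version of the hash-range allocation described above, mapping a packet header into equal-width hash ranges so each on-path node knows, without extra coordination traffic, which packets it caches and later decompresses. The header encoding and use of CRC32 are assumptions.
      import zlib

      def responsible_node(header: bytes, path_nodes: list) -> str:
          """Map a packet header to the on-path node that caches (and later decompresses) it."""
          h = zlib.crc32(header) & 0xFFFFFFFF
          span = (0xFFFFFFFF // len(path_nodes)) + 1    # equal-width hash ranges
          return path_nodes[min(h // span, len(path_nodes) - 1)]

      path = ["ingress", "core-1", "egress"]
      print(responsible_node(b"10.0.0.1->10.0.0.9:proto6:5001->80", path))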
  • Patent number: 8261280
    Abstract: A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container.
    Type: Grant
    Filed: May 28, 2008
    Date of Patent: September 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ashok Anand, Manish Sethi
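    Sketch (Python): a minimal model, with invented cap values, of table-driven thread allocation: each container holds its slice of an offline-computed table bounding how many threads each transaction type may hold there, and a request is admitted only while that bound has headroom, so a circular wait cannot form. This illustrates the general idea, not the patented method.
      import threading

      class Container:
          def __init__(self, name, thread_caps):
              self.name = name
              self.caps = dict(thread_caps)     # transaction type -> max concurrent threads here
              self.in_use = {t: 0 for t in thread_caps}
              self.lock = threading.Lock()

          def try_allocate(self, txn_type) -> bool:
              """Admit only if the table leaves headroom; otherwise the caller queues or retries."""
              with self.lock:
                  if self.in_use[txn_type] < self.caps[txn_type]:
                      self.in_use[txn_type] += 1
                      return True
                  return False

          def release(self, txn_type):
              with self.lock:
                  self.in_use[txn_type] -= 1

      orders = Container("OrderService", {"checkout": 2, "refund": 1})
      print(orders.try_allocate("checkout"), orders.try_allocate("refund"))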
  • Patent number: 8161484
    Abstract: A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container.
    Type: Grant
    Filed: May 29, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ashok Anand, Manish Sethi
  • Patent number: 8087022
    Abstract: A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container.
    Type: Grant
    Filed: November 27, 2007
    Date of Patent: December 27, 2011
    Assignee: International Business Machines Corporation
    Inventors: Ashok Anand, Manish Sethi
  • Publication number: 20110282932
    Abstract: An end host redundancy elimination system and method to provide redundancy elimination as an end system service. Embodiments of the system and method use optimization techniques that reduce server central processing unit (CPU) load and memory footprint as compared to existing approaches. For server storage, embodiments of the system and method use a suite of highly-optimized data structures for managing metadata and cached payloads. An optimized asymmetric max-match technique exploits the inherent structure in data maintained at the server and client and ensures that client processing load is negligible. A load-adaptive fingerprinting technique is used that is much faster than current fingerprinting techniques while still delivering similar compression. Load-adaptive means that embodiments of the fingerprinting technique can adapt CPU usage depending on server load.
    Type: Application
    Filed: May 17, 2010
    Publication date: November 17, 2011
    Applicant: Microsoft Corporation
    Inventors: Ramachandran Ramjee, Bhavish Aggarwal, Pushkar Chitnis, George Varghese, Ashok Anand, Chitra Muthukrishnan, Athula Balachandran
  • Patent number: 8060878
    Abstract: A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container.
    Type: Grant
    Filed: November 27, 2007
    Date of Patent: November 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Ashok Anand, Manish Sethi
  • Publication number: 20100329256
    Abstract: A network employing redundancy-aware hardware may actively allocate decompression tasks among different devices along a single path to improve data throughput. The allocation can be performed by a hash or similar process operating on a header of the packets to distribute caching according to predefined ranges of hash values without significant additional communication overhead. Decompression of packets may be similarly distributed by marking shim values to match the earlier caching of antecedent packets. Nodes may use coordinated cache sizes and organizations to eliminate the need for separate cache protocol communications.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Inventors: Srinivasa Aditya Akella, Ashok Anand, Vyas Sekar
  • Publication number: 20100281455
    Abstract: Techniques for co-relating at least one of a functional design and at least one implementation artifact of a solution with at least one infrastructure component of a target deployment environment for the solution are provided. The techniques include obtaining at least one of a functional design and at least one implementation artifact of a solution, obtaining at least one infrastructure component of a target deployment environment for the solution, and co-relating at least one of a functional design and at least one implementation artifact of a solution with at least one infrastructure component of a target deployment environment for the solution, wherein co-relating comprises discovering at least one system level dependency among the at least one of a functional design and at least one implementation artifact and the at least one infrastructure component.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 4, 2010
    Applicant: International Business Machines Corporation
    Inventors: Ashok Anand, Dipayan Gangopadhyay, Manish Gupta, Manish Sethi
  • Publication number: 20100254378
    Abstract: A network employing multiple redundancy-aware routers that can eliminate the transmission of redundant data is greatly improved by steering redundant data preferentially into common data paths possibly contrary to other routing paradigms. By collecting redundant data in certain pathways, the effectiveness of the redundancy-aware routers is substantially increased.
    Type: Application
    Filed: June 8, 2009
    Publication date: October 7, 2010
    Inventors: Srinivasa Aditya Akella, Ashok Anand, Srinivasan Seshan
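    Sketch (Python): an illustrative path chooser in the spirit of the abstract above, preferring the candidate path that overlaps most with links already carrying traffic redundant with this flow, so the RE routers on those shared links can suppress the repeats. The link naming and overlap score are assumptions.
      def pick_path(candidates, redundant_links):
          """candidates: list of paths (each a list of link ids); redundant_links: set of link ids."""
          return max(candidates, key=lambda path: sum(link in redundant_links for link in path))

      paths = [["a-b", "b-c", "c-d"], ["a-e", "e-d"]]
      print(pick_path(paths, redundant_links={"a-b", "b-c"}))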