Patents by Inventor Arun K. Iyengar
Arun K. Iyengar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9584620
Abstract: Embodiments include methods, systems, and computer program products for caching in storage clients. In some embodiments, a storage client for accessing a storage service from a computer program may be provided. A cache may be integrated within the storage client to reduce the number of accesses to the storage service. An application, implemented by a computer program, may use the cache to reduce accesses to the storage service. In response to the storage service being unresponsive or responding too slowly, the application may use the cache to continue running without communicating with the storage service.
Type: Grant
Filed: December 31, 2015
Date of Patent: February 28, 2017
Assignee: International Business Machines Corporation
Inventor: Arun K. Iyengar
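The fallback behavior described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names are hypothetical, and `TimeoutError` stands in for whatever failure signal a real storage service would raise.

```python
class CachingStorageClient:
    """Sketch of a storage client with an integrated cache.

    Reads go to the backing storage service when it responds in time;
    when it is unresponsive or too slow, a cached copy lets the
    application continue without contacting the service.
    """

    def __init__(self, storage, timeout=0.5):
        self.storage = storage    # any object with get(key, timeout=...) that may raise TimeoutError
        self.timeout = timeout
        self.cache = {}

    def get(self, key):
        try:
            value = self.storage.get(key, timeout=self.timeout)
            self.cache[key] = value          # keep the cache warm for future fallbacks
            return value
        except TimeoutError:
            # Storage unresponsive: serve the cached copy if one exists.
            if key in self.cache:
                return self.cache[key]
            raise
```

Every successful read refreshes the cache, so the window of data the application can survive on during an outage grows with normal use.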
-
Patent number: 9436614
Abstract: In a computing system including an application executing on top of a virtualization control layer, wherein the virtualization control layer maps portions of a virtual memory to portions of a physical memory, a method for managing memory including: identifying, by the application, a range of virtual memory whose probability of being replicated in the virtual memory exceeds a given threshold; obtaining, by the application, at least one memory address corresponding to the range of virtual memory; and passing, from the application to the virtualization control layer, an identifier for the range of virtual memory and the memory address corresponding to the range of virtual memory, wherein the identifier is useable by the virtualization control layer to identify similar ranges within the virtual memory.
Type: Grant
Filed: May 2, 2013
Date of Patent: September 6, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Michael H. Dawson, Arun K. Iyengar, Graeme Johnson
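The core idea of this family of patents (which also covers 9355039 and publication 20140331017 below) is that the application, not the hypervisor, identifies memory ranges likely to be replicated and reports them downward. A user-level analogue can be sketched in Python; the page size, the hash-count likelihood proxy, and the tuple format passed to the control layer are all assumptions for illustration.

```python
import hashlib

PAGE = 4096  # assumed page granularity for the sketch

def find_replicated_ranges(memory: bytes, threshold: float = 0.5):
    """Flag page-aligned ranges whose probability of being replicated
    exceeds `threshold`, using the fraction of pages sharing a content
    hash as a crude likelihood estimate.

    Returns (offset, length, identifier) tuples of the kind an
    application could pass to a virtualization control layer so it can
    locate and merge similar ranges.
    """
    pages = [memory[i:i + PAGE] for i in range(0, len(memory), PAGE)]
    counts = {}
    for p in pages:
        h = hashlib.sha256(p).hexdigest()
        counts[h] = counts.get(h, 0) + 1

    hints = []
    for idx, p in enumerate(pages):
        h = hashlib.sha256(p).hexdigest()
        # Likelihood proxy: fraction of all pages with identical content.
        if counts[h] / len(pages) > threshold:
            hints.append((idx * PAGE, len(p), h))
    return hints
```

The identifier (here a content hash) is what lets the control layer find similar ranges elsewhere in virtual memory without scanning everything itself.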
-
Patent number: 9355039
Abstract: In a computing system including an application executing on top of a virtualization control layer, wherein the virtualization control layer maps portions of a virtual memory to portions of a physical memory, an apparatus for managing memory configured to: identify, by the application, a range of virtual memory whose probability of being replicated in the virtual memory exceeds a given threshold; obtain, by the application, at least one memory address corresponding to the range of virtual memory; and pass, from the application to the virtualization control layer, an identifier for the range of virtual memory and the memory address corresponding to the range of virtual memory, wherein the identifier is useable by the virtualization control layer to identify similar ranges within the virtual memory.
Type: Grant
Filed: August 12, 2013
Date of Patent: May 31, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Michael H. Dawson, Arun K. Iyengar, Graeme Johnson
-
Patent number: 9218100
Abstract: An embodiment of the invention includes a system for partitioning asset management plugins. The system includes an application program interface for performing basic CRUD functions on assets having multiple asset types. At least one plugin having plugin components is provided, wherein the plugin manages at least one asset having a specific asset type (of the multiple asset types). The plugin components include a CRUD component, a state component, an actions component, and/or a view component. The system further includes plugin containers for hosting the plugin components, the plugin containers including at least one client-side plugin container and at least one server-side plugin container. The plugin components are partitioned and distributed to the plugin containers by a plugin server based on the capabilities of the client.
Type: Grant
Filed: March 4, 2010
Date of Patent: December 22, 2015
Assignee: International Business Machines Corporation
Inventors: Judah M. Diament, Grant J. Larsen, Arun K. Iyengar, Thomas A. Mikalsen, Isabelle M. Rouvellou, Ignacio Silva-Lepe, Revathi Subramanian
-
Patent number: 8909737
Abstract: Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server.
Type: Grant
Filed: September 4, 2013
Date of Patent: December 9, 2014
Assignee: International Business Machines Corporation
Inventors: Wei Gao, Arun K. Iyengar, Mudhakar Srivatsa
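The admission decision at the heart of this abstract (also covered by 8577993 and publications 20140067989 and 20120297008 below) is whether an input provenance record is likely enough to feed a future aggregation to be worth caching. A minimal sketch, with an assumed reference-counting likelihood estimator standing in for whatever a real system would use:

```python
class ProvenanceCache:
    """Sketch of likelihood-based cache admission for provenance records.

    A record is admitted only if its estimated probability of being used
    to produce aggregated provenance data exceeds `admit_threshold`.
    The estimator (each item's share of observed aggregation inputs) is
    an illustrative stand-in, not the patent's method.
    """

    def __init__(self, admit_threshold=0.3):
        self.admit_threshold = admit_threshold
        self.store = {}
        self.uses = {}        # item id -> times it fed an aggregation
        self.total_uses = 0

    def record_aggregation(self, input_ids):
        """Note which input items were used to produce an aggregate."""
        for item_id in input_ids:
            self.uses[item_id] = self.uses.get(item_id, 0) + 1
            self.total_uses += 1

    def likelihood(self, item_id):
        if self.total_uses == 0:
            return 0.0
        return self.uses.get(item_id, 0) / self.total_uses

    def maybe_cache(self, item_id, record):
        """Admit the record only if it is likely to be aggregated again."""
        if self.likelihood(item_id) > self.admit_threshold:
            self.store[item_id] = record
            return True
        return False
```

Admission control like this keeps the cache from filling with provenance records that are stored once and never contribute to an aggregate.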
-
Publication number: 20140331017
Abstract: In a computing system including an application executing on top of a virtualization control layer, wherein the virtualization control layer maps portions of a virtual memory to portions of a physical memory, an apparatus for managing memory configured to: identify, by the application, a range of virtual memory whose probability of being replicated in the virtual memory exceeds a given threshold; obtain, by the application, at least one memory address corresponding to the range of virtual memory; and pass, from the application to the virtualization control layer, an identifier for the range of virtual memory and the memory address corresponding to the range of virtual memory, wherein the identifier is useable by the virtualization control layer to identify similar ranges within the virtual memory.
Type: Application
Filed: August 12, 2013
Publication date: November 6, 2014
Inventors: Michael H. Dawson, Arun K. Iyengar, Graeme Johnson
-
Publication number: 20140123155
Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about the frequencies of compound requests received and of the individual requests comprising them. Request types that frequently occur together in a compound request are associated with the same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about the amount of communication between said applications, and using said information to place said applications on said nodes so as to minimize communication among said nodes.
Type: Application
Filed: January 8, 2014
Publication date: May 1, 2014
Applicant: International Business Machines Corporation
Inventors: Paul M. Dantzig, Arun K. Iyengar, Francis N. Parr, Gong Su
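The first technique in this abstract (shared with publication 20140122320 below) can be illustrated with a small placement sketch. Everything here is an assumption for demonstration: the pair-frequency counting, the greedy grouping, and the round-robin spread of groups across nodes are one plausible realization, not the publication's algorithm.

```python
from itertools import combinations

def assign_request_types(compound_requests, nodes):
    """Greedy sketch: request types that frequently co-occur in compound
    requests are grouped, and each group is placed on a single node, so
    a compound request can usually be executed without cross-node traffic.

    `compound_requests` is a list of sets of request types observed
    together; `nodes` is a list of node names.
    """
    # Count how often each pair of request types appears together.
    freq = {}
    for req in compound_requests:
        for a, b in combinations(sorted(req), 2):
            freq[(a, b)] = freq.get((a, b), 0) + 1

    # Greedily merge frequently co-occurring types into groups.
    group_of = {}           # request type -> group index
    groups = []             # group index -> set of request types
    for (a, b), _ in sorted(freq.items(), key=lambda kv: -kv[1]):
        ga, gb = group_of.get(a), group_of.get(b)
        if ga is None and gb is None:
            groups.append({a, b})
            group_of[a] = group_of[b] = len(groups) - 1
        elif ga is None:
            groups[gb].add(a); group_of[a] = gb
        elif gb is None:
            groups[ga].add(b); group_of[b] = ga

    # Deal groups (and leftover singleton types) out to nodes round-robin.
    all_types = {t for req in compound_requests for t in req}
    units = [sorted(g) for g in groups] + \
            [[t] for t in sorted(all_types) if t not in group_of]
    placement = {}
    for i, unit in enumerate(units):
        for t in unit:
            placement[t] = nodes[i % len(nodes)]
    return placement
```

Because each group lands on one node, the common compound requests resolve locally while load is still spread across the cluster.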
-
Publication number: 20140122320
Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about the frequencies of compound requests received and of the individual requests comprising them. Request types that frequently occur together in a compound request are associated with the same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about the amount of communication between said applications, and using said information to place said applications on said nodes so as to minimize communication among said nodes.
Type: Application
Filed: January 8, 2014
Publication date: May 1, 2014
Applicant: International Business Machines Corporation
Inventors: Paul M. Dantzig, Arun K. Iyengar, Francis N. Parr, Gong Su
-
Publication number: 20140067989
Abstract: Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server.
Type: Application
Filed: September 4, 2013
Publication date: March 6, 2014
Applicant: International Business Machines Corporation
Inventors: Wei Gao, Arun K. Iyengar, Mudhakar Srivatsa
-
Publication number: 20130332507
Abstract: Techniques for maintaining high availability servers are disclosed. For example, a method comprises the following steps. One or more client requests are provided to a first server for execution therein. The one or more client requests are also provided to a second server for storage therein. In response to the first server failing, the second server executes any client request, of the one or more client requests provided to both servers, that was not properly executed by the first server.
Type: Application
Filed: June 6, 2012
Publication date: December 12, 2013
Applicant: International Business Machines Corporation
Inventors: Juan Du, Arun K. Iyengar, Gong Su
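The execute-at-primary, store-at-backup protocol in this abstract can be sketched as follows. The single-object model (one log, one completion set) is a simplification for illustration; in a real deployment the two servers are separate processes and the backup learns of completions via acknowledgements.

```python
class PrimaryBackupExecutor:
    """Sketch of the described high-availability scheme: each client
    request is executed at the primary and stored at the backup; on
    primary failure, the backup executes any stored request the primary
    did not complete. Names and structure are illustrative.
    """

    def __init__(self, handler):
        self.handler = handler        # function: request -> result
        self.backup_log = []          # (request_id, request) stored at the backup
        self.completed = set()        # ids the primary properly executed
        self.results = {}

    def submit(self, request_id, request, primary_up=True):
        # The backup stores the request regardless of the primary's state.
        self.backup_log.append((request_id, request))
        if primary_up:
            self.results[request_id] = self.handler(request)
            self.completed.add(request_id)

    def failover(self):
        # Backup executes every stored request the primary never finished.
        for request_id, request in self.backup_log:
            if request_id not in self.completed:
                self.results[request_id] = self.handler(request)
                self.completed.add(request_id)
        return self.results
```

Tracking completion by request id is what makes failover idempotent: already-executed requests are skipped rather than run twice.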
-
Patent number: 8601479
Abstract: Embodiments of the invention broadly contemplate systems, methods, and arrangements for processing multi-leg transactions. Embodiments of the invention use a look-ahead mechanism to process later-arriving orders while an earlier, tradable multi-leg transaction is pending, without violating any relevant timing or exchange rules.
Type: Grant
Filed: September 28, 2009
Date of Patent: December 3, 2013
Assignee: International Business Machines Corporation
Inventors: Arun K. Iyengar, Gong Su, Yanqi Wang, Yu Yuan, Jia Zou
-
Patent number: 8577993
Abstract: Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server.
Type: Grant
Filed: May 20, 2011
Date of Patent: November 5, 2013
Assignee: International Business Machines Corporation
Inventors: Wei Gao, Arun K. Iyengar, Mudhakar Srivatsa
-
Publication number: 20120297008
Abstract: Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server.
Type: Application
Filed: May 20, 2011
Publication date: November 22, 2012
Applicant: International Business Machines Corporation
Inventors: Wei Gao, Arun K. Iyengar, Mudhakar Srivatsa
-
Patent number: 8250631
Abstract: According to an embodiment of the invention, a system for processing a plurality of service requests in a client-server system includes a challenge server for: presenting a cryptographic challenge to the client; initializing a trust cookie that encodes a client's initial priority level after the client correctly solves the cryptographic challenge; computing a trust level score for the client based on a service request, wherein said trust level score is associated with the amount of resources expended by the server in handling the service request, such that a higher trust level score is computed for service requests consuming fewer system resources; assigning the trust level score to the client based on the computation; and embedding the assigned trust level score in the trust cookie included in all responses sent from the server to the client. The system further includes an application server coupled with a firewall.
Type: Grant
Filed: April 9, 2010
Date of Patent: August 21, 2012
Assignee: International Business Machines Corporation
Inventors: Arun K. Iyengar, Mudhakar Srivatsa, Jian Yin
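A trust cookie only works if clients cannot forge their own score, so some tamper-evident encoding is implied. The sketch below uses an HMAC for that purpose; the patent does not prescribe HMAC, and the cookie format, key, and score-update rule here are all illustrative assumptions.

```python
import hashlib
import hmac

SERVER_KEY = b"demo-key"   # hypothetical server-side secret

def make_trust_cookie(client_id: str, trust_level: int) -> str:
    """Encode a client's trust level in a tamper-evident cookie (sketch)."""
    payload = f"{client_id}:{trust_level}"
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{mac}"

def read_trust_cookie(cookie: str):
    """Verify the MAC and recover (client_id, trust_level)."""
    payload, _, mac = cookie.rpartition(":")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("tampered cookie")
    client_id, level = payload.rsplit(":", 1)
    return client_id, int(level)

def update_trust(level: int, resources_used: float, budget: float = 1.0) -> int:
    """Illustrative score update: cheap requests raise the trust level,
    expensive ones lower it, mirroring the abstract's rule that requests
    consuming fewer resources earn a higher score."""
    return level + 1 if resources_used <= budget else max(0, level - 1)
```

Because the server re-verifies the MAC on every request, the trust state can live entirely in the cookie, with no per-client table on the server.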
-
Publication number: 20110252127
Abstract: A method and system for distributing requests to multiple back-end servers in client-server environments. A front-end load balancer is used to send requests to multiple back-end servers. In appropriate cases, the load balancer will send requests to the servers based on affinity requirements, while maintaining load balance among servers.
Type: Application
Filed: April 13, 2010
Publication date: October 13, 2011
Applicant: International Business Machines Corporation
Inventors: Arun K. Iyengar, Hongbo Jiang, Erich M. Nahum, Wolfgang Segmuller, Asser N. Tantawi, Charles P. Wright
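The tension this abstract describes (honoring affinity while keeping load balanced) can be shown in a few lines. The least-loaded tie-break and request-count load metric are illustrative choices, not the publication's design.

```python
class AffinityLoadBalancer:
    """Sketch: requests carrying an affinity key (e.g. a session id) go
    to the server that key is already pinned to; all other requests go
    to the currently least-loaded server, which also becomes the pin
    for a new key.
    """

    def __init__(self, servers):
        self.load = {s: 0 for s in servers}   # crude load metric: request count
        self.affinity = {}                    # affinity key -> pinned server

    def route(self, affinity_key=None):
        if affinity_key is not None and affinity_key in self.affinity:
            server = self.affinity[affinity_key]        # honor existing affinity
        else:
            server = min(self.load, key=self.load.get)  # least-loaded server
            if affinity_key is not None:
                self.affinity[affinity_key] = server    # pin the new key
        self.load[server] += 1
        return server
```

New affinity keys land on whichever server is lightest at that moment, so pins accumulate roughly evenly even though existing pins are always honored.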
-
Patent number: 8032586
Abstract: A method, a system, an apparatus, and a computer program product are presented for a fragment caching methodology. After a message is received at a computing device that contains a cache management unit, a fragment in the message body of the message is cached. Subsequent requests for the fragment at the cache management unit result in a cache hit. A FRAGMENTLINK tag is used to specify the location in a fragment for an included or linked fragment which is to be inserted into the fragment during fragment or page assembly or page rendering. A FRAGMENTLINK tag may include a FOREACH attribute that is interpreted as indicating that the FRAGMENTLINK tag should be replaced with multiple FRAGMENTLINK tags. The FOREACH attribute has an associated parameter that has multiple values that are used in identifying multiple fragments for the multiple FRAGMENTLINK tags.
Type: Grant
Filed: June 21, 2007
Date of Patent: October 4, 2011
Assignee: International Business Machines Corporation
Inventors: James R. H. Challenger, Michael H. Conner, George P. Copeland, Arun K. Iyengar
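The FOREACH expansion described above can be sketched as a text transformation. The exact tag syntax (attribute names, the `SRC` attribute, the `?param=value` link form) is an assumption for illustration; only the replace-one-tag-with-many behavior comes from the abstract.

```python
import re

def expand_foreach(fragment: str, params: dict) -> str:
    """Sketch of FOREACH expansion: a FRAGMENTLINK tag carrying a
    FOREACH attribute is replaced by one FRAGMENTLINK per value of the
    named parameter, so page assembly can then resolve each link
    separately (and cache each resulting fragment separately).
    """
    pattern = re.compile(
        r'<FRAGMENTLINK\s+SRC="(?P<src>[^"]+)"\s+FOREACH="(?P<param>\w+)"\s*/>')

    def replace(match):
        links = []
        for value in params.get(match.group("param"), []):
            # One concrete FRAGMENTLINK per parameter value.
            links.append(
                f'<FRAGMENTLINK SRC="{match.group("src")}'
                f'?{match.group("param")}={value}" />')
        return "".join(links)

    return pattern.sub(replace, fragment)
```

For a watchlist of stock symbols, for example, one templated link expands into one link per symbol, each independently cacheable.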
-
Publication number: 20110219311
Abstract: An embodiment of the invention includes a system for partitioning asset management plugins. The system includes an application program interface for performing basic CRUD functions on assets having multiple asset types. At least one plugin having plugin components is provided, wherein the plugin manages at least one asset having a specific asset type (of the multiple asset types). The plugin components include a CRUD component, a state component, an actions component, and/or a view component. The system further includes plugin containers for hosting the plugin components, the plugin containers including at least one client-side plugin container and at least one server-side plugin container. The plugin components are partitioned and distributed to the plugin containers by a plugin server based on the capabilities of the client.
Type: Application
Filed: March 4, 2010
Publication date: September 8, 2011
Applicant: International Business Machines Corporation
Inventors: Judah M. Diament, Grant J. Larsen, Arun K. Iyengar, Thomas A. Mikalsen, Isabelle M. Rouvellou, Ignacio Silva-Lepe, Revathi Subramanian
-
Patent number: 7987239
Abstract: A method, a system, an apparatus, and a computer program product are presented for a fragment caching methodology. After a message is received at a computing device, a fragment in the message body is cached. Cache ID rules from an origin server accompany a fragment to describe a method for forming a unique cache ID for the fragment such that dynamic content can be cached away from an origin server. A cache ID may be based on a URI and/or query parameters and/or cookies that are associated with a fragment. After user authentication, a cookie containing the user's role may be used in subsequent requests for role-specific fragments and in the cache identifier for role-specific fragments, thereby allowing requests from other users for role-specific fragments to be resolved in the cache when the users have the same role, because these users would also have the same cookie.
Type: Grant
Filed: September 13, 2007
Date of Patent: July 26, 2011
Assignee: International Business Machines Corporation
Inventors: Rajesh S. Agarwalla, James R. H. Challenger, George P. Copeland, Arun K. Iyengar, Mark H. Linehan, Subbarao Meduri
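The cache ID formation described above can be sketched as follows. The rule representation (a dict naming which query parameters and cookies participate) and the `|`-joined ID format are assumptions for illustration; the abstract specifies only that the ID derives from the URI, query parameters, and/or cookies per origin-server rules.

```python
def compute_cache_id(rule, uri, query=None, cookies=None):
    """Sketch: form a fragment's cache identifier from its URI plus the
    query parameters and cookies named by the origin server's cache ID
    rule. Values not named in the rule are ignored, so requests that
    differ only in irrelevant fields share a cache entry.
    """
    query = query or {}
    cookies = cookies or {}
    parts = [uri]
    for name in rule.get("query_params", []):
        if name in query:
            parts.append(f"{name}={query[name]}")
    for name in rule.get("cookies", []):
        if name in cookies:
            parts.append(f"{name}={cookies[name]}")
    return "|".join(parts)
```

With a rule naming only the role cookie, two different users with the same role produce the same cache ID, which is exactly the cross-user cache hit for role-specific fragments that the abstract describes.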
-
Publication number: 20110078686
Abstract: Embodiments of the invention provide a coordinated transaction processing system capable of providing primary-primary high availability as well as minimal query response time through a virtual reply system between partner nodes. One or more global queues ensure that peer nodes remain synchronized.
Type: Application
Filed: September 28, 2009
Publication date: March 31, 2011
Applicant: International Business Machines Corporation
Inventors: Arun K. Iyengar, Gong Su, Yanqi Wang, Yu Yuan, Jia Zou
-
Publication number: 20110078685
Abstract: Embodiments of the invention broadly contemplate systems, methods, and arrangements for processing multi-leg transactions. Embodiments of the invention use a look-ahead mechanism to process later-arriving orders while an earlier, tradable multi-leg transaction is pending, without violating any relevant timing or exchange rules.
Type: Application
Filed: September 28, 2009
Publication date: March 31, 2011
Applicant: International Business Machines Corporation
Inventors: Arun K. Iyengar, Gong Su, Yanqi Wang, Yu Yuan, Jia Zou