Patents by Inventor Anil K. Ruia
Anil K. Ruia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10237625
Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
Type: Grant
Filed: October 11, 2017
Date of Patent: March 19, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
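The chunk arithmetic the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation; the `ChunkCache` class, the `origin_fetch` callback, and the tiny 4-byte chunk size are invented for the example:

```python
CHUNK_SIZE = 4  # tiny for illustration; a real cache would use e.g. 256 KB chunks

class ChunkCache:
    def __init__(self, origin_fetch, chunk_size=CHUNK_SIZE):
        self.origin_fetch = origin_fetch  # callable(start, end) -> bytes from origin
        self.chunk_size = chunk_size
        self.chunks = {}                  # chunk index -> cached bytes
        self.origin_requests = 0

    def get_range(self, start, end):
        """Serve bytes [start, end) from cached chunks, fetching only missing chunks."""
        first = start // self.chunk_size
        last = (end - 1) // self.chunk_size
        # Determine which overlapped chunks are not already in the cache.
        missing = [i for i in range(first, last + 1) if i not in self.chunks]
        for i in missing:
            lo = i * self.chunk_size
            self.chunks[i] = self.origin_fetch(lo, lo + self.chunk_size)
            self.origin_requests += 1
        # Assemble the response from cached chunks and trim to the requested range.
        data = b"".join(self.chunks[i] for i in range(first, last + 1))
        off = start - first * self.chunk_size
        return data[off:off + (end - start)]
```

A second request overlapping an earlier range is then served entirely from the cache, with no new origin traffic.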
-
Publication number: 20180160193
Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
Type: Application
Filed: October 11, 2017
Publication date: June 7, 2018
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
-
Patent number: 9807468
Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
Type: Grant
Filed: June 16, 2009
Date of Patent: October 31, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
-
Patent number: 9514243
Abstract: An intelligent caching system is described herein that intelligently consolidates the name-value pairs in content requests containing query strings so that only substantially non-redundant responses are cached, thereby saving cache proxy resources. The intelligent caching system determines which name-value pairs in the query string can affect the redundancy of the content response and which name-value pairs can be ignored. The intelligent caching system organically builds the list of relevant name-value pairs by relying on a custom response header or other indication from the content server. Thus, the intelligent caching system results in fewer requests to the content server as well as fewer objects in the cache.
Type: Grant
Filed: December 3, 2009
Date of Patent: December 6, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
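The consolidation step can be illustrated by a canonical cache key that keeps only the name-value pairs the content server declared relevant. This is a sketch under assumptions: the `cache_key` function and `relevant_params` set are invented for the example, and the patent does not specify this exact canonicalization:

```python
from urllib.parse import parse_qsl, urlencode

def cache_key(path, query, relevant_params):
    """Canonical cache key keeping only the name-value pairs the content server
    declared relevant, so redundant query-string variants share one cache entry."""
    pairs = sorted((k, v) for k, v in parse_qsl(query) if k in relevant_params)
    return path + "?" + urlencode(pairs)
```

With `relevant_params = {"id"}`, requests for `/video?id=7&utm_source=mail` and `/video?id=7` map to the same key, so only one copy of the response is cached and fetched.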
-
Patent number: 9058252
Abstract: Requests for content can be received from clients and forwarded to servers, and responses to the requests can be received from the servers and forwarded to the clients. A health model can also be maintained. The health model can be based on information in the responses and possibly also on information in the requests, and the health model can indicate the health of the servers in responding to different types of requests. The health model may differentiate between health in responding to requests with different features in URLs of the requests, such as different namespaces and/or different extensions.
Type: Grant
Filed: March 24, 2010
Date of Patent: June 16, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Won Suk Yoo, Wade A. Hilmo, Anil K. Ruia, Chittaranjan Pattekar, Venkat Raman Don
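A per-URL-feature health model of the kind described might track success rates keyed on (namespace, extension). The `HealthModel` class, its threshold, and the feature extraction are assumptions made for this sketch:

```python
from collections import defaultdict
from urllib.parse import urlparse
import os

class HealthModel:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.stats = defaultdict(lambda: [0, 0])  # (server, feature) -> [ok, failed]

    @staticmethod
    def feature(url):
        """Reduce a URL to the features health is tracked on: namespace + extension."""
        path = urlparse(url).path
        namespace = "/" + path.strip("/").split("/")[0] if path.strip("/") else "/"
        return (namespace, os.path.splitext(path)[1])

    def record(self, server, url, status):
        ok = status < 500  # treat server errors as failures
        self.stats[(server, self.feature(url))][0 if ok else 1] += 1

    def healthy(self, server, url):
        ok, bad = self.stats[(server, self.feature(url))]
        total = ok + bad
        return total == 0 or ok / total >= self.threshold
```

A server can then be judged unhealthy for `/app/*.aspx` requests while still being routed `/img/*.png` requests.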
-
Patent number: 8533333
Abstract: The routing of requests in a shared hosting environment. The shared hosting environment includes a network farm of servers, each capable of processing the request corresponding to the host name. When a request is received, a router determines whether or not there is already a host name affinitization of a subset of the servers to the host name corresponding to the request. If so, the message is dispatched to one of those affinitized servers. If not, one or more of the servers are affinitized to the host name to create a subset of affinitized server(s) for that host name. Different host names may have different subsets of servers that they are affinitized to. Over time, the affinitization may be terminated as appropriate.
Type: Grant
Filed: September 3, 2008
Date of Patent: September 10, 2013
Assignee: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Michael E. Brown, William James Staples, Himanshu Kamarajbhai Patel
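The affinitize-on-first-sight routing above can be sketched as follows. The `AffinityRouter` class, the subset size of 2, and the hash-based subset choice are invented for the example (a production router would more likely pick the least-loaded servers):

```python
class AffinityRouter:
    """Routes each host name to a small affinitized subset of the farm,
    affinitizing on first sight and allowing the affinity to be terminated later."""
    def __init__(self, servers, subset_size=2):
        self.servers = list(servers)
        self.subset_size = subset_size
        self.affinity = {}  # host name -> affinitized subset of servers
        self._rr = {}       # host name -> round-robin counter within the subset

    def route(self, host):
        subset = self.affinity.get(host)
        if subset is None:
            # First request for this host: affinitize a subset of the farm to it.
            start = hash(host) % len(self.servers)
            subset = [self.servers[(start + i) % len(self.servers)]
                      for i in range(self.subset_size)]
            self.affinity[host] = subset
            self._rr[host] = 0
        i = self._rr[host]
        self._rr[host] = i + 1
        return subset[i % len(subset)]

    def deaffinitize(self, host):
        """Terminate the affinitization when appropriate (e.g. traffic subsides)."""
        self.affinity.pop(host, None)
        self._rr.pop(host, None)
```

Different host names thus land on different small subsets, keeping each site's working set warm on only a few servers.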
-
Patent number: 8478813
Abstract: Architecture that facilitates the capture of connection state of a connection established between a client and an intermediate server and forwards the state to one or more target servers. A software component at the target server (as well as the intermediate server) uses this connection state to reply back to the client directly, thereby bypassing the intermediate server. All packets from the client related to the request are received at the intermediate server and then forwarded to the target server. The migration can be accomplished without any change in the client operating system and client applications, without assistance from a gateway device such as a load balancer or the network, without duplication of all packets between the multiple servers, and without changes to the transport layer stack of the intermediate and target servers.
Type: Grant
Filed: April 28, 2010
Date of Patent: July 2, 2013
Assignee: Microsoft Corporation
Inventors: Randall Kern, Parveen Patel, Lihua Yuan, Anil K. Ruia, Won Suk Yoo
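The handoff can be pictured as capturing the established connection's transport state and installing it at the chosen target. Everything below is a schematic sketch: `ConnState`, the `adopt` hook, and the field list are hypothetical, and the real mechanism operates inside the kernel's TCP stack rather than in application code:

```python
from dataclasses import dataclass

@dataclass
class ConnState:
    """The connection state the intermediate server captures and forwards:
    enough for the target's stack to adopt the established connection."""
    client_addr: str
    client_port: int
    seq: int      # next sequence number to send toward the client
    ack: int      # next byte expected from the client
    window: int   # advertised receive window

def migrate(conn, target_servers, pick):
    """Forward the captured state to one target; the target then replies to the
    client directly, while client packets keep arriving via the intermediary."""
    target = pick(target_servers)
    target.adopt(conn)  # hypothetical hook installing the state at the target
    return target
```

The key property the abstract claims is that only client-to-server packets traverse the intermediary; the (typically much larger) response flows from the target straight to the client.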
-
Patent number: 8266680
Abstract: A client system and a server system use a Hypertext Transfer Protocol (HTTP) authentication mode preference header to negotiate an HTTP authentication mode. The client system sends an HTTP request to the server system. In response to the HTTP request, the server system sends an HTTP response to the client system. The HTTP response includes an HTTP authentication mode preference header. The HTTP authentication mode preference header indicates whether a preferred HTTP authentication mode is connection-based HTTP authentication or request-based HTTP authentication. In subsequent HTTP requests to the server system, the client system uses the HTTP authentication mode indicated by the HTTP authentication mode preference header.
Type: Grant
Filed: March 31, 2009
Date of Patent: September 11, 2012
Assignee: Microsoft Corporation
Inventors: Rick James, Jonathan Silvera, Matthew Cox, Paul J. Leach, Anil K. Ruia, Anish V. Desai
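On the client side, the negotiation reduces to inspecting the preference header and choosing a mode for subsequent requests. The header name `Authentication-Mode-Preference` and the fallback default below are assumptions for this sketch; the abstract does not fix a header spelling:

```python
# Illustrative header name; the patent abstract does not specify the exact spelling.
PREF_HEADER = "Authentication-Mode-Preference"

def choose_auth_mode(response_headers, default="request"):
    """Pick the HTTP authentication mode the server prefers, falling back to
    per-request authentication when no valid preference is expressed."""
    lowered = {k.lower(): v for k, v in response_headers.items()}
    pref = lowered.get(PREF_HEADER.lower())
    if pref in ("connection", "request"):
        return pref
    return default
```

Connection-based authentication lets the client skip re-authenticating every request on a kept-alive connection; request-based authentication suits proxies and connection pooling, which is why letting the server state a preference is useful.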
-
Patent number: 8073829
Abstract: URL rewriting is a common technique for allowing users to interact with internet resources using easy-to-remember and search-engine-friendly URLs. When URL rewriting involves conditions derived from sources other than the URL, inconsistencies in the HTTP kernel cache and HTTP user output cache may arise. Methods and a system for rewriting a URL while preserving cache integrity are disclosed herein. Conditions used by a rule set to rewrite a URL may be determined to be cache-friendly conditions or cache-unfriendly conditions. If cache-unfriendly conditions exist, the HTTP kernel cache is disabled and the HTTP user output cache is varied based upon a key. If no cache-unfriendly conditions exist, then the HTTP kernel cache is not disabled and the HTTP user output cache is not varied. A rule set is applied to the URL and a URL rewrite is performed to create a rewritten URL.
Type: Grant
Filed: November 24, 2008
Date of Patent: December 6, 2011
Assignee: Microsoft Corporation
Inventors: Daniel Vasquez Lopez, Ruslan A. Yakushev, Anil K. Ruia, Wade A. Hilmo
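The friendly/unfriendly classification and the resulting cache plan can be sketched as below. The `plan_caching` function, the source labels, and the returned dictionary shape are invented for the example; only the decision logic follows the abstract:

```python
# Condition sources derived from the URL itself are cache-friendly: the cache key
# (the URL) already captures them, so cached responses stay consistent.
CACHE_FRIENDLY_SOURCES = {"url", "query_string"}

def plan_caching(rule_condition_sources):
    """Given the sources a rewrite rule set's conditions read from, decide whether
    the kernel cache must be disabled and the user output cache varied by key."""
    unfriendly = sorted(s for s in rule_condition_sources
                        if s not in CACHE_FRIENDLY_SOURCES)
    if unfriendly:
        # e.g. a condition on a request header: responses differ for the same URL,
        # so the kernel cache (keyed on URL alone) would serve stale mixes.
        return {"kernel_cache": False, "vary_user_cache_by": unfriendly}
    return {"kernel_cache": True, "vary_user_cache_by": []}
```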
-
Patent number: 8073952
Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
Type: Grant
Filed: April 22, 2009
Date of Patent: December 6, 2011
Assignee: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin
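The drain/restore cycle can be sketched with a round-robin balancer that excludes servers with a pending event. The `ProactiveBalancer` class and its method names are invented for this illustration:

```python
class ProactiveBalancer:
    """Round-robins across destination servers, draining a server ahead of a
    known event (e.g. scheduled maintenance) and restoring it afterwards."""
    def __init__(self, servers):
        self.servers = list(servers)
        self.draining = set()
        self._i = 0

    def on_event_scheduled(self, server):
        self.draining.add(server)      # stop sending new connections to it

    def on_event_complete(self, server):
        self.draining.discard(server)  # restore connections to it

    def next_server(self):
        available = [s for s in self.servers if s not in self.draining]
        if not available:
            raise RuntimeError("no destination servers available")
        s = available[self._i % len(available)]
        self._i += 1
        return s
```

Because the drain happens before the event rather than after a failed health check, clients never see the transient slowdown.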
-
Publication number: 20110270908
Abstract: Architecture that facilitates the capture of connection state of a connection established between a client and an intermediate server and forwards the state to one or more target servers. A software component at the target server (as well as the intermediate server) uses this connection state to reply back to the client directly, thereby bypassing the intermediate server. All packets from the client related to the request are received at the intermediate server and then forwarded to the target server. The migration can be accomplished without any change in the client operating system and client applications, without assistance from a gateway device such as a load balancer or the network, without duplication of all packets between the multiple servers, and without changes to the transport layer stack of the intermediate and target servers.
Type: Application
Filed: April 28, 2010
Publication date: November 3, 2011
Applicant: Microsoft Corporation
Inventors: Randall Kern, Parveen Patel, Lihua Yuan, Anil K. Ruia, Won Suk Yoo
-
Patent number: 8046432
Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while "holding" other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
Type: Grant
Filed: April 17, 2009
Date of Patent: October 25, 2011
Assignee: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
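The hold-and-share behavior (often called request collapsing) can be sketched as follows. The `CollapsingCache` class and callback-based delivery are invented for the example; a real proxy would do this asynchronously across connections:

```python
class CollapsingCache:
    """Forwards the first cache miss for a key to the origin and 'holds' later
    requests for the same key; the single origin response answers them all."""
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch
        self.cache = {}
        self.held = {}            # key -> callbacks waiting on the in-flight fetch
        self.origin_requests = 0

    def request(self, key, deliver):
        if key in self.cache:
            deliver(self.cache[key])
        elif key in self.held:
            self.held[key].append(deliver)   # hold: no second origin request
        else:
            self.held[key] = [deliver]
            self.origin_requests += 1
            response = self.origin_fetch(key)  # in reality this is asynchronous
            self.cache[key] = response
            for cb in self.held.pop(key):
                cb(response)                   # share the response with held requests
```

During a popular live event, thousands of near-simultaneous misses for the same fragment thus cost the origin exactly one request.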
-
Publication number: 20110238733
Abstract: Requests for content can be received from clients and forwarded to servers, and responses to the requests can be received from the servers and forwarded to the clients. A health model can also be maintained. The health model can be based on information in the responses and possibly also on information in the requests, and the health model can indicate the health of the servers in responding to different types of requests. The health model may differentiate between health in responding to requests with different features in URLs of the requests, such as different namespaces and/or different extensions.
Type: Application
Filed: March 24, 2010
Publication date: September 29, 2011
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Wade A. Hilmo, Anil K. Ruia, Chittaranjan Pattekar, Venkat Raman Don
-
Publication number: 20110137888
Abstract: An intelligent caching system is described herein that intelligently consolidates the name-value pairs in content requests containing query strings so that only substantially non-redundant responses are cached, thereby saving cache proxy resources. The intelligent caching system determines which name-value pairs in the query string can affect the redundancy of the content response and which name-value pairs can be ignored. The intelligent caching system organically builds the list of relevant name-value pairs by relying on a custom response header or other indication from the content server. Thus, the intelligent caching system results in fewer requests to the content server as well as fewer objects in the cache.
Type: Application
Filed: December 3, 2009
Publication date: June 9, 2011
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
-
Publication number: 20110131341
Abstract: A selective pre-caching system reduces the amount of content cached at cache proxies by limiting the cached content to that content that a particular cache proxy is responsible for caching. This can substantially reduce the content stored on each cache proxy and reduces the amount of resources consumed for pre-caching in preparation for a particular event. The cache proxy receives a list of content items and an indication of the topology of the cache network. The cache proxy uses the received topology to determine the content items in the received list of content items that the cache proxy is responsible for caching. The cache proxy then retrieves the determined content items so that they are available in the cache before client requests are received.
Type: Application
Filed: November 30, 2009
Publication date: June 2, 2011
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
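The responsibility computation can be sketched with simple hash partitioning over the announced topology. The `items_for_proxy` function and the hash-based split are assumptions for this illustration; the abstract only says the proxy derives its share from the content list and topology:

```python
import hashlib

def items_for_proxy(content_items, topology, proxy_id):
    """Partition the announced content list across the cache network so each
    proxy pre-fetches only the items it is responsible for. Hash partitioning
    is used here; a real system may exploit the topology more richly."""
    proxies = sorted(topology)
    mine = []
    for item in content_items:
        # A stable hash (not Python's randomized hash()) keeps the partition
        # identical on every proxy computing it independently.
        h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
        if proxies[h % len(proxies)] == proxy_id:
            mine.append(item)
    return mine
```

Because every proxy evaluates the same deterministic rule, the proxies jointly cover the whole list with no coordination and no overlap.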
-
Patent number: 7925785
Abstract: Dynamically upsizing and/or downsizing a network farm in response to network demand. An application message router routes messages to the network farm. When the network farm approaches or is anticipated to be approaching capacity, a group of one or more servers may be added to the network farm. When the added server(s) are capable of participating in the network farm, the application message router is triggered to route also to the added servers. When the network farm has excess capacity, a group of one or more servers may be dropped from the network farm. This may be accomplished by triggering the application message router to no longer route messages to the removed servers. The removed servers may be either immediately or gracefully removed from service.
Type: Grant
Filed: June 27, 2008
Date of Patent: April 12, 2011
Assignee: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Michael E. Brown
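The grow/shrink decision can be sketched as a rebalance step driven by utilization thresholds. The `ElasticFarm` class, the 0.8/0.3 thresholds, and the standby pool are invented for the example:

```python
class ElasticFarm:
    """Grows the set of servers the message router targets when utilization
    nears capacity, and shrinks it when there is excess capacity."""
    def __init__(self, router_targets, high=0.8, low=0.3):
        self.targets = set(router_targets)  # servers the router currently routes to
        self.high, self.low = high, low

    def rebalance(self, utilization, standby):
        """utilization: fraction of aggregate capacity in use; standby: idle servers."""
        if utilization >= self.high and standby:
            added = standby.pop()
            self.targets.add(added)        # router now also routes to the new server
            return ("added", added)
        if utilization <= self.low and len(self.targets) > 1:
            removed = self.targets.pop()   # router stops routing; server then drains
            return ("removed", removed)
        return ("unchanged", None)
```

The "gracefully removed" option in the abstract corresponds to letting in-flight connections on the removed server finish before taking it out of service, rather than closing them immediately.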
-
Publication number: 20100318632
Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
Type: Application
Filed: June 16, 2009
Publication date: December 16, 2010
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
-
Publication number: 20100274885
Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
Type: Application
Filed: April 22, 2009
Publication date: October 28, 2010
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin
-
Publication number: 20100268789
Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while "holding" other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
Type: Application
Filed: April 17, 2009
Publication date: October 21, 2010
Applicant: Microsoft Corporation
Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
-
Publication number: 20100251338
Abstract: A client system and a server system use a Hypertext Transfer Protocol (HTTP) authentication mode preference header to negotiate an HTTP authentication mode. The client system sends an HTTP request to the server system. In response to the HTTP request, the server system sends an HTTP response to the client system. The HTTP response includes an HTTP authentication mode preference header. The HTTP authentication mode preference header indicates whether a preferred HTTP authentication mode is connection-based HTTP authentication or request-based HTTP authentication. In subsequent HTTP requests to the server system, the client system uses the HTTP authentication mode indicated by the HTTP authentication mode preference header.
Type: Application
Filed: March 31, 2009
Publication date: September 30, 2010
Applicant: Microsoft Corporation
Inventors: Rick James, Jonathan Silvera, Matthew Cox, Paul J. Leach, Anil K. Ruia, Anish V. Desai