Patents by Inventor Haricharan K. Ramachandra
Haricharan K. Ramachandra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 10191657
  Abstract: The disclosed embodiments provide a system for detecting and managing memory inefficiency in a software program. During operation, the system obtains a first snapshot of a heap for a software program, wherein the first snapshot includes a first set of objects stored in the heap at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of objects, wherein each inefficiency metric in the first set of inefficiency metrics represents a memory inefficiency of an object in the heap at the first time. The system then outputs the first set of inefficiency metrics with additional attributes of the first set of objects to improve identification of the memory inefficiency in the software program.
  Type: Grant
  Filed: December 15, 2015
  Date of Patent: January 29, 2019
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: John W. Nicol, Cuong H. Tran, Haricharan K. Ramachandra, Badrinath K. Sridharan
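The compression-based inefficiency metric described in this abstract can be pictured as follows: serialize the objects captured in a heap snapshot, compress each one, and treat the fraction of bytes eliminated by compression as a measure of redundancy. Below is a minimal Python sketch of that idea; the object names, the use of zlib, and the reporting format are assumptions for illustration, not the patented implementation.

```python
import zlib

def inefficiency_metric(payload: bytes) -> float:
    """Fraction of bytes removed by compression; higher suggests more redundancy."""
    if not payload:
        return 0.0
    compressed = zlib.compress(payload, 6)
    return 1.0 - len(compressed) / len(payload)

# Hypothetical "heap snapshot": object identifier -> serialized contents.
snapshot = {
    "UserCache#entries": b"user:0,user:1," * 500,   # highly repetitive strings
    "SessionTable#blob": bytes(range(256)) * 40,    # mixed content
    "Config#flags": b"\x00" * 4096,                 # all-zero padding
}

# Report each object's inefficiency metric alongside its size attribute,
# most inefficient first.
for obj_id, payload in sorted(
    snapshot.items(), key=lambda kv: inefficiency_metric(kv[1]), reverse=True
):
    metric = inefficiency_metric(payload)
    print(f"{obj_id:22s} size={len(payload):6d}B inefficiency={metric:.2f}")
```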
- Patent number: 10019340
  Abstract: A system, apparatus, and methods are provided for managing on-demand profiling of one or more instances of a software application executing on a plurality of machines within one or more data centers. During operation, the system executes the one or more instances of the software application on the plurality of machines. Next, the system publishes, to a command channel, a command message that comprises a profiling request, wherein the profiling request specifies a subset of the machines. The system then receives, via a data channel, one or more data messages from the subset of the machines, wherein the data messages comprise data gathered by the subset of the machines in response to receiving the command message. Next, the system then evaluates the performance of the software application by aggregating and processing the data messages. Responsive to detecting an anomaly in the performance, the system then executes one or more remedies.
  Type: Grant
  Filed: March 21, 2016
  Date of Patent: July 10, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: John W. Nicol, Zhenyun Zhuang, Arman H. Boehm, Tao Feng, Haricharan K. Ramachandra, Badrinath K. Sridharan
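A rough sketch of the command-channel/data-channel flow described in this abstract, using in-process queues as stand-ins for a real pub/sub system (e.g., Kafka). The message fields, the CPU metric, and the anomaly rule are illustrative assumptions, not the patented protocol.

```python
import queue
import statistics

# In-process stand-ins for the command and data channels.
command_channel = queue.Queue()
data_channel = queue.Queue()

MACHINES = [f"host-{i}" for i in range(6)]

def publish_profiling_request(target_machines):
    """Publish a command message naming the subset of machines to profile."""
    command_channel.put({"type": "profile", "targets": list(target_machines)})

def machine_agent(hostname, cpu_sample):
    """Each machine inspects the broadcast command and, if targeted, replies on the data channel."""
    command = command_channel.queue[0]  # peek at the broadcast command message
    if hostname in command["targets"]:
        data_channel.put({"host": hostname, "cpu_pct": cpu_sample})

# Controller publishes a profiling request for half of the fleet.
publish_profiling_request(MACHINES[:3])

# Simulated agents respond with made-up CPU samples.
for host, cpu in zip(MACHINES, [35.0, 91.0, 38.0, 20.0, 22.0, 25.0]):
    machine_agent(host, cpu)

# Aggregate the data messages and flag an anomaly.
messages = list(data_channel.queue)
mean_cpu = statistics.mean(msg["cpu_pct"] for msg in messages)
for msg in messages:
    if msg["cpu_pct"] > mean_cpu * 1.5:
        print(f"anomaly on {msg['host']}: cpu={msg['cpu_pct']}% (fleet mean {mean_cpu:.1f}%)")
        # A remedy (e.g., restarting the instance) would be triggered here.
```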
- Patent number: 9959215
  Abstract: The disclosed embodiments provide a system for processing data. During operation, the system obtains an attribute of a stack trace of a software program. Next, the system uses the attribute to select an address-translation instance from a set of address-translation instances for processing the stack trace. The system then provides the stack trace to the selected address-translation instance for use in translating a set of memory addresses in the stack trace into a set of symbols of instructions stored at the memory addresses.
  Type: Grant
  Filed: December 10, 2015
  Date of Patent: May 1, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Arman H. Boehm, Anant R. Rao, Jui Ting Weng, Haricharan K. Ramachandra
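One way to picture selecting an address-translation instance from a stack-trace attribute is to hash the attribute (for example, a build identifier) and route the trace to the instance holding the matching symbol table. The sketch below is a simplified illustration; the hashing scheme and the symbol tables are assumptions, not the disclosed design.

```python
import hashlib

# Hypothetical symbol tables, one per address-translation instance.
ADDRESS_TRANSLATORS = [
    {"name": "translator-0", "symbols": {0x1000: "main", 0x1040: "parse_args"}},
    {"name": "translator-1", "symbols": {0x1000: "serve_request", 0x2080: "encode_json"}},
    {"name": "translator-2", "symbols": {0x1000: "gc_cycle", 0x30c0: "mark_roots"}},
]

def select_translator(stack_trace_attribute: str) -> dict:
    """Use an attribute of the stack trace (e.g., the binary's build id)
    to deterministically pick one translation instance."""
    digest = hashlib.sha1(stack_trace_attribute.encode()).digest()
    return ADDRESS_TRANSLATORS[digest[0] % len(ADDRESS_TRANSLATORS)]

def translate(stack_trace_attribute: str, addresses: list) -> list:
    """Map raw memory addresses to symbols via the selected instance."""
    translator = select_translator(stack_trace_attribute)
    return [translator["symbols"].get(addr, f"0x{addr:x}") for addr in addresses]

# Example: a trace tagged with a build id is routed to one instance and symbolized.
print(translate("myservice-build-42", [0x1000, 0x2080, 0xdead]))
```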
- Patent number: 9952787
  Abstract: The disclosed embodiments provide a system for detecting and managing inefficiency in external services. During operation, the system obtains a snapshot of a data stream transmitted over an external service from a computer system at a first time. Next, the system applies a compression technique to the snapshot to obtain a set of inefficiency metrics for a set of data elements in the snapshot. The system then outputs the set of inefficiency metrics with additional attributes of the data stream to improve identification of inefficiency in the data stream.
  Type: Grant
  Filed: May 20, 2016
  Date of Patent: April 24, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: John W. Nicol, Ritesh Maheshwari, Nicholas P. Baggott, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Patent number: 9952772
  Abstract: The disclosed embodiments provide a system for detecting and managing inefficiency in local storage. During operation, the system obtains a first snapshot of data in local storage of a computer system, wherein the first snapshot comprises a first set of data elements in the local storage at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of data elements. The system then outputs the first set of inefficiency metrics with additional attributes of the data to improve management of inefficiency in the data.
  Type: Grant
  Filed: May 20, 2016
  Date of Patent: April 24, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: John W. Nicol, Ritesh Maheshwari, Nicholas P. Baggott, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Patent number: 9886195
  Abstract: The disclosed embodiments provide a system for analyzing data from a monitored system. During operation, the system identifies a difference between a performance of an application and a service-level agreement (SLA) of the application. Next, the system determines a correlation between the performance of the application and a disk input/output (I/O) performance of a data storage device used by the application. When the correlation exceeds a threshold, the system outputs a recommendation to migrate the application between the data storage device and a different type of data storage device.
  Type: Grant
  Filed: January 14, 2016
  Date of Patent: February 6, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Badrinath K. Sridharan
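The check described in this abstract can be sketched as: compare observed latency against the SLA, compute the correlation between latency and disk I/O wait, and recommend a storage migration only when both signals agree. The sample data, thresholds, and use of Pearson correlation below are illustrative assumptions.

```python
import statistics

# Hypothetical time-aligned samples: application latency (ms) and disk I/O wait (ms).
app_latency_ms = [120, 180, 95, 210, 160, 240, 130, 200]
disk_io_wait_ms = [30, 55, 22, 70, 48, 80, 35, 62]

SLA_LATENCY_MS = 150          # assumed service-level agreement on latency
CORRELATION_THRESHOLD = 0.8   # assumed threshold for acting on the correlation

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sla_gap = statistics.mean(app_latency_ms) - SLA_LATENCY_MS
corr = pearson(app_latency_ms, disk_io_wait_ms)

print(f"SLA gap: {sla_gap:+.1f} ms, latency/disk-I/O correlation: {corr:.2f}")
if sla_gap > 0 and corr > CORRELATION_THRESHOLD:
    print("Recommendation: migrate the application to a different storage device type (e.g., SSD).")
```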
- Publication number: 20170336984
  Abstract: The disclosed embodiments provide a system for detecting and managing inefficiency in local storage. During operation, the system obtains a first snapshot of data in local storage of a computer system, wherein the first snapshot comprises a first set of data elements in the local storage at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of data elements. The system then outputs the first set of inefficiency metrics with additional attributes of the data to improve management of inefficiency in the data.
  Type: Application
  Filed: May 20, 2016
  Publication date: November 23, 2017
  Applicant: LinkedIn Corporation
  Inventors: John W. Nicol, Ritesh Maheshwari, Nicholas P. Baggott, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Publication number: 20170336995
  Abstract: The disclosed embodiments provide a system for detecting and managing inefficiency in external services. During operation, the system obtains a snapshot of a data stream transmitted over an external service from a computer system at a first time. Next, the system applies a compression technique to the snapshot to obtain a set of inefficiency metrics for a set of data elements in the snapshot. The system then outputs the set of inefficiency metrics with additional attributes of the data stream to improve identification of inefficiency in the data stream.
  Type: Application
  Filed: May 20, 2016
  Publication date: November 23, 2017
  Applicant: LinkedIn Corporation
  Inventors: John W. Nicol, Ritesh Maheshwari, Nicholas P. Baggott, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Patent number: 9823875
  Abstract: A system, method, and apparatus are provided for performing a transparent hybrid data storage scheme in which data are stored as blocks distributed among one or more flash-based storage devices (e.g., solid state drives) and one or more magnetic storage devices (e.g., magnetic disk drives). Files larger than a given size (e.g., 1 MB) are segmented into blocks of that size and stored on one or more devices; blocks of one file may be stored on devices of different types. Periodically, a utility function calculates utility values for each of some or all stored blocks based on frequency of access to the block, frequency of access of a particular type (e.g., random, sequential), a preference regarding where to store the block or the corresponding file, and/or other factors. Blocks having the highest utility values are subject to migration between devices of different types and/or the same type (e.g., for load-balancing).
  Type: Grant
  Filed: August 31, 2015
  Date of Patent: November 21, 2017
  Assignee: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Cuong H. Tran, Badrinath K. Sridharan
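A toy version of the utility-driven block placement described in this abstract: score each block from its access frequency, access pattern, and placement preference, then migrate the top-scoring blocks onto flash and demote cold blocks to magnetic storage. The field names, weights, and capacity figure are assumptions for illustration only, not the disclosed utility function.

```python
# Hypothetical per-block statistics; field names and weights are illustrative only.
blocks = [
    {"id": "f1.blk0", "device": "hdd", "accesses": 900, "random_frac": 0.9, "preference": 1.0},
    {"id": "f1.blk1", "device": "hdd", "accesses": 40,  "random_frac": 0.1, "preference": 0.0},
    {"id": "f2.blk0", "device": "ssd", "accesses": 10,  "random_frac": 0.2, "preference": 0.0},
]

def utility(block):
    """Combine access frequency, access pattern, and placement preference into one score.
    Random accesses benefit most from flash, so they are weighted more heavily."""
    return block["accesses"] * (1.0 + block["random_frac"]) + 50.0 * block["preference"]

# Periodically rank blocks and migrate the highest-utility ones onto the SSD tier.
SSD_CAPACITY_BLOCKS = 2
for i, block in enumerate(sorted(blocks, key=utility, reverse=True)):
    target = "ssd" if i < SSD_CAPACITY_BLOCKS else "hdd"
    if block["device"] != target:
        print(f"migrate {block['id']}: {block['device']} -> {target} (utility={utility(block):.0f})")
        block["device"] = target
```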
- Publication number: 20170270024
  Abstract: A system, apparatus, and methods are provided for managing on-demand profiling of one or more instances of a software application executing on a plurality of machines within one or more data centers. During operation, the system executes the one or more instances of the software application on the plurality of machines. Next, the system publishes, to a command channel, a command message that comprises a profiling request, wherein the profiling request specifies a subset of the machines. The system then receives, via a data channel, one or more data messages from the subset of the machines, wherein the data messages comprise data gathered by the subset of the machines in response to receiving the command message. Next, the system then evaluates the performance of the software application by aggregating and processing the data messages. Responsive to detecting an anomaly in the performance, the system then executes one or more remedies.
  Type: Application
  Filed: March 21, 2016
  Publication date: September 21, 2017
  Applicant: LinkedIn Corporation
  Inventors: John W. Nicol, Zhenyun Zhuang, Arman H. Boehm, Tao Feng, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Publication number: 20170235608
  Abstract: The disclosed embodiments provide a method, apparatus, and system for obtaining user ratings and/or feedback for a software application. During operation, for each of a plurality of jobs executed by a computing system component, wherein each job includes an execution of a corresponding job definition: the system retrieves metadata about the job from the computing system component and calculates an inefficiency metric for the job based on the metadata, wherein a higher inefficiency metric corresponds to a more inefficient job. Next, the system ranks the plurality of jobs based on each job's inefficiency metric and selects one or more top-ranked jobs from the ranking. The system then selects one or more job definitions corresponding to the one or more top-ranked jobs. Next, the system sends optimization requests to users associated with the selected job definitions.
  Type: Application
  Filed: February 16, 2016
  Publication date: August 17, 2017
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Christopher M. Coleman, Angela Andong Deng, Cuong H. Tran, Hans G. Granqvist, Haricharan K. Ramachandra, Badrinath K. Sridharan
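A small sketch of the ranking step described in this abstract: compute an inefficiency metric per job from scheduler metadata, rank the jobs, and notify the owners of the top-ranked job definitions. The metric used here (unused requested memory) and the metadata fields are assumptions, not the disclosed formula.

```python
# Hypothetical job metadata pulled from a cluster scheduler (resources requested vs. used).
jobs = [
    {"job_id": "etl-daily",   "definition": "etl_daily.conf",   "owner": "alice",
     "mem_requested_gb": 64, "mem_used_gb": 6},
    {"job_id": "rank-model",  "definition": "rank_model.conf",  "owner": "bob",
     "mem_requested_gb": 32, "mem_used_gb": 28},
    {"job_id": "log-compact", "definition": "log_compact.conf", "owner": "carol",
     "mem_requested_gb": 16, "mem_used_gb": 2},
]

def inefficiency(job):
    """Higher means more inefficient; here, the fraction of requested memory left unused."""
    return 1.0 - job["mem_used_gb"] / job["mem_requested_gb"]

TOP_N = 2
for job in sorted(jobs, key=inefficiency, reverse=True)[:TOP_N]:
    # In a real system this step would send an optimization request (email, ticket) to the owner.
    print(f"optimization request to {job['owner']}: tune {job['definition']} "
          f"(inefficiency={inefficiency(job):.0%})")
```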
- Publication number: 20170206015
  Abstract: The disclosed embodiments provide a system for analyzing data from a monitored system. During operation, the system identifies a difference between a performance of an application and a service-level agreement (SLA) of the application. Next, the system determines a correlation between the performance of the application and a disk input/output (I/O) performance of a data storage device used by the application. When the correlation exceeds a threshold, the system outputs a recommendation to migrate the application between the data storage device and a different type of data storage device.
  Type: Application
  Filed: January 14, 2016
  Publication date: July 20, 2017
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Publication number: 20170168955
  Abstract: The disclosed embodiments provide a system for processing data. During operation, the system obtains an attribute of a stack trace of a software program. Next, the system uses the attribute to select an address-translation instance from a set of address-translation instances for processing the stack trace. The system then provides the stack trace to the selected address-translation instance for use in translating a set of memory addresses in the stack trace into a set of symbols of instructions stored at the memory addresses.
  Type: Application
  Filed: December 10, 2015
  Publication date: June 15, 2017
  Applicant: LinkedIn Corporation
  Inventors: Arman H. Boehm, Anant R. Rao, Jui Ting Weng, Haricharan K. Ramachandra
- Publication number: 20170168726
  Abstract: The disclosed embodiments provide a system for detecting and managing memory inefficiency in a software program. During operation, the system obtains a first snapshot of a heap for a software program, wherein the first snapshot includes a first set of objects stored in the heap at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of objects, wherein each inefficiency metric in the first set of inefficiency metrics represents a memory inefficiency of an object in the heap at the first time. The system then outputs the first set of inefficiency metrics with additional attributes of the first set of objects to improve identification of the memory inefficiency in the software program.
  Type: Application
  Filed: December 15, 2015
  Publication date: June 15, 2017
  Applicant: LinkedIn Corporation
  Inventors: John W. Nicol, Cuong H. Tran, Haricharan K. Ramachandra, Badrinath K. Sridharan
- Publication number: 20170060472
  Abstract: A system, method, and apparatus are provided for performing a transparent hybrid data storage scheme in which data are stored as blocks distributed among one or more flash-based storage devices (e.g., solid state drives) and one or more magnetic storage devices (e.g., magnetic disk drives). Files larger than a given size (e.g., 1 MB) are segmented into blocks of that size and stored on one or more devices; blocks of one file may be stored on devices of different types. Periodically, a utility function calculates utility values for each of some or all stored blocks based on frequency of access to the block, frequency of access of a particular type (e.g., random, sequential), a preference regarding where to store the block or the corresponding file, and/or other factors. Blocks having the highest utility values are subject to migration between devices of different types and/or the same type (e.g., for load-balancing).
  Type: Application
  Filed: August 31, 2015
  Publication date: March 2, 2017
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Cuong H. Tran, Badrinath K. Sridharan
- Publication number: 20170039086
  Abstract: A system, method, and apparatus are provided for determining an appropriate time to disrupt operation of a computer system, subsystem, or component, such as by shutting it down or taking it offline. Historical measurements of work accumulated on the component at different times are used to generate one or more forecasts regarding future amounts of work that will accumulate at different times. Accumulated work may include all jobs/tasks (or other executable objects) that have been initiated but not yet completed at the time the measurement is taken, and may be expressed in terms of execution time and/or component resources (e.g., cpu, memory). When a request is received to disrupt component operations, based on an urgency of the disruption a corresponding accumulated work threshold is chosen to represent the maximum amount of accumulated work that can be in process and still allow the disruption, and the disruption is scheduled accordingly.
  Type: Application
  Filed: August 3, 2015
  Publication date: February 9, 2017
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Min Shen, Haricharan K. Ramachandra, Cuong H. Tran, Suja Viswesan, Badrinath K. Sridharan
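The scheduling idea in this abstract can be sketched as: forecast accumulated work per time slot from historical measurements, map the disruption's urgency to an accumulated-work threshold, and pick the earliest slot whose forecast falls under that threshold. The simple averaging forecast and the threshold values below are illustrative assumptions, not the disclosed method.

```python
# Hypothetical history: accumulated work (CPU-seconds of in-flight jobs) by hour of day.
history_by_hour = {
    0: [120, 100, 140], 3: [60, 70, 55], 6: [300, 320, 280],
    9: [900, 950, 870], 12: [700, 650, 720], 15: [800, 780, 820],
    18: [400, 380, 420], 21: [200, 210, 190],
}

# Forecast: average of past measurements for the same hour (a deliberately simple model).
forecast = {hour: sum(vals) / len(vals) for hour, vals in history_by_hour.items()}

# Urgency maps to a threshold on how much in-flight work we are willing to disrupt.
THRESHOLDS = {"low": 100, "medium": 400, "high": 1000}

def schedule_disruption(urgency: str):
    """Return the earliest hour whose forecast accumulated work is within the threshold."""
    limit = THRESHOLDS[urgency]
    candidates = [hour for hour, work in sorted(forecast.items()) if work <= limit]
    return candidates[0] if candidates else None

for urgency in ("low", "medium", "high"):
    print(f"{urgency:6s} urgency -> maintenance window at hour {schedule_disruption(urgency)}")
```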
- Patent number: 9535843
  Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from a page cache. The managed memory cache may be managed according to predefined caching rules that are separate from the caching rules in the operating system that are used to manage the page cache, and these caching rules may be application-aware. Subsequently, when data for an application is accessed, the computer system may prefetch the data and associated information from disk and store the information in the managed memory cache based on data correlations associated with the application.
  Type: Grant
  Filed: February 17, 2015
  Date of Patent: January 3, 2017
  Assignee: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
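A compact sketch of an application-layer managed cache of the kind described in this abstract: it keeps its own entries and eviction rule separate from the OS page cache and prefetches keys that the application knows are correlated with the one just read. The correlation map, the LRU policy, and the key names are assumptions for illustration, not the patented rules.

```python
from collections import OrderedDict

# Hypothetical correlation map: reading one key usually leads to reading these keys next.
CORRELATED_KEYS = {
    "profile:42": ["connections:42", "endorsements:42"],
    "feed:42": ["profile:42"],
}

def read_from_disk(key):
    """Stand-in for a disk read that would otherwise go through the OS page cache."""
    return f"<data for {key}>"

class ManagedCache:
    """Application-layer cache with its own eviction rule (LRU here) and
    application-aware prefetching, independent of the kernel page cache."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)                 # refresh LRU position
            return self.entries[key]
        value = read_from_disk(key)
        self._put(key, value)
        for related in CORRELATED_KEYS.get(key, []):      # application-aware prefetch
            if related not in self.entries:
                self._put(related, read_from_disk(related))
        return value

    def _put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)              # evict least recently used

cache = ManagedCache()
cache.get("profile:42")      # miss; also prefetches the correlated keys
print(list(cache.entries))   # ['profile:42', 'connections:42', 'endorsements:42']
```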
- Publication number: 20160283480
  Abstract: A system, method, and apparatus are provided for assigning or allocating multiple content objects, within a content page (e.g., web page) or other content collection (e.g., a set of pages), to different content delivery networks for delivery in response to a content request. The objects are ranked by importance (e.g., importance in rendering or presenting the page), and the networks are ranked by performance (e.g., throughput). In order of importance, the objects are assigned to the best-performing network that is "available." Some or all networks are initially available, and a given network becomes "unavailable" after it has been assigned its portion of the objects (e.g., based on content, number of objects, amount of data, percentage). If a total accumulated cost of delivering the objects exceeds a target before all objects have been allocated, the allocation process may terminate early and the remaining objects may be assigned to the least-expensive network.
  Type: Application
  Filed: March 26, 2015
  Publication date: September 29, 2016
  Applicant: LINKEDIN CORPORATION
  Inventors: Zhenyun Zhuang, Ritesh Maheshwari, Haricharan K. Ramachandra, Badrinath K. Sridharan
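The allocation loop in this abstract can be sketched as: walk the objects in order of importance, hand each one to the best-performing CDN that is still "available" (has quota left), and fall back to the least-expensive network once the accumulated cost exceeds the target. The throughput numbers, quotas, and cost target below are invented for illustration only.

```python
# Hypothetical inputs: page objects ranked by importance, CDNs ranked by throughput.
objects = [
    {"name": "hero.jpg",   "importance": 10, "bytes": 400_000},
    {"name": "app.js",     "importance": 9,  "bytes": 250_000},
    {"name": "styles.css", "importance": 8,  "bytes": 80_000},
    {"name": "footer.png", "importance": 2,  "bytes": 120_000},
]
cdns = [
    {"name": "cdn-fast",  "throughput_mbps": 95, "quota_bytes": 500_000, "assigned": 0, "cost_per_mb": 0.08},
    {"name": "cdn-mid",   "throughput_mbps": 70, "quota_bytes": 400_000, "assigned": 0, "cost_per_mb": 0.05},
    {"name": "cdn-cheap", "throughput_mbps": 40, "quota_bytes": 10**9,   "assigned": 0, "cost_per_mb": 0.02},
]

COST_TARGET = 0.10  # assumed delivery budget for this page
total_cost = 0.0
assignment = {}

for obj in sorted(objects, key=lambda o: o["importance"], reverse=True):
    if total_cost > COST_TARGET:
        # Budget exhausted: remaining objects all go to the least-expensive network.
        assignment[obj["name"]] = min(cdns, key=lambda c: c["cost_per_mb"])["name"]
        continue
    # Best-performing CDN that still has quota ("available") takes the next object.
    available = [c for c in cdns if c["assigned"] + obj["bytes"] <= c["quota_bytes"]]
    chosen = max(available, key=lambda c: c["throughput_mbps"])
    chosen["assigned"] += obj["bytes"]
    total_cost += chosen["cost_per_mb"] * obj["bytes"] / 1_000_000
    assignment[obj["name"]] = chosen["name"]

print(assignment)
```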
- Publication number: 20160239423
  Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from a page cache. The managed memory cache may be managed according to predefined caching rules that are separate from the caching rules in the operating system that are used to manage the page cache, and these caching rules may be application-aware. Subsequently, when data for an application is accessed, the computer system may prefetch the data and associated information from disk and store the information in the managed memory cache based on data correlations associated with the application.
  Type: Application
  Filed: February 17, 2015
  Publication date: August 18, 2016
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
- Publication number: 20160239432
  Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from an operating system's default page cache. The managed memory cache may be managed according to predefined caching rules that are separate from rules used to manage the page cache. Moreover, at least one of the data entries in the managed memory cache may have a page size that is smaller than a minimum page size of the page cache. Furthermore, at least some of the data entries in the managed memory cache may have different page sizes and, more generally, different associated predefined caching rules.
  Type: Application
  Filed: February 17, 2015
  Publication date: August 18, 2016
  Applicant: LinkedIn Corporation
  Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
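A minimal sketch of the per-entry caching rules this abstract describes, with page sizes smaller than the kernel's: each entry type carries its own page size and lifetime, so its footprint is rounded to its own granularity rather than the OS page size. The rule table, TTLs, and key names are assumptions for illustration, not the disclosed rules.

```python
import time

OS_PAGE_SIZE = 4096  # typical kernel page-cache granularity

# Hypothetical per-entry rules: each cached item type has its own page size and TTL,
# unlike the one-size-fits-all kernel page cache.
CACHE_RULES = {
    "session-token": {"page_size": 256,  "ttl_s": 60},
    "user-row":      {"page_size": 1024, "ttl_s": 600},
    "thumbnail":     {"page_size": 8192, "ttl_s": 3600},
}

class VariablePageCache:
    def __init__(self):
        self.entries = {}

    def put(self, kind, key, payload: bytes):
        rule = CACHE_RULES[kind]
        # Round the entry's footprint up to its own page size, not the OS page size.
        pages = -(-len(payload) // rule["page_size"])
        self.entries[key] = {
            "payload": payload,
            "footprint": pages * rule["page_size"],
            "expires_at": time.time() + rule["ttl_s"],
        }

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None or time.time() > entry["expires_at"]:
            return None  # missing or expired under this entry's own rule
        return entry["payload"]

cache = VariablePageCache()
cache.put("session-token", "sess:abc", b"x" * 300)
print(cache.entries["sess:abc"]["footprint"])  # 512: two 256-byte pages, well under 4096
```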