Patents by Inventor Cuong H. Tran
Cuong H. Tran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10191657
Abstract: The disclosed embodiments provide a system for detecting and managing memory inefficiency in a software program. During operation, the system obtains a first snapshot of a heap for a software program, wherein the first snapshot includes a first set of objects stored in the heap at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of objects, wherein each inefficiency metric in the first set of inefficiency metrics represents a memory inefficiency of an object in the heap at the first time. The system then outputs the first set of inefficiency metrics with additional attributes of the first set of objects to improve identification of the memory inefficiency in the software program.
Type: Grant
Filed: December 15, 2015
Date of Patent: January 29, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: John W. Nicol, Cuong H. Tran, Haricharan K. Ramachandra, Badrinath K. Sridharan
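The compression-based metric described in this abstract can be illustrated with a short sketch. Everything below is hypothetical scaffolding, not the patented implementation: zlib stands in for the unnamed compression technique, and the byte strings stand in for serialized heap objects. The intuition is that a highly compressible object carries redundant content and therefore wastes memory.

```python
import zlib

def inefficiency_metric(obj_bytes: bytes) -> float:
    """Ratio of raw size to compressed size: higher values suggest
    more redundancy in the object's in-memory representation."""
    if not obj_bytes:
        return 0.0
    return len(obj_bytes) / len(zlib.compress(obj_bytes))

# Hypothetical heap snapshot: object id -> serialized contents.
snapshot = {
    "obj_a": b"x" * 1024,            # highly redundant payload
    "obj_b": bytes(range(256)) * 4,  # far less redundant
}
metrics = {oid: inefficiency_metric(data) for oid, data in snapshot.items()}
# Report the most inefficient objects first, alongside their attributes.
ranked = sorted(metrics, key=metrics.get, reverse=True)
```

In this toy snapshot, `obj_a` compresses almost completely and so ranks as the most memory-inefficient object.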
-
Patent number: 9823875
Abstract: A system, method, and apparatus are provided for performing a transparent hybrid data storage scheme in which data are stored as blocks distributed among one or more flash-based storage devices (e.g., solid state drives) and one or more magnetic storage devices (e.g., magnetic disk drives). Files larger than a given size (e.g., 1 MB) are segmented into blocks of that size and stored on one or more devices; blocks of one file may be stored on devices of different types. Periodically, a utility function calculates utility values for each of some or all stored blocks based on frequency of access to the block, frequency of access of a particular type (e.g., random, sequential), a preference regarding where to store the block or the corresponding file, and/or other factors. Blocks having the highest utility values are subject to migration between devices of different types and/or the same type (e.g., for load-balancing).
Type: Grant
Filed: August 31, 2015
Date of Patent: November 21, 2017
Assignee: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Cuong H. Tran, Badrinath K. Sridharan
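A minimal sketch of the utility-function idea from this abstract. The input fields and weights are invented for illustration; the patent does not disclose a specific formula. The sketch captures the stated inputs: access frequency, access type (random vs. sequential), and a placement preference.

```python
def block_utility(access_count: int, random_ratio: float, prefer_flash: bool,
                  w_access: float = 1.0, w_random: float = 2.0,
                  w_pref: float = 50.0) -> float:
    """Toy utility score: hot blocks with many random accesses and a
    flash placement preference score highest."""
    score = w_access * access_count
    score += w_random * random_ratio * access_count  # random I/O favors flash
    if prefer_flash:
        score += w_pref
    return score

# Hypothetical per-block access statistics.
blocks = {
    "blk1": block_utility(access_count=100, random_ratio=0.9, prefer_flash=True),
    "blk2": block_utility(access_count=100, random_ratio=0.1, prefer_flash=False),
    "blk3": block_utility(access_count=5, random_ratio=0.5, prefer_flash=False),
}
# Blocks with the highest utility are candidates for migration to flash.
migration_order = sorted(blocks, key=blocks.get, reverse=True)
```

A real system would recompute these scores periodically and cap migrations per cycle to bound migration I/O.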
-
Publication number: 20170235608
Abstract: The disclosed embodiments provide a method, apparatus, and system for detecting and optimizing inefficient jobs executed by a computing system. During operation, for each of a plurality of jobs executed by a computing system component, wherein each job includes an execution of a corresponding job definition: the system retrieves metadata about the job from the computing system component and calculates an inefficiency metric for the job based on the metadata, wherein a higher inefficiency metric corresponds to a more inefficient job. Next, the system ranks the plurality of jobs based on each job's inefficiency metric and selects one or more top-ranked jobs from the ranking. The system then selects one or more job definitions corresponding to the one or more top-ranked jobs. Next, the system sends optimization requests to users associated with the selected job definitions.
Type: Application
Filed: February 16, 2016
Publication date: August 17, 2017
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Christopher M. Coleman, Angela Andong Deng, Cuong H. Tran, Hans G. Granqvist, Haricharan K. Ramachandra, Badrinath K. Sridharan
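The rank-and-notify flow in this abstract can be sketched briefly. The metadata fields and the specific metric (reserved versus actually used resource-seconds) are assumptions for illustration; the publication leaves the metric's definition open.

```python
def job_inefficiency(meta: dict) -> float:
    """Hypothetical metric: resource-seconds reserved divided by
    resource-seconds actually used (values above 1 mean over-provisioning)."""
    reserved = meta["reserved_cpu"] * meta["runtime_s"]
    used = max(meta["used_cpu"] * meta["runtime_s"], 1e-9)
    return reserved / used

# Hypothetical job metadata retrieved from the computing system component.
jobs = {
    "etl_daily":  {"reserved_cpu": 16, "used_cpu": 2, "runtime_s": 3600},
    "report_gen": {"reserved_cpu": 4,  "used_cpu": 3, "runtime_s": 1800},
}
ranking = sorted(jobs, key=lambda j: job_inefficiency(jobs[j]), reverse=True)
# Optimization requests would go to the owners of the top-ranked definitions.
```

Here `etl_daily` reserves eight times the CPU it uses, so it tops the ranking and its owner would receive the optimization request.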
-
Publication number: 20170168726
Abstract: The disclosed embodiments provide a system for detecting and managing memory inefficiency in a software program. During operation, the system obtains a first snapshot of a heap for a software program, wherein the first snapshot includes a first set of objects stored in the heap at a first time. Next, the system applies a compression technique to the first snapshot to obtain a first set of inefficiency metrics for the first set of objects, wherein each inefficiency metric in the first set of inefficiency metrics represents a memory inefficiency of an object in the heap at the first time. The system then outputs the first set of inefficiency metrics with additional attributes of the first set of objects to improve identification of the memory inefficiency in the software program.
Type: Application
Filed: December 15, 2015
Publication date: June 15, 2017
Applicant: LinkedIn Corporation
Inventors: John W. Nicol, Cuong H. Tran, Haricharan K. Ramachandra, Badrinath K. Sridharan
-
Publication number: 20170060472
Abstract: A system, method, and apparatus are provided for performing a transparent hybrid data storage scheme in which data are stored as blocks distributed among one or more flash-based storage devices (e.g., solid state drives) and one or more magnetic storage devices (e.g., magnetic disk drives). Files larger than a given size (e.g., 1 MB) are segmented into blocks of that size and stored on one or more devices; blocks of one file may be stored on devices of different types. Periodically, a utility function calculates utility values for each of some or all stored blocks based on frequency of access to the block, frequency of access of a particular type (e.g., random, sequential), a preference regarding where to store the block or the corresponding file, and/or other factors. Blocks having the highest utility values are subject to migration between devices of different types and/or the same type (e.g., for load-balancing).
Type: Application
Filed: August 31, 2015
Publication date: March 2, 2017
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Sergiy Zhuk, Haricharan K. Ramachandra, Cuong H. Tran, Badrinath K. Sridharan
-
Publication number: 20170039086
Abstract: A system, method, and apparatus are provided for determining an appropriate time to disrupt operation of a computer system, subsystem, or component, such as by shutting it down or taking it offline. Historical measurements of work accumulated on the component at different times are used to generate one or more forecasts regarding future amounts of work that will accumulate at different times. Accumulated work may include all jobs/tasks (or other executable objects) that have been initiated but not yet completed at the time the measurement is taken, and may be expressed in terms of execution time and/or component resources (e.g., CPU, memory). When a request is received to disrupt component operations, based on an urgency of the disruption a corresponding accumulated work threshold is chosen to represent the maximum amount of accumulated work that can be in process and still allow the disruption, and the disruption is scheduled accordingly.
Type: Application
Filed: August 3, 2015
Publication date: February 9, 2017
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Min Shen, Haricharan K. Ramachandra, Cuong H. Tran, Suja Viswesan, Badrinath K. Sridharan
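The urgency-to-threshold mapping described above can be sketched as follows. The threshold values and time slots are invented for illustration; the publication does not specify concrete numbers.

```python
# Hypothetical per-urgency thresholds, in aggregate execution-seconds of
# accumulated (started but unfinished) work.
THRESHOLDS = {"low": 10, "medium": 50, "high": 200}

def can_disrupt(accumulated_work: float, urgency: str) -> bool:
    """A more urgent disruption tolerates more in-flight work."""
    return accumulated_work <= THRESHOLDS[urgency]

def schedule_disruption(forecast, urgency):
    """Return the earliest forecast slot whose predicted accumulated
    work falls under the urgency's threshold, or None if no slot does."""
    for slot, work in forecast:
        if can_disrupt(work, urgency):
            return slot
    return None

# Hypothetical forecast of accumulated work at candidate times.
forecast = [("02:00", 120), ("03:00", 40), ("04:00", 8)]
```

With these numbers, a high-urgency shutdown proceeds at 02:00, a medium-urgency one waits for 03:00, and a low-urgency one waits for the 04:00 lull.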
-
Patent number: 9535843
Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from a page cache. The managed memory cache may be managed according to predefined caching rules that are separate from the caching rules in the operating system that are used to manage the page cache, and these caching rules may be application-aware. Subsequently, when data for an application is accessed, the computer system may prefetch the data and associated information from disk and store the information in the managed memory cache based on data correlations associated with the application.
Type: Grant
Filed: February 17, 2015
Date of Patent: January 3, 2017
Assignee: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
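A minimal sketch of an application-layer cache with correlation-driven prefetch, in the spirit of this abstract. The LRU policy, the correlation map, and the backing dict (standing in for disk) are all assumptions, not the patented design.

```python
from collections import OrderedDict

class ManagedCache:
    """Application-layer cache sketch: LRU eviction plus prefetching of
    correlated keys. The correlation map stands in for application-aware
    access patterns; the backing dict stands in for disk."""

    def __init__(self, capacity, correlations, backing_store):
        self.capacity = capacity
        self.correlations = correlations  # key -> keys often read next
        self.backing_store = backing_store
        self.cache = OrderedDict()

    def _load(self, key):
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = self.backing_store[key]

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh LRU position
            return self.cache[key]
        self._load(key)
        # Prefetch correlated data while there is spare capacity.
        for related in self.correlations.get(key, []):
            if related not in self.cache and len(self.cache) < self.capacity:
                self._load(related)
        return self.cache[key]

disk = {"profile:1": "alice", "prefs:1": "dark-mode", "profile:2": "bob"}
cache = ManagedCache(capacity=2,
                     correlations={"profile:1": ["prefs:1"]},
                     backing_store=disk)
value = cache.get("profile:1")  # also prefetches the correlated prefs
```

After the first `get`, the correlated `prefs:1` entry is already resident, so the follow-on read hits memory instead of disk.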
-
Publication number: 20160239432
Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from the operating system's default page cache. The managed memory cache may be managed according to predefined caching rules that are separate from rules used to manage the page cache. Moreover, at least one of the data entries in the managed memory cache may have a page size that is smaller than a minimum page size of the page cache. Furthermore, at least some of the data entries in the managed memory cache may have different page sizes and, more generally, different associated predefined caching rules.
Type: Application
Filed: February 17, 2015
Publication date: August 18, 2016
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
-
Publication number: 20160239423
Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from a page cache. The managed memory cache may be managed according to predefined caching rules that are separate from the caching rules in the operating system that are used to manage the page cache, and these caching rules may be application-aware. Subsequently, when data for an application is accessed, the computer system may prefetch the data and associated information from disk and store the information in the managed memory cache based on data correlations associated with the application.
Type: Application
Filed: February 17, 2015
Publication date: August 18, 2016
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
-
Publication number: 20160210341
Abstract: A system, method, and apparatus are provided for performing capacity planning within a system that experiences high volumes of data having high velocity and high variability. Based on historical traffic, a forecast is generated for one or more relatively coarse time periods (e.g., weeks, days), and is decomposed to yield finer-grained forecasts (e.g., for hours, minutes) by applying a distribution index also generated from historical traffic. Estimated replication latency for the forecast period can be calculated from the traffic forecast and an expected level of replication capacity. Further, a required amount of replication capacity can be determined based on a traffic forecast and a maximum replication latency permitted by a service level agreement (SLA) of an event consumer. In addition, replication headroom can be computed, to identify a maximum level of traffic that can be sustained without violating an SLA and/or a date/time at which a violation may occur.
Type: Application
Filed: January 28, 2015
Publication date: July 21, 2016
Applicant: LinkedIn Corporation
Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Cuong H. Tran, Subbu Subramaniam, Chavdar Botev, Chaoyue Xiong, Badrinath K. Sridharan
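The decomposition and latency-estimation steps can be sketched with a toy model. The distribution index, the three-slot "day", and the simple carry-over backlog model are all assumptions for illustration; the publication does not disclose its latency formula.

```python
def decompose_forecast(coarse_total: float, distribution_index):
    """Split a coarse (e.g., daily) traffic forecast into finer slots
    using historical traffic fractions that sum to 1."""
    return [coarse_total * fraction for fraction in distribution_index]

def replication_latency(slot_events, capacity_per_slot: float):
    """Backlog model: events beyond a slot's replication capacity carry
    over, and latency is the backlog expressed in slots of capacity."""
    backlog, latencies = 0.0, []
    for events in slot_events:
        backlog = max(0.0, backlog + events - capacity_per_slot)
        latencies.append(backlog / capacity_per_slot)
    return latencies

# Toy three-slot "day": half the traffic lands in the first slot.
hourly = decompose_forecast(3000, [0.5, 0.3, 0.2])
latencies = replication_latency(hourly, capacity_per_slot=1000)
```

Under this model, comparing the peak latency against an SLA bound yields the required capacity, and the largest `coarse_total` that keeps every slot's latency within the bound is the replication headroom.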
-
Patent number: 7743381
Abstract: A computer accessible medium may be encoded with instructions which, when executed: replicate a checkpoint segment from a first local storage of a first node to at least one other node; and load a copy of the checkpoint segment from the other node to a second local storage of a second node. The checkpoint segment is stored into the first local storage by an application, and comprises a state of the application. The copy of the checkpoint segment is loaded into a second local storage responsive to a request from the second node to load the copy. The second node is to execute the application. In some embodiments, the copy of the checkpoint segment may also be loaded into a global storage.
Type: Grant
Filed: September 16, 2003
Date of Patent: June 22, 2010
Assignee: Symantec Operating Corporation
Inventor: Cuong H. Tran
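The checkpoint/replicate/failover flow in this abstract can be sketched as follows. The `Node` class, dict-based "local storage", and function names are illustrative stand-ins, not the patented mechanism.

```python
class Node:
    """A node with its own local storage (a dict standing in for disk)."""
    def __init__(self, name):
        self.name = name
        self.local_storage = {}

def checkpoint(app_node, segment_id, state, replicas):
    """Store the application's state locally, then replicate the
    checkpoint segment to the replica nodes."""
    app_node.local_storage[segment_id] = dict(state)
    for node in replicas:
        node.local_storage[segment_id] = dict(state)

def failover(segment_id, replicas, new_node):
    """Load a replicated copy onto the node that will resume the app."""
    for node in replicas:
        if segment_id in node.local_storage:
            copy = dict(node.local_storage[segment_id])
            new_node.local_storage[segment_id] = copy
            return copy
    raise KeyError(f"no replica holds segment {segment_id}")

primary, replica, standby = Node("n1"), Node("n2"), Node("n3")
checkpoint(primary, "seg0", {"processed": 42}, replicas=[replica])
restored = failover("seg0", replicas=[replica], new_node=standby)
```

Because the segment was replicated at checkpoint time, the standby node can resume the application from the last saved state even if the primary is lost.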