Patents by Inventor Charles D. Garrett

Charles D. Garrett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8726255
    Abstract: Executable code may be recompiled so that generic portions of code may be replaced with specific portions of code. The recompilation may customize executable code for a specific use or configuration, making the code lightweight and faster to execute. The replacement mechanism may replace variable names with fixed values, replace conditional branches with only those branches which are known to be executed, and may eliminate executable code portions that are not executed. The replacement mechanism may comprise identifying known values defined in the executable code for variables, and replacing those variables with constant values. Once the constants are substituted, the code may be analyzed to identify branches that may be evaluated using the constant values. Those branches may be reformed using the constant values, and any conditional code that can no longer be reached may be removed.
    Type: Grant
    Filed: May 1, 2012
    Date of Patent: May 13, 2014
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Charles D. Garrett
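    Example: a minimal sketch of the constant-substitution and branch-pruning steps described in the abstract above, written with Python's standard ast module; the known-value table, variable name, and function names are invented for illustration and are not taken from the patent.
      import ast

      KNOWN_VALUES = {"debug_mode": False}  # values assumed fixed for this configuration

      class Specialize(ast.NodeTransformer):
          """Replace known variables with constants, then prune branches that cannot run."""

          def visit_Name(self, node):
              if isinstance(node.ctx, ast.Load) and node.id in KNOWN_VALUES:
                  return ast.copy_location(ast.Constant(KNOWN_VALUES[node.id]), node)
              return node

          def visit_If(self, node):
              self.generic_visit(node)            # substitute constants inside the test first
              if isinstance(node.test, ast.Constant):
                  branch = node.body if node.test.value else node.orelse
                  return branch or ast.Pass()     # keep only the branch that can execute
              return node

      source = "if debug_mode:\n    log_verbose()\nelse:\n    run_fast_path()"
      tree = Specialize().visit(ast.parse(source))
      ast.fix_missing_locations(tree)
      print(ast.unparse(tree))                    # prints: run_fast_path()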
  • Patent number: 8707326
    Abstract: Processes in a message passing system may be unblocked when incoming messages have data patterns that match the data patterns of a function on a receiving process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue.
    Type: Grant
    Filed: July 17, 2012
    Date of Patent: April 22, 2014
    Assignee: Concurix Corporation
    Inventor: Charles D. Garrett
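    Example: a small simulation, not taken from the patent, of the queue behavior the abstract describes: a blocked process stays in an idle queue until a delivered message matches its expected data pattern, at which point it is moved to the front of the runnable queue. The class and field names are invented for illustration.
      from collections import deque

      class Process:
          def __init__(self, name, pattern):
              self.name = name
              self.pattern = pattern      # keys/values a message must contain to unblock us

          def matches(self, message):
              return all(message.get(k) == v for k, v in self.pattern.items())

      class Scheduler:
          def __init__(self):
              self.idle = []              # blocked processes waiting on a message pattern
              self.runnable = deque()     # processes eligible to run

          def block_on_receive(self, process):
              self.idle.append(process)

          def deliver(self, message):
              # A matching message moves the process to the top of the runnable queue;
              # processes whose patterns do not match simply remain idle.
              for process in list(self.idle):
                  if process.matches(message):
                      self.idle.remove(process)
                      self.runnable.appendleft(process)

      sched = Scheduler()
      sched.block_on_receive(Process("worker-1", {"topic": "orders"}))
      sched.block_on_receive(Process("worker-2", {"topic": "billing"}))
      sched.deliver({"topic": "orders", "id": 42})
      print([p.name for p in sched.runnable])    # ['worker-1']; worker-2 stays blocked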
  • Patent number: 8694574
    Abstract: A set of optimizations may be defined in a configuration database. The configuration database may be defined with a set of boundaries that may define conditions under which the optimizations may be valid. When the conditions are not met, a new configuration database may be requested from an optimization server. The system may be used to distribute and manage optimizations for an application, which may be deployed in interpreted or runtime scenarios or in pre-execution or compiled scenarios.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: April 8, 2014
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett, Michael D. Noakes
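    Example: a hypothetical sketch of a configuration database with boundary conditions; when observed conditions fall outside the boundaries, a new configuration is requested from an optimization server (stubbed out here). All keys and values are invented for illustration.
      config = {
          "optimizations": {"inline_small_functions": True, "pool_size": 64},
          # Boundaries describe the conditions under which these settings remain valid.
          "boundaries": {"max_heap_mb": 512, "cpu_count": 4},
      }

      def boundaries_hold(observed, boundaries):
          return (observed["heap_mb"] <= boundaries["max_heap_mb"]
                  and observed["cpu_count"] == boundaries["cpu_count"])

      def fetch_new_configuration(observed):
          # Stand-in for a request to an optimization server; a real system would send
          # the observed conditions and receive settings tuned for them.
          return {"optimizations": {"inline_small_functions": False, "pool_size": 16},
                  "boundaries": {"max_heap_mb": observed["heap_mb"],
                                 "cpu_count": observed["cpu_count"]}}

      observed = {"heap_mb": 900, "cpu_count": 4}          # the heap grew past the boundary
      if not boundaries_hold(observed, config["boundaries"]):
          config = fetch_new_configuration(observed)
      print(config["optimizations"])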
  • Patent number: 8656134
    Abstract: A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed at runtime. An execution environment may capture a memory allocation boundary, look up the boundary in a configuration file, and apply the settings when the settings are available. When the settings are not available, a default set of settings may be used. The execution environment may deploy the optimized settings without modifying the executing code.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: February 18, 2014
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett, Michael D. Noakes
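    Example: a rough sketch of the runtime lookup the abstract describes: an allocation boundary is looked up in a configuration file and tuned settings are applied when present, with defaults otherwise. The boundary names and settings are invented for illustration.
      DEFAULT_SETTINGS = {"initial_kb": 64, "growth": "double", "gc": "generational"}

      # Hypothetical configuration file content, keyed by allocation boundary
      # (for example, the function or module performing the allocation).
      CONFIG = {
          "image_decoder.decode": {"initial_kb": 4096, "growth": "fixed", "gc": "none"},
      }

      def allocate(boundary_name, request_kb):
          # Apply tuned settings when the boundary is listed; otherwise fall back to defaults.
          settings = CONFIG.get(boundary_name, DEFAULT_SETTINGS)
          size_kb = max(request_kb, settings["initial_kb"])
          print(f"{boundary_name}: {size_kb} KB, growth={settings['growth']}, gc={settings['gc']}")
          return bytearray(size_kb * 1024)

      allocate("image_decoder.decode", 16)   # tuned settings from the configuration file
      allocate("parser.tokenize", 16)        # unlisted boundary: default settings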
  • Patent number: 8656135
    Abstract: A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed prior to runtime. A compiler or other pre-execution system may detect a memory allocation boundary and decorate the code. During execution, the decorated code may be used to look up memory allocation and management settings from a database or to deploy optimized settings that may be embedded in the decorations.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: February 18, 2014
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett, Michael D. Noakes
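    Example: in Python, a decorator can stand in for the decoration a compiler or other pre-execution pass would attach to an allocation boundary; the settings may be embedded in the decoration itself or looked up from a settings database. This sketch and its names are illustrative, not from the patent.
      import functools

      SETTINGS_DB = {"render_frame": {"arena_kb": 2048, "gc": "deferred"}}   # hypothetical store

      def memory_settings(**embedded):
          """Decoration attached before execution to a memory allocation boundary."""
          def wrap(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  # Use settings embedded in the decoration, else look them up by name.
                  settings = embedded or SETTINGS_DB.get(func.__name__, {})
                  print(f"{func.__name__} using settings {settings}")
                  return func(*args, **kwargs)
              return wrapper
          return wrap

      @memory_settings()                          # settings resolved from the database
      def render_frame():
          return "frame"

      @memory_settings(arena_kb=128, gc="none")   # settings embedded in the decoration
      def parse_header():
          return "header"

      render_frame()
      parse_header()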
  • Patent number: 8656378
    Abstract: Memoization may be deployed using a configuration file or database that identifies functions to memoize, and in some cases, includes input and result values for those functions. At compile time, functions defined in the configuration file may be captured and memoized. During compilation or other pre-execution analysis, the executable code may be modified or otherwise decorated to include memoization code. The memoization code may store results from a function during the first execution, then merely look up the results when the function is called again. The memoized value may be stored in the configuration file or in another data store. In some embodiments, the modified executable code may operate in conjunction with an execution environment, where the execution environment may optionally perform the memoization.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: February 18, 2014
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett, Michael D. Noakes
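    Example: a minimal sketch of configuration-driven memoization: only functions named in a configuration file are wrapped, the configuration may seed the cache with known input and result values, and later calls look results up instead of recomputing them. The configuration content and function names are invented for illustration.
      import functools, json, math

      # Hypothetical configuration naming the functions to memoize and seeding results.
      CONFIG = json.loads('{"memoize": ["slow_distance"], "seed": {"slow_distance": {"(3, 4)": 5.0}}}')

      def maybe_memoize(func):
          if func.__name__ not in CONFIG["memoize"]:
              return func                              # not listed: leave the function untouched
          cache = dict(CONFIG["seed"].get(func.__name__, {}))

          @functools.wraps(func)
          def wrapper(*args):
              key = repr(args)
              if key not in cache:                     # first call: compute and store the result
                  cache[key] = func(*args)
              return cache[key]                        # later calls: look the result up
          return wrapper

      @maybe_memoize
      def slow_distance(x, y):
          return math.sqrt(x * x + y * y)

      print(slow_distance(3, 4))   # served from the seeded cache
      print(slow_distance(6, 8))   # computed once, then cached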
  • Publication number: 20140026142
    Abstract: A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface; a process scheduler may then cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices.
    Type: Application
    Filed: September 20, 2013
    Publication date: January 23, 2014
    Applicant: Concurix Corporation
    Inventors: Alexander G. Gounares, Charles D. Garrett
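    Example: a sketch of traversing a schedule graph from a network-interface node to find the pipeline of executable elements triggered by incoming data; a process scheduler could then dispatch these elements to available processors. The graph itself is invented for illustration.
      from collections import deque

      # Hypothetical schedule graph: edges point from a producer to the elements
      # that consume its output.
      SCHEDULE_GRAPH = {
          "net_rx": ["parse_packet"],
          "parse_packet": ["decode_payload"],
          "decode_payload": ["update_session", "write_log"],
          "update_session": [],
          "write_log": [],
          "unrelated_task": [],
      }

      def pipeline_from(source):
          """Breadth-first traversal collecting every element reachable from the source."""
          order, seen, queue = [], {source}, deque([source])
          while queue:
              element = queue.popleft()
              order.append(element)
              for consumer in SCHEDULE_GRAPH[element]:
                  if consumer not in seen:
                      seen.add(consumer)
                      queue.append(consumer)
          return order

      # Elements triggered by data arriving on the network interface, in dispatch order.
      print(pipeline_from("net_rx"))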
  • Publication number: 20140013311
    Abstract: A bottleneck detector may use an iterative method to identify a bottleneck with specificity. An automated checkpoint inserter may place checkpoints in an application. When a bottleneck is detected in an area of an application, the first set of checkpoints may be removed and a new set of checkpoints may be placed in the area of the bottleneck. The process may iterate until a bottleneck may be identified with enough specificity to aid a developer or administrator of an application. In some cases, the process may identify a specific function or line of code where a bottleneck occurs.
    Type: Application
    Filed: April 18, 2013
    Publication date: January 9, 2014
    Inventors: Charles D. Garrett, Christopher W. Fraser
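    Example: a toy illustration of the iterative narrowing the abstract describes: coarse checkpoints (here, timed boundaries) locate the slow phase, and a second pass places finer checkpoints inside only that phase. The phases and delays are invented for illustration.
      import time

      def run_with_checkpoints(steps):
          """Stand-in application: each named step sleeps, and each boundary is timed."""
          timings, last = [], time.perf_counter()
          for name, delay in steps:
              time.sleep(delay)
              now = time.perf_counter()
              timings.append((now - last, name))    # duration of the interval ending here
              last = now
          return max(timings)[1]                    # name of the slowest interval

      # Iteration 1: coarse checkpoints around two large phases.
      coarse = [("phase_a", 0.01), ("phase_b", 0.05)]
      print("coarse bottleneck:", run_with_checkpoints(coarse))       # phase_b

      # Iteration 2: new checkpoints placed only inside the slow phase.
      fine = [("phase_b.load", 0.01), ("phase_b.transform", 0.03), ("phase_b.save", 0.01)]
      print("refined bottleneck:", run_with_checkpoints(fine))        # phase_b.transform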
  • Patent number: 8607018
    Abstract: A computer software execution system may have a configurable memory allocation and management system. A configuration file or other definition may be created by analyzing a running application and determining an optimized set of settings for the application on the fly. The settings may include memory allocated to individual processes, memory allocation and deallocation schemes, garbage collection policies, and other settings. The optimization analysis may be performed offline from the execution system. The execution environment may capture processes during creation, then allocate memory and configure memory management settings for each individual process.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: December 10, 2013
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett, Michael D. Noakes
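    Example: a sketch of capturing processes at creation and applying per-process memory settings produced by offline analysis, with defaults when no profile exists. The process names and settings are invented for illustration.
      # Hypothetical output of an offline analysis run, keyed by process name.
      OFFLINE_PROFILE = {
          "cache_server": {"heap_mb": 2048, "gc_policy": "low_pause"},
          "batch_job": {"heap_mb": 256, "gc_policy": "throughput"},
      }
      DEFAULTS = {"heap_mb": 512, "gc_policy": "generational"}

      class ManagedProcess:
          def __init__(self, name):
              # The execution environment captures the process at creation and
              # configures its memory management from the profile, if any.
              self.name = name
              self.memory_settings = OFFLINE_PROFILE.get(name, DEFAULTS)

      for name in ("cache_server", "batch_job", "ad_hoc_script"):
          process = ManagedProcess(name)
          print(process.name, process.memory_settings)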
  • Patent number: 8595743
    Abstract: A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface; a process scheduler may then cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices.
    Type: Grant
    Filed: May 1, 2012
    Date of Patent: November 26, 2013
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Charles D. Garrett
  • Publication number: 20130298112
    Abstract: An operating system may be configured using a control flow graph that defines relationships between each executable module. The operating system may be configured by analyzing an application and identifying the operating system modules called from the application, then building a control flow graph for the configuration. The operating system may be deployed to a server or other computer containing only those components identified in the control flow graph. Such a lightweight deployment may be used on a large scale for datacenter servers as well as for small scale deployments on sensors and other devices with little processing power.
    Type: Application
    Filed: June 19, 2013
    Publication date: November 7, 2013
    Inventors: Alexander G. Gounares, Charles D. Garrett
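    Example: a sketch of building the deployment set by walking a control flow graph from the operating system modules an application actually calls; only reachable modules are included. The module graph is invented for illustration.
      # Hypothetical dependency graph between operating system modules.
      OS_MODULES = {
          "syscall_io": ["vfs"],
          "vfs": ["block_driver"],
          "block_driver": [],
          "syscall_net": ["tcp_stack"],
          "tcp_stack": ["nic_driver"],
          "nic_driver": [],
          "sound": ["audio_driver"],
          "audio_driver": [],
      }

      def modules_to_deploy(app_calls):
          """Collect every module reachable from the calls the application makes."""
          needed, stack = set(), list(app_calls)
          while stack:
              module = stack.pop()
              if module not in needed:
                  needed.add(module)
                  stack.extend(OS_MODULES[module])
          return sorted(needed)

      # An application that only performs file I/O never pulls in networking or sound.
      print(modules_to_deploy(["syscall_io"]))   # ['block_driver', 'syscall_io', 'vfs']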
  • Patent number: 8495598
    Abstract: An operating system may be configured using a control flow graph that defines relationships between each executable module. The operating system may be configured by analyzing an application and identifying the operating system modules called from the application, then building a control flow graph for the configuration. The operating system may be deployed to a server or other computer containing only those components identified in the control flow graph. Such a lightweight deployment may be used on a large scale for datacenter servers as well as for small scale deployments on sensors and other devices with little processing power.
    Type: Grant
    Filed: May 1, 2012
    Date of Patent: July 23, 2013
    Assignee: Concurix Corporation
    Inventors: Alexander G. Gounares, Charles D. Garrett
  • Publication number: 20130117753
    Abstract: A process scheduler for multi-core and many-core processors may place related executable elements that share common data on the same cores. When executed on a common core, sequential elements may store data in memory caches that can be accessed very quickly, as opposed to main memory, which may take many clock cycles to access. The sequential elements may be identified from messages passed between elements or other relationships that may link the elements. In one embodiment, a scheduling graph may be constructed that contains the executable elements and relationships between those elements. The scheduling graph may be traversed to identify related executable elements, and a process scheduler may attempt to place consecutive or related executable elements on the same core so that commonly shared data may be retrieved from a memory cache rather than main memory.
    Type: Application
    Filed: May 1, 2012
    Publication date: May 9, 2013
    Applicant: CONCURIX CORPORATION
    Inventors: Alexander G. Gounares, Charles D. Garrett
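    Example: a sketch of grouping executable elements that exchange messages and pinning each group to a single core, so the data they share is more likely to stay in that core's cache. The element names and two-core setup are invented for illustration.
      # Hypothetical message-passing relationships between executable elements.
      EDGES = [("read", "parse"), ("parse", "render"), ("audit", "report")]
      CORES = [0, 1]

      def related_groups(edges):
          """Union elements connected, directly or transitively, by message edges."""
          groups = []
          for a, b in edges:
              touching = [g for g in groups if a in g or b in g]
              for g in touching:
                  groups.remove(g)
              groups.append(set().union({a, b}, *touching))
          return groups

      assignment = {}
      for i, group in enumerate(related_groups(EDGES)):
          for element in group:
              assignment[element] = CORES[i % len(CORES)]   # the whole group shares a core

      print(assignment)   # read/parse/render land on one core; audit/report on the other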
  • Publication number: 20130117759
    Abstract: A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface; a process scheduler may then cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices.
    Type: Application
    Filed: May 1, 2012
    Publication date: May 9, 2013
    Applicant: CONCURIX CORPORATION
    Inventors: Alexander G. Gounares, Charles D. Garrett
  • Publication number: 20130081005
    Abstract: Optimized memory management settings may be derived from a mathematical model of an execution environment. The settings may be optimized for each application or workload, and the settings may be implemented per application, per process, or with other granularity. The settings may be determined after an initial run of a workload, which may observe and characterize the execution. The workload may be executed a second time using the optimized settings. The settings may be stored as tags for the executable code, which may be in the form of a metadata file or as tags embedded in the source code, intermediate code, or executable code. The settings may change the performance of memory management operations in both interpreted and compiled environments. The memory management operations may include memory allocation, garbage collection, and other related functions.
    Type: Application
    Filed: August 10, 2012
    Publication date: March 28, 2013
    Applicant: CONCURIX CORPORATION
    Inventors: Alexander G. Gounares, Ying Li, Charles D. Garrett
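    Example: a sketch of the two-run flow the abstract describes: the first run observes the workload and writes memory settings to a metadata file (the "tags"), and a later run reads and applies them. The file name, observation, and settings model are invented for illustration.
      import json, os, tempfile

      TAG_FILE = os.path.join(tempfile.gettempdir(), "workload_tags.json")

      def run_workload(settings):
          print("running with", settings)
          return {"peak_live_kb": 300}                  # behavior observed during the run

      def derive_settings(observation):
          # A real system would feed the observation into a model of the execution
          # environment; here the heap size is simply scaled from the peak live set.
          return {"heap_kb": observation["peak_live_kb"] * 2, "gc": "concurrent"}

      if os.path.exists(TAG_FILE):                      # later runs: apply the stored tags
          with open(TAG_FILE) as f:
              settings = json.load(f)
      else:                                             # first run: defaults
          settings = {"heap_kb": 1024, "gc": "default"}

      observation = run_workload(settings)
      with open(TAG_FILE, "w") as f:                    # persist tags for the next run
          json.dump(derive_settings(observation), f)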
  • Publication number: 20130080760
    Abstract: An execution environment may have a monitoring, analysis, and feedback loop that may configure and tune the execution environment for currently executing workloads. A monitoring or instrumentation system may collect operational and performance data from hardware and software components within the system. A modeling system may create an operational model of the execution environment, then may determine different sets of parameters for the execution environment. A feedback loop may change various operational characteristics of the execution environment. The monitoring, analysis, and feedback loop may optimize the performance of a computer system for various metrics, including throughput, performance, energy conservation, or other metrics based on the applications that are currently executing. The performance model of the execution environment may be persisted and applied to new applications to optimize the performance of applications that have not been executed on the system.
    Type: Application
    Filed: August 10, 2012
    Publication date: March 28, 2013
    Applicant: CONCURIX CORPORATION
    Inventors: Ying Li, Alexander G. Gounares, Charles D. Garrett
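    Example: a toy monitoring-analysis-feedback loop that tunes one parameter (a worker thread count) against a measured throughput metric; the synthetic throughput function stands in for real instrumentation and is invented for illustration.
      import random

      def measure_throughput(worker_threads):
          """Stand-in for the instrumented execution environment reporting a metric."""
          ideal = 8                                     # pretend eight threads is optimal
          return 1000 - 40 * abs(worker_threads - ideal) + random.uniform(-5, 5)

      threads, best, best_threads = 2, float("-inf"), 2
      for _ in range(12):
          throughput = measure_throughput(threads)      # monitoring
          if throughput > best:                         # analysis: keep the best setting seen
              best, best_threads = throughput, threads
              threads += 2                              # feedback: keep moving in this direction
          else:
              threads = max(1, threads - 1)             # feedback: back off when it got worse
      print("tuned thread count:", best_threads)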
  • Publication number: 20130080761
    Abstract: An execution environment may have a monitoring, analysis, and feedback loop that may configure and tune the execution environment for currently executing workloads. A monitoring or instrumentation system may collect operational and performance data from hardware and software components within the system. A modeling system may create an operational model of the execution environment, then may determine different sets of parameters for the execution environment. A feedback loop may change various operational characteristics of the execution environment. The monitoring, analysis, and feedback loop may optimize the performance of a computer system for various metrics, including throughput, performance, energy conservation, or other metrics based on the applications that are currently executing. The performance model of the execution environment may be persisted and applied to new applications to optimize the performance of applications that have not been executed on the system.
    Type: Application
    Filed: August 10, 2012
    Publication date: March 28, 2013
    Applicant: CONCURIX CORPORATION
    Inventors: Charles D. Garrett, Ying Li, Alexander G. Gounares
  • Publication number: 20120324454
    Abstract: An operating system may be reconfigured during execution by adding new components to a control flow graph defining a system's executable flow. The operating system may use a control flow graph that defines executable elements and relationships between those elements. The operating system may traverse the control flow graph during execution to monitor execution flow and prepare executable elements for processing. By placing new components in memory then modifying the control flow graph, the operating system functionality may be updated or changed. In some embodiments, a lightweight version of an operating system may be deployed, then additional features or capabilities may be added.
    Type: Application
    Filed: May 4, 2012
    Publication date: December 20, 2012
    Applicant: CONCURIX CORPORATION
    Inventors: Alexander G. Gounares, Charles D. Garrett
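    Example: a sketch of reconfiguring a running system by placing a new component in memory and rewiring a control flow graph around it, so later traversals pick the new element up without a restart. The graph and components are invented for illustration.
      # Control flow graph: each element names the elements that may run after it.
      graph = {"receive": ["handle"], "handle": ["respond"], "respond": []}
      components = {"receive": lambda: "rx", "handle": lambda: "ok", "respond": lambda: "tx"}

      def run_from(start):
          node, trace = start, []
          while node:
              trace.append(components[node]())          # execute the element
              successors = graph[node]
              node = successors[0] if successors else None   # follow the first outgoing edge
          return trace

      print(run_from("receive"))                        # ['rx', 'ok', 'tx']

      # Hot update: load a new component, then splice it into the graph so it runs
      # between "handle" and "respond". The executing system is never restarted.
      components["audit"] = lambda: "audited"
      graph["audit"] = ["respond"]
      graph["handle"] = ["audit"]

      print(run_from("receive"))                        # ['rx', 'ok', 'audited', 'tx']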
  • Publication number: 20120317577
    Abstract: Processes in a message passing system may be launched when incoming messages have data patterns that match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue.
    Type: Application
    Filed: July 17, 2012
    Publication date: December 13, 2012
    Applicant: CONCURIX CORPORATION
    Inventor: Charles D. Garrett
  • Publication number: 20120317557
    Abstract: Processes in a message passing system may be launched when incoming messages have data patterns that match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue.
    Type: Application
    Filed: July 17, 2012
    Publication date: December 13, 2012
    Applicant: CONCURIX CORPORATION
    Inventor: Charles D. Garrett