Model Management System
A model management system receives data for an inventory of models including data regarding issues for each of the models. The model management system determines a model risk score for each of the models based on the issues for each of the models. As the issues are opened and closed, the model management system receives updates and, in real-time, updates the model risk score. The model risk score can be determined using a severity assigned to each of the open issues for a given model as well as the number of issues at each severity. Risk scores can be calculated across a line of business and across an enterprise, for example. The model management system also provides one or more of: model development, model validation, model integration, model use, model maintenance, and model retirement.
Different types of enterprises employ one or more models to evaluate risk associated with various aspects of each enterprise's business dealings. Some types of businesses, such as financial institutions, health care organizations, and insurance institutions, are subject to governmental regulations. In those instances, government regulators periodically evaluate the enterprises' risk positions to ensure compliance with regulatory law. Depending on the size of the enterprise, tens, hundreds, or thousands of models may be in use simultaneously.
SUMMARY
Embodiments of the disclosure are directed to a model management system that can be implemented on an electronic computing device. In one aspect, an electronic computing device includes a processing unit and system memory, the system memory including instructions that, when executed by the processing unit, cause the electronic computing device to receive data for an inventory of models including data regarding issues for each of the models, determine a model risk score for each of the models based on the issues for each of the models, receive an update as the issues are opened and closed, and update the model risk score after receiving the update.
In another aspect, a model management system includes an electronic computing device including a processing unit and system memory. The system memory includes instructions that, when executed by the processing unit, cause the electronic computing device to provide a model risk home page, populate the model risk home page with data regarding a plurality of models, receive an input selecting one of the plurality of models, receive a plurality of model risk score inputs including issues for the model, generate a model risk score based on the issues for the model, receive an update to one of the plurality of model risk score inputs, and generate an updated model risk score based on the update to one of the plurality of model risk score inputs.
In yet another aspect, a system for managing models includes a computer-readable, non-transitory data storage memory comprising instructions that, when executed by a processing unit of an electronic computing device, cause the processing unit to: receive data for an inventory of models, including a model itself, a list of stakeholders for the model, a model history, a document related to the model, and data regarding issues for each of the models; determine a model risk score for each of the models based on the issues for each of the models; receive an update as the issues are opened and closed; and update the model risk score after receiving the update. The data regarding issues for each of the models also includes a severity assigned to each issue and a designation of each issue as open or closed, and the model risk score is based on the severity assigned to each open issue and the number of issues at each severity.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of these embodiments will be apparent from the description, drawings, and claims.
The following drawing figures, which form a part of this application, are illustrative of described technology and are not meant to limit the scope of the disclosure as claimed in any manner, which scope shall be based on the claims appended hereto.
Broadly, the present disclosure is directed to a centralized system for tracking and managing issues and exceptions for models, recording documentation associated with models, tracking and managing validation, and/or evaluating performance of models. The systems can provide a holistic view of risks associated with each of the models, as well as a holistic view of risks for all or a subset of the models.
A model is a quantitative mathematical or statistical construct used to simulate a complex view of real-world events. One example of such a model is sometimes referred to as a quantitative tool and methodology (QTM). For example, a model may be constructed to predict credit losses in order to estimate exposure to credit risk based on existing or prospective extensions of credit. A model may also be used to simulate various economic scenarios to determine sufficiency of capital reserves. A financial institution may have a significant inventory of models and may need to track and manage risks associated with each model either individually or holistically. The risks may include, for example, risks associated with the ability of the model to perform its intended functions.
One aspect of the present disclosure centralizes and standardizes each model risk score across an entire enterprise. Another aspect provides insight into the necessary steps and current progress of model validation, sometimes known as validation pipeline management. Because models change and require validation and reviews, stakeholders and/or regulators may need to understand holistically what is being requested and when, and understand those aspects across the entire business line or model use type. The present disclosure provides a system enabling clear pipeline visibility about which models need to be reviewed and when, and provides that visibility across the entire enterprise.
Another aspect of the present disclosure provides efficient proof of the challenges made to the credibility of models used across the enterprise. An additional aspect of the present disclosure defines a model classification process and a risk ranking qualification process. Yet another aspect of the present disclosure includes storing all documentation related to the models, together with the models, in a centrally-accessible location.
Centralizing the aforementioned functionalities in a model management system improves, for example, the transactional efficiency of an enterprise's computers, saves memory usage, and reduces the quantity of computations performed by the enterprise's computers.
The environment 100 includes model management system users 102 that interact with the model management system 104 via a network 103. Network 103 can be any type of network, including a local area network (LAN), a wide area network (WAN), or the Internet. In this embodiment, the model management system users include a line of business (LOB) user 105 and a corporate model risk user 107. Other embodiments can include more or fewer components and different types of users.
Generally, the model management system 104 enables the generation, use, maintenance, and modification of models and model risk scores. Further, the model management system 104 can provide alerts to the LOB user 105 and/or the corporate model risk user 107 and produce documents related to regulatory compliance. These functions are described in more detail below.
Model management system 104 includes a model inventory 106 stored in one or more databases. The model inventory 106 includes all active and retired models, along with all information and documentation relating to each model. Examples of information related to a model include: model number, model name, implementation date, last revision date, retirement/active status including relevant dates, owner identification, QTM uses, risk score, risk rank, version, performance status, author, creation date, modifications and users who modified the model, LOB, and others.
Model management system 104 includes document management 108. Document management 108 enables the storage of documents related to a model in a centralized location so that a model management system user 102 can access documents related to a model without navigating away from the model management system 104. That is, a model management system user 102 does not need to search other systems or SharePoint sites in the enterprise to locate all documents associated with the model. Rather, the model management system user 102 can view and open the model documents from within the model management system 104.
Model management system 104 also includes workflows 110 that include one or more successive and/or parallel steps performed by enterprise personnel and/or the model management system 104. For example, the model management system 104 automates a model generation workflow (e.g., creating a new model and entering information relevant to the new model), a model approval workflow (e.g., facilitating review and approval of the model by multiple users across the enterprise that can be in different business groups), a validation issues tracking workflow (e.g., facilitating the review, tracking, and correction of any identified validation issues of the model), an LOB evaluation issues tracking workflow (e.g., facilitating the review, tracking and correction of issues specific to the line of business evaluation of a model), and an exceptions/conditions/restrictions tracking workflow (e.g., facilitating the entry of, and proper notification to relevant personnel about, exceptions, conditions, or restrictions relevant to a model). Automation includes guiding the users through the relevant processes, notifying other users about the progress of the workflow and whether their input is required, and storing and organizing all data and documents.
Model management system 104 also includes reporting 112 functionalities. For example, model management system 104 can produce canned reports, dashboards, and ad-hoc query capabilities. Model management system 104 supports compliance reporting, which includes preparing and producing reports for production to government regulation-compliance personnel. Model management system 104 also supports producing reports designed for internal use, such as reports showing an overview of the risk status of one or more models.
Reporting 112 includes performance monitoring. For example, the model management system 104 can store, create, and display performance-related information such as key performance indicators (KPI), risks such as data risk, implementation risk, use risk, and performance risk, and monitoring program stakeholder data. That is, the model management system 104 can group all monitoring programs and performance reviews associated with the QTM in the model view displayed by the model management system 104.
Performance monitoring can also include producing a performance rating. Producing the performance rating can involve, for example, receiving a list of KPIs for a model, testing the model against the KPIs, and providing a visual indication (e.g., green, yellow, or red) of the performance of the model in addition to the model risk score. Generally, the green, yellow, and red visual indications correspond to continue, watch, and action required, respectively. When the rating is “action required,” an issue is created.
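The following Python sketch illustrates one way such a traffic-light rating could be derived; the KPI result format, the failure threshold, and the create_issue callback are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch of the traffic-light performance rating, assuming a
# hypothetical KPI result format and an illustrative failure threshold.
def performance_rating(kpi_results):
    """Map KPI pass/fail results to a green/yellow/red indication.

    kpi_results: list of booleans, True where the model passed that KPI.
    """
    failures = sum(1 for passed in kpi_results if not passed)
    if failures == 0:
        return "green"   # continue
    if failures <= 2:    # illustrative threshold, not from the disclosure
        return "yellow"  # watch
    return "red"         # action required


def review_model(kpi_results, create_issue):
    """A rating of 'action required' causes an issue to be created."""
    rating = performance_rating(kpi_results)
    if rating == "red":
        create_issue("Performance review: action required")
    return rating
```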
Reporting also includes real-time issue and severity trackers. In embodiments, one or more stakeholders are notified when something in the model management system 104 changes that affects an issue or a model risk score. When a user runs a report, such as viewing information related to a model's history or model risk score, the information is real-time. That is, for every live model in the model management system 104 that meets the report criteria, the data displayed to the user are current. Thus, a user does not need to track down documents or perform calculations to ensure the data in the report are accurate and up-to-date.
Some models in the model management system 104 can be in a suspended state. As part of one or more workflows, various stakeholders work with the model before it is published and released as a live model in the model management system 104. Multiple stakeholders can work on a model simultaneously, and the model can be edited or modified throughout. This is in contrast to previously-existing applications, in which each stakeholder would take turns doing their part of the model validation and, once their work was entered, it could not be changed.
The model management system 104 enables a user, such as a validator, to start a validation activity and start a project. The user can upload documents to the model management system 104, create a report, identify issues, complete a model mandate, complete a model risk score, etc. Other stakeholders, such as a line of business owner, can go into the model in the model management system 104, discuss the model, and review the model and associated documents. However, if a report is run in the model management system 104, the suspended model is not included; only published, finished models are available to reports. This can be desirable because an unfinished model should not have its model risk score included in reports or affect risk appetite. When the stakeholders agree that the work is finished, the validator publishes the model and it is released into the model management system 104. From that point, the model's issues and risk score are available in real-time.
Reporting 112 also includes a model risk rank generator. The model risk rank generator generates a model risk score report categorized by risk rank, for example, risk rank 1, risk rank 2, risk rank 3, and/or risk rank 4 QTMs. The model risk rank generator can export the model risk score report as a spreadsheet, word processing file, portable document format, etc.
Model management system 104 also provides user management 114 functionalities. Each LOB user 105 and corporate model risk user 107 accesses the model management system 104 through a web browser. Access to the model management system 104 can be limited to the use of the HTTPS protocol or a similarly secure portal, and can be limited to an intranet or open to Internet access. Model management system 104 restricts the levels of authority and provides different access permissions for the LOB users 105 and corporate model risk users 107.
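A minimal sketch of role-based authorization follows; the role names and permission sets are assumptions, since the disclosure states only that LOB users and corporate model risk users receive different levels of authority and access permissions.

```python
# Illustrative role-based permissions; the role names and permission sets
# are assumptions, not taken from the disclosure.
PERMISSIONS = {
    "lob_user": {"view_model", "open_issue", "upload_document"},
    "corporate_model_risk_user": {
        "view_model", "open_issue", "upload_document",
        "publish_model", "adjust_risk_score",
    },
}

def is_authorized(role: str, action: str) -> bool:
    """Return True when the role's permission set includes the action."""
    return action in PERMISSIONS.get(role, set())

print(is_authorized("lob_user", "publish_model"))  # False
```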
Model management system 104 also includes model mandates 118. Generally, a model mandate 118 represents a distillation of information and analysis created through model development, validation, monitoring, and governance activities. A given model mandate 118 summarizes governance parameters applicable to a given model, which can include approved uses, identified issues, key individuals, monitoring metrics, and explicit limitations placed on the use of the model.
Model management system 104 presents model mandate 118 information in a concise, consistent and transparent format, which facilitates use by stakeholders across the enterprise. For example, the model management system 104 can provide information such as mandate creation date, reference number, validation activity (e.g., validation, revalidation, annual review), summary of model purpose, approved QTM uses, upstream and downstream models, limitations, restrictions, issues, model mandate completion date, model mandate completion author, comments, and links to an exportable report.
The example model management system 104 authenticates model management system users 102 against a corporate active directory. Authorization is performed via roles, grants, privileges, profiles, and resource limitations.
Model management system 104 includes an application that utilizes the model-view-controller architectural pattern to separate the different aspects of the model management system 104. Generally, a model-view-controller architectural pattern is used in graphical user interfaces to isolate the data representation (model) from what the user sees (view) and provide a way for the user to manipulate the data representation (controller). For example, the different aspects can include an input logic, a business logic, and a user-interface logic. This separation can help manage complexity and loose coupling can promote parallel development.
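A minimal model-view-controller sketch, assuming a simplified risk-score record, illustrates the separation of concerns described above; it is not the actual application structure.

```python
# Minimal MVC sketch: the model holds the data, the view renders it,
# and the controller mediates user manipulation of the data.
class RiskScoreModel:
    """Data representation (model)."""
    def __init__(self):
        self.score = 0.0

class RiskScoreView:
    """What the user sees (view)."""
    def render(self, model):
        return f"Model risk score: {model.score:.1f}"

class RiskScoreController:
    """How the user manipulates the data (controller)."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def set_score(self, score):
        self.model.score = score
        return self.view.render(self.model)

controller = RiskScoreController(RiskScoreModel(), RiskScoreView())
print(controller.set_score(4.4))  # Model risk score: 4.4
```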
Model management system 104 also includes a model risk score generator 120. The model risk score generator 120 is used to generate a model risk score. Generally, the model risk score is an analytical method for measuring model risk. It is based on the severity and number of issues identified during the model validation process or performance monitoring review. Each issue identified as part of the model validation has a risk score assigned to it, where the risk score maps to the issue severity. An example implementation of the model risk score generator 120 is shown and described below.
A given model can have multiple uses. For example, the model can be used in different LOBs. In those instances, the model can have a different model risk score and model risk rank for each of its uses.
The example method 600 begins when the model management system 104 loads the model management system home page (operation 602). In some embodiments, the model management system 104 loads within an internet browser, such as Internet Explorer or Microsoft Edge, both by Microsoft®, running on a user's computing device. Loading the home page (operation 602) can be in response to a user initializing an executable file stored locally or by a user entering an address into a web browser's address tool.
An embodiment of an example model management system 104 home page 400 is shown in the accompanying drawings.
After the model management system home page is loaded (operation 602), the model management system 104 next searches for quantitative tool and methodology files (QTMs) (operation 604) and populates a sortable list of QTMs. This searching can be user-initiated or automatically accomplished upon loading of the home page. Alternatively, a user can instruct the model management system 104 to create a new QTM, populate the required fields, and then continue with the example method 600.
Upon selection by a user, the model management system 104 next loads a QTM (operation 606) and displays information associated with the QTM on a QTM home page. An embodiment of an example QTM home page 500 is shown in the accompanying drawings.
The QTM home page also includes navigational buttons that cause the model management system 104 to display a QTM history, a list of documents with links to the actual documents, issues associated with the QTM, exceptions associated with the QTM, risk scores for the QTM, model mandates, performance monitoring, and projects.
When the QTM is loaded (operation 606) and a user selects a model risk score link, such as a tab shown in the interface, the model management system 104 loads model risk score inputs (operation 608). The model management system 104 can retrieve information used as inputs into the model risk score generation. Additionally, the model management system 104 can prompt the user for input, such as risk score adjustments to one or more categories and/or to the overall model risk score.
When the inputs are known, the model management system 104 generates a model risk score (operation 610). The model risk score generator 120 calculates a model risk score using Formula I below:
model risk score (K) = model risk base (A) + model risk increment (B) + model risk adjustment (C) (Equation I)
The use of Formula I will be illustrated with an example calculation of a risk score for a category of issues in a model and for the overall model.
Example 1—Calculation of a Category Risk Score
The calculation of a category risk score uses a formula similar to Formula I:
category risk score (R) = category base (X) + category increment (Y) + category adjustment (Z) (Equation II)
Each model risk score is based on the calculation of one or more category risk scores. In turn, the category risk scores are based on the issues associated with a category, where each model validation issue is assigned to an issue category. Generally, an issue is something that may impact the conceptual soundness of the model. Example issues include an absence of consideration of certain data relevant to the model and a misapplication of an equation or formula. Other issues could surround the assumptions upon which a model is based if, for example, those assumptions change or prove inaccurate. Example categories include development data and inputs, conceptual soundness, implementation, ongoing monitoring and outcomes analysis, developmental evidence (such as documentation issues), governance and procedures, and usage.
Each model validation issue is assigned a severity, which in turn is assigned a value. For example, the following Table 1 can be used to assign a value after determining the issue severity:

TABLE 1
Issue Severity    Value
0                 4
1                 3
2                 2
3                 1
The category risk score calculation begins by determining the category base (X), where:
category base (X) = max_{i∈I}(score_i) (Equation III)
Effectively, the category base (X) determination ensures that the risk score for the set of issues in a category is not lower than the highest score (value) in the set. Thus, the value assigned to the most severe issue level containing at least one issue becomes the category base (X). An example determination of category base (X) follows.
For a particular category, seven issues have been identified: 3 “issue severity 0” issues, 1 “issue severity 1” issue, 2 “issue severity 2” issues, and 1 “issue severity 3” issue. The category base (X) formula determines the maximum value across all of those issues. Because “issue severity 0” issues are present, and issue severity 0 carries the highest value (4) in Table 1, category base (X)=4.
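The category base determination can be expressed compactly, as in the following sketch, which reproduces this example using the Table 1 severity-to-value mapping.

```python
# Category base (X) for this example: the maximum value across all issues.
SEVERITY_VALUE = {0: 4, 1: 3, 2: 2, 3: 1}  # Table 1 mapping

# Seven issues: three at severity 0, one at severity 1, two at severity 2,
# and one at severity 3.
issue_severities = [0, 0, 0, 1, 2, 2, 3]

category_base = max(SEVERITY_VALUE[s] for s in issue_severities)
print(category_base)  # 4
```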
Next, a category increment (Y) is determined, where:
category increment (Y) = u × (sum_{i∈I}(score_i) − max_{i∈I}(score_i)) / max_{i∈I}(score_i) (Equation IV)
The category increment (Y) is a number in the interval [0, 1) that penalizes models with a high number of issues, taking into account the severity of the issues. The category increment is calculated by first assigning a value to each issue severity that acts as a multiplier for the number of issues at that issue severity level. In this example, the value of parameter u is set to u=0.1.
Continuing with the example above, the values in Table 2 are used as the multipliers:

TABLE 2
Issue Severity    Count    Value    Multiple
0                 3        4        12
1                 1        3        3
2                 2        2        4
3                 1        1        1
Total                               20
The Multiple column shows the number of issues at a given issue severity multiplied by the value. Then the Total is the sum of the multiples across all issue severities. Here, the sum total of the multiples is 20. In terms of Formula (IV), 20 is sum_{i∈I}(score_i) and 4 is max_{i∈I}(score_i). Substitution of all known parameters, followed by mathematical simplification, yields:
category increment (Y) = 0.1 × (20 − 4)/4 = 0.4
The category manual adjustment (Z) is an optional manual adjustment. In some models, a manual adjustment is needed to address the risks not directly captured by the issues. Generally, the manual adjustment reflects the holistic view of the model by observing inherent risks and mitigating factors. The category manual adjustment (Z) is limited numerically such that the risk score after adjustment is not outside of the range [0, 5).
Continuing the example, a manual adjustment of 0 is assigned to this category. Thus, the category risk score (R)=4+0.4+0=4.4. The calculation of the category risk score (R) is performed for each issue category.
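Putting Equations II through IV together, the following sketch reproduces the Example 1 calculation. The increment formula is a reconstruction consistent with both worked examples rather than a verbatim transcription of the disclosure, and the min(0.9, ...) cap stands in for the error check described later.

```python
from collections import Counter

SEVERITY_VALUE = {0: 4, 1: 3, 2: 2, 3: 1}  # Table 1 mapping
U = 0.1  # increment parameter u

def category_risk_score(issue_severities, adjustment=0.0):
    """Category risk score R = X + Y + Z (Equation II)."""
    values = [SEVERITY_VALUE[s] for s in issue_severities]
    base = max(values)  # X: highest value in the set (Equation III)
    total = sum(n * SEVERITY_VALUE[s]  # count x value, summed (Table 2)
                for s, n in Counter(issue_severities).items())
    increment = min(0.9, U * (total - base) / base)  # Y (Equation IV)
    return round(base + increment + adjustment, 1)  # Z applied last

# Example 1: 3 severity-0, 1 severity-1, 2 severity-2, 1 severity-3 issues,
# with a manual adjustment of 0.
print(category_risk_score([0, 0, 0, 1, 2, 2, 3]))  # 4.4
```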
Example 2—Calculating a Total Model Risk Score
Example 1 provided an example calculation of one category risk score. Many models will include more than one category. In those instances, a single category risk score would not describe the overall model risk. Example 2 builds on the category risk score described in Example 1. Here, a category risk score has been calculated for each of seven categories, shown below in Table 3.
The following Formula (V) is used to calculate the overall model risk score:
model risk score (K) = model risk base (A) + model risk increment (B) + model risk adjustment (C) (Equation V)
First, the number of issues at each issue severity is summed across all seven categories. Thus, for this example model, there are 0 “issue severity 0” issues, 0 “issue severity 1” issues, 4 “issue severity 2” issues, and 15 “issue severity 3” issues. These summations are used in the calculation of Formula (V).
The model risk base (A) calculation uses the Table 1 values from Example 1 above; the model risk base (A) is the highest value among the issue severities present. In this example, there are no “issue severity 0” issues and no “issue severity 1” issues, only “issue severity 2” and “issue severity 3” issues. Thus, the highest value among the issue severities present is 2 (the Table 1 value for “issue severity 2”), and model risk base (A)=2.
Next, the model risk increment (B) is calculated using Formula (VI) below, which parallels Equation IV:
model risk increment (B) = u × (sum_{i∈I}(score_i) − max_{i∈I}(score_i)) / max_{i∈I}(score_i) (Equation VI)
The sum_{i∈I}(score_i) term is calculated by multiplying the number of issues at each issue severity by the multiplier, and then summing those multiples. This is shown below in Table 4.

TABLE 4
Issue Severity    Count    Value    Multiple
2                 4        2        8
3                 15       1        15
Total                               23
Thus, the sum_{i∈I}(score_i) is 23. As noted above, the max score is 2. Substituting all known values into Formula (VI) and simplifying yields:
model risk increment (B) = 0.1 × (23 − 2)/2 = 1.05
Because 1.05 exceeds the maximum permitted increment of 0.9, the model management system 104 caps the increment at 0.9 (see the error checks below).
Therefore, the model risk score increment (B) is 0.9.
A manual adjustment is needed in some circumstances to address risks not directly captured by the issues. As noted above in Example 1, the manual adjustment generally reflects the holistic view of the model by observing inherent risks and mitigating factors. Like the category manual adjustment (Z), the model risk adjustment (C) is limited numerically such that the risk score after adjustment is not outside of the range [0, 5).
In this example, the model risk adjustment (C) is set to −0.1.
Now all components of the model risk score calculation are known. The model risk score (K)=2+0.9+(−0.1)=2.8.
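The Example 2 arithmetic can be checked with the following sketch, which applies the same reconstructed increment formula at the model level; the severity counts and the -0.1 adjustment come from this example.

```python
SEVERITY_VALUE = {0: 4, 1: 3, 2: 2, 3: 1}  # Table 1 mapping
U = 0.1

def model_risk_score(severity_counts, adjustment=0.0):
    """Model risk score K = A + B + C (Equation V).

    severity_counts maps each issue severity to its open-issue count,
    summed across all categories.
    """
    present = [SEVERITY_VALUE[s] for s, n in severity_counts.items() if n > 0]
    base = max(present)  # A: highest value among severities present
    total = sum(n * SEVERITY_VALUE[s] for s, n in severity_counts.items())
    increment = min(0.9, U * (total - base) / base)  # B, capped at 0.9
    return round(base + increment + adjustment, 1)  # C applied last

# Example 2: no severity-0 or severity-1 issues, 4 severity-2 issues,
# 15 severity-3 issues, and a manual adjustment of -0.1.
print(model_risk_score({0: 0, 1: 0, 2: 4, 3: 15}, adjustment=-0.1))  # 2.8
```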
Then the model management system 104 performs one or more error checks. For example, the model management system 104 ensures that: the risk score for a set of issues is not lower than the highest score in the set; the model risk increment is not outside [0, 0.9); the manual adjustment for an issue category does not move the risk score outside the range [0, 5); and the risk score is calculated to one decimal place.
If the model risk score increment is less than 0, then the model management system 104 sets the model risk score increment equal to 0. If the model risk score increment is greater than 0.9, then the model management system 104 sets the model risk score increment equal to 0.9. If the model risk score is outside the range of [0, 5), then the model management system 104 prompts a warning to the user that the model risk score exceeds the boundaries of the model risk score. In that instance, the user can readjust the model risk score adjustment (Z) and proceed with the creation of a model risk score.
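A hedged sketch of these error checks follows; raising an exception stands in for the warning prompt described above, and the helper names are illustrative.

```python
# Illustrative error checks for the model risk score calculation.
def clamp_increment(increment):
    """Keep the model risk increment inside [0, 0.9]."""
    return min(max(increment, 0.0), 0.9)

def check_risk_score(score):
    """Flag a score outside [0, 5) so the user can readjust the model
    risk adjustment; otherwise keep the score to one decimal place."""
    if not 0.0 <= score < 5.0:
        raise ValueError("model risk score exceeds its boundaries; "
                         "readjust the model risk adjustment")
    return round(score, 1)
```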
The model management system 104 also receives model updates (operation 612) after the model risk score is generated (operation 610). During the model's life, issues for a particular model are opened and/or closed in the model management system 104. As the model management system 104 receives updates, new model risk scores are generated (operation 610) in real-time.
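As a sketch of this real-time behavior, an issue event handler might adjust the open-issue counts and immediately regenerate the score; the event shape and the model registry are assumptions, and model_risk_score refers to the Example 2 sketch above.

```python
# Sketch of real-time score updates as issues are opened and closed.
open_issues = {"model-123": {0: 0, 1: 0, 2: 4, 3: 15}}  # severity -> count

def on_issue_event(model_id, severity, opened):
    """Adjust open-issue counts and regenerate the score immediately."""
    counts = open_issues[model_id]
    counts[severity] += 1 if opened else -1
    return model_risk_score(counts, adjustment=-0.1)

# Closing one severity-3 issue triggers an immediate recalculation.
new_score = on_issue_event("model-123", severity=3, opened=False)
```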
Example model life cycle 700 begins by developing a model (operation 702). Developing a model (operation 702) includes the model management system 104 prompting the user to create or add model requirements and establish a model design. An example user at this stage is a QTM owner. Then the model management system 104 prompts for, receives, and stores data about the model, including various documents associated with the model and testing documents. Developing a model (operation 702) also includes constructing the model using the entered information, testing the model, and documenting the model.
After the model is developed (operation 702), the model is validated (operation 704). Here, the model management system 104 facilitates the review of documentation by one or more stakeholders by centrally housing, sorting, and displaying all documents related to the model on the model's home page 500. The model management system 104 also advantageously provides access to tracking tools for issues, exceptions, conditions, and restrictions.
During model validation, if the model is rejected, the workflow returns to model development (operation 702). If the model validation is approved, then the model management system 104 stores validation data in the model inventory, sends a request for governing body model approval, and generates a model mandate.
After the model is validated (operation 704) and approved for implementation through the workflow within the model management system 104, the model is implemented (operation 706). Model implementation (operation 706) includes the model management system 104 notifying, and receiving verification of completion from, one or more model stakeholders. These model stakeholders can, via the model management system, integrate and maintain source code for the particular model, perform various integration tests, and release the model.
After the model is implemented (operation 706), the model is used (operation 708). Model use includes the model management system 104 running the model and analyzing the output of the model. A stakeholder using the model management system 104 can determine, based on the model use, the usefulness of the model output and apply risk mitigation strategies where needed. The model output is used by various stakeholders and the model management system 104 provides ongoing performance monitoring.
Throughout the model use (operation 708), the model can be maintained (operation 710). The model management system 104 can prompt stakeholders to review the model on a regular basis, such as semi-annually or annually, and/or to review the model based on the model's current performance. Because models change and require validation and reviews, stakeholders need to understand what is requested of them and when, across an entire line of business, a model use type, or the enterprise. The model management system 104 provides pipeline visibility for the various stakeholders/users about what needs to be reviewed and when. In this way, the model management system 104 facilitates a process dictated by policy requirements and provides notice to stakeholders when a review is not complete and, thus, out of policy.
The model management system 104 enables stakeholders to change the model, test the changes, modify documents related to the model, approve the changes, log the changes, and implement the changes to the model.
At some point, the model is retired (operation 712). During model retirement (operation 712), the model management system 104 notifies the relevant stakeholders to approve the model for retirement. Upon receiving approval, the model management system 104 unplugs the model from the system and marks the model as retired. After retirement, the model is no longer active in the model management system 104, is not used in risk determinations, and does not appear in QTM searches for active models.
The mass storage device 814 is connected to the CPU 802 through a mass storage controller (not shown) connected to the system bus 822. The mass storage device 814 and its associated computer-readable data storage media provide non-volatile, non-transitory storage for the example server 801. Although the description of computer-readable data storage media contained herein refers to a mass storage device, such as a hard disk or solid state disk, it should be appreciated by those skilled in the art that computer-readable data storage media can be any available non-transitory, physical device or article of manufacture from which the central display station can read data and/or instructions.
Computer-readable data storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable software instructions, data structures, program modules or other data. Example types of computer-readable data storage media include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROMs, digital versatile discs (“DVDs”), other optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the example server 801.
According to various embodiments of the invention, the example server 801 may operate in a networked environment using logical connections to remote network devices through the network 820, such as a wireless network, the Internet, or another type of network. The example server 801 may connect to the network 820 through a network interface unit 804 connected to the system bus 822. It should be appreciated that the network interface unit 804 may also be utilized to connect to other types of networks and remote computing systems. The example server 801 also includes an input/output controller 806 for receiving and processing input from a number of other devices, including a touch user interface display screen, or another type of input device. Similarly, the input/output controller 806 may provide output to a touch user interface display screen or other type of output device.
As mentioned briefly above, the mass storage device 814 and the RAM 810 of the example server 801 can store software instructions and data. The software instructions include an operating system 818 suitable for controlling the operation of the example server 801. The mass storage device 814 and/or the RAM 810 also store software instructions, that when executed by the CPU 802, cause the example server 801 to provide the functionality of the example server 801 discussed in this document. For example, the mass storage device 814 and/or the RAM 810 can store software instructions that, when executed by the CPU 802, cause the example server 801 to display received data on the display screen of the example server 801.
Although various embodiments are described herein, those of ordinary skill in the art will understand that many modifications may be made thereto within the scope of the present disclosure. Accordingly, it is not intended that the scope of the disclosure in any way be limited by the examples provided.
Claims
1. A computer-implemented method, comprising:
- determining a model risk score for a model based on issues for the model, the issues having issue scores, the model being a quantitative mathematical or statistical construct for simulating events;
- determining a highest issue score of the issue scores;
- setting a model risk base to a value equal to the highest issue score;
- calculating a model risk increment based on an amount of the issues and a severity of the issues;
- updating the model risk score based on the model risk base, the model risk increment, and a model risk adjustment to provide an updated model risk score;
- while the model is in a suspended state, making modifications to the model as simultaneous modifications to the model are being made by multiple stakeholders to provide a modified model; and
- releasing the modified model, causing the modified model to change from the suspended state to a live state and causing the updated model risk score or a modified version of the updated model risk score to be accessible via a user interface.
2. The computer-implemented method of claim 1, wherein while the model is in the suspended state, the model is unavailable for use in determination of another risk score for another model that is in the live state.
3. The computer-implemented method of claim 1, further comprising:
- determining a performance rating for the model by: using one or more key performance indicators; and testing the one or more key performance indicators against the model.
4. The computer-implemented method of claim 1, further comprising:
- providing a notification to a stakeholder associated with the model when there is a change in one of the issues for the model.
5. The computer-implemented method of claim 1, wherein the model is for predicting credit losses to estimate exposure to credit risk based on existing or prospective extensions of credit.
6. The computer-implemented method of claim 1, wherein the model is for simulating economic scenarios to determine sufficiency of capital reserves.
7. The computer-implemented method of claim 1, wherein the issues include an absence of consideration of certain data relevant to the model.
8. The computer-implemented method of claim 1, wherein the issues include a misapplication of an equation or formula.
9. The computer-implemented method of claim 1, wherein the issues include a change to an assumption on which the model is based.
10. The computer-implemented method of claim 1, wherein the issues include an inaccuracy in an assumption on which the model is based.
11. The computer-implemented method of claim 1, wherein the issues include a validation issue.
12. The computer-implemented method of claim 1, wherein the issues include a line of business evaluation issue.
13. An electronic computing device, comprising:
- a processing unit; and
- system memory, the system memory including instructions that, when executed by the processing unit, cause the processing unit to: determine a model risk score for a model based on issues for the model, the issues having issue scores, the model being a quantitative mathematical or statistical construct for simulating events; determine a highest issue score of the issue scores; set a model risk base to a value equal to the highest issue score; calculate a model risk increment based on an amount of the issues and a severity of the issues; update the model risk score based on the model risk base, the model risk increment, and a model risk adjustment to provide an updated model risk score; while the model is in a suspended state, make modifications to the model as simultaneous modifications to the model are being made by multiple stakeholders to provide a modified model; and release the modified model, causing the modified model to change from the suspended state to a live state and causing the updated model risk score or a modified version of the updated model risk score to be accessible via a user interface.
14. The electronic computing device of claim 13, wherein while the model is in the suspended state, the model is unavailable for use in determination of another risk score for another model that is in the live state.
15. The electronic computing device of claim 13, wherein the system memory further includes instructions that, when executed by the processing unit, cause the electronic computing device to:
- determine a performance rating for the model by: using one or more key performance indicators; and testing the one or more key performance indicators against the model.
16. The electronic computing device of claim 13, wherein the system memory further includes instructions that, when executed by the processing unit, cause the electronic computing device to:
- provide a notification to a stakeholder associated with the model when there is a change in one of the issues for the model.
17. The electronic computing device of claim 13, wherein the issues include an absence of consideration of certain data relevant to the model.
18. The electronic computing device of claim 13, wherein the issues include a misapplication of an equation or formula.
19. The electronic computing device of claim 13, wherein the issues include a change to an assumption on which the model is based.
20. The electronic computing device of claim 13, wherein the issues include an inaccuracy in an assumption on which the model is based.
Type: Application
Filed: Mar 16, 2023
Publication Date: Aug 3, 2023
Inventors: Tapan Shah (Charlotte, NC), Steve Cardinale (Charlotte, NC), Casey A. Bennett (Charlotte, NC), John V. Hintze (Charlotte, NC), Jason Hilliard (Charlotte, NC), Simon Cann (Charlotte, NC), Kevin D. Oden (Charlotte, NC)
Application Number: 18/185,112