Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting

A method of determining loss reserves and/or providing automatic financial reporting related thereto via one or more processors includes (1) receiving a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount; (2) normalizing each respective claim loss amount; and (3) training an artificial intelligence or machine learning algorithm, module, or model, such as an artificial neural network, by applying the plurality of electronic claim documents to the artificial intelligence or machine learning algorithm, module, or model. The method may include receiving a user claim and predicting a loss reserve amount by applying the user claim to the trained artificial intelligence or machine learning algorithm, module, or model. The predicted loss reserves may also account for claims that have occurred but have not yet been reported.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to and the benefit of:

    • U.S. Application No. 62/564,055, filed Sep. 27, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
    • U.S. Application No. 62/580,655, filed Nov. 2, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
    • U.S. Application No. 62/610,599, filed Dec. 27, 2017 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
    • U.S. Application No. 62/621,218, filed Jan. 24, 2018 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR LOSS MITIGATION AND CLAIMS HANDLING;”
    • U.S. Application No. 62/621,797, filed Jan. 25, 2018 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR LOSS RESERVING AND FINANCIAL REPORTING;”
    • U.S. Application No. 62/580,713, filed Nov. 2, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
    • U.S. Application No. 62/618,192, filed Jan. 17, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
    • U.S. Application No. 62/625,140, filed Feb. 1, 2018 and entitled “SYSTEMS AND METHODS FOR ESTABLISHING LOSS RESERVES FOR BUILDING/REAL PROPERTY INSURANCE;”
    • U.S. Application No. 62/646,729, filed Mar. 22, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR LOSS MITIGATION AND CLAIMS HANDLING;”
    • U.S. Application No. 62/646,735, filed Mar. 22, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR RISK DETERMINATION;”
    • U.S. Application No. 62/646,740, filed Mar. 22, 2018 and entitled “SYSTEMS AND METHODS FOR ESTABLISHING LOSS RESERVES FOR BUILDING/REAL PROPERTY INSURANCE;”
    • U.S. Application No. 62/617,851, filed Jan. 16, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE PRICING AND UNDERWRITING;”
    • U.S. Application No. 62/622,542, filed Jan. 26, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE LOSS MITIGATION AND CLAIMS HANDLING;”
    • U.S. Application No. 62/632,884, filed Feb. 20, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE LOSS RESERVING AND FINANCIAL REPORTING;”
    • U.S. Application No. 62/652,121, filed Apr. 3, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE CLAIMS HANDLING;”

the entire disclosures of which are hereby incorporated by reference herein in their entireties.

FIELD OF INVENTION

This disclosure generally relates to detecting damage, loss, injury, and/or other conditions associated with an automobile using a computer-based automobile monitoring system, and to processing, estimating, and optimizing loss reserves and financial reporting.

BACKGROUND

As computer and computer networking technology has become less expensive and more widespread, more and more devices have started to incorporate digital “smart” functionalities. For example, controls and sensors capable of interfacing with a network may now be incorporated into devices such as vehicles. Furthermore, it is possible for one or more vehicle and/or central controllers to interface with the smart devices or sensors.

However, conventional systems may not be able to automatically detect and characterize various conditions (e.g., damage, injury, etc.) associated with a vehicle and/or the vehicle's occupants, occupants of other vehicles, and/or pedestrians. Additionally, conventional systems may not be able to detect or sufficiently identify and describe damage that is hidden from human view and typically has to be characterized by explicit human physical exploration, the extent and range of electrical malfunctions, etc. Conventional systems may not be able to formulate precise characterizations of loss without including unconscious biases, and may not be able to equally weight all historical data in determining loss reserving estimates.

In general, “loss reserves” may be funds pre-allocated by an insurer or other company (e.g., a mutual insurance company or capital stock insurance company) to offset known or anticipated losses. Some level of disclosure of loss reserves may be required (e.g., by public statute or contractual bylaws). Disclosure of loss reserves may be issued periodically (e.g., yearly) and may be included in a financial report such as an annual report, shareholder report (e.g., S.E.C. Form 10-K), or other statement.

Accurate loss reserve prediction historically may be a manual process in which actuaries or other financial scientists manually review claims and make estimates of the final loss amounts associated with those claims. This manual process may carry a significant error margin due to human inexperience, limitations of recollection, bias, and other frailties. Loss reserving practice may be very difficult to get right, and may have serious consequences for companies that neglect to do it properly. Underestimation of loss reserves may cause a company to believe that it has adequate capitalization, when in reality, it does not. Once the final loss amounts become known, the liquidity of the company may be negatively affected. On the other hand, overestimation of loss reserves may cause a company to set aside funds in excess of the necessary capital reserves. Doing so may prevent the company from using the capital for other purposes. Conventional techniques may have other drawbacks as well.

BRIEF SUMMARY

The present disclosure generally relates to systems and methods for detecting damage, loss, injury and/or other conditions associated with a vehicle using a computer system; and methods and systems for processing, estimating, and optimizing loss reserving and financial reporting obligations. Embodiments of exemplary systems and computer-implemented methods are summarized below. The methods and systems summarized below may include additional, less, or alternate components, functionality, and/or actions, including those discussed elsewhere herein.

In one aspect, a computer-implemented method of determining loss reserves is provided. The method may include receiving a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount, normalizing each respective claim loss amount, and training an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module or model) by applying the plurality of electronic claim documents to the artificial neural network. The method may further include receiving a user claim and predicting a loss reserve amount by applying the user claim to the trained artificial neural network (or other artificial intelligence or trained machine learning algorithm, program, module, or model).

In another aspect, a computing system is provided having one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to receive a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount, normalize each respective claim loss amount, and train an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) by applying the plurality of electronic claim documents to the artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model). The instructions may further cause the computing system to receive a user claim and predict a loss reserve amount by applying the user claim to the trained artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model).

Advantages will become more apparent to those skilled in the art from the following description of the preferred embodiments which have been shown and described by way of illustration. As will be realized, the present embodiments may be capable of other and different embodiments, and their details are capable of modification in various respects. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

The Figures described below depict various aspects of the system and methods disclosed therein. It should be understood that each Figure depicts one embodiment of a particular aspect of the disclosed system and methods, and that each of the Figures is intended to accord with a possible embodiment thereof. Further, wherever possible, the following description refers to the reference numerals included in the following Figures, in which features depicted in multiple Figures are designated with consistent reference numerals.

There are shown in the drawings arrangements which are presently discussed, it being understood, however, that the present embodiments are not limited to the precise arrangements and instrumentalities shown, wherein:

FIG. 1 depicts an exemplary computing environment in which techniques for training a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) to determine a loss reserve associated with a vehicle and/or vehicle operator may be implemented, according to one embodiment;

FIG. 2 depicts an exemplary computing environment in which techniques for collecting and processing user input, and training a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) to determine loss reserving and financial reporting information may be implemented, according to one embodiment;

FIG. 3 depicts an exemplary artificial neural network which may be trained by the neural network unit of FIG. 1 or the neural network training application of FIG. 2, according to one embodiment and scenario;

FIG. 4 depicts an exemplary neuron, which may be included in the artificial neural network of FIG. 3, according to one embodiment and scenario;

FIG. 5 depicts text-based content of an exemplary electronic claim record that may be processed by an artificial neural network, in one embodiment;

FIG. 6 depicts a flow diagram of an exemplary computer-implemented method of determining a risk level posed by an operator of a vehicle, according to one embodiment;

FIG. 7 depicts a flow diagram of an exemplary computer-implemented method of identifying risk indicators from vehicle operator information, according to one embodiment;

FIG. 8 is a flow diagram depicting an exemplary computer-implemented method of detecting and/or estimating damage to personal property, according to one embodiment;

FIG. 9A is an example flow diagram depicting an exemplary computer-implemented method of determining damage to personal property, according to one embodiment;

FIG. 9B is an example data flow diagram depicting an exemplary computer-implemented method of determining damage to an insured vehicle using a trained machine learning algorithm (or other artificial intelligence or machine learning algorithm, program, module, or model) to facilitate handling an insurance claim associated with the damaged insured vehicle, according to one embodiment;

FIG. 10A is an example flow diagram depicting an exemplary computer-implemented method for determining damage to personal property, according to one embodiment;

FIG. 10B is an example data flow diagram depicting an exemplary computer-implemented method of determining damage to an undamaged insurable vehicle using a trained machine learning algorithm (or other artificial intelligence or machine learning algorithm, program, module, or model) to facilitate generating an insurance quote for the undamaged insurable vehicle, according to one embodiment;

FIG. 11 depicts an example loss reserving user interface, in which a user may train and/or operate a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) using a customized data set, according to one embodiment and scenario;

FIG. 12 depicts a flow diagram of an exemplary computer-implemented method of determining loss reserves, according to one embodiment; and

FIG. 13 depicts a flow diagram of an exemplary computer-implemented method of executing a trained artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) and data set, and displaying the result of such execution, according to one embodiment.

The Figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein.

DETAILED DESCRIPTION

Artificial Intelligence Systems for Insurance

The present embodiments are directed to, inter alia, machine learning and/or training of models using historical automobile claim data to determine optimal loss reserving amounts and financial reporting information. Systems and methods may include natural language processing of free-form notes/text, or free-form speech/audio, recorded by a call center and/or claim adjuster, as well as photos and/or other evidence. The free-form text and/or free-form speech may also be received from a customer who is inputting the text or speech into a mobile device app or into a smart vehicle controller, and/or into a chat bot or robo-advisor.

Other inputs to a machine learning/training model may be harvested from historical claims, and may include make, model, year, miles, technological features, and/or other characteristics of a vehicle, including any software updates that have been applied to the vehicle (including versions thereof), claim paid or not paid, liability (e.g., types of injuries, where treated, how treated, etc.), disbursements related to the claim such as rental costs and other payouts, etc. Additional inputs to the machine learning/training model may include vehicle telematics data, such as how long and when the doors are unlocked, how often the security system is armed, how long the vehicle is operated and/or during which times of the day, etc.

Vehicle inspection and/or maintenance records may be of particular interest in some embodiments, as being highly correlated to vehicle malfunction. A driver's history may also be used as an input to artificial intelligence or machine learning algorithms or models in some embodiments, including without limitation, the driver's age, number and type of moving violations and any fines associated therewith, etc.

As noted above, “loss reserves” are amounts of capital set aside in advance of claim settlement, and in some cases, prior to the filing of claims. For example, an insurer may learn via statistical analysis that a given number of broken-windshield claims may occur each year, and may be able to derive a claim payout average. With this information, the insurer may be able to extrapolate the amount of loss reserves that should be set aside for the next year's worth of broken-windshield claims. However, a model that is specific to broken-windshield claims will not provide the insurer with information related to other claim types, and may not be very accurate. The methods and systems herein may be used to build general models for determining loss reserves taking into account decades' worth of historical claims information, which may appear counter-intuitive to human observers.
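By way of a non-limiting illustration, the extrapolation described above for a single claim type might be sketched as follows (in Python, with purely hypothetical figures and names):

    # Sketch: extrapolating next year's loss reserve for one claim type from
    # historical claim frequency and average payout. All figures are hypothetical.
    historical_claim_counts = [1180, 1225, 1302, 1276]     # broken-windshield claims per year
    historical_payouts = [412.50, 405.00, 431.25, 428.10]  # average payout per claim, in dollars

    expected_claims = sum(historical_claim_counts) / len(historical_claim_counts)
    average_payout = sum(historical_payouts) / len(historical_payouts)

    # Reserve set aside for next year's broken-windshield claims.
    windshield_reserve = expected_claims * average_payout
    print(f"Estimated broken-windshield reserve: ${windshield_reserve:,.2f}")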

Artificial Intelligence System for Vehicle Insurance

The present embodiments may also be directed to machine learning and/or training a model using historical auto claim data to discover loss reserving data. The present embodiments may include natural language processing of free-form notes recorded by a call center and/or claim adjuster (e.g., “hit a deer”, “surgery”, “hospital”, etc.), photos, and/or other evidence to use as input to a machine learning/training model. Other inputs to a machine learning/training model may be harvested from historical claims, and may include make, model, year, claim paid or not paid, liability (e.g., types of injuries, where treated, how treated, etc.), disbursements related to the claim such as rental car and other payouts, etc. It should be appreciated that the inputs to the trained model may be very complex and may include many (e.g., millions of) inputs. For example, a single network may base an amount of loss reserve on a zip code, medical diagnosis, treatment plan, age of injured person, gender of injured person, point of impact, G-forces at impact, air bag deployment(s), striking vehicle weight and/or size, etc. Many more, or fewer, inputs may be included in some embodiments. The presence or absence of autonomous vehicle features may be determined with respect to historical auto claims, and may be used in the training process.

Exemplary Environment for Determining Loss Reserves and Financial Reporting

The embodiments described herein may relate to, inter alia, determining one or more loss reserves. The embodiments described herein may also relate to financial reporting. Different loss reserve amounts may be generated by separate models examining a set of inputs, in some embodiments. In some embodiments, one or more neural network models (or other artificial intelligence or machine learning algorithms, programs, modules, or models) may be trained using a subset of historical claims data as training input. A separate subset of the historical claims data may be used for validation and cross-validation purposes. An application may be provided to a client computing device (e.g., a smartphone, tablet, laptop, desktop computing device, wearable, or other computing device) of a user. A user of the application, who may be an employee of a company employing the methods described herein or a customer of that company, may enter input into the application via a user interface or other means. The input may be transmitted from the client computing device to a remote computing device (e.g., one or more servers) via a computer network, and then processed further, including by applying input entered into the client to the one or more trained neural network models (or other artificial intelligence or machine learning algorithms, programs, modules, or models) to produce labels and weights indicating net or individual risk factors, based upon existing claim data.

For example, the remote computing device may receive the input and determine, using a trained neural network (or other artificial intelligence or machine learning algorithm, program, module, or model), one or more loss reserve amounts applicable to the input. Herein loss reserve amounts may be expressed numerically, as strings (e.g., as labels), or in any other suitable format. Loss reserves may be expressed as a dollar amount, or as a multiplier with respect to a past amount (e.g., 1.2 or 120%). The determined loss reserve amounts may be displayed to an end user, or an employee or owner of a business utilizing the methods and systems described herein. Similarly, the loss reserve amounts may be provided as input to another application (e.g., to an application which provides the loss reserve amounts to an end user). The loss reserve amounts may be joined with other information (e.g., claim category) and may be formatted (e.g., in a table or other suitable format) and automatically inserted in a financial report, such as a PDF file.
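As a simplified, non-limiting sketch (hypothetical names and values), a predicted loss reserve amount might be expressed as a dollar amount or as a multiplier, and joined with a claim category before insertion into a report:

    # Sketch: expressing a predicted reserve as a dollar amount or as a multiplier
    # relative to a prior-period amount, then joining it with a claim category.
    prior_reserve = 850_000.00
    predicted_reserve = 1_020_000.00

    dollar_form = f"${predicted_reserve:,.0f}"
    multiplier_form = f"{predicted_reserve / prior_reserve:.0%}"   # e.g., "120%"

    report_row = {"claim_category": "collision",
                  "loss_reserve": dollar_form,
                  "vs_prior_period": multiplier_form}
    print(report_row)  # row could be formatted into a table within a financial report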

A loss reserve aggregate may include one or more loss reserve amounts respective of a claim category or subtype, and may include a gross or net reserve amount. For example, an “automobile” loss reserve aggregate may be created which predicts a $1 m total loss reserve. This aggregate may include a plurality of categorical loss reserves, which may themselves be aggregate loss reserves or individual loss reserves. For example, the automobile loss reserve aggregate may include a “motorcycle” loss reserve and a “passenger car” loss reserve, wherein the motorcycle loss reserve may be associated with an amount of ($2 m) and the passenger car loss reserve may be associated with an amount of $3.2 m. Herein, parentheses may be used to denote negative loss reserves (e.g., shortfalls) and lack of parentheses may be used to denote positive loss reserves (e.g., surpluses). The gross amount of the automobile (i.e., combined motorcycle and passenger car) loss reserves may be $1.2 m. An amount of money may be deducted from the gross amount for miscellaneous expenses associated with the automobile loss reserve and/or constituent loss reserves (e.g., audit fees, storage fees, etc.) to arrive at a net loss reserve amount. Gross and/or net loss reserves may be included in financial reports.
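The aggregation just described might be sketched, in simplified and non-limiting form, as follows (amounts are the hypothetical figures from the example above):

    # Sketch of a loss reserve aggregate: negative amounts (shortfalls) are shown
    # in parentheses, positive amounts (surpluses) without. Values are hypothetical.
    categorical_reserves = {"motorcycle": -2_000_000, "passenger car": 3_200_000}
    misc_expenses = 150_000  # e.g., audit fees, storage fees

    def fmt(amount):
        # Parentheses denote negative loss reserves (shortfalls).
        return f"(${abs(amount):,.0f})" if amount < 0 else f"${amount:,.0f}"

    gross_reserve = sum(categorical_reserves.values())   # $1.2 m in this example
    net_reserve = gross_reserve - misc_expenses

    for category, amount in categorical_reserves.items():
        print(f"{category:>14}: {fmt(amount)}")
    print(f"{'gross':>14}: {fmt(gross_reserve)}")
    print(f"{'net':>14}: {fmt(net_reserve)}")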

It should be appreciated that the fully automated and dynamic learning methods and systems described herein may estimate loss reserves not only for insurance claims that have been reported but also for claims that have occurred but have not yet been reported or recorded, as may be required by actuarial standards of practice and/or applicable law.

Turning to FIG. 1, an exemplary computing environment 100, representative of automobile monitoring systems and methods for loss reserve determination and financial reporting, is depicted. Environment 100 may include input data 102 and historical data 108, both of which may comprise a list of parameters, a plurality (e.g., thousands or millions) of electronic documents, or other information. As used herein, the term “data” generally refers to information related to a vehicle operator, which exists in the environment 100. For example, data may include an electronic document representing a vehicle (e.g., automobile, truck, boat, motorcycle, etc.) insurance claim, demographic information about the vehicle operator and/or information related to the type of vehicle or vehicles being operated by the vehicle operator, and/or other information.

Data may be historical or current. Although data may be related to an ongoing claim filed by a vehicle operator, in some embodiments, data may consist of raw data parameters entered by a human user of the environment 100 or which is retrieved/received from another computing system. Data may or may not relate to the claims filing process, and while some of the examples described herein refer to auto insurance claims, it should be appreciated that the techniques described herein may be applicable to other types of electronic documents, in other domains. For example, the techniques herein may be applicable to determining loss reserves and generating financial reports in other insurance domains, such as agricultural insurance, homeowners insurance, health or life insurance, renters insurance, etc. In that case, the scope and content of the data may differ.

As another example, data may be collected from an existing customer filing a claim, a potential or prospective customer applying for an insurance policy, or may be supplied by a third party such as a company other than the proprietor of the environment 100. In some cases, data may reside in paper files that are scanned or entered into a digital format by a human or by an automated process (e.g., via a scanner). Generally, data may comprise any digital information, from any source, created at any time.

Input data 102 may be loaded into an artificial intelligence (AI) platform 104 to organize, analyze, and process input data 102 in a manner that facilitates determining optimal loss reserves via a loss reserve aggregation platform 106. The loading of input data 102 may be performed by executing a computer program on a computing device that has access to the environment 100, and the loading process may include the computer program coordinating data transfer between input data 102 and AI platform 104 (e.g., by the computer program providing an instruction to AI platform 104 as to an address or location at which input data 102 is stored). AI platform 104 may reference this address to retrieve records from input data 102 to perform loss reserves determinations. AI platform 104 may be thought of as a collection of algorithms configured to receive and process parameters, and to produce labels and, in some embodiments, loss reserves and financial reports.

As discussed below with respect to FIGS. 3, 4, and 5, AI platform 104 may be used to train multiple neural network models relating to different granular segments of vehicle operators. For example, AI platform 104 may be used to train a neural network model for use in determining loss reserves related to motorcycle operators. In another embodiment, AI platform 104 may be used to train a neural network model (or other artificial intelligence or machine learning algorithm, program, module, or model) for use in determining optimal loss reserves, a priori, relating to windshield damage claims. The precise manner in which neural networks are created and trained is described below. In some embodiments, large-scale/distributed computing tools (e.g., Apache Hadoop) may be used to implement portions of AI platform 104.

In the embodiment of FIG. 1, AI platform 104 may include input analysis unit 120. Input analysis unit 120 may include speech-to-text unit 122 and image analysis unit 124, which may comprise, respectively, algorithms for converting human speech into text and for analyzing images (e.g., extracting information from hotel and rental receipts). In this way, data may comprise audio recordings (e.g., recordings made when a customer telephones a customer service center) that may be converted to text and further used by AI platform 104. In some embodiments, customer behavior represented in data, including the accuracy and truthfulness of a customer, may be encoded by input analysis unit 120 and used by AI platform 104 to train and operate neural network models (or other artificial intelligence or machine learning algorithms or models). Input analysis unit 120 may also include text analysis unit 126, which may include pattern matching unit 128 and natural language processing (NLP) unit 130. In some embodiments, text analysis unit 126 may determine facts regarding claim inputs (e.g., the amount of money paid under a claim). Amounts may be determined in a currency- and inflation-neutral manner, so that claim loss amounts may be directly compared. In some embodiments, text analysis unit 126 may analyze text produced by speech-to-text unit 122 or image analysis unit 124.
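One possible, non-limiting way to make claim loss amounts inflation-neutral, assuming a hypothetical table of price indices, is to restate each amount in base-year dollars:

    # Sketch: restating historical claim loss amounts in base-year dollars so that
    # amounts from different years can be directly compared. Index values are hypothetical.
    price_index = {2015: 0.93, 2016: 0.95, 2017: 0.97, 2018: 1.00}  # base year: 2018

    def normalize_amount(amount, claim_year, base_year=2018):
        """Return the claim loss amount restated in base-year dollars."""
        return amount * price_index[base_year] / price_index[claim_year]

    claims = [(2015, 4_200.00), (2017, 4_350.00), (2018, 4_500.00)]
    normalized = [normalize_amount(amount, year) for year, amount in claims]
    print(normalized)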

In some embodiments, pattern matching unit 128 may search textual claim data loaded into AI platform 104 for specific strings or keywords in text (e.g., “hospital” or “surgery”) which may be indicative of particular types of injury. Such keywords may be associated with a respective occurrence count (e.g., the number 5 may indicate that a person sustained five surgeries). NLP unit 130 may be used to identify, for example, entities or objects indicative of risk (e.g., that an injury occurred to a person, and that the person's leg was injured). NLP unit 130 may identify human speech patterns in data, including semantic information relating to entities, such as people, vehicles, homes, and other objects. For example, the location and time of an accident may be identified, as well as a quantity related to an accident (e.g., the number of passengers).
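A minimal, non-limiting sketch of such keyword matching with occurrence counts (keyword list and note text below are hypothetical) might be:

    import re

    # Sketch: counting occurrences of injury-related keywords in free-form claim text.
    KEYWORDS = ("hospital", "surgery", "fracture", "ambulance")

    def keyword_counts(claim_text):
        """Return a mapping of keyword -> number of occurrences in the claim text."""
        lowered = claim_text.lower()
        return {kw: len(re.findall(r"\b" + re.escape(kw) + r"\b", lowered))
                for kw in KEYWORDS}

    note = "Driver hit a deer; passenger taken by ambulance to hospital, surgery on leg."
    print(keyword_counts(note))  # e.g., {'hospital': 1, 'surgery': 1, 'fracture': 0, 'ambulance': 1}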

Relevant verbs and objects, as opposed to verbs and objects of lesser relevance, may be determined by the use of a machine learning algorithm analyzing historical claims. For example, both a driver and a deer may be relevant objects. Verbs indicating collision or injury may be relevant verbs. In some embodiments, text analysis unit 126 may comprise text processing algorithms (e.g., lexers and parsers, regular expressions, etc.) and may emit structured text in a format which may be consumed by other components. For example, text analysis unit 126 may receive output from a trained neural network.

In the embodiment of FIG. 1, AI platform 104 may include a loss classifier 140 to classify, or group, losses. Such classification may use standard clustering techniques used in machine learning, such as k-means algorithms. In some embodiments, loss classifier 140 may group losses into groups by pre-defined categories (e.g., large/small, personal injury/property, etc.). In other embodiments, classification may determine categories by agglomeration or other known methods. Loss classifier 140 may associate claims with loss category information, and such information may be stored in loss data 142. Loss classifier 140 may be used to build a predictive model that pertains to a category (e.g., motorcycle operators) as described above.
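As a non-limiting illustration, such clustering might be sketched with a standard k-means implementation (the two features and the claim values below are hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans

    # Sketch: grouping historical losses with k-means using two hypothetical features
    # per claim: loss amount and an injury indicator (0 = property only, 1 = injury).
    claims = np.array([
        [1_200.0, 0], [800.0, 0], [45_000.0, 1],
        [52_000.0, 1], [2_300.0, 0], [150_000.0, 1],
    ])

    clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(claims)
    print(clusterer.labels_)                    # cluster membership for each historical claim
    print(clusterer.predict([[60_000.0, 1]]))   # loss category for a new claim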

Loss classifier 140 may analyze a subset of claims in historical data 108. The subset of claims may contain a mixture of severe claims (e.g., those claims in which complications from surgery post-accident resulted in the greatest level of damage, whether quantified by pecuniary loss or the loss of human life, motor function, and/or cognitive function) and non-severe claims (e.g., those claims in which only minor first aid was rendered post-accident). Loss classifier 140 may be trained to categorize claims based upon membership in one or more “severity” categories. Once loss classifier 140 has classified a given claim, its severity may be saved to and/or retrieved from an electronic database, such as loss data 142, or associated with a set of input data 102. The severity information produced by loss classifier 140 may also be passed to other components, such as neural network unit 150. Random forest trees may be used to classify claims, and may be capable of determining which of several criteria or features associated with each claim was paramount in the classifier's decision.

Neural network unit 150 may use an artificial neural network, or simply “neural network.” The neural network may be any suitable type of neural network, including, without limitation, a recurrent neural network, feed-forward neural network, and/or deep learning network. The neural network may include any number (e.g., thousands) of nodes or “neurons” arranged in multiple layers, with each neuron processing one or more inputs to generate a decision or other output. In some embodiments, neural network unit 150 may use other types of artificial intelligence or machine learning algorithms or models, including those discussed elsewhere herein.

In some embodiments, neural network models may be chained together, so that output from one model is piped or transferred into another model as input. For example, loss classifier 140 may, in one embodiment, apply input data 102 to a first neural network model that is trained to categorize claims (e.g., by vehicle type, severity and/or other criteria). The output of this first neural network model may be fed as input to a second neural network model which has been trained to generate loss reserves for claims corresponding to the categories.
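A simplified, non-limiting sketch of such chaining, in which both models and the claim features are hypothetical stand-ins, might be:

    # Sketch: piping the output of a claim-categorization model into a per-category
    # loss reserve model. The models and claim features are hypothetical stand-ins.
    def categorize_claim(claim_features, category_model):
        # First model: e.g., returns "motorcycle", "passenger car", "windshield", ...
        return category_model.predict([claim_features])[0]

    def predict_reserve(claim_features, category, reserve_models):
        # Second model: a reserve model trained for the predicted category.
        model = reserve_models[category]
        return model.predict([claim_features])[0]

    def chained_loss_reserve(claim_features, category_model, reserve_models):
        category = categorize_claim(claim_features, category_model)
        return category, predict_reserve(claim_features, category, reserve_models)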

As noted, a neural network may include a series of nodes connected by weighted links, and the neural network may be continuously trained from a randomized initial state, using a subset of historical claims as input, until the outputs corresponding to the sum of the weights at each layer in the neural network converge to the particular values assigned to a “truth” data set. A truth data set may contain claims along with correct, or optimal, loss reserving amounts, and may be based upon historical loss reserves of an insurer. For example, the truth data set may include a plurality of claims with corresponding loss reserves that resulted in the insurer not overestimating or underestimating the payouts of the plurality of claims. The error of the network may be measured as the difference between the network's outputs and the particular values in the truth data set. Once trained, the neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) may be validated with another subset of data, and its parameters and/or structure adjusted accordingly.

Neural network unit 150 may include training unit 152 and loss reserving unit 154. To train the neural network to identify risk, neural network unit 150 may access electronic claims within historical data 108. Historical data 108 may comprise a corpus of documents comprising many (e.g., millions of) insurance claims which may contain data linking a particular customer or claimant to one or more vehicles, and which may also contain, or be linked to, information pertaining to the customer. In particular, historical data 108 may be analyzed by AI platform 104 to generate claim records 110-1 through 110-n, where n is any positive integer. Each claim 110-1 through 110-n may be processed by training unit 152 to train one or more neural networks to predict loss reserves, including by pre-processing of historical data 108 using input analysis unit 120 as described above. Claim records 110-1 through 110-n may be assigned to a time series by, for example, date parsing or other methods.

Neural network 150 may, from a trained model, identify labels that correspond to specific data, metadata, and/or attributes within input data 102, depending on the embodiment. For example, neural network 150 may be provided with instructions from input analysis unit 120 indicating that one or more particular type of insurance is associated with one or more portions of input data 102.

Neural network 150 (or other artificial intelligence or machine learning algorithm, program, module, or model) may identify one or more insurance types associated with the one or more portions of input data 102 (e.g., bodily injury, property damage, collision coverage, comprehensive coverage, liability insurance, medical payments, or personal injury protection (PIP) insurance), in conjunction with input analysis unit 120. In one embodiment, the one or more insurance types may be identified by training the neural network 150 (or other artificial intelligence or machine learning algorithm or model) based upon types of peril. For example, the neural network model (or other artificial intelligence or machine learning algorithm or model) may be trained to determine that fire, theft, or vandalism may indicate comprehensive insurance coverage. Insurance types and/or types of peril may be used to categorize claim records for the purpose of training a model. For example, a “vandalism” loss reserve model may be trained using a categorized subset of such data.
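For instance, a simplified, non-limiting mapping from detected perils to insurance types (the mapping below is hypothetical) might be used to categorize claim records for training:

    # Sketch: mapping peril keywords found in a claim to a likely insurance type,
    # which can then be used to route the claim to a category-specific reserve model.
    PERIL_TO_COVERAGE = {
        "fire": "comprehensive",
        "theft": "comprehensive",
        "vandalism": "comprehensive",
        "collision": "collision",
        "whiplash": "personal injury protection",
    }

    def coverage_types(perils):
        """Return the set of insurance types suggested by the detected perils."""
        return {PERIL_TO_COVERAGE[p] for p in perils if p in PERIL_TO_COVERAGE}

    print(coverage_types(["vandalism", "theft"]))  # {'comprehensive'}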

In addition, input data 102 may indicate a particular customer and/or vehicle. In that case, loss classifier 140 may look up additional customer and/or vehicle information from customer data 160 and vehicle data 162, respectively. For example, the age of the vehicle operator and/or vehicle type may be obtained. The additional customer and/or vehicle information may be provided to neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) and may be used to analyze and label input data 102 and, ultimately, may be used to train the artificial neural network model (or other artificial intelligence or machine learning algorithm or model). For example, neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) may be used to predict risk based upon inputs obtained from a person applying for an auto insurance policy, or based upon a claim submitted by a person who is a holder of an existing insurance policy. That is, in some embodiments where neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) is trained on claim data, neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) may determine loss reserves based upon raw information unrelated to the claims filing process, or based upon other data obtained during the filing of a claim (e.g., a claim record retrieved from historical data 108).

In one embodiment, the training process may be performed in parallel, and training unit 152 may analyze all or a subset of claims 110-1 through 110-n. Specifically, training unit 152 may train a neural network (or other artificial intelligence or machine learning algorithm or model) to predict loss reserves in claim records 110-1 through 110-n. As noted, AI platform 104 may analyze input data 102 to arrange the historical claims into claim records 110-1 through 110-n, where n is any positive integer. Claim records 110-1 through 110-n may be organized in a flat list structure, in a hierarchical tree structure, or by means of any other suitable data structure. For example, the claim records may be arranged in a tree wherein each branch of the tree is representative of one or more customers.

There, each of claim records 110-1 through 110-n may represent a single non-branching claim, or may represent multiple claim records arranged in a group or tree. Further, claim records 110-1 through 110-n may comprise links to customers and vehicles whose corresponding data is located elsewhere. In this way, one or more claims may be associated with one or more customers and one or more vehicles via one-to-many and/or many-to-one relationships. Risk factors may be data indicative of a particular risk or risks associated with a given claim, customer, and/or vehicle. The status of claim records may be completely settled or in various stages of settlement.

As used herein, the term “claim” or “vehicle claim” generally refers to an electronic document, record, or file, that represents an insurance claim (e.g., an automobile insurance claim) submitted by a policy holder of an insurance company. Herein, “claim data” or “historical data” generally refers to data directly entered by the customer or insurance company including, without limitation, free-form text notes, photographs, audio recordings, written records, receipts (e.g., hotel and rental car), and other information, including data from legacy systems, such as pre-Internet (e.g., paper file) systems. Notes from claim adjusters and attorneys may also be included.

In one embodiment, claim data may include claim metadata or external data, which generally refers to data pertaining to the claim that may be derived from claim data or which otherwise describes, or is related to, the claim but may not be part of the electronic claim record. Claim metadata may have been generated directly by a developer of the environment 100, for example, or may have been automatically generated as a direct product or byproduct of a process carried out in environment 100. For example, claim metadata may include a field indicating whether a claim was settled or not settled, the amount of any payouts, and the identity of corresponding payees. Another example of claim metadata is the geographic location in which a claim is submitted, which may be obtained via a global positioning system (GPS) sensor in a device used by the person or entity submitting the claim.

Yet another example of claim metadata includes a category of the claim type (e.g., collision, liability, uninsured or underinsured motorist, etc.). For example, a single claim in historical data 108 may be associated with a married couple, and may include the name, address, and other demographic information relating to the couple. Additionally, the claim may be associated with multiple vehicles owned or leased by the couple, and may contain information pertaining to those vehicles including without limitation, the vehicles' make, model, year, condition, mileage, etc. The claim may include a plurality of claim data and claim metadata, including metadata indicating a relationship or linkage to other claims in historical claim data 108. In this way, neural network unit 150 may produce a neural network that has been trained to associate the presence of certain input parameters with higher or lower risk levels. A specific example of a claim is discussed with respect to FIG. 5, below.
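By way of non-limiting illustration, an electronic claim record combining claim data and claim metadata might be sketched as follows (field names are hypothetical, and an actual record may carry many more attributes):

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Sketch of an electronic claim record combining claim data and claim metadata.
    @dataclass
    class ClaimRecord:
        claim_id: str
        claim_type: str                          # e.g., "collision", "liability"
        free_form_notes: str
        vehicle_ids: List[str] = field(default_factory=list)
        settled: bool = False                    # metadata: settled or not settled
        payout_amount: Optional[float] = None    # metadata: amount of any payouts
        payees: List[str] = field(default_factory=list)
        filing_location: Optional[str] = None    # metadata: e.g., GPS-derived location
        related_claim_ids: List[str] = field(default_factory=list)  # linkage to other claims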

Once the neural network (or other artificial intelligence or machine learning algorithm or model) has been trained, loss reserving unit 154 may analyze, combine, and/or validate prediction information from training unit 152. For example, loss reserving unit 154 may check whether predicted loss reserving amounts or percentages are within a given range (e.g., positive or negative). Loss reserving unit 154 may use pre-determined parameters retrieved from loss data 142 or another electronic database in conjunction with training unit 152 output. A trained neural network (or other artificial intelligence or machine learning algorithm or model) may, based upon analyzing claim data, output a loss reserving amount that is analyzed by loss reserving unit 154. Multiple loss reserving outputs, or estimates, may be aggregated by loss reserve aggregation platform 106.

AI platform 104 may further include customer data 160 and vehicle data 162, which loss classifier 140 may use to provide useful input parameters to neural network unit 150 (or other artificial intelligence or machine learning algorithm or model). Customer data 160 may be an integral part of AI platform 104, or may be located separately from AI platform 104. In some embodiments, customer data 160 or vehicle data 162 may be provided to AI platform 104 via separate means (e.g., via an API call), and may be accessed by other units or components of environment 100. Either may be provided by a third-party service. For example, in some embodiments, a trained neural network (or other artificial intelligence or machine learning algorithm or model) may require a vehicle type as a parameter. Based solely on a claim input from claims 110-1 through 110-n, loss classifier 140 may look up the vehicle type from vehicle data 162 as the claim is being passed to neural network unit 150. It should be appreciated that many sources of additional data may be used as inputs to train and operate artificial neural network models. The neural network modules may include other types of artificial intelligence or machine learning algorithms, models, and/or modules.

Vehicle data 162 may be a database comprising information describing vehicle makes and models, including information about model years and model types (e.g., model edition information, engine type, any upgrade packages, etc.). Vehicle data 162 may indicate whether certain make and model year vehicles are equipped with safety features (e.g., lane departure warnings). The vehicle data 162 may also relate to autonomous or semi-autonomous vehicle features or technologies of the vehicle, and/or sensors, software, and electronic components that direct the autonomous or semi-autonomous vehicle features or technologies. For example, the information describing vehicle makes and models may specify, at the model type and/or model year level, the degree to which a vehicle is equipped with autonomous and/or semiautonomous capabilities, and/or the degree to which a vehicle may be adequately retrofitted to accept such capabilities.

In some embodiments, the failure of an autonomous or semi-autonomous vehicle system may be discovered via the neural network (or other artificial intelligence or machine learning algorithm or model) analysis described above. An autonomous vehicle system failure such as “lane departure malfunction” may be used by loss reserving unit 154. In one embodiment, a user's completing a repair within a pre-set window of time, or one computed based upon loss probability, may cause the user to receive advantageous pricing as regards an existing or new insurance policy.

Vehicle capabilities may be listed individually. For example, a database table may be constructed within the electronic database which specifies whether a vehicle has a steering wheel, gas pedal, and/or brake pedal. In addition, or alternately, the database table may classify a vehicle as belonging to a particular category/taxonomic classification of autonomous or semi-autonomous vehicles as measured by a vehicle automation ratings system (e.g., by Society of Automotive Engineers (S.A.E.) automation ratings system), by which the set of features may be automatically determined, by reference to the standards established by the vehicle automation ratings system. In some embodiments, autonomous and/or semi-autonomous capabilities known to be installed in a vehicle, or which may be determined based upon a known vehicle classification/adherence to a standard, may be provided as input to an artificial neural network or other algorithm used to mitigate loss and/or handle claims. Vehicle owners may be advised (e.g., via a message displayed in a display such as display 224) that moving from one level of vehicle autonomy to another, may improve aggregate risk and decrease premiums.
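A non-limiting sketch of such a database table (the schema and values below are hypothetical) might be:

    import sqlite3

    # Sketch: a table listing vehicle capabilities individually and by an
    # automation level (e.g., an S.A.E.-style rating). Schema is hypothetical.
    conn = sqlite3.connect("vehicle_data.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS vehicle_capabilities (
            make               TEXT,
            model              TEXT,
            model_year         INTEGER,
            has_steering_wheel INTEGER,   -- 1 = yes, 0 = no
            has_gas_pedal      INTEGER,
            has_brake_pedal    INTEGER,
            automation_level   INTEGER    -- e.g., S.A.E.-style level 0-5
        )
    """)
    conn.execute(
        "INSERT INTO vehicle_capabilities VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("ExampleMake", "ExampleModel", 2018, 1, 1, 1, 2),
    )
    conn.commit()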

In some embodiments, users who have been involved in an accident recently (e.g., within one month) may be incentivized to mitigate further injury by utilizing autonomous driving features. Such incentives may be communicated to users after a trained neural network analyzes a claim as described above.

The types of autonomous or semi-autonomous vehicle-related functionality or technology that may be used with the present embodiments to replace human driver actions may include and/or be related to the following types of functionality: (a) fully autonomous (driverless); (b) limited driver control; (c) vehicle-to-vehicle (V2V) wireless communication; (d) vehicle-to-infrastructure (and/or vice versa), and/or vehicle-to-device (such as mobile device or smart vehicle controller) wireless communication; (e) automatic or semi-automatic steering; (f) automatic or semi-automatic acceleration; (g) automatic or semi-automatic braking; (h) automatic or semi-automatic blind spot monitoring; (i) automatic or semi-automatic collision warning; (j) adaptive cruise control; (k) automatic or semi-automatic parking/parking assistance; (l) automatic or semi-automatic collision preparation (windows roll up, seat adjusts upright, brakes pre-charge, etc.); (m) driver acuity/alertness monitoring; (n) pedestrian detection; (o) autonomous or semi-autonomous backup systems; (p) road mapping systems; (q) software security and anti-hacking measures; (r) theft prevention/automatic return; (s) automatic or semi-automatic driving without occupants; and/or other functionality.

All of the information pertaining to a claim, including customer and vehicle information, may be provided to neural network unit 150 for training a model to determine loss reserving amounts. In some embodiments, loss reserve overrides that are stored separately from AI platform 104 may be used to force human oversight of loss reserve predictions. For example, the methods and systems herein may contain instructions which, when executed, cause any claim being analyzed by a neural network (or other artificial intelligence or machine learning algorithm or model) for which a loss reserving amount of over $1 m is predicted to require human review and confirmation. Over time, as the model is trained, such overrides may be removed. In other embodiments, the models may be completely automated and unattended.
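A minimal, non-limiting sketch of such an override (the threshold and names are hypothetical) might be:

    # Sketch of a loss reserve override: predictions above a configurable threshold
    # are flagged for human review and confirmation.
    HUMAN_REVIEW_THRESHOLD = 1_000_000  # $1 m

    def apply_override(claim_id, predicted_reserve, threshold=HUMAN_REVIEW_THRESHOLD):
        """Return the prediction together with a flag indicating whether review is required."""
        requires_review = predicted_reserve > threshold
        return {"claim_id": claim_id,
                "predicted_reserve": predicted_reserve,
                "requires_human_review": requires_review}

    print(apply_override("CLM-0001", 1_250_000))  # flagged for human review
    print(apply_override("CLM-0002", 80_000))     # handled fully automatically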

It should also be appreciated that the methods and techniques described herein may not be applied to seek profit in an insurance marketplace. Rather, the methods and techniques may be used to more fairly and equitably allocate risk among customers in a way that is revenue-neutral, yet which strives for fairness to all market participants, and may only be used on an opt-in basis. In one embodiment, a claim may be related to the operation of a vehicle. In other words, the claim may relate to physical injury sustained by a driver and/or passenger, damage to the vehicle being driven by the vehicle operator, another vehicle, or other persons/property. The models trained using the methods and systems herein may be trained incrementally, so that when new claims are settled, they are used to improve an existing model without completely retraining the model on all data.

The methods and systems described herein may help risk-averse customers to lower their insurance premiums by taking affirmative steps to mitigate risk of loss before, during, and after the filing of a claim. The methods and systems may also allow customers to interact with claims handling in a transparent, streamlined, and scalable fashion. All of the benefits provided by the methods and systems described herein may be realized much more quickly than traditional modeling approaches.

Exemplary Training Model System

With reference to FIG. 2, a high-level block diagram of vehicle insurance loss reserving model training system 200 is illustrated that may implement communications between a client device 202 and a server device 204 via network 206 to provide vehicle insurance loss mitigation and/or claims handling. FIG. 2 may correspond to one embodiment of environment 100 of FIG. 1, and also includes various user/client-side components. For simplicity, client device 202 is referred to herein as client 202, and server device 204 is referred to herein as server 204, but either device may be any suitable computing device (e.g., a laptop, smart phone, tablet, server, wearable device, etc.). Server 204 may host services relating to neural network training and operation, and may be communicatively coupled to client 202 via network 206. In general, training the neural network model may include establishing a network architecture, or topology, and adding layers that may be associated with one or more activation functions (e.g., a rectified linear unit, softmax, etc.), loss functions, and/or optimization functions. Multiple different types of artificial neural networks may be employed, including without limitation, recurrent neural networks, convolutional neural networks, and deep learning neural networks. Data sets used to train the artificial neural network(s) may be divided into training, validation, and testing subsets; these subsets may be encoded in an N-dimensional tensor, array, matrix, or other suitable data structures. Training may be performed by iteratively training the network using labeled training samples. Training of the artificial neural network may produce byproduct weights, or parameters, which may be initialized to random values. The weights may be modified as the network is iteratively trained, by using one of several gradient descent algorithms, to reduce loss and to cause the values output by the network to converge to expected, or “learned”, values. In an embodiment, a regression neural network may be selected which lacks an activation function, wherein input data may be normalized by mean centering; to determine loss and quantify the accuracy of outputs, such normalization may use a mean squared error loss function and mean absolute error. The artificial neural network model may be validated and cross-validated using standard techniques such as hold-out, K-fold, etc. In some embodiments, multiple artificial neural networks may be separately trained and operated, and/or separately trained and operated in conjunction.
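A minimal, non-limiting sketch of such a training loop, assuming a single-layer regression network without an activation function, mean-centered inputs, a mean squared error loss, a simple hold-out split, and randomly generated placeholder data standing in for claim features and reserves, might be:

    import numpy as np

    # Sketch: a regression "network" without an activation function, trained by
    # gradient descent on mean-centered inputs with a mean squared error loss.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))                                   # claim feature vectors
    y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=1000)    # placeholder "true" reserves

    # Hold-out split into training and validation subsets.
    X_train, X_val = X[:800], X[800:]
    y_train, y_val = y[:800], y[800:]

    # Normalize inputs by mean centering (statistics taken from the training subset).
    mean = X_train.mean(axis=0)
    X_train, X_val = X_train - mean, X_val - mean

    # Weights initialized to random values, then iteratively adjusted by gradient descent.
    w = rng.normal(scale=0.01, size=8)
    b = 0.0
    learning_rate = 0.05
    for epoch in range(200):
        pred = X_train @ w + b
        error = pred - y_train
        loss = np.mean(error ** 2)                        # mean squared error loss
        w -= learning_rate * 2 * X_train.T @ error / len(y_train)
        b -= learning_rate * 2 * error.mean()

    val_mae = np.mean(np.abs(X_val @ w + b - y_val))      # mean absolute error on hold-out set
    print(f"validation MAE: {val_mae:.4f}")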

Although only one client device is depicted in FIG. 2, it should be understood that any number of client devices 202 may be supported. Client device 202 may include a memory 208 and a processor 210 for storing and executing, respectively, a module 212. While referred to in the singular, processor 210 may include any suitable number of processors of one or more types (e.g., one or more CPUs, graphics processing units (GPUs), cores, etc.). Similarly, memory 208 may include one or more persistent memories (e.g., a hard drive and/or solid state memory).

Module 212, stored in memory 208 as a set of computer-readable instructions, may be related to a loss reserve client 216 which, when executed by the processor 210, may cause input data to be stored in memory 208 or data to be transferred to/from server 204 via network 206. The data stored in memory 208 may correspond to, for example, raw data retrieved from input data 102. Loss reserve client 216 may be implemented as a web page (e.g., HTML, JavaScript, CSS, etc.) and/or as a mobile application for use on a standard mobile computing platform.

Loss reserve client 216 may store information in memory 208, including the instructions required for its execution. While the user is using loss reserve client 216, scripts and other instructions comprising loss reserve client 216 may be represented in memory 208 as a web or mobile application. The input data collected by loss reserve client 216 may be stored in memory 208 and/or transmitted to server device 204 by network interface 214 via network 206, where the input data may be processed as described above to determine a series of risk indications and/or a risk level. In one embodiment, the data collected by loss reserve client 216 may be data used to train a model (e.g., scanned claim data).

Client device 202 may also include GPS sensor 218, an image sensor 220, user input device 222 (e.g., a keyboard, mouse, touchpad, and/or other input peripheral device), and display interface 224 (e.g., an LED screen). User input device 222 may include components that are integral to client device 202, and/or exterior components that are communicatively coupled to client device 202, to enable client device 202 to accept inputs from the user. Display 224 may be either integral or external to client device 202, and may employ any suitable display technology. In some embodiments, input device 222 and display 224 are integrated, such as in a touchscreen display. Execution of the module 212 may further cause the processor 210 to associate device data collected from client 202 such as a time, date, and/or sensor data (e.g., a camera for photographic or video data) with vehicle and/or customer data, such as data retrieved from customer data 160 and vehicle data 162, respectively.

In some embodiments, client 202 may receive data from loss data 142 and loss reserve aggregation platform 106. Such data, e.g., loss labels and plan data, may be presented to a user of client 202 by a display interface 224. Aggregation data may include gross and net amounts related to categories of loss reserves, in some embodiments. Aggregation data may include an acceptability indicator, demonstrative of whether aggregate amounts are more or less than an acceptable dollar amount or multiplier. An action may be taken if an acceptability indicator demonstrates an amount beyond an acceptable range (e.g., a warning message emitted or an email sent). In this way, the loss reserve aggregation platform 106 may provide an insurer with a view of loss reserves at a global level (e.g., across a division, such as automotive, or with respect to an organizational unit or subsidiary of a company) or at a level wherein the granularity is configurable by the insurer, all the way down to the individual customer level.
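A non-limiting sketch of such an acceptability check (the range bounds, amounts, and action below are hypothetical) might be:

    import logging

    # Sketch: checking an aggregate loss reserve against an acceptable range and
    # taking an action (here, logging a warning) when it falls outside that range.
    ACCEPTABLE_RANGE = (500_000, 2_000_000)   # dollars

    def check_aggregate(category, aggregate_amount, acceptable_range=ACCEPTABLE_RANGE):
        low, high = acceptable_range
        acceptable = low <= aggregate_amount <= high
        if not acceptable:
            # An email or other notification could be triggered here instead.
            logging.warning("Aggregate %s reserve %.0f outside acceptable range %s",
                            category, aggregate_amount, acceptable_range)
        return acceptable

    check_aggregate("automobile", 2_400_000)   # emits a warning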

Execution of the module 212 may further cause the processor 210 of the client 202 to communicate with the processor 250 of the server 204 via network interface 214 and network 206. As an example, an application related to module 212, such as loss reserve client 216, may, when executed by processor 210, cause a user interface to be displayed to a user of client device 202 via display interface 224. The application may include graphical user interface (GUI) components for acquiring data (e.g., photographs) from image sensor 220, GPS coordinate data from GPS sensor 218, and textual user input from user input device(s) 222.

The processor 210 may transmit the aforementioned acquired data to server 204, and processor 250 may pass the acquired data to an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model), which may accept the acquired data and perform a computation (e.g., training of the model, or application of the acquired data to a trained artificial neural network model (or other artificial intelligence or machine learning algorithm or model) to obtain a result). With specific reference to FIG. 1, the data acquired by client 202 may be transmitted via network 206 to a server implementing AI platform 104, and may be processed by input analysis unit 120 before being applied to a trained neural network (or other artificial intelligence or machine learning algorithm or model) by loss classifier 140.

As described with respect to FIG. 1, the processing of input from client 202 may include associating customer data 160 and vehicle data 162 with the acquired data. The output of the neural network (or other artificial intelligence or machine learning algorithm or model) may be transmitted, by a loss classifier corresponding to loss classifier 140 in server 204, back to client 202 for display (e.g., in display 224) and/or for further processing.

Network interface 214 may be configured to facilitate communications between client 202 and server 204 via any hardwired or wireless communication network, including network 206 which may be a single communication network, or may include multiple communication networks of one or more types (e.g., one or more wired and/or wireless local area networks (LANs), and/or one or more wired and/or wireless wide area networks (WANs) such as the Internet). Client 202 may cause insurance risk/loss/claim related data and/or metadata to be stored in memory 252 of server 204 and/or a remote insurance related database such as customer data 160.

Server 204 may include a processor 250 and a memory 252 for executing and storing, respectively, a module 254. Module 254, stored in memory 252 as a set of computer-readable instructions, may facilitate applications related to loss reserving and financial reporting including data storage and retrieval (e.g., data and claim metadata, and insurance policy application data). For example, module 254 may include input analysis application 260, loss reserving application 262, and neural network training application 264, in one embodiment. Module 254 may be responsible for interpreting output from trained neural network models (or other types of artificial intelligence or machine learning algorithms or models), and for generating loss reserving information, in some embodiments.

Input analysis application 260 may correspond to input analysis unit 120 of environment 100 of FIG. 1. Loss reserving application 262 may correspond to loss reserving unit 154 of FIG. 1, and neural network training application 264 may correspond to neural network unit 150 of environment 100 of FIG. 1. Module 254 and the applications contained therein may include instructions which, when executed by processor 250, cause server 204 to receive and/or retrieve input data (e.g., raw data and/or an electronic claim) from client device 202. In one embodiment, input analysis application 260 may process the data from client 202, such as by matching patterns, converting raw text to structured text via natural language processing, extracting content from images, converting speech to text, and so on.

In another embodiment, client device 202 may be used by an employee of the insurer to view results produced by loss reserving application 262. For example, loss reserving application 262 may display/interpret results of a trained neural network (or other artificial intelligence or machine learning algorithm or model). In some cases, loss reserving application 262 may interpret results produced by a trained neural network on a continuous or periodic basis (e.g., hourly, weekly, or monthly). As time passes and the neural network receives additional claim data, the predicted loss reserves may be updated. In one embodiment, an increase in loss reserves predicted by a neural network model may cause a withdrawal or transfer of funds into a bank account or trust specifically created for the purpose of holding loss reserving funds.

Throughout the aforementioned processing, processor 250 may read data from, and write data to, a location of memory 252 and/or to one or more databases associated with server 204. For example, instructions included in module 254 may cause processor 250 to read data from historical data 270, which may be communicatively coupled to server device 204, either directly or via communication network 206. Historical data 270 may correspond to historical data 108, and processor 250 may execute instructions specifying analysis of a series of electronic claim documents from historical data 270, as described above with respect to claims 110-1 through 110-n of historical data 108 in FIG. 1.

Processor 250 may query customer data 272 and vehicle data 274 for data related to respective electronic claim documents and raw data, as described with respect to FIG. 1. In one embodiment, customer data 272 and vehicle data 274 correspond, respectively, to customer data 160 and vehicle data 162. In another embodiment, customer data 272 and/or vehicle data 274 may not be integral to server 204. Module 254 may also facilitate communication between client 202 and server 204 via network interface 256 and network 206, in addition to other instructions and functions.

Although only a single server 204 is depicted in FIG. 2, it should be appreciated that it may be advantageous in some embodiments to provision multiple servers for the deployment and functioning of AI system 102. For example, the pattern matching unit 128 and natural language processing unit 130 of input analysis unit 120 may require CPU-intensive processing. Therefore, deploying additional hardware may provide additional execution speed. Each of historical data 270, customer data 272, vehicle data 274, and risk indication data 276 may be geographically distributed.

While the databases depicted in FIG. 2 are shown as being communicatively coupled to server 204, it should be understood that historical claim data 270, for example, may be located within separate remote servers or any other suitable computing devices communicatively coupled to server 204. Distributed database techniques (e.g., sharding and/or partitioning) may be used to distribute data. In one embodiment, a free or open source software framework such as Apache Hadoop® may be used to distribute data and run applications (e.g., loss reserving application 262). It should also be appreciated that different security needs, including those mandated by laws and government regulations, may in some cases affect the embodiment chosen, and configuration of services and components.

In a manner similar to that discussed above in connection with FIG. 1, historical claims from historical claim data 270 may be ingested by server 204 and used by neural network training application 264 to train an artificial neural network (or other artificial intelligence or machine learning algorithm or model). In one embodiment, a claim may be classified according to a multilabel, multiclass scheme. For example, an algorithm may be trained using a portion of historical claims as input that are pre-labeled with a set of labels. The set of labels may comprise any information found in a claim before processing (e.g., whether settled, and a payout amount, if any) and after processing by input analysis unit 120. A set of several thousand, or even millions, of claims may be associated with such informational labels, and a percentage (e.g., 80%) may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model). For example, a recurrent neural network may be created that uses a number of hidden layers and has as its last layer a densely connected network in which all neurons are interconnected. Additional layers or “chains” may be formed, in which models of differing network architectures are coupled to the recurrent neural network. The output of the chained network may be a set of labels to which the claim is predicted to belong (e.g., MOTORCYCLE, PASSENGER-CAR, etc.).
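By way of illustration only, and assuming the TensorFlow/Keras library, the following sketch outlines a recurrent network whose final layer is densely connected and that is trained on 80% of a labeled set with the balance held back; the vocabulary size, sequence length, label count, and randomly generated stand-in data are assumptions introduced solely for this example.

import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN, NUM_LABELS = 20_000, 400, 8  # e.g., MOTORCYCLE, PASSENGER-CAR, ...

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),                    # claim text as integer token ids
    tf.keras.layers.LSTM(128, return_sequences=True),        # recurrent hidden layers
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),             # densely connected layer
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),  # multilabel output
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stand-in for pre-labeled historical claims; 80% used for training, the rest held back.
x = np.random.randint(0, VOCAB, size=(1_000, SEQ_LEN))
y = np.random.randint(0, 2, size=(1_000, NUM_LABELS)).astype("float32")
split = int(0.8 * len(x))
model.fit(x[:split], y[:split], validation_data=(x[split:], y[split:]), epochs=1)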

Then, when module 254 processes input from client 202, the data output by the neural network(s) (e.g., data indicating labels, loss reserving amounts, weights, etc.) may be passed to loss reserving application 262 for analysis/display. As discussed, loss reserving application 262 may take additional actions based upon the output of the trained model(s).

It should be appreciated that the client/server configuration depicted and described with respect to FIG. 2 is but one possible embodiment. In some cases, a client device such as client 202 may not be used. In that case, input data may be entered programmatically, or manually directly into device 204. A computer program or human may perform such data entry. In that case, device 204 may contain additional or fewer components, including input device(s) and/or display device(s).

In one embodiment, a client device 202 may be a device integral to a vehicle of a user (not depicted), or may be communicatively coupled to a network communication device of a vehicle. The vehicle may be an autonomous or semi-autonomous vehicle, and loss reserve client 216 may include instructions which, when executed, may collect information pertaining to the autonomous capabilities of the vehicle. For example, the loss reserve client 216 may periodically receive/retrieve the status of individual autonomous vehicle components (e.g., the engagement/disengagement of a collision avoidance mechanism) and/or whether a particular dynamic driving system is active or disabled (e.g., by intentional interference or accidental damage).

The status of autonomous (and in some embodiments, semi-autonomous) systems may be determined by polling input devices, such as input device 222, or by other methods (e.g., by receiving streamed status information, or by retrieving cached values). Such status information may be used as training data for an artificial neural network (or other artificial intelligence or machine learning algorithm or model) (e.g., by neural network training application 264), and/or may be used as input to a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) to determine the risk represented by a vehicle and/or a driver. Vehicle risk and driver risk may be independently calculated. For example, an SAE Level 3 autonomous vehicle may be associated with a baseline risk level, and a user's risk may be factored into the baseline level risk. Multiple variables (e.g., vehicle category, driver age, autonomous features, etc.) may be used to make a single prediction of loss reserves.

As noted, the risk factors or labels determined by trained neural networks (or other artificial intelligence or machine learning algorithms or models) analyzing historical claim data may appear counter-intuitive or unrelated to the optimal loss reserve level. For example, in a vehicle wherein a dynamic driving system includes functionality to take control away from the automated system, a neural network (or other artificial intelligence or machine learning algorithm or model) may predict high risk where the instances of revocation of control from the automated system are low. This may indicate an over-reliance on the automated system by a vehicle operator. It may also be the case that revocation of control may indicate high risk with respect to some vehicle operators, and lower risk with respect to others.

Artificial neural network models (or other artificial intelligence or machine learning algorithms or models) may be trained to output compound labels (e.g., AUTONOMOUS-RURAL). Once autonomous vehicle information is determined, it may be transmitted to a remote computing system, such as AI platform 104, or server device 204 for further analysis. Input analysis application 260 may format and/or store autonomous vehicle information in a database, such as vehicle data 274, and/or a trained neural network (or other artificial intelligence or machine learning algorithm or model) may immediately (or at a later time) process the autonomous vehicle information to determine loss reserving amounts, whether individual or aggregated.

The loss reserving and financial reporting information may be associated with one or both of a vehicle and a vehicle operator by storage in, respectively, vehicle data 274 and customer data 272. In some embodiments, the set of loss reserving information may be stored in an electronic database such as loss data 142. In some embodiments, the set of loss reserving and financial reporting information may be provided to an additional application, such as loss reserve aggregation platform 106, or an application executing in module 212. As noted, once a set of loss reserving information is identified, the set may be used to compute an aggregate, which may be used for many purposes, such as underwriting insurance policies, adjusting capitalization, forecasting profit/loss, etc.

In one embodiment, an automated control system may perform dynamic vehicle control, which may include instructions to operate the vehicle, including without limitation, real-time functions, trip generation, steering control, acceleration and deceleration, environmental monitoring, and instructions for operating various vehicle components (e.g., headlights, turn signals, traction control, etc.). The automated control system may perform dynamic vehicle control for a period of time (e.g., hours or days) with respect to the vehicle, during which time an application executing in module 212 may collect telematics data. Telematics data may include such data as GPS information, vehicle location, braking, speed, acceleration, cornering, movement, status, orientation, position, behavior, mobile device, and/or other types of data; and may be determined using a combination of sensors and computing/storage devices. For example, loss reserve client 216 may determine the vehicle's position by reading data from GPS sensor 218. Other sensors may provide information regarding the vehicle's speed, acceleration, instrumentation, and path.

Telematics data may be periodically sampled, or retrieved on a continuous basis. Telematics data may be transmitted in real-time from a wireless networking transceiver (e.g., network interface 214) via a network (e.g., network 206) to a communicatively coupled server (e.g., server device 204). In some embodiments, telematics data may be cached in memory 208 and transmitted to server 204 at a later time, or processed in situ. Telematics data may be provided as input to a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model). Each individual data type within telematics data may be referred to as a “telematics attribute.” For example, “speed” may be a telematics attribute.
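By way of illustration only, the following sketch shows how telematics attributes might be periodically sampled, cached in memory, and later transmitted; the attribute names, the sensor-reading callable, and the placeholder endpoint are assumptions introduced solely for this example.

import json, time, urllib.request

def sample_telematics(read_sensor):
    # Collect one record of telematics attributes from a sensor-reading callable.
    return {
        "timestamp": time.time(),
        "speed": read_sensor("speed"),
        "acceleration": read_sensor("acceleration"),
        "braking": read_sensor("braking"),
        "cornering": read_sensor("cornering"),
        "gps": read_sensor("gps"),
    }

def flush(cache, url="https://example.invalid/telematics"):  # placeholder endpoint
    # Transmit cached records to the server when connectivity is available.
    body = json.dumps(cache).encode("utf-8")
    request = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(request)  # actual network call omitted in this sketch
    cache.clear()

cache = []
cache.append(sample_telematics(lambda name: 0.0))  # dummy sensor reader for the example
flush(cache)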

In one embodiment, the real-time use of autonomous vehicle features and telematics data may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model) to predict loss reserving amounts. For example, the percentage of vehicles equipped with autonomous vehicle features in which the drivers do not take sharp corners may be directly correlated with lower loss reserving requirements. As noted above, such models may be continuously trained using data input from client device 202. Client device 202 may be located inside/integral to a vehicle, according to some embodiments.

A set of periodic telematics data (e.g., a month's worth of telematics data) may be stored in association with a user's account in an electronic database coupled to client device 202 and/or server device 204. The electronic database may include physical and/or software anti-tampering measures intended to prevent unauthorized alteration or modification of the telematics data. For example, telematics data stored in a device onboard the vehicle may be encrypted by a server computing device using a secret key that is not stored in the vehicle. As noted above, customer data 272 may be associated with data corresponding to one or more vehicles in vehicle data 274.
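By way of illustration only, and assuming the third-party Python "cryptography" package, the following sketch encrypts telematics records with a secret key held only server-side; the key handling and record fields are assumptions introduced solely for this example.

import json
from cryptography.fernet import Fernet

SERVER_SIDE_KEY = Fernet.generate_key()  # kept in the server's key store, never stored in the vehicle

def encrypt_telematics(records):
    # Encrypt a batch of telematics records before persisting it server-side.
    plaintext = json.dumps(records).encode("utf-8")
    return Fernet(SERVER_SIDE_KEY).encrypt(plaintext)

def decrypt_telematics(token):
    return json.loads(Fernet(SERVER_SIDE_KEY).decrypt(token))

token = encrypt_telematics([{"speed": 61.2, "braking": 0.1}])
assert decrypt_telematics(token)[0]["speed"] == 61.2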

In some embodiments, a neural network may be trained to automatically provide financial reporting. The provision of automatic financial reporting may be based on output of a first neural network establishing loss reserve amounts. Financial reporting may be triggered in response to a set of inputs or a learned value. For example, in an embodiment, financial reporting may be performed if a trained algorithm encounters a series of inputs that increase loss reserving beyond a preconfigured amount or cause another threshold value to be exceeded. In other embodiments, financial reports may include aggregations of loss reserve amounts. For example, the output of a loss reserving neural network may be collected over a period of time (e.g., hourly, quarterly, etc.). A financial report may be generated which includes a summary and/or aggregation of the loss reserve outputs, in textual and/or graphical format.
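By way of illustration only, the following sketch aggregates loss reserve outputs by quarter and flags when a preconfigured amount is exceeded; the threshold, grouping period, and report format are assumptions introduced solely for this example.

from collections import defaultdict
from datetime import datetime

def summarize(reserve_outputs, threshold=10_000_000.0):
    # reserve_outputs: iterable of (timestamp, predicted_reserve_dollars) pairs.
    by_quarter = defaultdict(float)
    for ts, amount in reserve_outputs:
        quarter = f"{ts.year}-Q{(ts.month - 1) // 3 + 1}"
        by_quarter[quarter] += amount
    lines = [f"{q}: ${total:,.0f}" for q, total in sorted(by_quarter.items())]
    report = "Loss reserve summary\n" + "\n".join(lines)
    triggered = any(total > threshold for total in by_quarter.values())
    return report, triggered

report, triggered = summarize([(datetime(2018, 2, 1), 4_200_000.0),
                               (datetime(2018, 3, 5), 7_100_000.0)])
if triggered:
    print(report)  # placeholder for generating and distributing the financial report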

An artificial neural network (or other artificial intelligence or machine learning algorithm or model) that includes a plurality of input layers for customer data, a plurality of input layers for vehicle data, and a plurality of layers for telematics data may be trained in neural network training application 264, wherein the vehicle data, customer data, and telematics data relate to the operation of a vehicle by a vehicle operator. For example, the vehicle data may include the make and model of the vehicle, as well as a manifest of the autonomous or dynamic driving capabilities supported as standard by the vehicle, including a status indication of each respective capability. The customer data may include demographic or other customer data as described herein, and the telematics data may include the information as described above.
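By way of illustration only, and again assuming the TensorFlow/Keras library, the following sketch shows separate inputs for customer, vehicle, and telematics data feeding a single loss reserve output; the feature counts and random stand-in training data are assumptions introduced solely for this example.

import numpy as np
import tensorflow as tf

customer_in = tf.keras.Input(shape=(12,), name="customer")      # demographics, etc.
vehicle_in = tf.keras.Input(shape=(20,), name="vehicle")        # make/model, autonomy flags
telematics_in = tf.keras.Input(shape=(30,), name="telematics")  # speed, braking, cornering

merged = tf.keras.layers.Concatenate()([
    tf.keras.layers.Dense(16, activation="relu")(customer_in),
    tf.keras.layers.Dense(16, activation="relu")(vehicle_in),
    tf.keras.layers.Dense(16, activation="relu")(telematics_in),
])
hidden = tf.keras.layers.Dense(32, activation="relu")(merged)
reserve = tf.keras.layers.Dense(1, name="loss_reserve")(hidden)  # predicted reserve amount

model = tf.keras.Model([customer_in, vehicle_in, telematics_in], reserve)
model.compile(optimizer="adam", loss="mse")
model.fit([np.random.rand(64, 12), np.random.rand(64, 20), np.random.rand(64, 30)],
          np.random.rand(64, 1), epochs=1, verbose=0)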

In one embodiment, telematics data may include indications of user driving such as braking, cornering, speed, and acceleration. The neural network (or other artificial intelligence or machine learning algorithm or model) may associate such behaviors with higher loss reserves. The neural network (or other artificial intelligence or machine learning algorithm or model) may learn to weight such activities higher due to association with factors in other data sets (e.g., higher claim payouts, and vehicles having higher top speeds and/or lacking automated driving capabilities).

The artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be trained to output loss reserving information with respect to customers by analyzing historical claims data in addition to the telematics data, vehicle data, and/or claims data. For example, the artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be trained using claims data filed by customers in a geographic area who are between the ages of 16 and 25, wherein the vehicle subject to the claim is a pickup truck, and wherein the pickup truck includes partial driving automation (e.g., minimally, lane departure warnings). Such a subset of claims may be identified by querying the electronic databases described above, or by any other suitable method.
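By way of illustration only, and assuming the pandas library and invented column names, the following sketch selects the kind of claim subset described above for use as training data.

import pandas as pd

claims = pd.DataFrame({
    "driver_age": [19, 42, 23],
    "body_type": ["pickup", "sedan", "pickup"],
    "lane_departure_warning": [True, False, True],
    "region": ["midwest", "midwest", "south"],
    "paid_amount": [8200.0, 1500.0, 12750.0],
})

subset = claims[
    claims["driver_age"].between(16, 25)
    & (claims["body_type"] == "pickup")
    & claims["lane_departure_warning"]
    & (claims["region"] == "midwest")
]
# `subset` would then be vectorized and passed to the training routine.
print(len(subset), "claims selected for training")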

It should be appreciated that the ability to create models that are able to calculate loss reserves for an arbitrary set of customers may be a very valuable tool, and may have applications beyond merely setting loss reserves. It should be appreciated that the foregoing example is simplified for expository purposes, and that more complex training scenarios are envisioned. Although some scenarios may include a trained neural network executing in module 254, it may be possible to package the trained neural network for distribution to a client 202 (i.e., the trained neural network (or other artificial intelligence or machine learning algorithm or model) may be operated on the client 202 without the use of a server 204).

In operation, the user of client device 202, by operating input device 222 and viewing display 224, may open loss reserve client 216, which depending on the embodiment, may allow the user to enter information. The user may be an employee of a company controlling AI platform 104 or a customer or end user of the company. For example, loss reserve client 216 may walk the user through the steps of training a loss reserving neural network (or other artificial intelligence or machine learning algorithm or model) using a specific subset of training data, and also operating the trained model, as described with respect to FIG. 11.

Before the user can fully access loss reserve client 216, the user may be required to authenticate (e.g., enter a valid username and password). The user may then utilize loss reserve client 216. Module 212 may contain instructions that identify the user and cause loss reserve client 216 to present a particular set of questions or prompts for input to the user, based upon any information loss reserve client 216 collects, including without limitation information about the user or any vehicle. Further, module 212 may identify a subset of historical data 270 to be used in training a neural network (or other artificial intelligence or machine learning algorithm or model), and/or may indicate to server device 204 that the use of a particular neural network (or other artificial intelligence or machine learning) model or models is appropriate.

In some embodiments, location data from client device 202 may be used by a neural network (or other artificial intelligence or machine learning algorithm or model) to label risk, and labels may be linked, in that a first label implies a second label. As noted above, location may be provided to one or more neural networks (or other artificial intelligence or machine learning algorithms or models) in the AI platform to generate labels and determine risk. For example, the zip code of a vehicle operator, whether provided via GPS or entered manually by a user, may cause the neural network (or other artificial intelligence or machine learning algorithm or model) to generate a label applicable to the vehicle operator such as RURAL, SUBURBAN, or URBAN. Such qualifications may be used in the calculation of optimal loss reserve estimations, and may be weighted accordingly. For example, the neural network (or other artificial intelligence or machine learning algorithm or model) may assign a higher severity score to the RURAL label, due to the fact that the vehicle operator recently underwent surgery and should not be driving longer distances. The generation of a RURAL label may be accompanied by additional labels such as COLLISION. Alternatively, or in addition, the collision label weight may be increased along with the addition of the RURAL label.
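By way of illustration only, the following sketch maps a location input to a linked set of labels; the density table, thresholds, and implied-label rule are assumptions introduced solely for this example.

ZIP_DENSITY = {"50601": 35.0, "60614": 11_500.0, "66212": 2_900.0}  # persons per square mile (invented)

def location_labels(zip_code):
    density = ZIP_DENSITY.get(zip_code, 0.0)
    if density < 500:
        return ["RURAL", "COLLISION"]  # a first label may be accompanied by an additional label
    if density < 5_000:
        return ["SUBURBAN"]
    return ["URBAN"]

print(location_labels("50601"))  # -> ['RURAL', 'COLLISION']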

Another label, such as LONG-TRIP, to reflect that the vehicle operator drives longer trips than other drivers, on average, may be associated with vehicle operators who the neural network (or other artificial intelligence or machine learning algorithm or model) labels as RURAL. In some embodiments, label generation may be based upon seasonal information, in whole or in part. For example, the neural network (or other artificial intelligence or machine learning algorithm or model) may generate labels, and/or adjust label weights based upon location provided in input data. It should be appreciated that the quick and automatic generation of labels is a benefit of the methods and systems disclosed herein, and that some of the associations may appear counter-intuitive when analyzing large data sets.

All of the information collected by loss reserve client 216 may be associated with a session identification number so that it may be referenced as a whole. Server 204 may process the information as it arrives, and thus may process information collected by loss reserve client 216 incrementally. Once information sufficient to process the user's request has been collected, server 204 may pass all of the processed information (e.g., from input analysis application 260) to loss reserving application 262, which may apply the information to the trained neural network model (or other artificial intelligence or machine learning algorithm or model). While the loss reserve calculation is ongoing, client device 202 may display an indication to that effect.

When the loss reserving estimate is available, an indication of completeness may be transmitted to client 202 and displayed to the user, for example via display 224. Missing information may cause the model to abort with an error. In one embodiment, the settlement of a claim may trigger an immediate update of one or more neural network models (or other artificial intelligence or machine learning algorithms or models) included in the AI platform. For example, the settlement of a claim involving personal injury that occurs on a boat may trigger updates to a set of personal injury neural network models (or other artificial intelligence or machine learning algorithms or models) pertaining to boat insurance, or to a monolithic model.

In addition, or alternatively, as new claims are filed and processed, new labels may be dynamically generated, based upon claim mitigation or loss information identified and generated during the training process. In some embodiments, a human reviewer or team of reviewers may be responsible for approving the generated labels and any associated weightings before they are used. For example, claims may be labeled with settlement amounts, as well as the amount of time that the claim remained unsettled, wherein such time is normalized across all claims (e.g., represented as seconds). Both the dollar amount and timing information may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model), such that the loss reserving prediction may include both a dollar amount as well as an amount of time that the claim may remain unsettled.
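By way of illustration only, the following sketch pairs a settlement amount with a time-to-settlement normalized to seconds, forming a two-part training target; the field names are assumptions introduced solely for this example.

from datetime import datetime

def claim_targets(filed, settled, payout):
    # Normalize the unsettled duration to seconds so it is comparable across all claims.
    seconds_unsettled = (settled - filed).total_seconds()
    return {"loss_amount": payout, "seconds_unsettled": seconds_unsettled}

target = claim_targets(datetime(2017, 6, 1), datetime(2017, 9, 15), payout=24_500.0)
# A model trained on such pairs may predict both a reserve amount and how long
# the claim may remain unsettled.
print(target)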

In some embodiments, AI platform 104 may be trained and/or updated to provide one or more dynamic insurance rating models which may be provided to, for example, a governmental agency. As discussed above, models are historically difficult to update and updates may be performed on a yearly basis. Using the techniques described herein, models may be dynamically updated in real-time, or on a shorter schedule (e.g., weekly) based upon new claim data.

While FIG. 2 depicts a particular embodiment, the various components of environment 100 may interoperate in a manner that is different from that described above, and/or the environment 100 may include additional components not shown in FIG. 2. For example, an additional server/platform may act as an interface between client device 202 and server device 204, and may perform various operations associated with providing the loss reserving and financial reporting operations of server 204 to client device 202 and/or other servers.

Exemplary Artificial Neural Network

FIG. 3 depicts an exemplary artificial neural network 300 which may be trained by neural network unit 150 of FIG. 1 or neural network training application 264 of FIG. 2, according to one embodiment and scenario. The example neural network 300 may include layers of neurons, including input layer 302, one or more hidden layers 304-1 through 304-n, and output layer 306. Each layer comprising neural network 300 may include any number of neurons—i.e., q and r may be any positive integers. It should be understood that neural networks of a structure and configuration different from those depicted in FIG. 3 may be used to achieve the methods and systems described herein.

Input layer 302 may receive different input data. For example, input layer 302 may include a first input a1 which represents an insurance type (e.g., collision), a second input a2 representing patterns identified in input data, a third input a3 representing a vehicle make, a fourth input a4 representing a vehicle model, a fifth input a5 representing whether a claim was paid or not paid, a sixth input a6 representing an inflation-adjusted dollar amount disbursed under a claim, and so on. Input layer 302 may comprise thousands or more inputs. In some embodiments, the number of elements used by neural network 300 may change during the training process, and some neurons may be bypassed or ignored if, for example, during execution of the neural network, they are determined to be of less relevance.
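By way of illustration only, the following sketch encodes the example inputs a1 through a6 into a numeric vector suitable for input layer 302; the category vocabularies and encoding scheme are assumptions introduced solely for this example.

INSURANCE_TYPES = ["collision", "comprehensive", "liability"]
MAKES = ["make_a", "make_b", "make_c"]

def encode_claim(insurance_type, pattern_score, make, model_code, paid, amount_adjusted):
    return [
        float(INSURANCE_TYPES.index(insurance_type)),  # a1: insurance type
        float(pattern_score),                           # a2: patterns identified in input data
        float(MAKES.index(make)),                       # a3: vehicle make
        float(model_code),                              # a4: vehicle model
        1.0 if paid else 0.0,                           # a5: claim paid or not paid
        float(amount_adjusted),                         # a6: inflation-adjusted amount disbursed
    ]

print(encode_claim("collision", 0.7, "make_b", 12, True, 24_500.0))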

Each neuron in hidden layer(s) 304-1 through 304-n may process one or more inputs from input layer 302, and/or one or more outputs from a previous one of the hidden layers, to generate a decision or other output. Output layer 306 may include one or more outputs each indicating a label, confidence factor, and/or weight describing one or more inputs. A label may indicate the presence (ACCIDENT, DEER) or absence (DROUGHT) of a condition. In some embodiments, however, outputs of neural network 300 may be obtained from a hidden layer 304-1 through 304-n in addition to, or in place of, output(s) from output layer(s) 306.

In some embodiments, each layer may have a discrete, recognizable function with respect to input data. For example, if n=3, a first layer may analyze one dimension of inputs, a second layer a second dimension, and the final layer a third dimension of the inputs, where each dimension represents a distinct and unrelated aspect of the input data. For example, the dimensions may correspond to aspects of a vehicle operator considered strongly determinative, then those that are considered of intermediate importance, and finally those that are of less relevance.

In other embodiments, the layers may not be clearly delineated in terms of the functionality they respectively perform. For example, two or more of hidden layers 304-1 through 304-n may share decisions relating to labeling, with no single layer making an independent decision as to labeling.

In some embodiments, neural network 300 may be constituted by a recurrent neural network, wherein the calculation performed at each neuron is dependent upon a previous calculation. It should be appreciated that recurrent neural networks may be more useful in performing certain tasks, such as automatic labeling of images. Therefore, in one embodiment, a recurrent neural network may be trained with respect to a specific piece of functionality with respect to environment 100 of FIG. 1. For example, in one embodiment, a recurrent neural network may be trained and utilized as part of image processing unit 124 to automatically label images.

FIG. 4 depicts an example neuron 400 that may correspond to the neuron labeled as “1,1” in hidden layer 304-1 of FIG. 3, according to one embodiment. Each of the inputs to neuron 400 (e.g., the inputs comprising input layer 302) may be weighted, such that inputs a1 through ap correspond to weights w1 through wp, as determined during the training process of neural network 300.

In some embodiments, some inputs may lack an explicit weight, or may be associated with a weight below a relevant threshold. The weights may be applied to a summation function, which may produce a value z1 that may be input to a function 420, labeled as f1,1(z1). The function 420 may be any suitable linear or non-linear function, such as a sigmoid function. As depicted in FIG. 4, the function 420 may produce multiple outputs, which may be provided to neuron(s) of a subsequent layer, or used directly as an output of neural network 300. For example, the outputs may correspond to index values in a dictionary of labels, or may be calculated values used as inputs to subsequent functions.
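By way of illustration only, the following sketch expresses the weighted summation and activation performed by a single neuron such as neuron 400; the weights, inputs, and sigmoid activation are assumptions introduced solely for this example.

import math

def neuron(inputs, weights, activation=lambda z: 1.0 / (1.0 + math.exp(-z))):
    z = sum(a * w for a, w in zip(inputs, weights))  # weighted summation producing z1
    return activation(z)                             # f(z1), here a sigmoid function

print(neuron([0.2, 0.9, 0.5], [0.4, -1.1, 0.3]))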

It should be appreciated that the structure and function of the neural network 300 and neuron 400 depicted are for illustration purposes only, and that other suitable configurations may exist. For example, the output of any given neuron may depend not only on values determined by past neurons, but also on values determined by future neurons.

In some embodiments, a percentage of the data set used to train the neural network (or other artificial intelligence or machine learning algorithm or model) may be held back as testing data until after the neural network (or other artificial intelligence or machine learning algorithm or model) is trained using the balance of the data set. In embodiments wherein the neural network involves a time series or other temporally-ordered data, all elements composing the testing data set may be posterior in time to those composing the training data set.
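By way of illustration only, the following sketch holds back a percentage of temporally ordered claims such that every test element is posterior in time to every training element; the tuple layout and holdout fraction are assumptions introduced solely for this example.

def temporal_split(claims, holdout_fraction=0.2):
    # claims: list of (timestamp, features, label) tuples.
    ordered = sorted(claims, key=lambda c: c[0])
    cut = int(len(ordered) * (1.0 - holdout_fraction))
    return ordered[:cut], ordered[cut:]  # train on the earlier claims, test on the later ones

train, test = temporal_split([(3, "x", 1), (1, "y", 0), (2, "z", 1), (4, "w", 0), (5, "v", 1)])
print(len(train), len(test))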

Exemplary Processing of a Claim

The specific manner in which the one or more neural networks employ machine learning to label and/or quantify risk may differ depending on the content and arrangement of training documents within the historical data (e.g., historical data 108 of FIG. 1 and historical data 270 of FIG. 2) and the input data provided by customers or users of the AI platform (e.g., input data 102 of FIG. 1 and the data collected by loss reserve client 216 of FIG. 2), as well as the data that is joined to the historical data and input data, such as customer data 160 of FIG. 1 and customer data 272 of FIG. 2, and vehicle data 162 of FIG. 1 and vehicle data 274 of FIG. 2.

The nature and characteristics of the data to be predicted may also necessitate changes to the structure of the neural network (e.g., number of layers, number of input parameters, number of output parameters, number of neurons per layer), as well as the determination of whether or not to chain or stack multiple neural networks together to form predictions based upon multiple input types (e.g., text, images, etc.). The initial structure of the neural networks (e.g., the number of neural networks, their respective types, number of layers, and neurons per layer, etc.) may also affect the manner in which the trained neural network processes the input and claims. Also, as noted above, the output produced by neural networks may be counter-intuitive and very complex. For illustrative purposes, intuitive and simplified examples will now be discussed in connection with FIG. 5.

FIG. 5 depicts text-based content of an example electronic claim record 500 which may be processed using an artificial neural network, such as neural network 300 of FIG. 3 or a different neural network generated by neural network unit 150 of FIG. 1 or neural network training application 264 of FIG. 2. The term “text-based content” as used herein includes printing characters (e.g., characters A-Z and numerals 0-9), in addition to non-printing characters (e.g., whitespace, line breaks, formatting, and control characters). Text-based content may be in any suitable character encoding, such as ASCII or UTF-8, and may include HTML.

Although text-based content is depicted in the embodiment of FIG. 5, as discussed above, claim input data may include images, including hand-written notes, and the AI platform may include a neural network (or other artificial intelligence or machine learning algorithm or model) trained to recognize hand-writing and to convert hand-writing to text. Further, “text-based content” may be formatted in any acceptable data format, including structured query language (SQL) tables, flat files, hierarchical data formats (e.g., XML, JSON, etc.) or as other suitable electronic objects. In some embodiments, image and audio data may be fed directly into the neural network(s) without being converted to text first.

With respect to FIG. 5, electronic claim record 500 includes three sections 510a-510c, which respectively represent policy information, loss information, and external information. Policy information 510a may include information about the insurance policy under which the claim has been made, including the person to whom the policy is issued, the name of the insured and any additional insureds, the location of the insured, etc. Policy information 510a may be read, for example, by input analysis unit 120 analyzing historical data such as historical data 108 and individual claims, such as claims 110-1 through 110-n. Similarly, vehicle information may be included in policy information 510a, such as a vehicle identification number (VIN).

Additional information about the insured and the vehicle (e.g., make, model, and year of manufacture) may be obtained from data sources and joined to input data. For example, additional customer data may be obtained from customer data 160 or customer data 272, and additional vehicle data may be obtained from vehicle data 162 and vehicle data 274. In some embodiments, make and model information may be included in electronic claim record 500, and the additional lookup may be of vehicle attributes (e.g., the number of passengers the vehicle seats, the available options, etc.).

In addition to policy information 510a, electronic claim record 500 may include loss information 510b. Loss information generally corresponds to information regarding a loss event in which a vehicle covered by the policy listed in policy information 510a sustained loss, and may be due to an accident or other peril. Loss information 510b may indicate the date and time of the loss, the type of loss (e.g., whether collision, comprehensive, etc.), whether personal injury occurred, whether the insured made a statement in connection with the loss, the number of vehicle operators and/or passengers involved, whether traffic citations were issued, whether the loss was settled, and if so for how much money.

In some embodiments, more than one loss may be represented in loss information 510b. For example, a single accident may give rise to multiple losses under a given policy, for example losses to two vehicles involved in a crash that were operated by vehicle operators not covered under the policy. In addition to loss information, electronic claim record 500 may include external information 510c, including but not limited to correspondence with the vehicle operator, statements made by the vehicle operator, etc. External information 510c may be textual, audio, or video information. The information may include file name references, or may be file handles or addresses that represent links to other files or data sources, such as linked data 520a-g. It should be appreciated that although only links 520a-g are shown, more or fewer links may be included, in some embodiments.

Electronic claim record 500 may include links to other records, including other electronic claim records. For example, electronic claim record 500 may link to notice of loss 520a, one or more photographs 520b, one or more audio recordings 520c, one or more investigator's reports 520d, one or more forensic reports 520e, one or more diagrams 520f, and one or more payments 520g. Data in links 520a-520g may be ingested by an AI platform such as AI platform 104. For example, as described above, each claim may be ingested and analyzed by input analysis unit 120.

AI platform 104 may include instructions which cause input analysis unit 120 to retrieve, for each link 520a-520g, all available data or a subset thereof. Each link may be processed according to the type of data contained therein; for example, with respect to FIG. 1, input analysis unit 120 may process, first, all images from the one or more photographs 520b using image processing unit 124. Input analysis unit 120 may process audio recording 520c using speech-to-text unit 122.

In some embodiments, a relevance order may be established, and processing may be completed according to that order. For example, portions of a claim that are identified as most dispositive of risk may be identified and processed first. If, in that example, those portions are dispositive of pricing, then processing of further claim elements may be abated to save processing resources. In one embodiment, once a given number of labels is generated (e.g., 50), processing may automatically abate.

Once the various input data comprising electronic claim record 500 has been processed, the results of the processing may, in one embodiment, be passed to a text analysis unit, and then to a neural network (or other artificial intelligence or machine learning algorithm or model). If the AI platform is being trained, then the output of input analysis unit 120 may be passed directly to neural network unit 150. The neurons comprising a first input layer of the neural network being trained by neural network unit 150 may be configured so that each neuron receives particular input(s) which may correspond, in one embodiment, to one or more pieces of information from policy information 510a, loss information 510b, and external information 510c. Similarly, one or more input neurons may be configured to receive particular input(s) from links 520a-520g.

In some embodiments, analysis of input entered by a user may be performed on a client device, such as client device 202. In that case, output from input analysis may be transmitted to a server, such as server 204, and may be passed directly as input to neurons of an already-trained neural network, such as a neural network trained by neural network training application 264.

In one embodiment, the value of a new claim may be predicted directly by a neural network model (or other artificial intelligence or machine learning algorithm or model) trained on historical data 108, without the use of any labeling. For example, a neural network (or other artificial intelligence or machine learning algorithm or model) may be trained such that input parameters correspond to, for example, policy information 510a, loss information 510b, external information 510c, and linked information 520a-520g.

The trained model may be configured so that inputting sample parameters, such as those in the example electronic claim record 500, may accurately predict, for example, the estimate of damage ($25,000) and settled amount ($24,500). At the outset of training, random weights may be chosen for all input parameters.

The model may then be provided with a subset of training data from claims 110-1 through 110-n, which are each pre-processed by the techniques described herein with respect to FIGS. 1 and 2 to extract individual input parameters. The electronic claim record 500 may then be tested against the model, and the model trained with new training data claims, until the predicted dollar values and the correct or “truth” dollar values converge.
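By way of illustration only, the following sketch iterates training until predicted and “truth” dollar values converge within a tolerance; the one-weight model, learning rate, and tolerance are placeholder assumptions and are not the disclosed network.

def train_until_converged(claims, lr=0.1, tolerance=250.0, max_epochs=1000):
    # claims: list of (feature, settled_amount) pairs; fits amount ~= w * feature.
    w = 0.0  # initial weight before training
    for epoch in range(max_epochs):
        worst_gap = 0.0
        for feature, truth in claims:
            prediction = w * feature
            w += lr * (truth - prediction) * feature       # simple gradient step
            worst_gap = max(worst_gap, abs(truth - w * feature))
        if worst_gap <= tolerance:                          # predictions within $250 of settled amounts
            return epoch, w
    return max_epochs, w

print(train_until_converged([(1.0, 24_500.0), (0.5, 12_250.0)]))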

In one embodiment, the AI platform may modify the information available within an electronic claim record. For example, the AI platform may predict a series of labels as described above that pertain to a given claim. The labels may be saved in a risk indication data store, such as loss data 142 with respect to FIG. 1. Next, the labels and corresponding weights, in one embodiment, may be received by loss reserve aggregation platform 106, where they may be used in conjunction with base rate information to predict a claim loss value. Claims labeled with historical loss amounts may be used as training data.

In some embodiments, information pertaining to the claim, such as the coverage amount and vehicle type from policy information 510a, may be passed along with the labels and weights to loss reserve aggregation platform 106 and may be used in the computation of a gross or net claim loss value. After the aggregated loss reserve is computed, it may be associated with the claim, for example by writing the amount to the loss information section of the electronic claim record (e.g., to the loss information section 510b of FIG. 5).

As noted above, the methods and systems described herein may be capable of analyzing decades of electronic claim records to build neural network models, and the formatting of electronic claim records may change significantly from decade to decade, even year to year. Therefore, it is important to recognize that the flexibility built into the methods and systems described herein allows electronic claim records in disparate formats to be consumed and analyzed. Additionally, unlike human actuaries, who may naturally weight the most recently-analyzed information most heavily, and may not recall all information analyzed, the computerized methods described may treat all claims equally, regardless of temporal ordering, and may have practically unlimited memory capacity.

Exemplary Computer-Implemented Methods

Turning to FIG. 6, an exemplary computer-implemented method 600 for determining a risk level posed by an operator of a vehicle is depicted. The method 600 may be implemented via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices. The method 600 may include training a neural network (or other artificial intelligence or machine learning algorithm or model) to identify risk factors within electronic vehicle claim records (e.g., by an AI platform such as AI platform 104 training a neural network (or other artificial intelligence or machine learning algorithm or model) by an input analysis unit 120 processing data before passing the results of the analysis to a training unit 152 that uses the results to train a neural network model (or other artificial intelligence or machine learning algorithm or model)) (block 610).

The method 600 may include receiving information corresponding to the vehicle by an AI platform (e.g., the AI platform 104 may accept input data such as input data 102 and may process that input by the use of an input analysis unit such as input analysis unit 120) (block 620). The method 600 may include analyzing the information using the trained neural network (or other artificial intelligence or machine learning algorithm or model) (e.g., a risk indication unit 154 applies the output of the input analysis unit 120 to trained neural network model) to generate one or more risk indicators corresponding to the information (e.g., the neural network produces a plurality of labels and/or corresponding weights) (block 630) which are used to determine a risk level corresponding to the vehicle based upon the one or more risk indicators (e.g., risk indications are stored in risk indication data 142, and/or passed to risk level analysis platform 106 for computation of a risk level, which may be based upon weights also generated by the trained neural network (or other artificial intelligence or machine learning algorithm or model)) (block 640). The method may include additional, less, or alternate actions, including those discussed elsewhere herein.

Turning to FIG. 7, a flow diagram for an exemplary computer-implemented method 700 of determining risk indicators from vehicle operator information is depicted. The method 700 may be implemented by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, pattern matching unit 128, natural language processing unit 130, and neural network unit 150. In particular, the processor 210 may execute an input data collection application 216 and, via an input device 222, cause the processor 210 to acquire application input 710 from a user of a client 202.

The processor 210 may further execute the input data collection application 216 to cause the processor 210 to transmit application input 710 from the user via network interface 214 and a network 206 to a server (e.g., server 204). Processor 250 of server 204 may cause module 254 of server 204 to process application input 710. Input analysis application 260 may analyze application input 710 according to the methods described above. For example, vehicle information may be queried from a vehicle database such as vehicle data 274. A VIN in application input 710 may be provided as a parameter to vehicle data 274.

Vehicle data 274 may return a result indicating that a corresponding vehicle was found in vehicle data 274, and that it is a gray minivan that is one year old. Similarly, the purpose provided in application input 710 may be provided to a natural language processing unit (e.g., NLP unit 130), which may return a structured result indicating that the vehicle is being driven by a person who is an employed student athlete. The result of processing the application input 710 may be provided to a risk level unit (e.g., risk level unit 140), which may apply the input parameters to a trained neural network model (or other artificial intelligence or machine learning algorithm or model).

In one embodiment, the trained neural network model (or other artificial intelligence or machine learning algorithm or model) may produce a set of labels and confidence factors 720. The set of labels and confidence factors 720 may contain labels that are inherent in the application input 710 (e.g., LOW-MILEAGE) or that are queried based upon information provided in the application input 710 (e.g., MINIVAN, based upon VIN). However, the set of labels and confidence factors 720 may include additional labels (e.g., COLLISION and DEER) that are not evident from the application input 710 or any related/queried information. After being generated by the neural network (or other artificial intelligence or machine learning algorithm or model), the set of labels and confidence factors 720 may then be saved to an electronic database such as risk indication data 276, and/or passed to a risk level analysis platform 106, whereupon a total risk may be computed and used in a pricing quote provided to the user of client 202.

It should be appreciated that many more types of information may be extracted from the application input 710 (e.g., from example links 520a-520g as shown in FIG. 5). In one embodiment, the pricing quote may be a weighted average of the products of label weights and confidences. The method 700 may be implemented, for example, in response to a vehicle operator accessing client 202 for the purpose of applying for an insurance policy, or adding (via an application) an additional insured to an existing policy. The method may include additional, less, or alternate actions, including those discussed elsewhere herein.
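By way of illustration only, the following sketch shows one plausible reading of the weighted average of label weights and confidences mentioned above; the labels, weights, confidences, and base rate are assumptions introduced solely for this example.

def price_quote(labels, base_rate=600.0):
    # labels: list of (name, weight, confidence) triples produced by the model.
    products = [weight * confidence for _name, weight, confidence in labels]
    weights = [weight for _name, weight, _confidence in labels]
    factor = sum(products) / sum(weights) if weights else 0.0  # weighted average of products
    return base_rate * (1.0 + factor)

quote = price_quote([("LOW-MILEAGE", 0.4, 0.92), ("MINIVAN", 0.3, 0.99), ("COLLISION", 0.8, 0.35)])
print(round(quote, 2))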

With respect to FIG. 8, a flow diagram for an exemplary computer-implemented method 800 of detecting and/or estimating damage to personal property is depicted, according to an embodiment. The method 800 may be implemented, for instance, by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, pattern matching unit 128, natural language processing unit 130, and neural network unit 150. In particular, the processor 250 may execute an input analysis application 260 to cause processor 250 to receive free-form text or voice/speech associated with a submitted insurance claim for a damaged insured vehicle (block 802). The method may include identifying one or more key words within the free-form text or voice/speech (block 804). The identification of key words within free-form text may be performed by a module of AI platform 104 (e.g., by text analysis unit 126, pattern matching unit 128, and/or natural language processing unit 130). The identification of key words within voice/speech may be performed by, for example, speech-to-text unit 122. The method may further include determining a cause of loss and/or peril that caused damage to the damaged insured vehicle (block 806). A cause of loss and/or peril may be chosen from a set of causes of loss known to the insurer (e.g., a set stored in risk indication data 142) or may be identified or generated by risk indication unit 154.

In some embodiments, the free-form text may be associated with a webpage or user interface of a client device accessed by a customer or employee of the proprietor of AI platform 104 (e.g., an insurance agent) or by a user interface of an intranet page accessed by an employee of a call center. For example, the free-form text may be entered by a person utilizing input device 222 and display 224 of client device 202, and the input may be caused to be collected by processor 210 executing instructions in input data collection application 216. Voice/speech of a user may be collected by processor 210 causing instructions in input data collection application 216 to be executed which read audio signals from an input device such as a microphone. In one embodiment, free-form text or voice/speech may be input to server device 204 via other means (e.g., directly loaded onto server device 204). In some embodiments, a neural network (or other artificial intelligence or machine learning algorithm or model) may be trained (e.g., by neural network training application 264) to identify, or determine, a key word (or words) associated with a cause of loss and/or peril using free-form text or voice/speech and a type corresponding to the insured vehicle as training data. For example, multiple neural networks may be trained that individually correspond to multiple different respective vehicle types and sets of free-form text or voice/speech.

In one embodiment, the machine learning algorithms may be dynamically or continuously trained (i.e., trained online) to dynamically update a set of key words associated with respective cause of loss and/or peril information. The cause of loss and/or peril information may be similarly dynamically updated. Such a dynamic set may be stored and updated in an electronic database, such as risk indication data 276.

In one embodiment, a first cause of loss and/or a first peril may be identified, and an image may be received. For example, a user may capture an image, e.g., a digital image, of a vehicle (e.g., a vehicle that is damaged and/or insured) via image sensor 220, or other type of camera. The image may be collected by module 212 and transmitted via network interface 214 and network 206 to network interface 256, whereupon the image may be analyzed by input analysis application 260. The image may be input to neural network unit 150 and passed to a trained neural network model or algorithm (or other artificial intelligence or machine learning algorithm or model), which may analyze the image to determine a second cause of loss and/or second peril. Then, the first cause of loss and/or peril (e.g., that were identified in a free-form submission, such as a claim) may be compared to the second cause of loss and/or peril corresponding to the image, to verify the accuracy of the submitted claim and/or to identify potential fraud or inflation of otherwise legitimate claims. In some embodiments, the image received via image sensor 220 may be analyzed to estimate damages, in terms of cost and/or severity.

Repair and replacement cost may be determined, in one embodiment, by training a neural network model (or other artificial intelligence or machine learning algorithm or model) to accept an image of a damaged vehicle, and to output an estimate of the severity or cost of damages, repair, and/or replacement cost. Such models may be trained using the methods described herein including, without limitation, using a subset of historical data 108 as training data.

In some embodiments, an insurance policy associated with the damaged insured vehicle may be received or retrieved. The cause of loss and/or peril may be analyzed to determine whether the cause of loss and/or peril are covered under the insurance policy. For example, a user of client device 202 may be required to login to an application in module 212 using a username and password. The user may be prompted to upload an image of a damaged vehicle during the claims submission process by the application in module 212, and the user may do so by capturing an image of a damaged vehicle the user owns via image sensor 220. The image, and an indication of the user's identity, may be transmitted via network 206 to server device 204.

Server device 204 may determine the cause of loss as described above by analyzing the image, and may retrieve an insurance policy corresponding to the user by querying, for example, customer data 272. Server 204 may contain instructions that cause the cause of loss or peril associated with the uploaded image to be analyzed in light of the insurance policy. The insurance policy may be machine readable, such that the cause of loss and peril information is directly comparable to the insurance policy.

In one embodiment, another means of comparison may be employed (e.g., a deep learning or Bayesian approach). Server 204, or more precisely an application executing in server 204, may then determine whether or not, or to what extent, the cause of loss associated with the image captured by the user is covered under the user's insurance policy. In one embodiment, an indication of the coverage may be transmitted to the user (e.g., via network 206). The causes of loss, perils, and key words/concepts that may be identified and/or determined by the above-described methods include, without limitation: collision, comprehensive, bodily injury, property damage, liability, medical, rental, towing, and ambulance.

FIG. 9A is an example flow diagram depicting an exemplary computer-implemented method 900 of determining damage to personal property, according to one embodiment. The method 900 may include inputting historical claim data into a machine learning algorithm, or model, to train the algorithm to identify an insured vehicle, a type of insured vehicle, vehicle features or characteristics, a peril associated with the vehicle, and/or a cost associated with the vehicle (block 902). The method 900 may be implemented by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, and/or otherwise implemented via, for instance, one or more processors, sensors, servers, and/or transceivers. Processor 250 may execute an input analysis application 260 to cause processor 250 to receive an image of the damaged insured vehicle (block 904).

The method may further include inputting an image of the damaged insured vehicle into the trained machine learning algorithm to identify a type of insured vehicle, vehicle features or characteristics, a peril associated with the vehicle, and/or a cost associated with the vehicle. A type of vehicle may include any attribute of the vehicle, including, without limitation, the body type (e.g., coupe, sedan), make, model, model year, options (e.g., sport package), whether the vehicle is autonomous or not, etc. In some embodiments, the features and characteristics may include an indication of whether the vehicle includes autonomous or semi-autonomous technologies or systems. In some embodiments, the peril associated with the damaged insured vehicle may comprise collision, comprehensive, fire, water, smoke, hail, wind, or storm surge.

In one embodiment, an insurance policy associated with the damaged insured vehicle may be retrieved by AI platform 104, for example, from customer data 160, and the type of peril compared to the insurance policy to determine whether or not the peril is a covered peril under the insurance policy. As noted above, the applicable policy may be identified by a user identification passed from a client device, but in some embodiments, the applicable policy may be identified by other means. For example, a VIN or license plate number may be digitized by optical character recognition (e.g., by image processing unit 124) from the image provided to the AI platform 104, and the digitization used to search customer data 160 for a matching insurance policy.
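For illustration, the following sketch assumes the optical character recognition step has already produced text, and shows one way a VIN-like string could be extracted and matched against hypothetical customer records; the record layout is an assumption, not part of the disclosure.

```python
import re

# Hedged sketch: extracting a VIN-like string from OCR output and using it to
# look up a matching policy. The 17-character VIN pattern is standard (VINs
# exclude I, O, and Q); `customer_records` is a hypothetical stand-in for
# customer data 160.

VIN_PATTERN = re.compile(r"\b[A-HJ-NPR-Z0-9]{17}\b")

def find_policy_by_vin(ocr_text: str, customer_records: list) -> dict | None:
    match = VIN_PATTERN.search(ocr_text.upper())
    if not match:
        return None                      # no VIN-like string recognized
    vin = match.group(0)
    for record in customer_records:      # search for a matching insurance policy
        if record.get("vin") == vin:
            return record.get("policy")
    return None
```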

FIG. 9B is an example data flow diagram depicting an exemplary computer-implemented method 910 of determining damage to an insured vehicle using a trained machine learning algorithm to facilitate handling an insurance claim associated with the damaged insured vehicle, according to one embodiment. The method 910 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices.

The method 910 may include receiving a photograph of a damaged insured vehicle 912. The image may be received by, for example, image processing unit 124 of AI platform 104. The image may originate in a sensor of a client device, such as image sensor 220 of client device 202, and may be captured in response to an action taken by a user, such as the user pressing a user interface button (e.g., a button or screen element of input device 222). The photograph may be analyzed by image processing unit 124 (e.g., sharpened, contrasted, or converted to a dot matrix) before being passed to neural network unit 150, where it may be input to a trained machine learning algorithm, or neural network model (block 914). The trained neural network model in block 914 may correspond to the machine learning algorithm trained in block 902 of FIG. 9A. The method may include identifying information 916, which may include a type of the damaged insured vehicle, a respective feature or characteristic of the damaged insured vehicle, a peril associated with the damaged insured vehicle, and/or a repair or replacement cost associated with the damaged insured vehicle. The information 916 may be used to facilitate handling an insurance claim associated with the damaged insured vehicle.
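A minimal sketch of the kind of preprocessing described for image processing unit 124 (sharpening and contrast adjustment) is shown below using the Python Imaging Library; the parameter values are assumptions for illustration only, and the disclosure does not require any particular imaging library.

```python
from PIL import Image, ImageEnhance, ImageFilter

# Illustrative preprocessing sketch: sharpen and increase contrast before the
# photograph is passed to the trained model. The contrast factor is an
# arbitrary illustrative value.

def preprocess_photo(path: str, contrast_factor: float = 1.5) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = img.filter(ImageFilter.SHARPEN)                       # sharpen edges
    img = ImageEnhance.Contrast(img).enhance(contrast_factor)   # boost contrast
    return img
```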

FIG. 10A is an example flow diagram depicting an exemplary computer-implemented method 1000 for determining damage to personal property, according to one embodiment. The method 1000 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices.

The method 1000 may include inputting historical claim information into a machine learning algorithm, or model, to train the algorithm to develop a risk profile for an undamaged insurable vehicle based upon a type, feature, and/or characteristic of the vehicle (block 1002). The type, feature, and/or characteristic of the vehicle may include an indication of the geographic area of the vehicle, the vehicle make or model, information about the vehicle's transmission, information about the type and condition of the vehicle's tires, information about the vehicle's engine, information pertaining to whether the vehicle includes autonomous or semi-autonomous features, information about the vehicle's air conditioning or lack thereof, information specifying whether the vehicle has power brakes and windows, and the color of the vehicle. The method may further include receiving an image of an undamaged insurable vehicle (block 1004). The method may further include inputting the image of the undamaged insurable vehicle into a machine learning algorithm to identify a risk profile for the undamaged insurable vehicle (block 1006).

A risk profile may include a predicted loss amount, a likelihood of loss, or a risk relative to other vehicles. For example, a risk profile for a minivan may be lower than a risk profile for a sports car. Similarly, the risk of being rear-ended in a sports car may be lower than the risk of being rear-ended in a minivan. A risk profile may also include multiple risks with respect to one or more perils (e.g., respective risks for collision, liability, and comprehensive) in addition to an overall, or aggregate, risk profile.
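One possible, illustrative representation of such a risk profile is sketched below; the field names and the simple averaging rule for the aggregate are assumptions, not requirements of the disclosure.

```python
from dataclasses import dataclass, field

# Minimal sketch of a risk profile as described above: per-peril risk scores
# plus an aggregate value. Structure and aggregation rule are illustrative only.

@dataclass
class RiskProfile:
    vehicle_type: str
    per_peril_risk: dict = field(default_factory=dict)  # e.g., {"collision": 0.12, "liability": 0.05}

    @property
    def aggregate_risk(self) -> float:
        """Simple aggregate: mean of the per-peril risks (one possible choice)."""
        if not self.per_peril_risk:
            return 0.0
        return sum(self.per_peril_risk.values()) / len(self.per_peril_risk)

# Example: RiskProfile("minivan", {"collision": 0.08, "comprehensive": 0.04}).aggregate_risk
```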

The risk profile may include an indication of behaviors and/or vehicle features that may be adopted to lower aggregate risk. For example, the risk profile may indicate that upgrading a vehicle to include a rear-facing camera may lower risk by a certain percentage, or that trading a vehicle of a first model year to a vehicle of a second model year may result in an insurance premium discount with respect to the risk level or underwriting price of the first model year.

Such determinations may be based upon a vehicle owner making smaller, more granular changes. For example, a neural network (or other artificial intelligence or machine learning algorithm or model) may determine that such discounts may be available to a hybrid or electric vehicle owner by the vehicle owner charging the vehicle battery to at least a specified level (e.g., >=60%), or by upgrading or downgrading the firmware of an onboard computer from a first version to a second version.

In some embodiments, the methods and systems herein may prompt a vehicle operator to improve their risk profile, and/or reduce an insurance premium linked to such a profile, by adopting certain behaviors. For example, in vehicles wherein driving automation or dynamic driving is user-selectable, or optional, a driver may be encouraged to activate (or deactivate) automated driving capabilities (e.g., steering control). It will be appreciated by those skilled in the art that the foregoing are intended to be simple examples for purposes of illustration, and that more complex embodiments are envisioned.

FIG. 10B is an example data flow diagram depicting an exemplary computer-implemented method 1010 of using a trained machine learning algorithm to facilitate generating an insurance quote for an undamaged insurable vehicle, according to one embodiment. The method 1010 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices.

The method may include receiving an image, or photograph, of an undamaged vehicle 1012. The photograph may originate in a client device, such as client 202, and may be captured and transmitted to a server via the methods described above. The method 1010 may include inputting the image of the undamaged vehicle into a trained machine learning algorithm 1014. The trained neural network (or other artificial intelligence or machine learning algorithm or model) may correspond to the neural network trained in block 1002 of FIG. 10A, and the machine learning algorithm may be trained using historical claim information corresponding to historical data 108 of FIG. 1. The neural network may be configured to accept historical claim data and to predict damage amounts, or other risks.

The method may include inputting the image of the undamaged insurable vehicle into the trained machine learning algorithm to identify a risk profile for the undamaged insurable vehicle, wherein the risk profile may correspond to the risk profile described above with respect to block 1006. It should be appreciated that the use of neural networks may cause unexpected variables that are highly correlated with risk to emerge from large data sets. In some cases, the risk profile associated with a given vehicle may contain information that seems unforeseeable and/or counter-intuitive.

In one embodiment, the risk profile described above may be used to generate an insurance policy and/or determine a rate quotation corresponding to the undamaged insurable vehicle, wherein the policy and/or rate are based upon the risk profile. In one embodiment, the rate may include a usage-based insurance (UBI) rate. In some embodiments, the generated insurance policy and/or rate quotation may be transmitted to the vehicle owner for a review and/or approval process. For example, a user of client device 202 may submit an image of their vehicle via processor 210 and module 212, and the above-described analysis involving the trained neural network model (or other artificial intelligence or machine learning algorithm or model) may then take place on server 204. Then, when a rate quote or policy is generated on the server, the quote or policy may be transmitted by network interface 256 to network 206 and ultimately to network interface 214, back on the client.

The client may include an application in module 212 which causes the policy or rate to be displayed to the user of client 202 (e.g., via display 224), and the user may review the policy/quote, and may be prompted to enter (e.g., via input device 222) their approval of the terms of the policy/quote. The user's approval may be transmitted back to the server 204 via network 206, and a contract for insurance formed. In this way, a user may successfully register for an insurance policy covering an insurable vehicle, by capturing an image of the vehicle, uploading the image of that vehicle, and reviewing a policy corresponding to that vehicle that has been generated by a neural network model (or other artificial intelligence or machine learning algorithm or model) analyzing the image, wherein the neural network model (or other artificial intelligence or machine learning algorithm or model) has been trained on historical claim data and/or images of similar vehicles, according to at least one preferred embodiment.

Turning to FIG. 11, an exemplary user interface environment 1100 for training and operating artificial neural network models (or other types of artificial intelligence or machine learning algorithms or models) is depicted, according to one embodiment and scenario. User interface environment 1100 may include a user interface 1110, which may be implemented in a web browser, mobile application, or other suitable user interface display program. User interface 1110 may correspond to loss reserve client 216, and may be executing in memory 208, and may be displayed in display 224. A user may interact with user interface 1110 via input device 222.

User interface 1110 may include pages/sections 1112-A through 1112-D. Section 1112-A may allow the user to select an existing data set, and may include a button or other suitable graphical user interface (GUI) element which, when pressed, causes a request to be transmitted to a server (e.g., server 204) including an indication of the user's selected data set(s). The server may query an electronic database, retrieve the selected data set(s), and train a model based upon the selection. The user of section 1112-A may then be redirected to a result page, such as section 1112-D, wherein the results of operating the trained model using the selected data set(s) may be displayed. Section 1112-B may allow the user to specify a custom query (e.g., in Structured Query Language or another suitable query language) of an electronic database for records (e.g., claim, user, and/or vehicle records), along with a button or other suitable GUI element.

In this way, a user may train a model using arbitrarily complex subsets and/or aggregations of claim, user, and vehicle data. Once the user activates the “Train” button in section 1112-B, the model may be trained using a data set corresponding to the user's custom query. The model may then be added to the “Trained Models” list of section 1112-C, and the user may be directed to section 1112-D, wherein the user may view the results of the query being fed to the trained loss reserving model. The training that occurs when a user activates the “Train” button in sections 1112-A and 1112-B may be fully automated, including the validation steps.
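An illustrative sketch of the "custom query, then train" flow behind section 1112-B follows; the SQLite database, table layout, and `train_model` callable are hypothetical stand-ins, as the disclosure does not specify a storage engine or training library.

```python
import sqlite3

# Hedged sketch: run the user's custom query, train a model on the selected
# records, and register the result so it appears in the "Trained Models" list
# (section 1112-C). Training and validation are assumed to be fully automated
# inside the hypothetical `train_model` callable.

def train_from_custom_query(db_path: str, sql: str, train_model, trained_models: list) -> dict:
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()   # records selected by the user's query
    model = train_model(rows)                 # automated training + validation
    entry = {"name": f"model_{len(trained_models) + 1}", "model": model, "query": sql}
    trained_models.append(entry)
    return entry                              # results then shown in section 1112-D
```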

Section 1112-C may be a list of all trained models. The user may edit or operate the individual models by interacting with user interface 1110. Section 1112-D may be a results page which lists the output of executing a trained neural network model (or other artificial intelligence or machine learning algorithm or model). For example, as depicted, section 1112-D displays a first output, representative of executing a model trained using a data set containing all historical passenger car claim records, and a second output, representative of executing a model trained using a complex data set including motorcycle claims relating to mopeds, sport bikes, and tricycles. The first output includes an indication of the data set used and a loss reserve amount. The second output includes an indication of a complex data set used including three subsets, each having a respective loss reserve amount which is aggregated into a loss reserve aggregate. It should be understood that additional standard scaffolding may be included in some embodiments, for example, to create, update, delete, and retrieve trained models. In some embodiments, the structure of neural networks (or other artificial intelligence or machine learning algorithms or models), and the parameters used in their creation, may be accessible via loss reserving client 1100.

With regard to FIG. 12, an exemplary method 1200 of determining loss reserves is depicted, according to an embodiment. Method 1200 may include receiving a plurality of labeled historical claim documents (block 1210). The labels may include a claim payout amount, a claim pendency time, a number indicating whether the loss reserve was adequate or inadequate, and any shortfall or surplus that was associated with the loss reserves allocated before the claim was settled. Method 1200 may include normalizing the claim loss/payout/settlement amount (block 1220). Normalization may include converting the claim amount into a standard currency (e.g., USD) and/or adjusting the claim settlement amount for inflation or other circumstances related to monetary policy.
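By way of example only, the following sketch shows one way block 1220 could normalize a claim amount (currency conversion plus inflation adjustment); the exchange rates and index figures shown are placeholders, not data from the disclosure.

```python
# Illustrative normalization sketch for block 1220: convert the claim amount to
# a standard currency (USD) and adjust it for inflation relative to a base year.

def normalize_claim_amount(amount: float, currency: str, year: int,
                           usd_rates: dict, cpi_by_year: dict, base_year: int) -> float:
    usd_amount = amount * usd_rates[currency]                     # e.g., {"EUR": 1.1, "USD": 1.0}
    inflation_factor = cpi_by_year[base_year] / cpi_by_year[year]  # price-index ratio
    return usd_amount * inflation_factor

# Example with placeholder figures:
# normalize_claim_amount(1000, "EUR", 2010, {"EUR": 1.1}, {2010: 218, 2018: 251}, 2018)
```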

Method 1200 may include training an artificial neural network (or other artificial intelligence or machine learning algorithm or model) using the historical claim documents (block 1230). Training the artificial neural network may include creating a neural network having a plurality of input neurons in an input layer, and a plurality of hidden layers, each having a respective number of neurons. The neural network may be dense and interconnected, and may have an output layer having one or more output neurons. A subset of the labeled historical claims may be used to train the neural network (or other artificial intelligence or machine learning algorithm or model) to predict an optimal loss reserve for a type of claim (e.g., a motorcycle claim) or across all claim types. An optimal loss reserve may be neither too large nor too small, with respect to historical claim settlement amounts. A validation set of historical claims may be held back for testing the trained neural network (or other trained artificial intelligence or machine learning algorithm or model) for accuracy.
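A minimal sketch of block 1230 is shown below using scikit-learn; the disclosure does not name a library, and the layer sizes and hold-out fraction are arbitrary illustrative choices. Here X is a numeric matrix of claim attributes and y holds the normalized claim loss amounts (the labels).

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hedged sketch of training a multilayer network on labeled historical claims,
# holding back a validation set to test the trained model for accuracy.

def train_loss_reserve_model(X, y):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    validation_score = model.score(X_val, y_val)  # R^2 on the held-back claims
    return model, validation_score
```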

Method 1200 may include receiving a user claim (block 1240). The user claim may be submitted by the user via an application, such as an application executing in module 212 of client device 202. In some embodiments, a user claim may be retrieved from historical data 108. The user claim may correspond to electronic claim record 500. A plurality of attributes of claims (e.g., payments, type of loss, policy deductible, etc.) may be used to train the neural network (or other artificial intelligence or machine learning algorithm or model), and the same attributes of the user claim may be provided as inputs to the neural network (or other artificial intelligence or machine learning algorithm or model) by applying the user claim to the neural network (or other artificial intelligence or machine learning algorithm or model) to predict a loss reserve amount (block 1250). In some embodiments, additional or fewer steps may be used, and in some embodiments, loss reserving models may be created that apply to a specific type of claim, vehicle, and/or customer.
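For illustration, the sketch below turns a handful of assumed claim attributes into the same numeric feature vector used during training and applies the trained model to predict a loss reserve amount (blocks 1240-1250); the attribute names and encoding are assumptions, and `model` could be, for example, the regressor returned by the training sketch above.

```python
# Hedged sketch: encode a user claim's attributes and predict a loss reserve.
# Attribute names and the categorical encoding are illustrative only.

LOSS_TYPE_CODES = {"collision": 0, "comprehensive": 1, "liability": 2}

def claim_to_features(claim: dict) -> list:
    return [
        float(claim["payments_to_date"]),
        float(LOSS_TYPE_CODES.get(claim["type_of_loss"], -1)),
        float(claim["policy_deductible"]),
    ]

def predict_loss_reserve(model, claim: dict) -> float:
    return float(model.predict([claim_to_features(claim)])[0])
```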

Turning to FIG. 13, an exemplary method 1300 of training and executing artificial neural networks using a customized data set is depicted, according to one embodiment. Method 1300 may include receiving an indication of a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a data set (block 1310). The indication may be a pair of integers or other values respectively uniquely identifying a trained neural network (or other artificial intelligence or machine learning algorithm or model) and a data set. The trained neural network (or other artificial intelligence or machine learning algorithm or model) may have been trained in advance by a user, using, for example, loss reserving application 216.

In one embodiment, the neural network (or other artificial intelligence or machine learning algorithm or model) may have been trained using module 254 (e.g., using command line tools by a user accessing server device 204). The data set may be a pre-existing labeled data set that is listed on a user interface and selectable by the user, or may be built by the user by the user entering an SQL expression into an input box (e.g., via input device 222 and display 224). The user may press a button, in response to which method 1300 may transmit an execution request including the indication to a remote computing device (e.g., server 204).
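An illustrative sketch of the client side of this exchange follows; the URL path and JSON field names are hypothetical, as the disclosure requires only that the indication (e.g., a pair of identifying values) be transmitted to the remote computing device.

```python
import requests

# Hedged sketch: send the pair of identifiers to a remote server and return
# its parsed response. Endpoint and payload shape are assumptions.

def request_execution(server_url: str, model_id: int, dataset_id: int) -> dict:
    payload = {"model_id": model_id, "dataset_id": dataset_id}
    response = requests.post(f"{server_url}/execute", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g., {"model_id": 3, "loss_reserve": 125000.0}
```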

Server 204 may include instructions for receiving the indication, selecting the appropriate neural network (or other artificial intelligence or machine learning algorithm or model) and data set, applying the data set to the neural network (or other artificial intelligence or machine learning algorithm or model), and returning execution output including at least identification of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a loss reserve amount produced via operation of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model).

An application such as loss reserve client 216 may receive the execution result/output (block 1330) and may display the output of the trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) (block 1340). In some embodiments, the data set may be a compound data set, and the output of the trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be a result that includes individual loss reserving amounts with respect to a plurality of data subsets, and/or an aggregate loss reserving amount applicable to all of the plurality of data subsets.
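A simple illustrative sketch of such a compound result follows; summation is one reasonable aggregation rule, though the disclosure does not mandate a particular one.

```python
# Illustrative sketch: per-subset loss reserve amounts plus an aggregate,
# as displayed in block 1340. Subset names are hypothetical.

def aggregate_execution_output(subset_results: list) -> dict:
    total = sum(r["loss_reserve"] for r in subset_results)
    return {"subsets": subset_results, "aggregate_loss_reserve": total}

# Example:
# aggregate_execution_output([
#     {"subset": "mopeds", "loss_reserve": 40_000.0},
#     {"subset": "sport bikes", "loss_reserve": 75_000.0},
# ])
```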

Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible, which may include additional or fewer features. For example, additional knowledge may be obtained using identical methods. The labeling techniques described herein may be used in the identification of fraudulent claim activity. The techniques may be used in conjunction with co-insurance to determine the relative risk of pools of customers. External customer features, such as payment histories, may be taken into account in pricing risk. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions described herein.

Machine Learning & Other Matters

The computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors (such as processors, transceivers, servers, and/or sensors mounted on drones, vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.

Additionally, the computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.

A processor or a processing element may be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, a reinforcement or reinforced learning algorithm or model, or a combined learning module or program that learns in two or more fields or areas of interest. In some embodiments, deep learning strategies may be applied, in addition to random forests for classification. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. For instance, machine learning may involve identifying and recognizing patterns in existing text or voice/speech data in order to facilitate making predictions for subsequent data. Voice recognition and/or word recognition techniques may also be used. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs.

Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as drone, autonomous or semi-autonomous drone, image, mobile device, smart or autonomous vehicle, and/or intelligent vehicle telematics data. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples. The machine learning programs may include deep, combined, or reinforced learning algorithms or models, Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing either individually or in combination. The machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or other types of machine learning, such as deep learning, combined learning, and/or reinforced learning.

Supervised or unsupervised machine learning may also be employed. In supervised machine learning, a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided, the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may be required to find its own structure in unlabeled example inputs.

Additional Considerations

With the foregoing, any users (e.g., insurance customers) whose data is being collected and/or utilized may first opt-in to a rewards, insurance discount, or other type of program. After the user provides their affirmative consent, data may be collected from the user's device (e.g., mobile device, smart or autonomous vehicle controller, smart vehicle controller, or other smart devices). In return, the user may be entitled to insurance cost savings, including insurance discounts for auto, homeowners, mobile, renters, personal articles, and/or other types of insurance.

In other embodiments, deployment and use of neural network models at a user device (e.g., the client 202 of FIG. 2) may have the benefit of removing any concerns of privacy or anonymity, by removing the need to send any personal or private data to a remote server (e.g., the server 204 of FIG. 2).

The following additional considerations apply to the foregoing discussion. Throughout this specification, plural instances may implement operations or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.

The patent claims at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as “means for” or “step for” language being explicitly recited in the claim(s). The systems and methods described herein are directed to an improvement to computer functionality, and improve the functioning of conventional computers.

Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.

As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.

As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).

In addition, the terms “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.

Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.

Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.

In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.

Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.

Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory product to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory product to retrieve and process the stored output. Hardware modules may also initiate communications with input or output products, and can operate on a resource (e.g., a collection of information).

The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.

Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a vehicle environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.

The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a vehicle environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.

Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context.

Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the method and systems described herein through the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.

Claims

1. A computer-implemented method of predicting loss reserves, the method comprising: receiving, in a computing device, a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount;

normalizing, via one or more processors, each respective claim loss amount;
training an artificial neural network, wherein training the artificial neural network includes applying the plurality of historical electronic claim documents to the artificial neural network;
receiving, in the computing device, a user claim, the user claim comprising free-form text and an image; and
predicting a loss reserve amount, by applying the user claim to the trained artificial neural network, wherein the predicting a loss reserve amount comprises: determining a first cause of loss by applying the trained artificial neural network to the free-form text of the user claim; determining a second cause of loss by applying the trained artificial neural network to the image of the user claim; and predicting the loss reserve amount using the trained artificial neural network based at least in part upon the first cause of loss and the second cause of loss.

2. The computer-implemented method of claim 1, wherein the user claim is a first user claim, and the loss reserve amount is a first loss reserve amount, the method further comprising

receiving a second user claim;
predicting a second loss reserve amount, by applying the second user claim to the trained artificial neural network; and
computing an aggregate loss reserve amount by analyzing the first loss reserve amount and the second loss reserve amount.

3. The computer-implemented method of claim 2, wherein computing an aggregate loss reserve amount by analyzing the first loss reserve amount and second loss reserve amount includes one or both of (i) summing the first loss reserve amount and the second loss reserve amount, and (ii) summing the absolute value of the first loss reserve amount and the second loss reserve amount.

4. The computer-implemented method of claim 2, further comprising:

setting aside funds in the amount of the aggregated loss reserve amount.

5. The computer-implemented method of claim 2, wherein training the artificial neural network includes applying a first subset of the plurality of electronic claim documents to the artificial neural network, and iteratively training the artificial neural network until loss of the network is less than a maximum value.

6. The computer-implemented method of claim 2, further comprising:

determining one or both of (i) network loss of the trained artificial neural network, and (ii) network accuracy of the trained artificial neural network with respect to the plurality of historical electronic claim documents.

7. The computer-implemented method of claim 2, further comprising:

generating, via the one or more processors, at least a portion of a financial report, the at least the portion of the financial report including at least one type of the plurality of historical electronic claim documents, and the loss reserve amount.

8. The computer-implemented method of claim 7, wherein the type of the plurality of historical electronic claim documents and the loss reserve amount are displayed in a tabular format.

9. The computer-implemented method of claim 2, wherein the loss reserve amount is a first loss reserve amount, the method further comprising:

receiving, in the computing device, a settled claim;
updating the trained artificial neural network, wherein updating the trained artificial neural network includes applying the settled claim to the artificial neural network; and
predicting a second loss reserve amount, by applying the user claim to the trained artificial neural network.

10. The computer-implemented method of claim 9, further comprising comparing the first loss reserve amount to the second loss reserve amount to determine an impact of the settled claim.

11. A loss reserving system comprising:

a computing system having one or more network devices,
one or more processors,
an electronic display device having a plurality of display sections; and
a loss reserving application comprising a set of computer-executable instructions stored on one or more memories, wherein the set of computer-executable instructions, when executed by the one or more processors, cause the loss reserving system to:
receive, in an application, an indication from a user including one or both of (i) a trained artificial neural network, and (ii) a data set;
transmit, via the one or more network devices, a request including the one or both of (i) the trained artificial neural network, and (ii) the data set, to a remote computing device;
receive, from the remote computing device, an output of the trained artificial neural network, the output including an identification of the data set and a loss reserve amount corresponding to the data set; and
display, in one of the plurality of display sections of the electronic display device, the output of the trained artificial neural network.

12. The loss reserving system of claim 11, wherein the computer-executable instructions further cause the loss reserving system to:

receive, in the application, a user query; and
retrieve, from an electronic database, the data set, wherein the data set corresponds to the user query.

13. A computing system comprising:

one or more processors; and
one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to:
receive, in a computing device, a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount;
normalize, via one or more processors, each respective claim loss amount;
train an artificial neural network, wherein training the artificial neural network includes applying the plurality of historical electronic claim documents to the artificial neural network;
receive, in the computing device, a user claim, the user claim comprising free-form text and an image;
determine a first cause of loss by applying the trained artificial neural network to the free-form text of the user claim;
determine a second cause of loss by applying the trained artificial neural network to the image of the user claim; and
predict a loss reserve amount, by applying the user claim to the trained artificial neural network based at least in part upon the first cause of loss and the second cause of loss.

14. The computing system of claim 13, wherein the user claim is a first user claim, and the loss reserve amount is a first loss reserve amount, and the instructions further cause the computing system to:

receive a second user claim;
predict a second loss reserve amount, by applying the second user claim to the trained artificial neural network; and
compute an aggregate loss reserve amount by analyzing the first loss reserve amount and the second loss reserve amount.

15. The computing system of claim 14, wherein the instructions further cause the computing system to:

one or both of (i) sum the first loss reserve amount and the second loss reserve amount, and (ii) sum the absolute value of the first loss reserve amount and the second loss reserve amount.

16. The computing system of claim 14, wherein the instructions further cause the computing system to:

set aside funds in the amount of the aggregated loss reserve amount.

17. The computing system of claim 13, wherein the instructions further cause the computing system to:

apply a first subset of the plurality of electronic claim documents to the artificial neural network; and
train the artificial neural network iteratively until loss of the network is less than a maximum value.

18. The computing system of claim 13, wherein the instructions further cause the computing system to:

determine one or both of (i) network loss of the trained artificial neural network, and (ii) network accuracy with respect to the plurality of historical electronic claim documents of the trained artificial neural network.

19. The computing system of claim 13, wherein the instructions further cause the computing system to:

generate, via the one or more processors, at least a portion of a financial report, the at least the portion of the financial report including at least one type of the plurality of historical electronic claim documents, and the loss reserve amount.

20. The computing system of claim 13, wherein the loss reserve amount is a first loss reserve amount, and wherein the instructions further cause the computing system to:

receive, in the computing device, a settled claim;
update the trained artificial neural network by applying the settled claim to the artificial neural network; and
predict a second loss reserve amount, by applying the user claim to the trained artificial neural network.
Patent History
Publication number: 20210287297
Type: Application
Filed: Sep 20, 2018
Publication Date: Sep 16, 2021
Applicant: State Farm Mutual Automobile Insurance Company (Bloomington, IL)
Inventors: Gregory L Hayward (Bloomington, IL), Meghan Sims Goldfarb (Bloomington, IL), Nicholas U. Christopulos (Bloomington, IL), Erik Donahue (Normal, IL)
Application Number: 16/136,401
Classifications
International Classification: G06Q 40/08 (20060101); G06N 3/08 (20060101);