COMPONENT BASED INTERFACE TO HANDLE TASKS DURING CLAIM PROCESSING
A computer program is provided for developing component based software capable of handling insurance-related tasks. The program includes a data component that stores, retrieves and manipulates data utilizing a plurality of functions. Also provided is a client component that includes an adapter component that transmits and receives data to/from the data component. The client component also includes a business component that serves as a data cache and includes logic for manipulating the data. A controller component is also included which is adapted to handle events generated by a user utilizing the business component to cache data and the adapter component to ultimately persist data to a data repository. In use, the client component is suitable for receiving a plurality of tasks that achieve an insurance-related goal upon completion, allowing users to add new tasks that achieve the goal upon completion, allowing the users to edit the tasks, and generating a historical record of the tasks that are completed.
The present invention relates to task management and more particularly to handling tasks during insurance claim processing utilizing a computer system.
BACKGROUND OF THE INVENTION
Computers have become a necessity in life today. They appear in nearly every office and household worldwide. A representative hardware environment is depicted in prior art
Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP. A need exists for these principles of OOP to be applied to a messaging interface of an electronic messaging system such that a set of OOP classes and objects for the messaging interface can be provided.
OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.
OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
OOP also allows creation of an object that “depends from” another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine “depends from” the object representing the piston engine. The relationship between these objects is called inheritance.
When the object or class representing the ceramic piston engine inherits all of the aspects of the objects representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these thermal characteristics with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, the logical perception of the reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
- Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
- Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
- An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
- An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
With this enormous capability of an object to represent just about any logically separable matters, OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.
This process closely resembles complex machinery being built out of assemblies and sub-assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.
SUMMARY OF THE INVENTION
A computer program is provided for developing component based software capable of handling insurance-related tasks. The program includes a data component that stores, retrieves and manipulates data utilizing a plurality of functions. Also provided is a client component that includes an adapter component that transmits and receives data to/from the data component. The client component also includes a business component that serves as a data cache and includes logic for manipulating the data. A controller component is also included which is adapted to handle events generated by a user utilizing the business component to cache data and the adapter component to ultimately persist data to a data repository. In use, the client component is suitable for receiving a plurality of tasks that achieve an insurance-related goal upon completion, allowing users to add new tasks that achieve the goal upon completion, allowing the users to edit the tasks, and generating a historical record of the tasks that are completed.
The foregoing and other objects, aspects and advantages are better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:
Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers a fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
The benefits of object classes can be summarized, as follows:
- Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.
- Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
- Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.
- Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.
- Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:
- Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.
- Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.
- Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way. Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.
Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.
Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still “sits on top of” the system.
Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
There are three main differences between frameworks and class libraries:
- Behavior versus protocol. Class libraries are essentially collections of behaviors that you can call when you want those individual behaviors in your program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
- Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It's possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.
- Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain. For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the server. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, “RFC 1866: Hypertext Markup Language—2.0” (November 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J. C. Mogul, “Hypertext Transfer Protocol—HTTP/1.1: HTTP Working Group Internet Draft” (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains. HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).
To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
- Poor performance;
- Restricted user interface capabilities;
- Can only produce static Web pages;
- Lack of interoperability with existing applications and data; and
- Inability to scale.
Sun Microsystems' Java language solves many of the client-side problems by:
- Improving performance on the client side;
- Enabling the creation of dynamic, real-time Web applications; and
- Providing the ability to create a wide variety of user interface components.
With Java, developers can create robust User Interface (UI) components. Custom “widgets” (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Using the above-mentioned custom UI components, dynamic, real-time Web pages can also be created.
Sun's Java language has emerged as an industry-recognized language for “programming the Internet.” Sun defines Java as: “a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets.” Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add “interactive content” to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, “C++ with extensions from Objective C for more dynamic method resolution.”
Another technology that provides a similar function to Java is provided by Microsoft's ActiveX Technologies, which give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named “Jakarta.” ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for Java without undue experimentation to practice the invention.
DETAILED DESCRIPTION
One embodiment of the present invention is a server based framework utilizing component based architecture. Referring to
In general, the components of the present invention operate as shown in
Architecture Object
The Architecture Object 200 provides an easy-to-use object model that masks the complexity of the architecture on the client. The Architecture Object 200 provides purely technical services and does not contain any business logic or functional code. It is used on the client as the single point of access to all architecture services.
On the server side, the Architecture Object 200 is supplemented by a set of global functions contained in standard VB modules.
The Architecture Object 200 is responsible for providing all client architecture services (i.e., codes table access, error logging, etc.), and a single point of entry for architecture services. The Architecture Object 200 is also responsible for allowing the architecture to exist as an autonomous unit, thus allowing internal changes to be made to the architecture with minimal impact to application.
The Architecture Object 200 provides a codes manager, client profile, text manager, ID manager, registry manager, log manager, error manager, and security manager. The codes manager reads codes from a local database on the client, marshals the codes into objects, and makes them available to the application. The client profile provides information about the current logged-in user. The text manager provides various text manipulation services such as search and replace. The ID manager generates unique IDs and timestamps. The registry manager encapsulates access to the system registry. The log manager writes error or informational messages to the message log. The error manager provides an easy way to save and re-raise an error. The security manager determines whether or not the current user is authorized to perform certain actions.
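By way of a non-limiting Visual Basic sketch, a UI controller might consume a few of these services through the single Architecture Object as shown below. The class and member names (CArchitecture, CodesMgr, GetCodesList, LogMgr, LogMessage) are hypothetical and are not taken from the specification.

    ' In a UI controller class module (illustrative only)
    Private mobjArch As CArchitecture        ' single point of access to architecture services

    Private Sub LoadStateCombo()
        Dim vntCodes As Variant
        Dim i As Integer
        ' The codes manager is assumed to return a Variant array of code strings
        vntCodes = mobjArch.CodesMgr.GetCodesList("STATE")
        For i = LBound(vntCodes) To UBound(vntCodes)
            frmTask.cboState.AddItem vntCodes(i)
        Next i
    End Sub

    Private Sub ReportProblem(ByVal strText As String)
        ' The log manager writes error or informational messages to the message log
        mobjArch.LogMgr.LogMessage strText
    End Sub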
Application Object
The Application Object 202 has a method to initiate each business operation in the application. It uses late binding to instantiate target UI controllers in order to provide autonomy between windows. This allows different controllers to use the Application Object 202 without statically linking to each and every UI controller in the application.
When opening a UI controller, the Application Object 202 calls the architecture initialization, class initialization, and form initialization member functions.
The Application Object 202 keeps a list of every active window, so that it can shut down the application in the event of an error. When a window closes, it tells the Application Object 202, and is removed from the Application Object's 202 list of active windows.
The Application Object 202 is responsible for instantiating each UI Controller 206, passing data/business context to the target UI Controller 206, and invoking standard services such as initialize controller, initializing Form and Initialize Architecture. The Application Object 202 also keeps track of which windows are active so that it can coordinate the shutdown process.
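A minimal Visual Basic sketch of an Application Object method is shown below, assuming a hypothetical controller class CTaskCtlr registered under the hypothetical ProgID "ClaimApp.CTaskCtlr"; the late-bound Object reference and CreateObject call avoid static links to every UI controller.

    ' Application Object method to open the task detail window (illustrative)
    Public Sub EditTask(ByVal objTask As Object, ByVal objCallingCtlr As Object)
        Dim objCtlr As Object
        Set objCtlr = CreateObject("ClaimApp.CTaskCtlr")   ' late binding; hypothetical ProgID
        RegCTLR objCtlr                       ' track the controller so it can be shut down on error
        objCtlr.ArchInitClass Me, objCallingCtlr
        objCtlr.InitClass objTask             ' pass the data/business context to the target controller
        objCtlr.InitForm
        objCtlr.ShowForm
    End Sub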
UI Form
The UI form's 204 primary responsibility is to forward important events to its controller 206. It remains mostly unintelligent and contains as little logic as possible. Most event handlers on the form simply delegate the work by calling methods on the form's controller 206.
The UI form 204 never enables or disables its own controls, but asks its controller 206 to do it instead. Logic is included on the UI form 204 only when it involves very simple field masking or minor visual details.
The UI form 204 presents an easy-to-use, graphical interface to the user and informs its controller 206 of important user actions. The UI form 204 may also provide basic data validation (e.g., data type validation) through input masking. In addition, the UI form is responsible for intelligently resizing itself, launching context-sensitive help, and unloading itself.
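The following Visual Basic sketch illustrates this delegation on a hypothetical form module frmTask; the controller class CTaskCtlr and the controller methods FormDataChanged and RepeatRuleChanged are assumptions introduced only for illustration.

    ' Form module frmTask (illustrative): event handlers simply delegate to the controller
    Private mobjCtlr As CTaskCtlr

    Public Property Set Controller(ByVal objCtlr As CTaskCtlr)
        Set mobjCtlr = objCtlr               ' wired up by the controller during initialization
    End Property

    Private Sub txtDescription_Change()
        mobjCtlr.FormDataChanged             ' the controller marks the form "dirty"
    End Sub

    Private Sub chkRepeat_Click()
        mobjCtlr.RepeatRuleChanged           ' the controller re-evaluates the state of the controls
    End Sub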
User Interface Controller
Every UI Controller 206 includes a set of standard methods for initialization, enabling and disabling controls on its UI form 204, validating data on the form, getting data from the UI form 204, and unloading the UI form 204.
UI Controllers 206 contain the majority of the logic to manipulate Business Objects 207 and manage the appearance of their UI forms 204. If its form is not read-only, the UI Controller 206 also tracks whether or not data on the UI form 204 has changed, so as to avoid unnecessary database writes when the user decides to save. In addition, controllers of auxiliary windows (like the File-Save dialog box in Microsoft Word) keep track of their calling UI controller 206 so that they can notify it when they are ready to close.
A UI Controller 206 defines a Logical Unit of Work (LUW). If an LUW involves more than one UI Controller 206, the LUW is implemented as a separate object.
The UI Controller 206 is responsible for handling events generated by the user interacting with the UI form 204 and providing complex field validation and cross field validation within a Logical Unit of Work. The UI Controller 206 also contains the logic to interact with business objects 207, and creates new business objects 207 when necessary. Finally, the UI Controller 206 interacts with Client Component Adapters 208 to add, retrieve, modify, or delete business objects 207, and handles all client-side errors.
Business Objects
The Business Object's (BO) 207 primary functionality is to act as a data holder, allowing data to be shared across User Interface Controllers 206 using an object-based programming model.
BOs 207 perform validation on their attributes as they are being set to maintain the integrity of the information they contain. BOs 207 also expose methods other than accessors to manipulate their data, such as methods to change the life cycle state of a BO 207 or to derive the value of a calculated attribute.
In many cases, a BO 207 will have its own table in the database and its own window for viewing or editing operations.
Business Objects 207 contain information about a single business entity and maintain the integrity of that information. The BO 207 encapsulates business rules that pertain to that single business entity and maintains relationships with other business objects (e.g., a claim contains a collection of supplements). Finally, the BO 207 provides additional properties relating to the status of the information it contains (such as whether that information has changed or not), provides validation of new data when necessary, and calculates attributes that are derived from other attributes (such as Full Name, which is derived from First Name, Middle Initial, and Last Name).
Client Component Adapters
Client Component Adapters (CCAs) 208 are responsible for retrieving, adding, updating, and deleting business objects in the database. CCAs 208 hide the storage format and location of data from the UI controller 206. The UI controller 206 does not care about where or how objects are stored, since this is taken care of by the CCA 208.
The CCA 208 marshals data contained in recordsets returned by the server into business objects 207. CCAs 208 mask all remote requests from the UI Controller 206 to a specific component, and act as a “hook” for services such as data compression and data encryption.
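A minimal Visual Basic sketch of this marshaling is shown below; the business object class CTask, the server component ProgID "ClaimSrv.CTaskComponent", the column names, and the GetTasksForClaim method are all hypothetical and assume an ADO-style recordset is returned by the server.

    ' Client Component Adapter method (illustrative): marshal a recordset into Task BOs
    Public Function GetTasksForClaim(ByVal lngClaimId As Long) As Collection
        Dim objSrv As Object                ' late-bound server component
        Dim rs As Object                    ' recordset returned by the server
        Dim colTasks As New Collection
        Dim objTask As CTask

        Set objSrv = CreateObject("ClaimSrv.CTaskComponent")   ' hypothetical ProgID
        Set rs = objSrv.GetTasksForClaim(lngClaimId)           ' single round trip to the server

        Do While Not rs.EOF
            Set objTask = New CTask
            objTask.TaskId = rs.Fields("TaskId").Value         ' rows and columns become object attributes
            objTask.Description = rs.Fields("Description").Value
            colTasks.Add objTask
            rs.MoveNext
        Loop
        Set GetTasksForClaim = colTasks
    End Function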
COM Component Interface
A COM Component Interface (CCI) 210 is a “contract” for services provided by a component. By “implementing” an interface (CCI) 210, a component is promising to provide all the services defined by the CCI 210.
The CCI 210 is not a physical entity (which is why it is depicted with a dotted line). Its only reason for existence is to define the way a component appears to other objects. It includes the signatures or headers of all the public properties or methods that a component will provide.
To implement a CCI 210, a server component exposes a set of specially named methods, one for each method defined on the interface. These methods should do nothing except delegate the request to a private method on the component which will do the real work.
The CCI 210 defines a set of related services provided by a component. The CCI allows any component to “hide” behind the interface to perform the services defined by the interface by “implementing” the interface.
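The following Visual Basic sketch shows the implement-and-delegate pattern described above, under the assumption of a hypothetical interface class ITaskComponent containing a single CompleteTask signature; none of the names are drawn from the specification.

    ' Interface class ITaskComponent (illustrative) would declare only the signature:
    '   Public Function CompleteTask(ByVal lngTaskId As Long, ByVal lngPerformerId As Long) As Boolean
    '
    ' A server component implements the interface and delegates to a private worker method:
    Implements ITaskComponent

    Private Function ITaskComponent_CompleteTask(ByVal lngTaskId As Long, _
                                                 ByVal lngPerformerId As Long) As Boolean
        ' the implemented member does nothing except delegate the request
        ITaskComponent_CompleteTask = CompleteTaskPrivate(lngTaskId, lngPerformerId)
    End Function

    Private Function CompleteTaskPrivate(ByVal lngTaskId As Long, _
                                         ByVal lngPerformerId As Long) As Boolean
        ' the private method on the component does the real work (data access, business rules)
    End Function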
Server Component
Server components 222 are coarse-grained and transaction-oriented. They are designed for maximum efficiency.
Server Components 222 encapsulate all access to the database, and define business transaction boundaries. In addition, Server Components 222 are responsible for ensuring that business rules are honored during data access operations.
A Server Component 222 performs data access operations on behalf of CCAs 208 or other components and participates in transactions spanning server components 222 by communicating with other server components 222. The Server Component 222 is accessible by multiple front end personalities (e.g., Active Server Pages), and contains business logic designed to maintain the integrity of data in the database.
Overview
The distribution of business rules across tiers of the application directly affects the robustness and performance of the system as a whole. Business rules can be categorized into the following sections: Relationships, Calculations, and Business Events.
Relationships between Business Objects
Business Objects 207 are responsible for knowing other business objects 207 with which they are associated.
Relationships between BOs 207 are built by the CCA 208 during the marshaling process. For example, when a CCA 208 builds a claim BO 207, it will also build the collection of supplements if necessary.
Calculated Business Data
Business rules involving calculations based on business object 207 attributes are coded in the business objects 207 themselves. Participant Full Name is a good example of a calculated attribute. Rather than force the controllers to concatenate the first name, middle initial, and last name every time they wanted to display the full name, a calculated attribute that performs this logic is exposed on the business object. In this way, the code to compose the full name only has to be written once and can be used by many controllers 206.
Another example of a calculated attribute is the display date of a repeating task. When a task with a repeat rule is completed, a new display date must be determined. This display date is calculated based on the date the task was completed, and the frequency of repetition defined by the repeat rule. Putting the logic to compute the new display date into the Task BO 207 ensures that it is coded only once.
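The two calculated attributes discussed above can be sketched in Visual Basic as follows; the member variable names and the assumption that the repeat rule reduces to a number of days are hypothetical simplifications for illustration only.

    ' On the Participant BO (illustrative): a calculated Full Name attribute
    Public Property Get FullName() As String
        FullName = mstrFirstName & " " & mstrMiddleInitial & ". " & mstrLastName
    End Property

    ' On the Task BO (illustrative): derive the next display date from the
    ' completion date and the frequency defined by the repeat rule
    Public Function NextDisplayDate(ByVal dtmCompleted As Date) As Date
        NextDisplayDate = DateAdd("d", mintRepeatDays, dtmCompleted)
    End Function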
Responses to Business Events
Business rules that relate to system events and involve no user interaction are enforced on the server components.
Completion of a task is a major event in the system. When a task is completed, the system first ensures that the performer completing the task is added to the claim. Then, after the task is marked complete in the database, it is checked to see if the task has a repeat rule. If so, another task is created and added to the database. Finally, the event component is notified, because the Task Engine may need to react to the task completion.
Consider the scenario if the logic to enforce this rule were placed on the UI controller 206.
The controller 206 calls the Performer Component to see if the performer completing the task has been added to the claim. If the performer has not been added to the claim, then the controller 206 calls the performer component again to add them.
Next, the controller 206 calls the Task Component to mark the task complete in the database. If the task has a repeat rule, the controller 206 computes the date the task is to be redisplayed and calls the Task Component again to add a new task. Lastly, the controller 206 calls the Event Component to notify the Task Engine of the task completion.
The above implementation requires five network round trips in its worst case. In addition, any other controller 206 or server component 222 that wants to complete a task must code this logic all over again. Enforcing this rule in the task server component 222 reduces the number of network round trips and eliminates the need to code the logic more than once.
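A sketch of the server-side alternative is shown below in Visual Basic; the helper routines (PerformerOnClaim, AddPerformerToClaim, MarkTaskComplete, TaskHasRepeatRule, AddRepeatedTask, NotifyEventComponent) are hypothetical placeholders for the data access and notification logic, named only to make the single-round-trip flow visible.

    ' Task server component 222 (illustrative): enforcing the completion rule in one round trip
    Public Sub CompleteTask(ByVal lngTaskId As Long, ByVal lngPerformerId As Long, _
                            ByVal lngClaimId As Long)
        ' 1. Ensure the performer completing the task is added to the claim
        If Not PerformerOnClaim(lngPerformerId, lngClaimId) Then
            AddPerformerToClaim lngPerformerId, lngClaimId
        End If
        ' 2. Mark the task complete in the database
        MarkTaskComplete lngTaskId
        ' 3. If the task has a repeat rule, create and add the next occurrence
        If TaskHasRepeatRule(lngTaskId) Then
            AddRepeatedTask lngTaskId
        End If
        ' 4. Notify the event component so the Task Engine can react to the completion
        NotifyEventComponent lngTaskId
    End Sub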
Responses to User Events
All responses to user events are coordinated by the controller 206. The controller 206 is responsible for actions such as enabling or disabling controls on its form, requesting authorization from the security component, or making calls to the CCA 208.
Authorization
All logic for granting authorization is encapsulated inside the security component. Controllers 206 and components 222 must ask the security component if the current user is authorized to execute certain business operations in the system. The security component will answer yes or no according to some predefined security logic.
Summary
The Default Window Framework provides default window processing for each window contained within the system. This default processing aids the developer in developing robust, maintainable UIs, standardizes common processes (such as form initialization), and facilitates smooth integration with architecture services.
The Window Processing Framework 300 encompasses the following:
Window Initialization 302;
Window Save Processing 304;
Window Control State Management 306;
Window Data Validation 308;
Window Shutdown Processing 310.
Window Initialization Processing 302: After creating a controller 206 for the desired window, the App object 202 calls a set of standard initialization functions on the controller 206 before the form 204 is displayed to the user. Standardizing these functions makes the UIs more homogeneous throughout the application, while promoting good functional decomposition.
Window Save Processing 304: Any time a user updates any form text or adds an item to a ListBox, the UI Controller 206 marks the form as “dirty”. This allows the UI controller 206 to determine whether data has changed when the form closes and prompt the user to commit or lose their changes.
Window Control State Management 306: Enabling and disabling controls and menu options is a very complex part of building a UI. The logic that modifies the state of controls is encapsulated in a single place for maintainability.
Window Data Validation 308: Whenever data changes on a form, validation rules can be broken. The controller is able to detect those changes, validate the data, and prompt the user to correct invalid entries.
Window Shutdown Processing 310: The Window Shutdown framework provides a clear termination path for each UI in the event of an error. This reduces the chance of memory leaks, and General Protection failures.
Benefits
Standardized Processing: Standardizing the window processing increases the homogeneity of the application. This ensures that all windows within the application behave in a consistent manner for the end users, making the application easier to use. It also shortens the learning curve for developers and increases maintainability, since all windows are coded in a consistent manner.
Simplified Development: Developers can leverage the best practices documented in the window processing framework to make effective design and coding decisions. In addition, a shell provides some “canned” code that gives developers a head start during the coding effort.
Layered Architecture: Because several architecture modules provide standardized processing to each application window, the core logic can be changed for every system window by simply making modifications to a single procedure.
Window Initialization 302
To open a new window, the App Object 202 creates the target window's controller 206 and calls a series of methods on the controller 206 to initialize it. The calling of these methods, ArchInitClass, InitClass, InitForm, and ShowForm, is illustrated below.
ArchInitClass
The main purpose of the ArchInitClass function is to tell the target controller 206 who is calling it. The App Object 202 “does the introductions” by passing the target controller 206 a reference to itself and a reference to the calling controller 206. In addition, it serves as a hook into the controller 206 for adding architecture functionality in the future.
InitClass
This function provides a way for the App Object 202 to give the target controller 206 any data it needs to do its processing. It is at this point that the target controller 206 can determine what “mode” it is in. Typical form modes include, add mode, edit mode, and view mode. If the window is in add mode, it creates a new BO 207 of the appropriate type in this method.
InitForm
The InitForm procedure of each controller 206 coordinates any initialization of the form 204 before it is displayed. Because initialization is often a multi-step process, InitForm creates the window and then delegates the majority of the initialization logic to helper methods that each have a single purpose, in order to follow the rules of good functional decomposition. For example, the logic to determine a form's 204 state based on user actions and relevant security restrictions and move to that state is encapsulated in the DetermineFormState method.
PopulateForm
PopulateForm is a private method responsible for filling the form with data during initialization. It is called exactly once by the InitForm method. PopulateForm is used to fill combo boxes on a form 204, get the details of an object for an editing window, or display objects that have already been selected by the user, as in the following example.
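Since the referenced example does not survive in this text, the following hedged Visual Basic sketch stands in for it; the architecture call, control names, and mode string are hypothetical.

    ' PopulateForm on the controller 206 (illustrative): fill the form during initialization
    Private Sub PopulateForm()
        Dim vntCategories As Variant
        Dim vntCategory As Variant
        ' fill combo boxes from the codes manager (assumed to return a Variant array of strings)
        vntCategories = mobjArch.CodesMgr.GetCodesList("TASK_CATEGORY")
        For Each vntCategory In vntCategories
            frmTask.cboCategory.AddItem vntCategory
        Next
        ' for an editing window, display the details of the object being edited
        If mstrMode = "Edit" Then
            frmTask.txtDescription.Text = mobjTask.Description
        End If
    End Sub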
ShowForm
The ShowForm method simply centers and displays the newly initialized form 204.
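The controller-side skeleton of these four standard methods might look as follows in Visual Basic; the member variables, the CTask class, and the form's Controller property are assumptions made for the sketch.

    ' Controller-side initialization methods (illustrative)
    Public Sub ArchInitClass(ByVal objApp As Object, ByVal objCaller As Object)
        Set mobjApp = objApp                 ' who created this controller
        Set mobjCaller = objCaller           ' who to notify when this window closes
    End Sub

    Public Sub InitClass(ByVal objTask As Object)
        If objTask Is Nothing Then
            mstrMode = "Add"                 ' no data passed: create a new BO of the appropriate type
            Set mobjTask = New CTask
        Else
            mstrMode = "Edit"
            Set mobjTask = objTask
        End If
    End Sub

    Public Sub InitForm()
        Set frmTask.Controller = Me          ' wire the form to its controller (hypothetical property)
        PopulateForm                         ' delegate to single-purpose helper methods
        DetermineFormState
    End Sub

    Public Sub ShowForm()
        frmTask.Show                         ' display the newly initialized form (centering logic omitted)
    End Sub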
It is often necessary to enable or disable controls on a form 204 in response to user actions. This section describes the patterns employed by the Component Based Architecture for MTS (CBAM) to manage this process effectively.
Form Mode
It is helpful to distinguish between form mode and form state. Form mode indicates the reason the form 204 has been invoked. Often, forms 204 are used for more than one purpose. A common example is the use of the same form to view, add, and edit a particular type of object, such as a task or an insurance claim. In this case, the form's modes would include View, Add, and Update. The modes of a form 204 are also used to comply with security restrictions based on the current user's access level. For example, Task Library is a window that limits access to task templates based on the current user's role. It might have a Librarian mode and a Non-Librarian mode to reflect the fact that a non-librarian user cannot be allowed to edit task templates. In this way, modes help to enforce the requirement that certain controls on the form 204 remain disabled unless the user has a certain access level.
It is not always necessary for a form 204 to have a mode; a form might be so simple that it would have only one mode—the default mode. In this case, even though it is not immediately necessary, it may be beneficial to make the form “mode-aware” so that it can be easily extended should the need arise.
Form State
A form 204 will have a number of different states for each mode, where a state is a unique combination of enabled/disabled, visible/invisible controls. When a form 204 moves to a different state, at least one control is enabled or disabled or modified in some way.
A key difference between form mode and form state is that mode is determined when the controller 206 is initialized and remains constant until the controller 206 terminates. State is determined when the window initializes, but is constantly being reevaluated in response to user actions.
Handling UI Events
When the value of a control on the form 204 changes, it is necessary to reevaluate the state of the controls on the form (whether or not they are enabled/disabled or visible/invisible, etc.). If changing the value of one control could cause the state of a second control to change, an event handler is written for the appropriate event of the first control.
The following table lists common controls and the events that are triggered when their value changes.
The event handler calls the DetermineFormState method on the controller 206.
Setting the State of Controls
It is essential for maintainability that the process of setting the state of controls be separate from the process for setting the values of those controls. The DetermineFormState method on the controller 206 forces this separation between setting the state of controls and setting their values.
DetermineFormState is the only method that modifies the state of any of the controls on the form 204. Because control state requirements are so complex and vary so widely, this is the only restriction made by the architecture framework.
If necessary, parameters are passed to the DetermineFormState function to act as “hints” or “clues” for determining the new state of the form 204. For complex forms, it is helpful to decompose the DetermineFormState function into a number of helper functions, each handling a group of related controls on the form or moving the form 204 to a different state.
Example
The Edit/Add/View Task Window has three modes: Edit, Add, and View. In Add mode, everything on the form is editable. Some details will stay disabled when in Edit mode, since they should be set only once when the task is added. In both Add and Edit modes, the repeat rule may be edited. Enabling editing of the repeat rule always disables the manual editing of the task's due and display dates. In View mode, only the Category combo box and Private checkbox are enabled.
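A condensed Visual Basic sketch of a DetermineFormState for this window follows; the control names and mode string are hypothetical, and only a few representative controls are shown.

    ' DetermineFormState (illustrative): the only method that modifies the state of controls
    Private Sub DetermineFormState()
        Select Case mstrMode
            Case "Add"
                frmTask.txtDescription.Enabled = True
                frmTask.chkRepeat.Enabled = True
            Case "Edit"
                frmTask.txtDescription.Enabled = False     ' details set only once, when the task is added
                frmTask.chkRepeat.Enabled = True
            Case "View"
                frmTask.cboCategory.Enabled = True         ' only Category and Private remain enabled
                frmTask.chkPrivate.Enabled = True
                frmTask.txtDescription.Enabled = False
        End Select
        ' enabling the repeat rule always disables manual editing of the due and display dates
        frmTask.txtDueDate.Enabled = (frmTask.chkRepeat.Value <> vbChecked)
        frmTask.txtDisplayDate.Enabled = frmTask.txtDueDate.Enabled
    End Sub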
Window data validation is the process by which data on the window is examined for errors, inconsistencies, and proper formatting. It is important, for the sake of consistency, to implement this process similarly or identically in all windows of the application.
Types of Validation
Input Masking
Input masking is the first line of defense. It involves screening the data (usually character by character) as it is entered, to prevent the user from even entering invalid data. Input masking may be done programmatically or via a special masked text box; however, the logic is always located on the form and is invoked whenever a masked field changes.
Single-Field Range Checking
Single-field range checking determines the validity of the value of one field on the form by comparing it with a set of valid values. Single-field range checking may be done via a combo box, spin button, or programmatically on the form, and is invoked whenever the range-checked field changes.
Cross-Field Validation
Cross-field validation compares the values of two or more fields to determine if a validation rule is met or broken, and occurs just before saving (or searching). Cross-field validation may be done on the Controller 206 or the Business Object 207; however, it is preferable to place the logic on the Business Object 207 when the validation logic can be shared by multiple Controllers 206.
Invalid data is caught and rejected as early as possible during the input process. Input masking and range checking provide the first line of defense, followed by cross-field validation when the window saves (or searches).
Single-Field Validation
All single-field validation is accomplished via some sort of input masking. Masks that are attached to textboxes are used to validate the type or format of data being entered. Combo boxes and spin buttons may also be used to limit the user to valid choices. If neither of these are sufficient, a small amount of logic may be placed on the form's event handler to perform the masking functionality, such as keeping a value below a certain threshold or keeping apostrophes out of a textbox.
Cross-Field Validation
When the user clicks OK or Save, the form calls the IsFormDataValid on the controller to perform cross-field validation (e.g., verifying that a start date is less than an end date). If the business object 207 contains validation rules, the controller 206 may call a method on the business object 207 to make sure those rules are not violated.
If invalid data is detected by the controller 206, it will notify the user with a message box and, if possible, indicate which field or fields are in error. Under no circumstances will the window perform validation when the user is trying to cancel.
Example
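The example content is not reproduced in this text, so the following hedged Visual Basic sketch is offered in its place; the control names and message text are hypothetical.

    ' IsFormDataValid on the controller 206 (illustrative): cross-field validation before saving
    Public Function IsFormDataValid() As Boolean
        IsFormDataValid = True
        ' verify that the display date is on or before the due date
        If CDate(frmTask.txtDisplayDate.Text) > CDate(frmTask.txtDueDate.Text) Then
            MsgBox "The display date must be on or before the due date.", vbExclamation
            frmTask.txtDisplayDate.SetFocus            ' indicate which field is in error
            IsFormDataValid = False
        End If
    End Function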
Window “Save Processing” involves tracking changes to data on a form 204 and responding to save and cancel events initiated by the user.
Tracking Changes to Form Data
Each window within the CBAM application contains a field within its corresponding controller object known as the dirty flag. The dirty flag is set to True whenever an end user modifies data within the window. This field is interrogated by the UI Controller 206 to determine whether a user should be prompted on Cancel or whether a remote procedure should be invoked upon window close.
The application shell provides standard processing for each window containing an OK or Save button.
Saving
The default Save processing is implemented within the UI Controller 206 as follows:
The UI Controller is notified that the OK button has been clicked. The controller 206 then checks its Dirty Flag. If the flag is dirty, the controller 206 calls the InterrogateForm method to retrieve data from the form 204 and calls a server component 222 to store the business object 207 in the database. If the Dirty Flag is not set, then no save is necessary. The window is then closed.
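A Visual Basic sketch of this default save processing is shown below; the dirty flag variable, the CCA reference, and the UpdateTask method are hypothetical names introduced for the sketch.

    ' Default save processing on the controller 206 (illustrative)
    Public Sub Save()
        If Not mblnDirty Then Exit Sub                 ' if the Dirty Flag is not set, no save is necessary
        If Not IsFormDataValid() Then Exit Sub         ' reject invalid data before saving
        InterrogateForm                                ' retrieve data from the form 204 into the business object
        mobjTaskCCA.UpdateTask mobjTask                ' the CCA calls a server component 222 to store the BO
        mblnDirty = False                              ' the form is clean again after a successful save
    End Sub
    ' The OK button's event handler then closes the window with Unload Me.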
Canceling
When the user cancels a window, the UI Controller 206 immediately examines the Dirty Flag. If the flag is set to true, the user is prompted that their changes will be lost if they decide to close the window.
Once prompted, the user can elect to continue to close the window and lose their changes or decide not to close and continue working.
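The cancel path can be sketched as part of the controller's QueryUnload processing, assuming the same hypothetical dirty flag variable; the prompt text is illustrative.

    ' Default cancel processing on the controller 206 (illustrative)
    Public Sub QueryUnload(ByRef Cancel As Integer)
        If mblnDirty Then
            If MsgBox("Your changes will be lost. Close anyway?", _
                      vbYesNo + vbQuestion) = vbNo Then
                Cancel = 1                             ' the user elects to keep working
            End If
        End If
    End Sub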
Window Shutdown Processing 310
In the event of an error, it is sometimes necessary to shut down a window or to terminate the entire application. It is critical that all windows follow the shutdown process in order to avoid the GPFs commonly associated with terminating incorrectly. The following describes how the window or application is shut down.
Shutdown Scope
The scope of the shutdown is as small as possible. If an error occurs in a controller 206 that does not affect the rest of the application, only that window is shut down. If an error occurs that threatens the entire application, there is a way to quickly close every open window in the application. The window shutdown strategy is able to accommodate both types of shutdowns.
Shutdown
In order to know what windows must be shut down, the architecture tracks which windows are open. Whenever the App Object 202 creates a controller 206, it calls its RegCTLR function to add the controller 206 to a collection of open controllers. Likewise, whenever a window closes, it tells the App Object 202 that it is closing by calling the App Object's 202 UnRegCTLR function, and the App Object 202 removes the closing controller 206 from its collection. In the case of an error, the App Object 202 loops through its collection of open controllers, telling each controller to “quiesce” or shutdown immediately.
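The RegCTLR and UnRegCTLR bookkeeping might look as follows in Visual Basic; keying the collection by object pointer and the Quiesce method name are assumptions made for the sketch.

    ' App Object 202 bookkeeping (illustrative): tracking open controllers
    Private mcolOpenCtlrs As New Collection

    Public Sub RegCTLR(ByVal objCtlr As Object)
        mcolOpenCtlrs.Add objCtlr, CStr(ObjPtr(objCtlr))   ' key the entry so it can be removed later
    End Sub

    Public Sub UnRegCTLR(ByVal objCtlr As Object)
        mcolOpenCtlrs.Remove CStr(ObjPtr(objCtlr))
    End Sub

    Public Sub Shutdown()
        Dim objCtlr As Object
        For Each objCtlr In mcolOpenCtlrs
            objCtlr.Quiesce                                ' tell each open window to shut down immediately
        Next
    End Sub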
GeneralErrorHandler
The GeneralErrorHandler is a method in MArch.bas that acts as the point of entry into the architecture's error handling mechanism. A component or a controller will call the GeneralErrorHandler when it encounters any type of unexpected or unknown error. The general error handler will return a value indicating what the component or controller should do: (1) resume on the line that triggered the error; (2) resume on the statement after the line that triggered the error; (3) exit the function; (4) quiesce; or (5) shut down the entire application.
In order to prevent recursive calls, the GeneralErrorHandler keeps a collection of controllers that are in the process of shutting down. If it is called twice in a row by the same controller 206, it is able to detect and short-circuit the loop. When the controller 206 finally does terminate, it calls the UnRegisterError function to let the GeneralErrorHandler know that it has shut down and should be removed from the collection of controllers.
Shutdown Process
After being told what to do by the GeneralErrorHandler, the controller 206 in error may try to execute the statement that caused the error, proceed as if nothing happened, exit the current function, call its Quiesce function to shut itself down, or call the Shutdown method on the App Object 202 to shut the entire application down.
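A Visual Basic sketch of this error handling in a controller method follows; the return-code enumeration and the GeneralErrorHandler argument list are assumptions, since only the method's existence in MArch.bas is stated above.

    ' Assumed return codes (hypothetical; in practice defined alongside GeneralErrorHandler in MArch.bas)
    Public Enum eErrorAction
        caResumeLine
        caResumeNext
        caExitFunction
        caQuiesce
        caShutdownApp
    End Enum

    ' Error handling in a controller 206 method (illustrative)
    Public Sub DeleteTask()
        On Error GoTo ErrHandler
        mobjTaskCCA.DeleteTask mobjTask
        Exit Sub
    ErrHandler:
        Select Case GeneralErrorHandler(Err.Number, Err.Description, "CTaskCtlr.DeleteTask")
            Case caResumeLine: Resume                  ' re-execute the statement that caused the error
            Case caResumeNext: Resume Next             ' proceed as if nothing happened
            Case caExitFunction: Exit Sub              ' exit the current function
            Case caQuiesce: Quiesce                    ' shut this window down
            Case caShutdownApp: mobjApp.Shutdown       ' shut the entire application down
        End Select
    End Sub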
Additional Standard Methods
Searching
Controllers 206 that manage search windows have a public method named Find<Noun>s where <Noun> is the type of object being searched for. This method is called in the event handler for the Find Now button.
Saving
Any controller 206 that manages an edit window has a public method called Save that saves changes the user makes to the data on the form 204. This method is called by the event handlers for both the Save and OK buttons (when/if the OK button needs to save changes before closing).
Closing
A VB window is closed by the user in several ways: via the control box in the upper-left corner, the X button in the upper-right corner, or the Close button. When the form closes, the only method that will always be called, regardless of the way in which the close was initiated, is the form's 204 QueryUnload event handler.
Because of this, there cannot be a standard Close method. Any processing that must occur when a window closes is to be done in the QueryUnload method on the controller 206 (which is called by the form's QueryUnload event handler).
The VB statement, Unload Me, appears in the Close button's event handler to manually initiate the unloading process. In this way, the Close button mimics the functionality of the control box and the X button, so that the closing process is handled the same way every time, regardless of how the user triggered the close. The OK button's event handler also executes the Unload Me statement, but calls the Save method on the controller first to save any pending changes.
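The form-side close handling can be sketched in Visual Basic as follows; the button names and the controller methods are hypothetical, and all close paths funnel through the QueryUnload event handler as described above.

    ' Close handling on the form 204 (illustrative)
    Private Sub cmdClose_Click()
        Unload Me                             ' mimic the control box and the X button
    End Sub

    Private Sub cmdOK_Click()
        mobjCtlr.Save                         ' save any pending changes first
        Unload Me
    End Sub

    Private Sub Form_QueryUnload(Cancel As Integer, UnloadMode As Integer)
        mobjCtlr.QueryUnload Cancel           ' the controller performs any processing required at close
    End Sub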
Business Objects
Business Objects 207 are responsible for containing data, maintaining the integrity of that data, and exposing functions that make the data easy to manipulate. Whenever logic pertains to a single BO 207, it is a candidate to be placed on that BO. This ensures that it will not be coded once for each controller 206 that needs it. Following are some standard examples of business object logic.
Business Logic: Managing Life Cycle State
Overview
The “state” of a business object 207 is the set of all its attributes. Life cycle state refers only to a single attribute (or a small group of attributes) that determine where the BO 207 is in its life cycle. For example, the life cycle states of a Task are Open, Completed, Cleared, or Error. Business objectives usually involve moving a BO toward its final state (i.e., Completed for a Task, Closed for a Supplement, etc.).
Often, there are restrictions on a BO's movement through its life cycle. For example, a Task may only move to the Error state after first being Completed or Cleared. BOs provide a mechanism to ensure that they do not violate life cycle restrictions when they move from state to state.
Approach
A BO 207 has a method to move to each one of its different life cycle states. Rather than simply exposing a public variable containing the life cycle state of the task, the BO exposes methods, such as Task.Clear( ), Task.Complete( ) and Task.MarkInError( ), that move the task to a new state. This approach prevents the task from containing an invalid value for life cycle state, and makes it obvious what the life cycle states of a task are.
Example
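The example listing is not reproduced in this text, so a hedged Visual Basic sketch of the Task BO's life cycle methods is offered instead; the state strings and error number are illustrative.

    ' Life cycle state methods on the Task BO 207 (illustrative)
    Private mstrLifeCycleState As String      ' "Open", "Completed", "Cleared", or "Error"

    Public Sub Complete()
        mstrLifeCycleState = "Completed"
    End Sub

    Public Sub Clear()
        mstrLifeCycleState = "Cleared"
    End Sub

    Public Sub MarkInError()
        ' a task may only move to the Error state after first being Completed or Cleared
        If mstrLifeCycleState = "Completed" Or mstrLifeCycleState = "Cleared" Then
            mstrLifeCycleState = "Error"
        Else
            Err.Raise vbObjectError + 513, "CTask", "Invalid life cycle transition"
        End If
    End Sub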
Overview
Sometimes, a BO 207 acts as a container for a group of other BOs. This happens when performing operations involving multiple BOs. For example, before closing, an insurance claim ensures that it has no open supplements or tasks. There might be a method on the insurance claim BO—CanClose( )—that evaluates the business rules restricting the closing of a claim and returns true or false. Another situation might involve retrieving the open tasks for a claim. The claim can loop through its collection of tasks, asking each task if it is open and, if so, adding it to a temporary collection which is returned to the caller.
Example
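In place of the missing example, the following Visual Basic sketch shows the container behavior on a hypothetical Claim BO; the collection member, the IsOpen property, and the GetOpenSupplements helper are assumptions.

    ' Claim BO 207 acting as a container for its tasks and supplements (illustrative)
    Public Function CanClose() As Boolean
        CanClose = (GetOpenTasks().Count = 0) And (GetOpenSupplements().Count = 0)
    End Function

    Public Function GetOpenTasks() As Collection
        Dim objTask As CTask
        Dim colOpen As New Collection
        For Each objTask In mcolTasks                 ' loop through the contained collection of tasks
            If objTask.IsOpen Then colOpen.Add objTask
        Next
        Set GetOpenTasks = colOpen
    End Function
    ' GetOpenSupplements (not shown) would work the same way over the supplements collection.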
Overview
When a BO 207 is added or updated, it sends all of its attributes down to a server component 222 to write to the database. Instead of explicitly referring to each attribute in the parameter list of the functions on the CCA 208 and server component 222, all the attributes are sent in a single variant array. This array is also known as a structure.
Approach
Each editable BO 207 has a method named AsStruct that takes the object's member variables and puts them in a variant array. The CCA 208 calls this method on a BO 207 before it sends the BO 207 down to the server component 222 to be added or updated. The reason that this is necessary is that, although object references can be passed by value over the network, the objects themselves cannot. Only basic data types like Integer and String can be sent by value to a server component 222. A VB enumeration is used to name the slots of the structure, so that the server component 222 can use a symbolic name to access elements in the array instead of an index. Note that this is generally used only when performing adds or full updates on a business object 207.
In a few cases, there is a reason to re-instantiate the BO 207 on the server side. The FromStruct method does exactly the opposite of the AsStruct method and initializes the BO 207 from a variant array. The size of the structure passed as a parameter to FromStruct is checked to increase the certainty that it is a valid structure.
When a BO 207 contains a reference to another BO 207, the AsStruct method stores the primary key of the referenced BO 207. For example, the Task structure contains a PerformerId, not the performer BO 207 that is referenced by the task. When the FromStruct method encounters the PerformerId in the task structure, it instantiates a new performer BO and fills in the ID, leaving the rest of the performer BO empty.
Example
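A sketch of the AsStruct/FromStruct pattern, assuming an enumeration shared with the server component that names the slots of the variant array; the attributes and the CPerformer class are illustrative:

' Enumeration naming the slots of the Task structure (shared with the server component)
Public Enum TaskStructSlots
    tsTaskId = 0
    tsDescription = 1
    tsPerformerId = 2
End Enum

Private m_lTaskId As Long
Private m_sDescription As String
Private m_objPerformer As CPerformer          ' illustrative referenced BO

' Packages the BO's member variables into a variant array for the server component
Public Function AsStruct() As Variant
    Dim vStruct(tsTaskId To tsPerformerId) As Variant
    vStruct(tsTaskId) = m_lTaskId
    vStruct(tsDescription) = m_sDescription
    vStruct(tsPerformerId) = m_objPerformer.Id    ' only the primary key of the referenced BO
    AsStruct = vStruct
End Function

' Re-initializes the BO from a variant array on the server side
Public Sub FromStruct(vStruct As Variant)
    If UBound(vStruct) <> tsPerformerId Then Err.Raise vbObjectError + 2, "Task", "Invalid structure"
    m_lTaskId = vStruct(tsTaskId)
    m_sDescription = vStruct(tsDescription)
    Set m_objPerformer = New CPerformer           ' new performer BO with only its ID filled in
    m_objPerformer.Id = vStruct(tsPerformerId)
End Sub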
Overview
Often a copy of a business object 207 is made. Cloning is a way to implement this kind of functionality by encapsulating the copying process in the BO 207 itself. Controllers 206 that need to make tentative changes to a business object 207 simply ask the original BO 207 for a clone and make changes to the clone. If the user decides to save the changes, the controller 206 asks the original BO to update itself from the changes made to the clone.
Approach
Each BO 207 has a Clone method to return a shallow copy of itself. A shallow copy is a copy that doesn't include copies of the other objects that the BO 207 refers to, but only a copy of a reference to those objects. For example, when a task is cloned, the clone does not receive a brand new claim object; it receives a new reference to the existing claim object. Collections are the only exception to this rule—they are always copied completely since they contain references to other BOs.
Each BO 207 also has an UpdateFromClone method to allow it to “merge” a clone back into itself by changing its attributes to match the changes made to the clone.
Example
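A sketch of a shallow Clone and UpdateFromClone, assuming a CTask class with Description, DueDate and Claim properties (all names are illustrative):

Private m_sDescription As String
Private m_dtDueDate As Date
Private m_objClaim As Object                  ' the claim this task belongs to

' Returns a shallow copy: simple attributes are copied, object references are shared
Public Function Clone() As CTask
    Dim objCopy As New CTask
    objCopy.Description = m_sDescription
    objCopy.DueDate = m_dtDueDate
    Set objCopy.Claim = m_objClaim            ' a new reference to the existing claim, not a new claim
    Set Clone = objCopy
End Function

' Merges the changes made to a clone back into the original BO
Public Sub UpdateFromClone(objClone As CTask)
    m_sDescription = objClone.Description
    m_dtDueDate = objClone.DueDate
End Sub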
Overview
BOs 207 occasionally are filled only half-full for performance reasons. This is done for queries involving multiple tables that return large data sets. Using half-baked BOs 207 can be an error prone process, so it is essential that the half-baking of BOs is carefully managed and contained.
In most applications, there are two kinds of windows—search windows and edit/detail windows. Search windows are the only windows that half-bake BOs 207. Generally, half-baking only is a problem when a detail window expecting a fully-baked BO receives a half-baked BO from a search window.
Approach
Detail windows refresh the BOs 207 they are passed by the search windows, regardless of whether or not they were already fully-baked. This addresses the problems associated with passing half-baked BOs and also helps ensure that the BO 207 is up-to-date.
This approach requires another type of method (besides Get, Add, Update, and Delete) on the CCA 208: a Refresh method. This method is very similar to a Get method (in fact, it calls the same method on the server component) but is unique because it refreshes the data in objects that are already created. The detail window's controller 206 calls the appropriate CCA 208 passing the BO 207 to be refreshed, and may assume that, when control returns from the CCA 208, the BO 207 will be up-to-date and fully-baked.
This may not be necessary if two windows are very closely related. If the first window is the only window that ever opens the second, it is not necessary for the second window to refresh the BO 207 passed by the first window if it knows that the BO 207 is baked fully enough to be used.
CCAs
CCAs 208 are responsible for transforming data from rows and columns in a recordset to business objects 207, and for executing calls to server components 222 on behalf of controllers 206.
Retrieving Business Objects
Overview
After asking a component to retrieve data, the CCA 208 marshals the data returned by the component into business objects 207 that are used by the UI Controller 206.
Approach
The marshaling process is as follows:
CCAs 208 call GetRows on the recordset to get a copy of its data in a variant array in order to release the recordset as soon as possible. A method exists to coordinate the marshaling of each recordset returned by the component.
Only one recordset is coordinated in the marshaling process of a single method. A method exists to build a BO from a single row of a recordset. This method is called once for each row in the recordset by the marshaling coordination method.
Example
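A sketch of the two marshaling methods on a CCA, assuming a recordset of tasks whose column positions are known (names and positions are illustrative):

' Coordinates the marshaling of a single recordset into a collection of task BOs
Private Function MarshalTasks(rsTasks As Object) As Collection
    Dim vRows As Variant
    Dim colTasks As New Collection
    Dim nRow As Integer

    vRows = rsTasks.GetRows                   ' copy the data so the recordset can be released immediately
    Set rsTasks = Nothing

    For nRow = 0 To UBound(vRows, 2)
        colTasks.Add BuildTaskFromRow(vRows, nRow)
    Next
    Set MarshalTasks = colTasks
End Function

' Builds one BO from a single row of the copied recordset data
Private Function BuildTaskFromRow(vRows As Variant, nRow As Integer) As CTask
    Dim objTask As New CTask
    objTask.TaskId = vRows(0, nRow)           ' GetRows returns a (field, row) variant array
    objTask.Description = vRows(1, nRow)
    Set BuildTaskFromRow = objTask
End Function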
Overview
The logic to refresh BOs 207 is very similar to the logic to create them in the first place. A “refresh” method is very similar to a “get” method, but must use BOs 207 that already exist when carrying out the marshalling process.
Example
Overview
Controllers 206 are responsible for creating and populating new BOs 207. To add a BO 207 to the database, the controller 206 must call the CCA 208, passing the business object 207 to be added. The CCA 208 calls the AsStruct method on the BO 207, and passes the BO structure down to the component to be saved. It then updates the BO 207 with the ID and timestamp generated by the server. Note that the add method on the CCA 208 does not return a new object; it simply updates the BO 207 that was passed in.
Example
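A sketch of an add method on a CCA, assuming the server component returns the generated ID and timestamp in a variant array (the ProgID, method name and slot positions are illustrative):

' Adds a new task: sends its structure to the server and updates the BO with server-generated values
Public Sub AddTask(vMsg As Variant, objTask As CTask)
    Dim objServer As Object
    Dim vResult As Variant

    Set objServer = CreateObject("ServerComponents.CTaskComp")
    vResult = objServer.AddTask(vMsg, objTask.AsStruct)

    objTask.TaskId = vResult(0)        ' ID generated by the server
    objTask.Timestamp = vResult(1)     ' timestamp generated by the server
End Sub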
Overview
The update process is very similar to the add process. The only difference is that the server component only returns a timestamp, since the BO already has an ID.
Example
Deleting
Overview
Like the add and the update methods, delete methods take a business object 207 as a parameter and do not have a return value. The delete method does not modify the object 207 it is deleting since that object will soon be discarded.
Example
Server components 222 have two purposes: enforcing business rules and carrying out data access operations. They are designed to avoid duplicating logic between functions.
Designing for Reuse
Enforcing Encapsulation
Each server component 222 encapsulates a single database table or a set of closely related database tables. As much as possible, server components 222 select or modify data from a single table. A component occasionally selects from a table that is “owned” or encapsulated by another component in order to use a join (for efficiency reasons). A server component 222 often collaborates with other server components to complete a business transaction.
Partitioning Logic between Multiple Classes
If the component becomes very large, it is split into more than one class. When this occurs, it is divided into two classes—one for business rules and one for data access. The business rules class implements the component's interface and utilizes the data access class to modify data as needed.
Example
With the exception of “Class_Initialize”, “Class_Terminate”, and methods called within an error handler, every function or subroutine has a user defined ‘On Error GoTo’ statement. The first line in each procedure is: On Error GoTo ErrorHandler. A line near the end of the procedure is given the label “ErrorHandler”. (Note that because line labels in VB 5.0 have procedure scope, each procedure can have a line labeled “ErrorHandler”). The ErrorHandler label is preceded by an Exit Sub or Exit Function statement to avoid executing the error handling code when there is no error.
Errors are handled differently based on the module's level within the application (i.e., user interface modules are responsible for displaying error messages to the user).
All modules take advantage of technical architecture to log messages. Client modules that already have a reference to the architecture call the Log Manager object directly. Because server modules do not usually have a reference to the architecture, they use the LogMessage( ) global function compiled into each server component.
Any errors that are raised within a server component 222 are handled by the calling UI controller 206. This ensures that the user is appropriately notified of the error and that business errors are not translated to unhandled fatal errors.
All unexpected errors are handled by a general error handler function in the global Architecture module in order to always gracefully shut down the application.
Server Component Errors
The error handler for each service module contains a Case statement to check for all anticipated errors. If the error is not a recoverable error, the logic that handles it first tells MTS about the error by calling GetObjectContext.SetAbort( ). Next, the global LogMessage( ) function is called to log the short description intended for level one support personnel. Then the LogMessage( ) function is called a second time to log the detailed description of the error for upper level support personnel. Finally, the error is re-raised, so that the calling function will know the operation failed.
A default Case condition is coded to handle any unexpected errors. This logs the VB generated error then raises it. A code sample is provided below:
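A sketch of such a handler, with illustrative error constants, class/method names and message text; vMsg is assumed to be a parameter of the enclosing procedure:

Public Function UpdateTask(vMsg As Variant, vTaskStruct As Variant) As Variant
    On Error GoTo ErrorHandler
    ' ... perform the update ...
    GetObjectContext.SetComplete
    Exit Function
ErrorHandler:
    Select Case Err.Number
        Case cmErrDuplicateKey, cmErrRecordNotFound            ' anticipated errors
            GetObjectContext.SetAbort                          ' first, tell MTS the transaction failed
            LogMessage vMsg, cmSeverityError, "CTaskComp", "UpdateTask", App.Major & "." & App.Minor, Err.Number, "Short description for level one support"
            LogMessage vMsg, cmSeverityError, "CTaskComp", "UpdateTask", App.Major & "." & App.Minor, Err.Number, "Detailed description for upper level support"
            Err.Raise Err.Number, Err.Source, Err.Description  ' re-raise so the caller knows the operation failed
        Case Else                                              ' default case: log the VB generated error, then raise it
            GetObjectContext.SetAbort
            LogMessage vMsg, cmSeverityFatal, "CTaskComp", "UpdateTask", App.Major & "." & App.Minor, Err.Number, Err.Description
            Err.Raise Err.Number, Err.Source, Err.Description
    End Select
End Function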
Following is an example of how error handling in the task component is implemented when an attempt is made to reassign a task to a performer that doesn't exist. Executing SQL to reassign a task to a non-existent performer generates a referential integrity violation error, which is trapped in this error handler:
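A sketch of how that trap might look, with illustrative constants for the referential integrity error number and for the business error raised in its place:

Public Sub ReassignTask(vMsg As Variant, lTaskId As Long, lPerformerId As Long)
    On Error GoTo ErrorHandler
    ' ... execute the SQL that reassigns the task ...
    GetObjectContext.SetComplete
    Exit Sub
ErrorHandler:
    Select Case Err.Number
        Case cmErrRIViolation            ' referential integrity violation: the performer does not exist
            GetObjectContext.SetAbort
            LogMessage vMsg, cmSeverityError, "CTaskComp", "ReassignTask", App.Major & "." & App.Minor, Err.Number, "Attempt to reassign a task to a performer that does not exist"
            Err.Raise cmErrPerformerNotFound, "CTaskComp.ReassignTask", "The performer does not exist"
        Case Else                        ' unexpected errors are handled as in the default case above
            GetObjectContext.SetAbort
            LogMessage vMsg, cmSeverityFatal, "CTaskComp", "ReassignTask", App.Major & "." & App.Minor, Err.Number, Err.Description
            Err.Raise Err.Number, Err.Source, Err.Description
    End Select
End Sub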
All CCI's, CCA's, Business Objects, and Forms raise any error that is generated. A code sample is provided below:
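A sketch of the pass-through handler used at those levels (method and parameter names are illustrative):

Public Function GetTask(vMsg As Variant, lTaskId As Long) As CTask
    On Error GoTo ErrorHandler
    ' ... normal processing ...
    Exit Function
ErrorHandler:
    ' Raise the error again so it propagates up to the UI controller
    Err.Raise Err.Number, Err.Source, Err.Description
End Function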
The user interface controllers 206 handle any errors generated and passed up from the lower levels of the application. UI modules are responsible for handling whatever errors might be raised by server components 222 by displaying a message box to the user.
Any error generated in the UI's is also displayed to the user in a dialog box. Any error initiated on the client is logged using the LogMessage( ) procedure. Errors initiated on the server will already have been logged and therefore do not need to be logged again.
All unexpected errors are trapped by a general error method at the global architecture module. Depending on the value returned from this function, the controller may resume on the statement that triggered the error, resume on the next statement, call its Quiesce function to shut itself down, or call a Shutdown method on the application object to shutdown the entire application.
No errors are raised from this level of the application, since controllers handle all errors. A code sample of a controller error handler is provided below:
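A sketch of a controller error handler consistent with the behavior described above; the signature of the general handler and the returned constants are assumptions:

Public Sub SaveTask(vMsg As Variant)
    On Error GoTo ErrorHandler
    ' ... save processing ...
    Exit Sub
ErrorHandler:
    Select Case HandleError(vMsg, Err.Number, Err.Description)   ' general handler in the global Architecture module
        Case cmResume                   ' retry the statement that triggered the error
            Resume
        Case cmResumeNext               ' skip the failing statement and continue
            Resume Next
        Case cmQuiesce                  ' shut down this controller only
            Quiesce
        Case cmShutdown                 ' shut down the entire application
            objApp.Shutdown             ' App Object 202
    End Select
End Sub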
The CBAM application is constructed so that it can be localized for different languages and countries with a minimum of effort or conversion.
Requirements and Scope
The CBAM architecture provides support for certain localization features:
Localizable Resource Repository;
Flexible User Interface Design;
Date Format Localization; and
Exposure of Windows Operating System Localization Features.
Localization Approach Checklist
The CBAM application has an infrastructure to support multiple languages. The architecture acts as a centralized literals repository via its Codes Table Approach.
The Codes Tables are designed with localization in mind. Each row in the codes table contains an associated language identifier. Via the language identifier, any given code can support values of any language.
Flexible Interface 400
A flexible user interface 400 and code make customization easy.
Generic graphics are used and overcrowding is avoided to create a user interface which is easy to localize.
Data Localization
Language localization settings affect the way dates are displayed on UIs (user interfaces). The default system display format differs for different languages/countries. For example:
-
- English (United States) displays “mm/dd/yy” (e.g., “5/16/98”)
- English (United Kingdom) displays “dd/mm/yy” (e.g., “16/5/98”).
The present invention's UIs employ a number of third-party date controls, including Sheridan Calendar Widgets (from Sheridan Software), which allow developers to set predefined input masks for dates (via the controls' Property Pages; the property in this case is “Mask”).
Although the Mask property can be manipulated, the default setting is preferably accepted (the default setting for Mask is “0—System Default”; it is set at design time). Accepting the default system settings eliminates the need to code for multiple locales (with some possible exceptions), does not interfere with intrinsic Visual Basic functions such as DateAdd, and allows dates to be formatted as strings for use in SQL.
The test program illustrated below shows how a date using the English (United Kingdom) default system date format is reformatted to a user-defined format (in this case, a string constant for use with DB2 SQL statements):
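A sketch of such a test, assuming the intrinsic VB Format$ function and an illustrative DB2-style format string:

Private Sub TestDateFormat()
    Dim dtToday As Date
    Dim sForSQL As String

    dtToday = Date                               ' displayed as "16/5/98" under English (United Kingdom)
    sForSQL = Format$(dtToday, "yyyy-mm-dd")     ' reformatted to a constant format for DB2 SQL
    Debug.Print sForSQL                          ' e.g. "1998-05-16"
End Sub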
The CBAM architecture exposes interface methods on the RegistryService object to access locale specific values which are set from the control panel.
The architecture exposes an API from the RegistryService object which allows access to all of the information available in the control panel. Shown below is the signature of the API:
-
- GetRegionalInfo (Info As RegionalInfo) As String
- Where RegionalInfo can be any of the values in the table below:
The Logical Unit of Work (LUW) pattern enables separation of concerns between UI Controllers 206 and business logic.
Overview
Normally, when a user opens a window, makes changes, and clicks OK or Save, a server component 222 is called to execute a transaction that will save the user's changes to the database. Because of this, it can be said that the window defines the boundary of the transaction, since the transaction is committed when the window closes.
The LUW pattern is useful when database transactions span windows. For example, if a user begins editing data on one window and then, without saving, opens another window and begins editing data on that window, the save process involves multiple windows. Neither window controller 206 can manage the saving process, since data from both windows must be saved as part of an indivisible unit of work. Instead, a LUW object is introduced to manage the saving process.
The LUW acts as a sort of “shopping bag”. When a controller 206 modifies a business object 207, it puts it in the bag to be paid for (saved) later. It might give the bag to another controller 206 to finish the shopping (modify more objects), and then to a third controller who pays (asks the LUW to initiate the save).
Approach
Controllers 206 may have different levels of LUW “awareness”:
Requires New: always creates a new LUW;
Requires: requires an LUW, and creates a new LUW only if one is not passed by the calling controller;
Requires Existing: requires an LUW, but does not create a new LUW if one is not passed by the calling controller. Raises an error if no LUW is passed; and
Not Supported: is not capable of using an LUW.
Controllers 206 that always require a new LUW create that LUW in their ArchInitClass function during initialization. They may choose whether or not to involve other windows in their LUW. If it is desirable for another window to be involved in an existing LUW, the controller 206 that owns the LUW passes a reference to that LUW when it calls the App Object 202 to open the second window. Controllers 206 that require an LUW or require an existing LUW accept the LUW as a parameter in the ArchInitClass function.
LUWs contain all the necessary logic to persist their “contents”—the modified BOs 207. They handle calling methods on the CCA 208 and updating the BOs 207 with new IDs and/or timestamps.
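A sketch of an LUW class, assuming the modified BOs expose an IsNew flag and the CCAs expose Add and Update methods (names are illustrative):

' "Shopping bag" of modified business objects
Private m_colModifiedBOs As New Collection

Public Sub AddToLUW(objBO As Object)
    m_colModifiedBOs.Add objBO
End Sub

' Persists the contents of the LUW and leaves the BOs updated with new IDs and/or timestamps
Public Sub Save(vMsg As Variant, objCCA As Object)
    Dim objBO As Object
    For Each objBO In m_colModifiedBOs
        If objBO.IsNew Then
            objCCA.Add vMsg, objBO          ' CCA updates the BO with the generated ID and timestamp
        Else
            objCCA.Update vMsg, objBO       ' CCA updates the BO with the new timestamp
        End If
    Next
End Sub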
Architecture API Hierarchy
Following is an overview of the architecture object model, including a description of each method and the parameters it accepts. Additional sections address the concepts behind specific areas (code caching, message logging, and data access) in more detail.
Arch Object
The following are APIs located on the Arch Object 200 which return either a retrieved or created instance of an object which implements the following interfaces:
CodesMan( ) 500;
TextMan( ) 502;
IdMan( ) 504;
RegMan( ) 506;
LogMan( ) 508;
ErrMan( ) 510;
UserMan( ) 512; and
SecurityMan( ) 514.
AsMsgStruct( )
This method on the Arch Object returns a variant structure to pass along a remote message.
Syntax:
Example:
The following are APIs located on the interface of the Arch Object 200 named CodesMan 500:
CheckCacheFreshness( )
Checks whether the cache has expired and, if so, refreshes it.
Syntax:
Example:
-
- CheckCacheFreshness
FillControl( )
This API is used to fill listboxes or comboboxes with values from a list of CodeDecodes. Returns a collection for subsequent lookups to Code objects used to fill controls.
Syntax:
Parameters:
Example:
FilterCodes( )
Returns a collection of code/decodes that are filtered using their effective and expiration dates, based on the nCodeStatus value passed from the FillControl method.
Syntax:
Parameters:
Example:
-
- Set colFilteredCodes=FilterCodes(colCodes, nCodeStatus)
GetCategoryCodes( )
Returns a collection of CCode objects given a valid category
Syntax:
Parameters:
Example:
GetCodeObject( )
Returns a valid CCode object given a specific category and code.
Syntax:
Parameters:
Example:
GetResourceString( )
Returns a string from the resource file given a specific string ID.
Syntax:
Parameters:
-
- lStringId: The id associated with the string in the resource file.
Example:
-
- smsg=arch.CodesMan.GetResourceString(CLng(vMessage))
GetServerDate( )
Returns the date from the server.
Syntax:
Example:
-
- SetServerDate CCA.GetServerDate
RefreshCache( )
Refreshes all of the code objects in the cache.
Syntax:
Example:
-
- m_Cache.RefreshCache
RemoveValidCodes( )
Removes all valid codes from the passed in assigned codes collection, which is used to see which codes are assigned and not valid.
Syntax:
Parameters:
Example:
RemoveValidCodes codCode.Code, colPassedInAssignedCodes
SetServerDate( )
Sets the server date.
Syntax:
Parameters:
-
- dtServerDate: Date of Server.
Example:
-
- SetServerDate CCA.GetServerDate
The following are APIs located on the interface of the Arch Object 200 named TextMan 502.
PairUpAposts( );
PairUpAmps( ); and
MergeParms( ).
PairUpAposts( )
Pairs up apostrophes in the passed string.
Syntax:
Parameters:
-
- sOriginalString: string passed in by the caller
Example:
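A usage sketch, assuming the method doubles each apostrophe so the result can be embedded safely in a SQL string literal (the table and column names are illustrative):

Dim sSafe As String
Dim sSQL As String

sSafe = objArch.TextMan.PairUpAposts("O'Brien")    ' assumed to yield "O''Brien"
sSQL = "SELECT * FROM Performer WHERE T_Last_Name = '" & sSafe & "'"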
PairUpAmps( )
Pairs up ampersands in the passed string.
Syntax:
Parameters:
-
- sOriginalString: string passed in by the caller
Example:
MergeParms( )
Merges string with the passed parameters collection.
Syntax:
Parameters:
Example:
The following are APIs located on the interface of the Arch Object 200 named IdMan 504:
GetGUID( );
GetSequenceID( );
GetTimeStamp( );
GetTrackingNbr( ); and
GetUniqueId( ).
GetGUID( )
Syntax:
Example:
GetSequenceId( )
Syntax:
Parameters:
Example:
-
- frmCurrentForm.txtTemplateNumber=objArch.IdMan.GetSequenceId(cmCountFC)
GetTimeStamp( )
Syntax:
Example:
GetTrackingNbr( )
Syntax:
Example:
GetUniqueId( )
Syntax:
Example:
The following are APIs located on the interface of the Arch Object 200 named RegMan 506:
GetCacheLife( );
GetClientDSN( );
GetComputerName( );
GetDefaultAndValidate( );
GetFCArchiveDirectory( );
GetFCDistributionDirectory( );
GetFCMasterDirectory( );
GetFCUserDirectory( );
GetFCWorkingDirectory( );
GetHelpPath( );
GetLocalInfo( );
GetLogLevel( );
GetRegionalInfo( );
GetRegValue( );
GetServerDSN( );
GetSetting( );
GetTimerLogLevel( );
GetTimerLogPath( ); and
GetUseLocalCodes( ).
GetCacheLife( )
Syntax:
Example:
GetClientDSN( )
Syntax:
Example:
GetComputerName( )
Syntax:
Example:
GetDefaultAndValidate( )
Syntax:
Parameters:
Example:
GetFCArchiveDirectory( )
Syntax:
Example:
GetFCDistributionDirectory( )
Syntax:
Example:
GetFCMasterDirectory( )
Syntax:
Example:
GetFCUserDirectory( )
Syntax:
Example:
GetFCWorkingDirectory( )
Syntax:
Example:
GetHelpPath( )
Syntax:
Example:
GetLocalInfo( )
Syntax:
Example:
GetLogLevel( )
Syntax:
Example:
GetRegionalInfo( )
Allows access to all locale specific values which are set from control panel.
Syntax:
Parameters:
-
- Info: string containing the regional information. Several of the valid constants include:
Example:
GetRegValue( )
Syntax:
Example:
GetServerDSN( )
Syntax:
Example:
GetSetting( )
Get setting from the registry.
Syntax:
Parameters:
Example:
-
- GetHelpPath=GetSetting(cmRegHelpPathKey)
GetTimerLogLevel( )
Syntax:
Example:
GetTimerLogPath( )
Syntax:
Example:
GetUseLocalCodes( )
Syntax:
Example:
LPSTRToVBString( )
Extracts a VB string from a buffer containing a null terminated string.
Syntax:
The following are APIs located on the interface of the Arch Object 200 named LogMan 508:
LogMessage( );
WriteToDatabase( ); and
WriteToLocalLog( ).
LogMessage( )
Used to log the message. This function will determine where the message should be logged, if at all, based on its severity and the vMsg's log level.
Syntax:
Parameters:
Example:
WriteToDatabase( )
Used to log the message to the database on the server using the CLoggingComp. This function returns the TrackingId that is generated by the CLoggingObject.
Syntax:
Parameters:
Example:
WriteToLocalLog( )
Used to log the message to either a flat file, in the case of Windows 95, or the NT Event Log, in the case of Windows NT.
Syntax:
Parameters:
-
- msgToLog: a parameter containing the text of the message.
Example:
The following are APIs located on the interface of the Arch Object 200 named ErrMan 510:
HandleError( );
RaiseOriginal( );
ResetError( ); and
Update( ).
HandleError( )
This method is passed through to the general error handler in MArch.bas
Syntax:
Parameters:
RaiseOriginal( )
This method is used to reset the error object and raise the error again.
Syntax:
Example:
-
- objArch.ErrMan.RaiseOriginal
ResetError( )
This method is used to reset attributes.
Syntax:
Example:
-
- objArch.ErrMan.ResetError
Update( )
This method is used to update attributes to the values of VB's global error object.
Syntax:
Example:
-
- objArch.ErrMan.Update
The following are APIs located on the interface of the Arch Object 200 named UserMan 512.
UserId;
EmployeeId;
EmployeeName;
EmployeeFirstName;
EmployeeLastName;
EmployeeMiddleInitial;
GetAuthorizedEmployees;
IsSuperOf( );
IsRelativeOf( ); and
IsInRole( ).
UserId( )
Syntax:
Example:
EmployeeId( )
Syntax:
Example:
EmployeeName( )
Syntax:
Example:
EmployeeFirstName( )
Syntax:
Example:
EmployeeLastName( )
Syntax:
Example:
EmployeeMiddleInitial( )
Syntax:
Example:
GetAuthorizedEmployees( )
Creates a collection of the user's supervisees from the dictionary and returns it as GetAuthorizedEmployees, a collection of authorized employees.
Syntax:
Example:
IsSuperOf( )
Checks if the current user is supervisor of the passed in user.
Syntax:
Parameters:
-
- sEmpId: string containing Employee ID number
Example:
IsRelativeOf( )
Checks if the passed in user is a relative of the current user.
Syntax:
Parameters:
-
- sEmpId: string containing Employee ID number
Example:
IsInRole( )
Checks to see if the current user is in a certain role.
Syntax:
Parameters:
-
- sRole: string containing role
Example:
The following APIs are located on the interface of the Arch Object 200 named SecurityMan 514.
EvalClaimRules;
EvalFileNoteRules;
EvalFormsCorrRules;
EvalOrgRules;
EvalRunApplicationRules;
EvalRunEventProcRules;
EvalTaskTemplateRules;
EvalUserProfilesRules;
IsOperAuthorized;
GetUserId; and
OverrideUser.
EvalClaimRules( )
This API references business rules for Claim security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalFileNoteRules( )
This API references business rules for FileNote security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalFormsCorrRules( )
This API references business rules for Forms and Corr security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalOrgRules( )
This API references business rules for Organization security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalRunApplicationRules( )
This API references business rules for running the application and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalRunEventProcRules( )
This API references business rules for Event Processor security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalTaskTemplateRules( )
This API references business rules for Task Template security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
EvalUserProfileRules( )
This API references business rules for User Profile security checking and returns a boolean if rules are met.
Syntax:
Parameters:
Example:
GetUserId( )
Returns the login name/user id of the current user.
Syntax:
Example:
IsOperAuthorized( )
This API references business rules and returns a boolean determining whether the user has security privileges to perform a certain operation.
Syntax:
Parameters:
Example:
OverrideUser( )
Re-initializes for a different user.
Syntax:
Parameters:
Example:
Separate tables (CodesDecodes) are created for storing the static values.
Only the references to codes/decodes are stored in business tables (e.g., Task) which utilize these values. This minimizes the size of the business tables, since storing a Code value takes much less storage space than its corresponding Decode value (e.g., For State, “AL” is stored in each table row instead of the string “Alabama”).
CodeDecodes are stored locally on the client workstation in a local DBMS. On Application startup, a procedure to ensure the local tables are in sync with the central DBMS is performed.
Infrastructure Approach
The present invention's Code Decode Infrastructure 600 approach outlines the method of physically modeling codes tables. The model allows codes to be extended with no impact to the physical data model and/or application and architecture.
Infrastructure
The physical model of the CodeDecode infrastructure 600 does the following:
Supports relational functionality between CodeDecode objects;
Supports extensibility without modification to the DBMS or Application Architecture;
Provides a consistent approach for accessing all CodeDecode elements; and
Is easily maintainable.
These generic tables are able to handle new categories, and modification of relationships without a need to change the DBMS or CodeDecode Application Architecture.
Benefits of this model are extensibility and maintainability. This model allows for the modifications of code categories without any impact to the DBMS or the Application Architecture code. This model also requires fewer tables to maintain. In addition, only one method is necessary to access CodeDecodes.
Table Relationships and Field Descriptions:
-
- (pk) indicates a Primary Key
Code_Category 602
-
- C_Category (pk): The category number for a group of codes
- C_Cache (currently not utilized): Can indicate whether the category should be cached in memory on the client machine
- T_Category: A text description of the category (e.g., Application Task Types, Claim Status, Days of Week)
- D_Last_Update: The date any data within the given category was last updated; this field is used in determining whether to update a category or categories on the local data base
- Relationships
- A one-to-many relationship with the table Code (i.e., one category can have multiple codes)
Code 604
-
- C_Category (pk): The category number for a group of codes
- C_Code (pk): A brief code identifier (up to ten characters; the current maximum length being used is five characters)
- D_Effective: A date field indicating the code's effective date
- D_Expiration: A date field indicating the code's expiration date (the default is Jan. 1, 2999)
- Relationships
- A many-to-one relationship with Code_Category 602 (described above)
- A one-to-many relationship with Code_Relations 606 (a given category-and-code combination can be related to multiple other category-and-code combinations)
Code_Relations 606
-
- C_Category1 (pk): The first category
- C_Code1 (pk): The first code
- C_Category2 (pk): The related category
- C_Code2 (pk): The related code
- Relationships
- A many-to-one relationship with the Code table (each category and code in the Code table can have multiple related category-code combinations)
Code_Decode 608
-
- C_Category (pk): The category number for a group of codes
- C_Code (pk): A brief code identifier (up to ten characters; the current maximum length being used is five characters)
- N_Lang_ID (pk): A value indicating the local language setting (as defined in a given machine's Regional Settings). For example, the value for English (United States) is stored as 0409. Use of this setting allows for the storage and selection of text code descriptions based on the language chosen
- T_Short_Desc: An abbreviated textual description of C_Code
- T_Long_Desc: A full-length textual description of C_Code—what the user will actually see (e.g., Close Supplement—Recovery, File Note, Workers Compensation)
Enabling Localization
Codes have support for multiple languages. The key to this feature is storing a language identifier along with each CodeDecode value. This Language field makes up a part of the compound key of the Code_Decode table. Each Code API lookup includes a system level call to retrieve the Language system variable. This value is used as part of the call to retrieve the values given the correct language.
Maintaining Language Localization Setting
A link from the Language system environment variable to the language keys is stored on each CodeDecode. This value may be modified at any time by the user simply by editing the Regional Settings user interface available in the Microsoft Windows Control Panel folder.
Codes Expiration Approach
Handling time sensitive codes becomes an issue when filling controls with a list of values. One objective is to only allow the user to view and select appropriate entries. The challenge lies in being able to expire codes without adversely affecting the application. To achieve this, consideration is given to how each UI will decide which values are appropriate to show to the user given its current mode.
The three most common UI modes that affect time sensitive codes are Add Mode, View Mode, and Edit Mode.
Add Mode
In Add Mode, typically only valid codes are displayed to the user as selection options. Note that the constant, cmValidCodes, is the default and will still work the same even when this optional parameter is omitted.
-
- Set colStates=objArch.CodesMan.FillControl(frmCurrentForm.cboStates, cmCatStates, cmLongDecode, cmValidCodes)
View Mode
In View Mode, the user is typically viewing results of historical data without direct ability to edit. Editing selected historical data launches another UI. Given this, the controls are filled with valid and expired codes, or in other words, non-pending codes.
-
- Set colStates=objArch.CodesMan.FillControl(frmCurrentForm.cboStates, cmCatStates, cmLongDecode, cmNonPendingCodes)
Edit Mode
In Edit Mode, changes are allowed to valid codes, but expired codes are also displayed if they are already assigned to the entity.
-
- Dim colAssignedCodes As New cCollection
- colAssignedCodes.Add HistoricalAddress.State
- Set colStates=objArch.CodesMan.FillControl(frmCurrentForm.cboStates, cmCatStates, cmLongDecode, cmValidCodes, colAssignedCodes)
The Local CodeDecode tables are kept in sync with central storage of CodeDecodes. The architecture is responsible for making a check to see if there are any new or updated code decodes from the server on a regular basis. The architecture also, upon detection of new or modified CodeDecode categories, returns the associated data, and performs an update to the local database.
After an API call, a check is made to determine if the Arch is initialized 702. If it is, a check is made to determine if the Freshness Interval has expired 704. If the Freshness Interval has not expired, the API call is complete 706. However, if either the Arch is not initialized or the Freshness Interval has expired, then the “LastUpdate” fields for each category are read from the CodeDecode and passed to the server 708. Then new and updated categories are read from the database 710. Finally, the local database is updated 712.
Code Access APIs
The following are APIs located on the interface of the Arch Object 200 named CodesMan 500.
GetCodeObject(nCategory, sCode);
GetCategoryCodes(nCategory);
FillControl(ctlControl, nCategory, nFillType, [nCodeStatus], [colAssignedCodes]).
GetCodeObject: Returns a valid CCode object given a specific category and code.
Syntax:
-
- GetCodeObject(nCategory, sCode)
Parameters:
-
- nCategory: The integer based constant which classified these CodeDecodes from others.
- sCode: A string indicating the Code attribute of the CodeDecode object.
Example:
GetCategoryCodes: Returns a collection of CCode objects given a valid category
Syntax:
-
- GetCategoryCodes(nCategory)
Parameters:
-
- nCategory: The integer based constant which classified these CodeDecodes from others.
Example:
FillControl: This API is used to fill listboxes or comboboxes with values from a list of CodeDecodes. Returns a collection for subsequent lookups to Code objects used to fill controls.
Syntax:
-
- FillControl(ctlControl, nCategory, nFillType, [nCodeStatus], [colAssignedCodes])
Parameters:
-
- ctlControl: A reference to a passed in listbox or combobox.
- nCategory: The integer based constant which classified these CodeDecodes from others.
- nFillType: The attribute of the CodeDecode which you want to fill. Valid values include:
-
- nCodeStatus: Optional value which filters the Code Decodes according to their Effective and Expiration dates. Valid constants include the following:
-
- colAssignedCodes: Used when filling a control which should fill and include assigned values.
Example:
-
- ‘Declare an instance variable for States collection on object
- Private colStates As CCollection
- ‘Call FillControl API, and set local collection inst var to collection of codes which were used to fill the control. This collection will be used for subsequent lookups.
-
- ‘Below shows an example of looking up the Code value for the currently selected state.
Code objects returned via the “GetCodeObject” or “GetCategoryCodes” APIs can have relations to other code objects. This allows for functionality in which codes are associated to other individual code objects.
The APIs used to retrieve these values are similar to those on the CodesMan interface. The difference, however, is that the methods are called on the Codes object rather than the CodesManager interface. Listed below again are the APIs.
GetCodeObject(nCategory, sCode);
GetCategoryCodes(nCategory);
FillControl(ctlControl, nCategory, nFillType, [nCodeStatus], [colAssignedCodes]).
Given below is some sample code to illustrate how these APIs are also called on Code objects.
GetCodeObject Example:
GetCategory Example:
FillControl Example:
The message logging architecture allows message logging in a safe and consistent manner. The interface to the message logging component is simple and consistent, allowing message logging on any processing tier. Both error and informational messages are logged to a centralized repository.
Abstracting the message logging approach allows the implementation to change without breaking existing code.
Best Practices
Messages are always logged by the architecture when an unrecoverable error occurs (i.e., the network goes down) and it is not explicitly handled. Message logging may be used on an as-needed basis to facilitate the diagnosis and fixing of SIRs. This sort of logging is especially useful at points of integration between classes and components. Messages logged for the purpose of debugging have a severity of Informational, so as not to be confused with legitimate error messages.
Usage
A message is logged by calling the LogMessage( ) function on the architecture.
Description of Parameters:
vMsg: the standard architecture message
lSeverity: the severity of the message
sClassName: the name of the class logging the message
sMethodName: the name of the method logging the message
sVersion: the version of the binary file (EXE or DLL) that contains the method logging the message
lErrorNum: the number of the current error
sText: an optional parameter containing the text of the message. If omitted, the text will be looked up in a string file or the generic VB error description will be used.
lLoggingOptions: an optional parameter containing a constant specifying where to log the message (i.e., passing cmLogToDBAndEventViewer to LogMessage will log the error to the database and the event viewer.)
Logging Levels
Before a message is logged, its severity is compared to the log level of the current machine. If the severity of the message is less than or equal to the log level, then the message is logged.
Valid values for the log level are defined as an enumeration in VB. They include:
Example
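A sketch of a typical call using the parameters described for LogMessage( ); the severity constant and the class/method names are illustrative:

' Log a recoverable error to both the database and the event viewer
LogMessage vMsg, cmSeverityError, "CClaimCCA", "RetrieveClaim", App.Major & "." & App.Minor, Err.Number, Err.Description, cmLogToDBAndEventViewer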
The database log table is composed of the following fields:
Messages are always logged to the application server's Event Log; however this is not necessarily true for the database as noted by the optional parameter passed to LogMessage, lLoggingOptions. An administrator with the appropriate access rights can connect to the MTS application server remotely and view its Event Log. Only one MTS package contains the Event Log Component, so that errors will all be written to the same application server Event Log.
Events logged via Visual Basic always have “VBRuntime” as the source. The Computer field is automatically populated with the name of the computer that is logging the event (i.e., the MTS application server) rather than the computer that generated the event (typically a client computer).
The same event details that are written to the database are formatted into a readable string and written to the log. The text “The VB Application identified by . . . Logged:” is automatically added by VB; the text that follows contains the details of the message.
Data Access
All but a few exceptional cases use the “ExecuteQuery” API. This API covers singular database operations in which there exists a single input and a single output. Essentially, only certain batch type operations are excluded.
The Data Access Framework serves the purposes of performance, consistency, and maintainability.
Performance
The “ExecuteQuery” method incorporates usage patterns for using ADO in an efficient manner. Examples of these patterns include utilization of disconnected recordsets, and explicitly declaring optional parameters which result in the best performance.
Consistency
This method provides a common interface for development of data access. Given a simple and stable data access interface, best practices can be developed and disseminated.
Maintainability
Since the method is located in a single location, it is very modularized and can be maintained with little impact to its callers.
Application servers often use the ActiveX Data Objects (ADO) data access interface. This allows for a simplified programming model as well as enabling the embodiments to utilize a variety of data sources.
The “ExecuteQuery” Method
Overview
The “ExecuteQuery” method should be used for most application SQL calls. This method encapsulates functionality for using ADO in an effective and efficient manner. This API applies to situations in which a single operation needs to be executed which returns a single recordset object.
Syntax
Parameters
-
- vMsg
- This parameter is the TechArch struct. This is used as a token for information capture such as performance metrics, error information, and security.
- nTranType
- An application defined constant which indicates which type of operation is being performed. Values for this parameter can be one of the following constants:
- cmSelect
- cmSelectLocal
- cmUpdate
- cmInsert
- cmDelete
- sSQL
- String containing the SQL code to be performed against the DBMS.
- nMaxRows (Optional)
- Integer value which represents the maximum number of records that the recordset of the current query will return.
- adoTransConn (Optional)
- An ADO Connection object. This is created and passed into execute query for operations which require ADO transactional control (see “Using Transactions” section)
- args (Optional)
- A list of parameters to be respectively inserted into the SQL statement.
Implementation
In one embodiment of the present invention the “ExecuteQuery” method resides within the MservArch.bas file. This file should be incorporated into all ServerComponent type projects. This will allow each server component access to this method.
Note: Since this method is a public method in a “bas” module, it is globally available from anywhere in the project.
ExecuteQuery utilizes disconnected recordsets for “Select” type statements. This requires that the clients, particularly the CCA's contain a reference to ADOR, ActiveX Data Object Recordset. This DLL is a subset of the ADODB DLL. ADOR contains only the recordset object.
Using disconnected recordsets allows marshalling of recordset objects from server to client. This performs much more efficiently than the variant array which is associated with using the “GetRows” API on the server. This performance gain is especially apparent when the application server is under load of a large number of concurrent users.
Sample from Client Component Adapter (CCA)
Sample from Server Component
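A sketch of a server component select that uses the ExecuteQuery parameters described above; the table, column and method names are illustrative, and the way the trailing argument is merged into the SQL is an assumption:

Public Function GetTasksForClaim(vMsg As Variant, lClaimId As Long) As ADODB.Recordset
    On Error GoTo ErrorHandler
    Dim sSQL As String

    sSQL = "SELECT * FROM Task WHERE Claim_Id = ?"
    ' ExecuteQuery returns a disconnected recordset that can be marshalled back to the CCA
    Set GetTasksForClaim = ExecuteQuery(vMsg, cmSelect, sSQL, , , lClaimId)
    Exit Function
ErrorHandler:
    GetObjectContext.SetAbort
    Err.Raise Err.Number, Err.Source, Err.Description
End Function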
Code Clip from ExecuteQuery (Select Section)
Inserting records requires certain information pertaining to optimistic locking. On the server a unique value is requested to indicate the last time modified. This unique value is returned back to the requestor so that it can be used in later database operations.
Sample from Client Component Adapter (CCA)
Sample from Server Component
Code Clip from ExecuteQuery (Insert Section)
Updating records requires certain information pertaining to optimistic locking. On the server a unique value is requested to indicate the last time modified. Also the last read timestamp is used to validate, during the update, that the record has not been modified since last time read.
Sample from Client Component Adapter (CCA)
Sample Code Clip from Server Component
Code Clip from ExecuteQuery (Update Section)
In deleting records the last read timestamp is used to validate, during the delete, that the record has not been modified since last time read.
Sample from Client Component Adapter (CCA)
Sample from Server Component
Code Clip from ExecuteQuery (Delete Section)
Database Locking ensures the integrity of the database in a multi-user environment. Locking prevents the common problem of lost updates from multiple users updating the same record.
Solution Options
Pessimistic Locking
This policy of locking allows the first user to have full access to the record while following users are denied access or have read only access until the record is unlocked. There are drawbacks to this method of locking. It is a method that is prone to deadlocks on the database as well as poor performance when conflicts are encountered.
Optimistic Locking
The optimistic approach to record locking is based on the assumption that it is not normal processing for multiple users to both read and update records concurrently. This situation is treated as exceptional processing rather than normal processing. Locks are not actually placed on the database at read time. A timestamp mechanism is used at time of update or delete to ensure that another user has not modified or deleted the record since you last read the record.
A preferred embodiment of the present invention uses an optimistic locking approach to concurrency control. This ensures database integrity as well as the low overhead associated with this form of locking. Other benefits to this method are increased availability of records to multiple users, and a minimization of database deadlocks.
Table candidates for concurrency control are identified during the “Data Modeling Exercise”. Only tables which are updated concurrently require the optimistic locking mechanism. Once these are identified, the following is added to the application.
Add “N_Last_Updt” field to table in database;
Error Handling routines on those operations which modify or delete from this table; and
Display/Notification to user that the error has occurred.
Usage
The chart below describes the roles of the two basic types of components to enable optimistic locking.
Assumption: The optimistic locking field is of type Date and is named “N_Last_Updt”
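A sketch of an optimistic update on the server, assuming ExecuteQuery merges the trailing arguments into the SQL; for illustration the new timestamp is simply taken from the system clock, and the table and column names are illustrative:

Public Sub UpdateTaskDescription(vMsg As Variant, lTaskId As Long, sNewDescription As String, dtLastRead As Date)
    On Error GoTo ErrorHandler
    Dim sSQL As String
    Dim dtNewStamp As Date

    dtNewStamp = Now                      ' new "last updated" value written with the change
    sSQL = "UPDATE Task SET T_Description = ?, N_Last_Updt = ? " & _
           "WHERE Task_Id = ? AND N_Last_Updt = ?"
    ' The update affects no rows if another user changed or deleted the record since it was last read;
    ' the component then raises an error so the UI can notify the user
    ExecuteQuery vMsg, cmUpdate, sSQL, , , sNewDescription, dtNewStamp, lTaskId, dtLastRead
    Exit Sub
ErrorHandler:
    GetObjectContext.SetAbort
    Err.Raise Err.Number, Err.Source, Err.Description
End Sub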
When retrieving records from a database, if the search criteria is too broad, the amount of data required to be retrieved from the database and passed across the network will affect user perceived performance. Windows requesting such data will be slow to paint and searches will be slow. The formation of the database queries is made such that a workable amount of data is retrieved. There are a few options for addressing the problems that occur from large result sets. The options are given below in order of preference.
Redesign the interface/controller to return smaller result sets. By designing the controllers that present the database queries intelligently, the queries that are presented to the database server do not return a result set that is large enough to affect user perceived performance. In essence, the potential to retrieve too many records indicates that the UIs and the controllers should be designed differently. An example of a well designed Search UI is one where the user is required to enter minimum search criteria to prevent an excessively large result set.
Have Scrollable Result Sets. The scrolling retrieval of a large result set is the incremental retrieval of a result subset repeated as many times as the user requests or until the entire result set is obtained. Results are retrieved by the Bounded Query Approach where the first record is determined by a where clause with calculated values.
Scrollable Result Set Client Requirements
Preferred UI
The preferred displays are as follows:
Returned results are displayed in a GreenTree List Box;
An action button with the label More . . . is provided for the user to obtain the remaining results;
The More button is enabled when the user has performed an initial search and there are still results to be retrieved;
The More button is disabled when there are no more results to retrieve;
The List Box and the Action button are contained within a group box to provide a visual association between the button and the List Box.
Bounded Query
Queries that are implemented with limited result sets are sent to the server. The server implements the executeQuery method to retrieve the recordset as usual. Limited result queries have an order by clause that includes the business required sort order along with a sufficient number of columns to ensure that all rows can be uniquely identified. The recordset is limited by the nMaxRows variable passed from the client, incremented to obtain the first row of the next result set. The return from the component is a recordset, just the same as with a query that is not limited. The CCA 208 creates the objects and passes these back to the controller 206. The Controller 206 adds this returned collection of objects to its collection of objects (an accumulation of previous results) and, while doing so, performs the comparison of the last object to the first object of the next result set. The values necessary to discriminate the two rows are added to the variant array that is passed to the component for the subsequent query.
The Controller 206 on the client retains the values for nMaxRows, the initial SQL statement, and an array of values to discern between the last row of the previous query and the first row of the next query. The mechanism by which the controller 206 is aware that there are more records to retrieve is by checking whether the number of results is one greater than the max number of rows. To prevent the retrieval of records past the end of file, the controller 206 disables these functions on the UI. For example, a command button More on the UI, used to request the data, is disabled when the number of objects returned is less than nMaxRows+1.
Application Responsibility
Server
The Server component is responsible for creating a collection of arguments and appending the SQL statement to add a where clause that will be able to discriminate between the last row of the previous query and the first row of the next.
CCA
The CCA 208 processes the recordset into objects as in non limited queries. The CCA 208 forwards the variant array passed from the Controller 206 to identify the limited results.
Controller
The controller 206 has the responsibility of disabling the More control when the end of file has been reached. The controller 206 populates the variant array (vKeys) with the values necessary to determine start of next query.
Example
A CCA 208 is coded for a user defined search which has the potential to return a sizable result set. The code example below implements the Bounded Query approach.
On the Server the developer codes the query as follows:
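A sketch of the idea, requesting one row more than nMaxRows so the client can detect whether more results remain (the server method name and the way colArgs is passed to ExecuteQuery are assumptions):

Public Function SearchPersons(vMsg As Variant, sSql As String, nMaxRows As Integer, Optional ByVal vKeys As Variant) As ADODB.Recordset
    On Error GoTo ErrorHandler
    Dim colArgs As Collection

    ' Append a WHERE clause that discriminates the last row already returned to the client
    Set colArgs = ArgumentsForBusinessObject(sSql, vKeys)

    ' Ask for one extra row so the caller can tell whether more results remain
    Set SearchPersons = ExecuteQuery(vMsg, cmSelect, sSql, nMaxRows + 1, , colArgs)
    Exit Function
ErrorHandler:
    GetObjectContext.SetAbort
    Err.Raise Err.Number, Err.Source, Err.Description
End Function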
To determine the additional where clause necessary to determine the starting point of the query, the following method is added:
On the CCA 208, allowance must be made for the passing of the vKeys
Public Function RetrieveBusinessObjects(vMsg As Variant, sSql As String, nMaxRows As Integer, Optional ByVal vKeys As Variant) As CCollection
The controller initiates the query and updates the variant array of keys and form 204 properties based on the return. In addition to the code shown for the example below, the More Control is enabled if the search is cleared.
During class initialization perform the following:
Search reset functionality is kept outside of initialization so this may be called from other parts of the application.
In order to retain the values to discriminate between the last row of the result set and the first row of the next the following method on the controller is used:
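A sketch of such a method, assuming nMaxRows, vKeys and vResults are instance variables on the controller, that CCollection supports For Each enumeration, and that the person BO exposes LastName, FirstName and UniqueId properties; the key trimming follows the behavior described in the example below:

' Instance variables retained by the controller (see above)
Private nMaxRows As Integer
Private vKeys(0 To 2) As Variant
Private vResults As New CCollection

' Accumulates returned objects and captures the key values needed to start the next query
Private Function ProcessObjectCollection(colReturned As CCollection) As Integer
    Dim nCount As Integer
    Dim objPerson As Object

    For Each objPerson In colReturned
        nCount = nCount + 1
        If nCount <= nMaxRows Then vResults.Add objPerson
        If nCount = nMaxRows Then
            ' Remember the last row that will be shown to the user
            vKeys(0) = objPerson.LastName
            vKeys(1) = objPerson.FirstName
            vKeys(2) = objPerson.UniqueId
        ElseIf nCount = nMaxRows + 1 Then
            ' If the last names differ, the last name alone discriminates the two rows
            If objPerson.LastName <> vKeys(0) Then
                vKeys(1) = Empty
                vKeys(2) = Empty
            End If
        End If
    Next
    ProcessObjectCollection = nCount
End Function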
For this example let nMaxRows=3. The business case calls for the result set to be ordered by the last name, and the developer knows that any row can be uniquely identified by the FirstName, LastName, and Unique ID fields, so the initial SQL added as a constant in the controller should be:
-
- SELECT * FROM Person ORDER BY LastName, FirstName, Unique_ID
Initial Query
The first query is sent with an empty vKeys Array. When the server receives this query, the method ArgumentsForBusinessObject identifies the elements as being empty and does not populate the colArgs. The query is executed with the initial SQL unchanged. The recordset of size nMaxRows+1 is returned to the CCA 208 and processed the same as non-limited results. The CCA 208 returns the collection of objects to the controller 206. The controller 206 proceeds to populate the vResults collection with the returned objects. vResults is the comprehensive collection of objects returned. When the last object of the first request is reached (at nMaxRows), the values are stored in vKeys as follows:
vKeys(0)=LastName (Barleycorn)
vKeys(1)=FirstName (John)
vKeys(2)=Unique_ID (512)
When the First Object of the next request is reached (at nMaxRows+1), comparison of the object variables against the vKeys values is performed. Because the last names match, vKeys(2) will not be deleted and no further checks are performed.
Subsequent Query
The subsequent query will pass vKeys along with it. The server creates the collection of arguments from vKeys and appends to the sSql string accordingly. The sSql statement that is passed to ExecuteQuery is
-
- SELECT * FROM Person ORDER BY LastName, FirstName, Unique_ID WHERE ?>=? AND ?>=? AND ?>?
This sSql and collection are included in the call to ExecuteQuery, which merges the arguments with the string, relying on the architecture method MergeSQL to complete the SQL statement.
The starting point of the recordset is defined by the WHERE clause and the limit is set by the nMaxRows value.
Query Less Restrictive WHERE Criteria
After the second query the last row of the query is David Dyson and the next is Bobby Halford. Because the last name is different, vKeys will be empty except for vKeys(0)=Dyson.
The ProcessObjectCollection will populate vKeys as follows when processing nMaxRows object:
vKeys(0)=LastName (Dyson)
vKeys(1)=FirstName (David)
vKeys(2)=Unique_ID (98)
After identifying the differences between vKeys values and the nMaxRows+1 object the vKeys array is updated as follows:
vKeys(0)=LastName (Dyson)
vKeys(1)=Empty
vKeys(2)=Empty
The query that is returned from ArgumentsForBusinessObject is
-
- SELECT * FROM Person ORDER BY LastName, FirstName, Unique_ID WHERE ?>?
and the colArgs possessing the fieldname LastName and the value (“Dyson”). ExecuteQuery merges the arguments with the sql statement as before and returns the value.
Ending
After the fifth iteration the result set will only possess 2 records. When the controller 206 processes the returned collection the counter returned from ProcessObjectCollection is less than nMaxRows+1 which indicates that all records have been retrieved.
Security Framework
Implementation
It can be seen from
Client
User Authentication:
User authentication is handled via a method located in the Security object 802 called IsOperAuthorized. As the Application object loads, it calls the IsOperAuthorized method, with the operation being “Login”, before executing further processing. This method subsequently calls an authentication DLL, which is responsible for identifying the user as an authorized user within the Corporate Security.
UI Controllers:
The UI Controllers limit access to their functions by restricting access to specific widgets through enabling and disabling them. The logic for the enabling and disabling of widgets remains on the UI Controller 206, but the logic to determine whether a user has access to a specific functionality is located in the Security object 802 in the form of business rules. The UI Controller 206 calls the IsOperAuthorized method in order to set the state of its widgets.
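A sketch of a controller setting widget state, assuming IsOperAuthorized accepts an operation identifier and returns a Boolean (the constants and widget names are illustrative):

' Enable or disable widgets according to the user's privileges
frmTask.cmdComplete.Enabled = objArch.SecurityMan.IsOperAuthorized(cmOperCompleteTask)
frmTask.cmdReassign.Enabled = objArch.SecurityMan.IsOperAuthorized(cmOperReassignTask)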
Server
Server security is implemented by restricting access to the data in three different ways:
Server Security Method
Server Components 222 call the IsOperAuthorized API in the Architecture before executing every operation. In all cases the Security object 802 returns a boolean, according to the user's access rights and the business rules
SQL Filtering
Security attributes, like claim sensitivity or public/private file note, are included in the SQL statements when selecting or updating rows. This efficiently restricts the resulting data set, and avoids the return of restricted data to the client.
Description
Any GUI related security is implemented at the Client using the Security object 802. The information is available both at the Client Profile and Business Objects 207, which enables the security rules to be properly evaluated.
IsOperAuthorized is called to set widgets upon the loading of a UI or if there is a change of state within the UI.
User authentication is always performed by the Application Objects 202 in order to validate the user's privilege to launch the application.
SQL Filtering is used in the cases where sensitive data must not even be available at the Client, or where there is a great advantage in reducing the size of the data set returned to the Client.
SQL Filtering is only used in very rare cases where performance is a serious concern. It is used carefully in order to avoid increased complexity and performance impacts because some queries can be cumbersome and embedding security on them could increase complexity even more.
Security Framework
Overview
The Security object 802 serves the purpose of holding hard coded business rules to grant or deny user access for various application functions. This information is returned to the UI controllers 206 which make the necessary modifications on the UI state. The ClientProfile object serves the purpose of caching user specific (and static) security information directly on the client. This information is necessary to evaluate the business rules at the Security object 802.
Relationships
Architecture Object
The TechArch object is responsible for providing access and maintaining the state of the ClientProfile 902 and Security objects 802. The ClientProfile object 902 is instantiated and destroyed in the TechArch's initialization and terminate methods, respectively. This object is maintained through an instance variable on the TechArch object.
CInitCompCCA
The CInitCompCCA object 904 provides two services to the architecture object 200: it serves as an access point to the CInitComp Server 906, and it marshals the query result set into a ClientProfile object 902.
CInitComp
The CInitComp server object 906 provides data access to the data that resides in the organization tables 908. This data is useful on the client to determine level of access to data based on hard coded business rules.
Organization Tables
The Organization tables 908 contain the user, employee and unit information needed to build the hierarchy used to determine the level of access to sensitive information.
Client Profile
The ClientProfile object 902 serves the purpose of caching static, user specific security information directly on the client. This information is necessary to determine the user's level of access to data, which is accomplished by passing the necessary values to the Security object 802.
Security Object
The Security Object 802 contains business rules used to determine a user's access privileges in relation to specific functions. The object accepts certain parameters passed in by the various UI Controllers 206 and passes them through the business rule logic which, in turn, interrogates the Client Profile object 902 for specific user information.
Client Profile
Attributes
The following are internal attributes for the Client Profile object 902. These attributes are not exposed to the application and should only be used by the Security object 802:
-
- sProfile:
- This attribute is passed by the legacy application at start-up and contains the user's TSIds, External Indicator, Count of Group Elements and Group Elements. It is marshalled into these attributes by request of the application objects.
- colSpecialUsers:
- This attribute caches information from a table containing special users who do not fit into one of the described roles, such as Organization Librarian (e.g., the Vice President or CEO of the corporation).
- sTSId:
- This is the current user's TSId, and it corresponds to his/her Windows NT Id. It is used to get information about the current logged on user from the Organizational Tables 908.
- sEmployeeId:
- This corresponds to the user's employee Id, as stored in the Organizational tables 908. It is compared against the passed-in employee Id in order to check the relationship between performers and the current user.
- sEmployeeName, sEmployeeFirst, sEmployeeMI and sEmployeeLast:
- All these attributes correspond to the current user's name.
- dictClientPrivileges:
- This attribute contains a collection of identifiers that indicate what role/authority an individual plays/possesses. This value is used to identify the static role of the logged in user.
- These values are used for security business logic which grants or denies access based on whether the user is internal or external, or whether the user is in a given administrative role. Existing values are the following:
- SC—Indicates sensitive Claim authority
- CC—Indicates Change Claim status authority
- MT—Indicates maintain F&C Templates authority
- MO—Indicates maintain Organization authority
- MR—Indicates maintain Roles authority
- The following are the proposed additions:
- TA—Indicates authority to execute Task Assistant
- FN—Indicates authority to execute FileNotes
- CH—Indicates authority to execute Claim History
- TL—Indicates authority to maintain Task Templates
- dictProxyList:
- This attribute contains an employee's reporting hierarchy. It is used to determine whether the current user/employee has permission to perform some action based on his/her relationship to other users/employees within their hierarchy. A business example of this is the case of a supervisor, who has rights to view information that his/her subordinates have access to. The relationship APIs make use of dictProxyList to determine if the user assigned to the information is a superior or subordinate of the current user.
- boolInternal:
- This attribute indicates whether the logged in user is external or internal. It is also marshalled from the sProfile attribute, passed in by the legacy application.
Public Methods
The following are the APIs exposed by the Client Profile object. These APIs are used for security checking by the Security object and should not be used by developers in any other portion of the application. An illustrative sketch of how these lookups might be implemented follows the accessor list below.
-
- GetAuthorizedEmployees As Collection
- This function returns a collection of employee Ids from the employees supervised by the current user.
- IsSuperOf(sUserId) As Boolean
- This API returns true if the logged in user is a superior of the passed-in user Id. It looks up the sUserId value inside the dictProxyList attribute.
- IsRelativeOf(sUserId) As Boolean
- This API returns true if the passed-in user Id corresponds to either the logged in user or someone from the dictProxyList.
- IsInternal As Boolean
- This API is used to grant or restrict the user's access to information based on whether the data is private to the organization and whether the user is internal or external.
- IsInRole(sRole) As Boolean
- This API looks up the appropriate sRole value contained within the dictClientRoles attribute to determine whether the current user is authorized to perform that role.
The following accessors are used to get data from the Client Profile's object:
-
- UserId: returns sTSId
- EmployeeId: returns sEmployeeId
- EmployeeName: returns sEmployeeName
- EmployeeFirstName: returns sEmployeeFirst
- EmployeeLastName: returns sEmployeeLast
- EmployeeMiddleInitial: returns sEmployeeMI
- ExpandTree: returns boolExpandTreePreference
- TemplatePathPreference: returns sTemplatePathPreference
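The following Visual Basic-style sketch illustrates, for example only, how the relationship and role lookups listed above might be implemented against the cached attributes. The use of Scripting.Dictionary objects is an assumption, and the sketch treats dictClientRoles and dictClientPrivileges as the same collection, since the description above uses both names.

    ' Illustrative sketch only; dictionary usage and member names are
    ' assumptions.
    Private dictProxyList As Object          ' subordinate employee Ids
    Private dictClientPrivileges As Object   ' role codes such as "SC"
    Private sTSId As String                  ' current user's TSId

    Public Function IsSuperOf(sUserId As String) As Boolean
        ' True when the passed-in user Id appears in the current
        ' user's reporting hierarchy.
        IsSuperOf = dictProxyList.Exists(sUserId)
    End Function

    Public Function IsRelativeOf(sUserId As String) As Boolean
        ' True for the logged in user or anyone from dictProxyList.
        IsRelativeOf = (sUserId = sTSId) Or dictProxyList.Exists(sUserId)
    End Function

    Public Function IsInRole(sRole As String) As Boolean
        ' Looks up a role code such as "MT" or "TA".
        IsInRole = dictClientPrivileges.Exists(sRole)
    End Function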
Public Methods
The following API is exposed by the Security Object and is used by the application for security checking:
-
- IsOperAuthorized(vMsg As Variant, nOperations As cmOperations, vContext As Variant) as Boolean
- This API will return true or false depending on what is returned from the business rule functions to determine user access levels. This API is called in two situations:
- 1. When setting the initial state before loading the form. If a security requirement exists, IsOperAuthorized is called for the appropriate operation.
- 2. After any relevant change on the UI state. For example, when a sensitive claim is highlighted on the Task Assistant window. A relevant change is one which brings the need for a security check.
- The valid values for the enumeration and the corresponding context data are:
- cmMaintainFormsCorr (none)
- cmRunEventProcessor (none)
- cmWorkOnSensitiveClaim (a Claim object)
- cmMaintainPersonalProfile (none)
- cmMaintainWorkplan (none)
- cmDeleteFileNote (a File Note object)
- cmMaintainTaskLibrary (none)
- cmMaintainOrg (none)
-
- IsSVCOperAuthorized(vMsg As Variant, sOperations As String, vContext As Variant) as Boolean
- This API is called by every method on the server that persists data or can potentially access sensitive data (reactive approach).
- IsOperAuthorized(vMsg As Variant, nOperations As cmOperations, vContext As Variant) as Boolean
- This API is available for those cases where a proactive security check is needed on the server.
The following examples show some ways to implement the options described above:
Client
-
- Business Logic
- IsOperAuthorized
- Let's consider the case of the Task Assistant window, where the user should not be allowed to view any information on a sensitive claim if he/she is not the claim performer or the performer's supervisor. The following code would be at the Controller:
-
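The original listing is not reproduced here; the following Visual Basic-style sketch merely illustrates one possible form of the check, with assumed object, event and helper names.

    ' Illustrative sketch only; names are assumptions, not the
    ' original listing.
    Private Sub lvwTaskList_ItemClick(oTask As Object)
        Dim oClaim As Object
        Set oClaim = oTask.Claim

        ' Only the claim performer or the performer's supervisor may
        ' view information on a sensitive claim.
        If Not goTechArch.Security.IsOperAuthorized(Empty, _
                cmWorkOnSensitiveClaim, oClaim) Then
            ClearClaimDetails          ' hypothetical helper: hide data
            Exit Sub
        End If

        DisplayClaimDetails oClaim     ' hypothetical helper: show data
    End Sub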
- Let's consider the case of the Maintain Correspondence Search window where only a user who is a Forms and Correspondence Librarian should be allowed to delete a template. The following code would be at the Controller:
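Again, the original listing is not reproduced here; the sketch below is illustrative only. Mapping the librarian rule to the cmMaintainFormsCorr operation and the helper name are assumptions.

    ' Illustrative sketch only; names are assumptions, not the
    ' original listing.
    Private Sub cmdDelete_Click()
        ' Only a Forms and Correspondence Librarian may delete a
        ' correspondence template.
        If Not goTechArch.Security.IsOperAuthorized(Empty, _
                cmMaintainFormsCorr, Empty) Then
            MsgBox "You are not authorized to delete templates."
            Exit Sub
        End If

        DeleteSelectedTemplate         ' hypothetical helper
    End Sub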
Server
-
- SQL Filtering:
- Let's consider the example of the Draft File Note window, where a user can only look at the draft file notes on which he/she is the author. At the controller, one would have:
-
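The original controller listing is not reproduced here; a Visual Basic-style sketch of the idea, with assumed adapter and helper names, might be:

    ' Illustrative sketch only; the adapter (CCA) method and helper
    ' names are assumptions.
    Private Sub LoadDraftFileNotes()
        Dim colDrafts As Collection

        ' Pass the current user's id so the server component can
        ' restrict the SQL to drafts authored by that user.
        Set colDrafts = goTechArch.FileNoteCCA.GetDraftFileNotes( _
                            goTechArch.ClientProfile.UserId)

        PopulateDraftList colDrafts    ' hypothetical UI helper
    End Sub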
- And at the Component, the SQL statement would be:
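The original statement is not reproduced here; the sketch below, with assumed table and column names, shows the relevant point: the security filter on the author id is embedded directly in the SQL.

    ' Illustrative only; table and column names are assumptions.
    Dim sSQL As String
    sSQL = "SELECT note_id, note_date, note_text " & _
           "FROM draft_file_note " & _
           "WHERE claim_id = ? " & _
           "AND author_user_id = ?"    ' restricts drafts to their author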
This application runs on the server as a background process or service with no direct interaction with Client applications, so it does not need any GUI-related security. Its main actions are limited to the generation of new tasks in response to externally generated events; more specifically, it:
-
- Reads static information from the Task Template tables;
- Reads events from the Event tables;
- Inserts tasks on the Task table.
In this sense, its security is totally dependent on external entities as described below:
-
- The Task Library application is the entrance point for any changes on the Task Template database tables. It will make use of the options described above in order to fulfill its security requirements.
- Events are generated from legacy applications, so the Task Engine relies completely on the security implemented for these applications in order to control the generation of events.
- Another level of security for event generation relies on the Database authorization and authentication functions. Only authorized components have access to the database tables (this is valid for all the other applications as well).
Definition
The Claim Folder manages claim information from first notice through closing and archiving. It does this by providing a structured and easy to use interface that supports multiple business processes for handling claims. The information that it captures is fed to many other components that allow claims professionals to make use of enabling applications that reduce their workload. Because physical claim files are still required, the claim folder provides capabilities that support physical file tracking. It works with the LEGACY system to support all the capabilities that exist within the current system.
The primary processes supported by the Claim Folder are:
-
- First Notice of Loss
- The Claim Folder is the primary entry point for new loss information. Claim files exist in the Claim Folder before they are “pushed” to the LEGACY system to perform financial processing.
- Claim Inquiry
- Claim Folder supports internal and external inquires for claim information. The folder design allows quick access to various levels of information within the claim for many different reasons.
- Initiation of Claim Handling
- The Claim Folder provides initial loss information to the claim professional so they may begin the process of making first contacts with appropriate participants in the claim. It allows them to view and enter data received through their initial contacts and investigation.
- Investigation and Evaluation
- The Claim Folder provides access to detailed information needed for the investigation and evaluation process. It allows the claim handler to navigate between all the applications and information they need to support these processes.
- Identifying Claim Events
- The Claim Folder identifies critical events that occur in the life of a claim, such as a change of status, which can trigger responses in other components to perform automated functions, like triggering tasks in the Task Assistant.
- Managing the Physical File
- The Claim Folder supports better tracking capabilities for the physical files that go along with the electronic record of a claim.
Value
By capturing detailed information on claims, the Claim Folder tries to improve the efficiency of claim professionals in many ways. First, because the information is organized in a logical, easy to use format, there is less digging required to find basic information to support any number of inquiries. Second, the Claim Folder uses its information to support other applications like Forms and Correspondence, so that claim information does not have to be reentered every time it is needed. Third, it provides better ways to find physical files, reducing the time required to find and work with them. Beyond this, there are many other potential uses of claim folder information.
The Claim Folder also tries to overcome some of the current processing requirements that the LEGACY system imposes, such as recording losses without claims, requiring policy numbers for claim set-up, requiring reserves for lines, and other restrictions. This will reduce some of the low-value added work required to feed the LEGACY system.
Finally, the Claim Folder organizes and coordinates information on participants and performers so that all people involved in a claim can be identified quickly and easily.
Key Users
Although claim professionals are the primary users of the Claim Folder, any member of the claims organization can utilize the Claim Folder to learn about a claim or answer an inquiry about a claim.
Component Functionality
Because the Claim Folder is the primary entry point for new claims, it needs to capture information necessary to set-up new claims and be able to pass the information to the LEGACY system. Once the information is passed, the LEGACY system owns all information contained in both systems, and it is uneditable in the Claim Folder. However, the Claim Folder has more information than what is contained in the LEGACY system, and therefore allows certain information to be entered and modified once the claim is pushed to the LEGACY system.
The Claim Folder decomposes a claim into different levels that reflect the policy, the insured, the claim, the claimants, and the claimant's lines. Each level has a structured set of information that applies to it. For example, the claim level of the claim has information on the claim status, line of business, and performers. An individual line has information which includes the line type, jurisdiction, and property or vehicle damages. The claimant level contains contact information as well as injury descriptions.
The information at each level is grouped into sections for organization purposes. Each level has a details section that includes the basic information about the level.
The key levels on the Claim Folder and their information sections are:
-
- The Policy Level: Details and Covered Auto for auto claims, Covered Property for property claims and Covered Yacht for marine claims.
- The Claim Level: Details, Facts of Loss, Events, Liability. Liability is considered part of the Negotiation component and described there.
- The Participant Level: Details and Contact Information. For claimants, additional sections are shown to display Events, Injury and Disability Management. The participant level is discussed in the Participant Component.
- The Line Level: Details, Damaged Vehicle for vehicle lines, Damaged Property for property lines, Damaged Yacht for marine lines, Events, Damages, and Negotiation. Damages and Negotiation are considered part of the Negotiation component and described there.
Events are triggered in the Claim Folder by performing certain actions like changing a jurisdiction, identifying an injury, or closing a line. Other general events are triggered in the Event Section on most levels by clicking the one that has occurred. These events are processed by the Event Processor and could generate any number of responses. In one embodiment of the present invention, the primary response is to trigger new tasks in the Task Assistant for a claim.
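As an illustration only, the sketch below shows one way a Claim Folder controller might record such an event for later processing; the object, method and event names (goTechArch, ClaimCCA, EventCCA, PublishEvent, "JURISDICTION_CHANGE") are assumptions rather than part of this embodiment.

    ' Illustrative sketch only; names are assumptions.
    Private Sub SaveLineDetails(oLine As Object)
        ' Persist the edited line level data through the adapter.
        goTechArch.ClaimCCA.SaveLine oLine

        ' Raise an event for the Event Processor when the jurisdiction
        ' changes; the Task Engine may respond by generating tasks.
        If oLine.JurisdictionChanged Then
            goTechArch.EventCCA.PublishEvent "JURISDICTION_CHANGE", _
                                             oLine.ClaimId
        End If
    End Sub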
User Interfaces
-
- Claim Folder UI
- Policy Level—Policy Details Tab
- Policy Level—Covered Vehicle Tab
- Policy Level—Covered Property Tab
- Policy Level—Covered Yacht Tab
- Claim level—Claim Details Tab
- Claim level—Facts of Loss Tab
- Claim level—Events Tab
- Claim level—Liability Tab
- Line level—Line Details Tab
- Line level—Damaged Property Tab
- Line level—Damaged Auto Tab
- Line level—Damaged Yacht Tab
- Line level—Events Tab
- Line level—Damages Tab
- Line level—Negotiation Tab
- Task Assistant
- File Notes
- Claim History
- Search Task Template
- Search for Correspondence
- Find Claims
- Version 7
- View File Folder
- Print Label
Claim Tree
The claim tree in the Claim Folder window decomposes the claim into policy, insured, claim, claimant, and line levels depending on the specific composition of the claim.
The policy level is always the first node in the claim tree and is identified by the policy number. Before the policy number is entered, the field is listed as “Unknown”. If a claim is uncoded, the field is listed as “Uncoded”. Selecting the policy level brings up the policy level tabs in the body of the Claim Folder.
The insured level is always the second node in the claim tree and is identified by the insured's name. Before the insured is identified, the field is listed as “Unknown”. Selecting the insured level brings up the insured participant tabs in the body of the claim folder. Only one insured is listed at this level, as identified in the policy level tabs; however, multiple insureds can still be added. Additional insureds are shown in the participant list below the claim tree.
The claim level is always the third node in the claim tree and is identified by the claim number. When the claim level is selected, the claim level tabs appear in the body of the Claim Folder.
After the claim level, all claimants are listed with their associated lines in a hierarchy format. When a claimant is added, a node is added to the tree, and the field identifying the claimant is listed as “Unknown”. Once a participant has been identified (partial or client), the name of the claimant is listed at that level.
When the level is selected, the participant level tabs for the claimant are shown in the body of the claim folder.
Line levels are identified by their line type. Before a line type is selected, the line level is listed as “Unknown”. When a line level is selected, the line level tabs for the specific line are shown in the body of the claim folder.
There are several things that can alter the claim tree once it has been set up. First, if a claimant or line is deleted, it is removed from the claim tree. A claim that is marked in error does not change the appearance of the levels. Second, the claim, claimant, and line levels are identified by different icons depending on whether they are pushed to V7 or not. Third, when a line or claimant is offset, it is identified as such.
Participant List
The participant list box contains all the non-claimant and non-insured participants on the claim. (Claimants and insureds are shown in the claim tree and not repeated here.) Participants are shown with their name and role. When a participant is selected, the participant level tabs are displayed in the claim folder.
Claim Folder Menu Items
The claim folder menus contain the actions that a user would need to perform within the claim folder. They can all be accessed through keyboard selection. The menu options become enabled or disabled based on the state of the Claim Folder. The Claim Folder can be in view mode or edit mode for a specific level in the Claim Tree. When the Claim Folder is in edit mode, most options are disabled until the user saves their changes and is returned to view mode. The enabling/disabling of menu options is also dependent on whether the claim or portions of the claim have been pushed to V7.
Claim Folder Tool Bar
The tool bar represents common actions that a user performs that can be easily accessed by clicking the appropriate icon. There are five groups of buttons on the Claim Folder tool bar that represent, in order, common activities, adding new items to a claim, launching utilities, performing V7 activities, and accessing help functions. The enabling/disabling of tool bar buttons follows the same logic as for menu items.
Window Description
Window Details
CAR Diagram
Data Elements
Commit Points
Claim Save Menu Option—Saves all claim level data
Policy Save Menu Option—Saves all policy level data
Participant Save Menu Option—Saves all participant level data
Line Save Menu Option—Saves all line level data
Claim Close Claim Folder Menu Option—Prompts user to save changes if in edit mode.
Claim History
Definition
Claim history shows information in one user interface that is intended to include all the constituent elements of a claim file. The four types of history included in the component are searchable by common indexing criteria like participant, performer, and claim phase. A caption report can be produced which shows the history selected in a document format.
Value
Claim history provides the users with one common interface through which to view a large variety of information about the claim. It includes all history available on a claim, and is expanded as claim capabilities are built, like incoming mail capture. Users develop customized views of history based on any criteria the history can be indexed by, and these reports are saved as customizable Word documents. The way the history information is indexed provides quick access to pertinent data needed to respond to a variety of requests.
Key Users
All members of the claims organization can use claim history as a way to quickly see all activity performed on a claim. This utility increases the ability to locate key information regarding any claim.
Component Functionality
Claim history is a component that contains a simple process to retrieve history from the other components in the system. It contains no native data itself. Even viewing a history element is done in the component window where the item was first captured.
The second key process of claim history is to produce a caption report of all history elements according to the items the user wants to include.
There are two user interfaces needed for this component that correspond to the two key functions above:
-
- Claim History Search: This window utilizes the claim phase, participant, performer and history type fields on each history record to help the user narrow the search for specific history.
- Caption Report: This report uses the functionality of Word to produce a report of each history item the user wants to see and its associated detail. Since the report is produced in Word, it can be fully customized according to many different needs.
User Interfaces
-
- Claim History Search
- Caption Report (Word document, not UI design)
Definition
The Forms & Correspondence component supports internal and external Claim communication and documentation across all parts of the claims handling process.
The Forms and Correspondence—Create Correspondence function provides the ability to search for a template using various search criteria, select a template for use and then leverage claim data into the selected template.
The Forms and Correspondence—Template Maintenance function is a tool for the librarian to create, delete, and update Correspondence templates and their associated criteria.
Some specific processes supported by Forms & Correspondence are:
-
- Reporting of claims
- to state/federal agencies, etc. at First Notice of Loss
- internal requests for information
- Advising Participants
- Contacting Participants
- Performing Calculations
- Creating correspondence for claims or non-claims
Value
The Forms and Correspondence component supports users in creating documentation.
Leveraging information from the claim directly into correspondence reduces the amount of typing and dictating done to create forms and letters. The typical data available to the templates should include: author, addressee, claim number, date of loss, insured name, policy number, etc. A librarian adds and maintains standardized forms and letters in logical groupings made available for the entire company.
Key Users
Claim employees are the primary users of the Forms and Correspondence component, but it can be used by anyone who has access to the system to create documents using existing templates.
Forms and Correspondence librarians use the system to create, update or remove templates.
Component Functionality
Forms and Correspondence—Create Correspondence
1. Search for a template based on search criteria.
2. Create a correspondence from a template using claim data.
3. Create a correspondence from a template without using claim data.
4. View the criteria for a selected template.
5. View the Microsoft Word template before leveraging any data.
Forms and Correspondence—Template Maintenance
1. Search for a template based on search criteria.
2. Create, duplicate, edit, and delete Correspondence templates and their criteria.
3. Internally test and approve newly created/edited templates.
4. Properly copy Word templates for NAN distribution.
User Interfaces
-
- Search for Correspondence
- Correspondence Details
- Associate Fields
- Maintain Correspondence Search
- Correspondence Template Information—Details tab
- Correspondence Template Information—Criteria tab
- Microsoft Word
Definition
File Notes captures the textual information that cannot be gathered in discrete data elements as part of claim data capture. File notes are primarily a documentation tool, but are also used for internal communication between claim professionals. Users can sort the notes by participant or claim phase (medical, investigation, coverage, etc.) in order to permit rapid retrieval and organization of this textual information.
Value
File notes speed the retrieval and reporting of claim information. A file notes search utility with multiple indexing criteria provides claim professionals and supervisors with the ability to quickly find a file note written about a particular person or topic. The file notes tool utilizes modern word processing capabilities which speed entry, reduce errors, and allow important information to be highlighted. Furthermore, the categorization and key field search eases the process of finding and grouping file notes. Finally, file notes improve communication, as they can be sent back and forth between those involved in managing the claim.
Key Users
All members of the claims organization can utilize file notes. External parties via RMS can view file notes marked General. This utility increases the ability to locate key information regarding a claim. Anyone who wants to learn more about a claim or wants to record information about a claim utilizes the file notes tool.
Component Functionality
File Notes searching is included as part of the claim history component which allows the user to search the historical elements of a claim file including tasks, letters, and significant claim change events.
The user interfaces that are needed for this component are:
-
- The File Notes Search (part of Claims History component): This window utilizes the claim phase fields on the file notes record to help the user narrow the search for specific file notes. Also, it allows users to view all file notes that meet specified criteria in a report style format.
- File Notes Entry: The window used to record the file note. It embeds a word processing system and provides the ability to categorize, indicate a note as company (private) vs. general (public), save the note as a draft or a final copy, and send the note to another person.
User Interfaces
-
- File Notes
- Draft File Note Review
- Participant Search
- Performer Search
Definition
Address Book is the interface between the claims system and the Client database. The Client application is a new component designed to keep track of people or organizations that interact with RELIANCE for any reason, but the claims system is most likely the first application to use Client. The Address Book is accessed directly from the Desktop and from the Claim Folder.
The Address Book meets several needs within the claim organization. Although its primary function is to support the adding of participants to a claim, it also acts as a pathway to the Client database for searching out existing participants and adding new people or organizations to the corporate database.
The Client database maintains information on names, addresses, phone numbers, and other information that always applies to a person or organization no matter what role they play on a claim.
Value
Address Book provides a common definition of people or organizations that interact with RELIANCE, and therefore provides a much more efficient means of capturing this information. Each Client database entry provides the ability to link a person or organization to all the different roles that they play across the organization, and therefore makes retrieving information on a client by client basis quick and easy.
There are many benefits to RELIANCE in having a common address book. Information on people and organizations is leveraged into other activities, like enabled tasks that look up a client's phone numbers when a call needs to be made. Information that has been redundantly stored in the past can be entered once and reused. Once all areas of RELIANCE use the Client application, different areas of the company can share definitions of individuals and organizations.
Component Functionality
Address Book allows users to add, edit and delete records from the Client database. It also provides a robust search facility, including phonetic name searches to find people contained in the Client database.
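For illustration only, a phonetic name search might be built along the following lines. The table and column names are assumptions, the availability of a SOUNDEX function depends on the underlying database, and a parameterized query would normally be preferred; the quote doubling simply keeps the sketch self-contained.

    ' Illustrative sketch only; table, column and function support are
    ' assumptions.
    Private Function BuildPhoneticSearchSQL(sLastName As String) As String
        BuildPhoneticSearchSQL = _
            "SELECT client_id, last_name, first_name, city " & _
            "FROM client " & _
            "WHERE SOUNDEX(last_name) = SOUNDEX('" & _
            Replace(sLastName, "'", "''") & "')"
    End Function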
There are two primary user interfaces for the Address Book:
-
- Find Address Book Entry—This is a search window that allows a user to find records in the Client database using names, addresses, phone numbers, and other identifiers. From this window, specific records can be selected and attached as participants on claims.
- Maintain Address Book Entry—This window allows users to add or edit information about a client by specifying their names, addresses, phone numbers, email information, and identification numbers like a SSN or TIN.
The Address Book is created concurrently with the Client application to make sure that a consistent design approach is followed.
Key Users
All members of the claim organization use the Address Book to look up information on people and organizations in the client database. Those who set up and handle claims use the Address Book to identify participants.
User Interfaces
-
- Find Client
- Maintain Client
Definition
The Index, or Claim Search, component provides the ability to locate claims within the system using various search criteria. The criteria cover a wider variety of search capabilities than exist today including, but not limited to, claim performers, participants, phonetic name searches, addresses, roles, offices, and lines of business. The search results display selected claim, participant, and performer data to help identify each claim.
The Index component also allows easy navigation to various claim components like the Claim Folder, once a claim has been identified. It can be accessed from the Desktop and from any open Claim Folder.
The Index component is designed to support several business processes within the claim organization. Its functions are critical to improving claim staff productivity and customer service in the following areas:
-
- Matching Mail
- The capabilities of the Index search make it easier to identify the claim a piece of mail belongs to based on criteria used to identify claims in forms, correspondence, and bills. The performers for a claim can also be identified for mail routing purposes.
- Phone Inquiries
- This window is the primary point to handle incoming phone inquiries for any claim. Users can find claims quickly without having to burden the caller with requests for additional information.
- Duplicate Claims
- Prior to setting up new claims, checks can be done to ensure that the claim has not already been entered into the system. The additional search capabilities provide a greater assurance that duplicate claims will not be entered. This reduces the need to delete or merge claim records.
- Fraud Identification
- Because claims can be searched easily by participant and other criteria, fraud questions can be easily researched. This is not the primary purpose of this component, however.
Value
Index reduces the time required to find existing claims, and also reduces potential rework from not finding claims when they are needed for matching mail or duplicate checks.
Key Users
Claim employees are the primary users of the Index window, but it can be used by anyone who has access to the system to access claims without having to memorize tracking numbers.
Component Functionality
Index is primarily a robust search engine that quickly and efficiently searches for claims. It is not a component that stores its own data, as it is primarily focused on pointing users more quickly and directly to claim data.
Index is composed of one search window that follows the format of all other search windows in the system.
User Interfaces
-
- Find Claims
Definition
The Injury component captures versions of a claimant's injuries as they progress. This window captures injury information in the form of discrete data fields, reducing the need for free form text file notes. Capturing data, instead of text, allows the injury to be closely tracked and quickly reported. The data can also serve as feedback statistics, i.e. for building best claims practices and in risk selection. The preferred method of identifying and documenting injuries is the ICD-9 code. The user can enter or search for the ICD-9 code using descriptors or numbers.
Value
Data on every injury is captured and summarized in a consistent, accessible format, making recording and reviewing the case considerably less time consuming and more organized, allowing the adjuster to focus on desired outcomes. This “snapshot” of the current status and history of an injury greatly facilitates handing off or file transfers between claim professionals. Additionally, the discrete data field capture enables the use of events to identify action points in the lifecycle of a claim that has injuries.
Key Users
All members of the claims organization can utilize the Injury component. This component increases the ability to locate and summarize key information regarding an injury.
Component Functionality
Injury is an aspect of participant information, which is related to the claimant participants on the claim. The participant component relates clients to all other claim-related entities. Information on injuries will be related to participant records and displayed at the participant level in the Claim Folder. New entities are needed to implement injury data capture: injury and ICD-9 search. The Injury component interacts with five other components: the Claim Folder, which contains Disability Management data about a claimant; Participant, which lists the individuals associated with the claim; as well as File Notes, the Task Assistant and the Event Processor. The Injury component also uses Microsoft Word to create a formatted, historical injury report for a particular individual.
The user interfaces that are needed for this component are:
-
- Injury: This is the primary injury window which captures basic injury report data, including: the source of the injury report, the date of the injury report, a Prior Medical History indicator, and then a detailed list of the injuries associated with that report. The detailed list includes discrete fields for the following data: ICD-9 code, body part, type, kind, severity, treatment, diagnostic, a free form text description field, and a causal relation indicator.
- ICD-9: This is the search window for locating ICD-9 codes and associated descriptions.
- Disability Management: This window contains a subset of participant data fields that enables more effective injury management.
User Interfaces
-
- Claim Folder—Participant Level—Injury Tab
- ICD-9 Search Window
- Claim Folder—Participant Level—Disability Management Tab
Definition
Value
Data on every case is summarized in a consistent, accessible format, making recording and reviewing the case considerably less time consuming and more organized, allowing the adjuster to focus on negotiation strategy and desired outcomes. This “snapshot” of the current status greatly facilitates handing off or file transfers between claim professionals. Additionally, the discrete data field capture enables the use of events to identify action points in a negotiation.
Key Users
All members of the claims organization can utilize Negotiation. This component increases the ability to locate and summarize key information regarding a negotiation.
Component Functionality
Negotiation is a type of resolution activity, which is part of the claim component of the claims entity model. The claim component is the central focus of the claims entity model, because it contains the essential information about a claim. The claim component supports the core claim data capture functionality, first notice processes, and resolution activity for claims. The main types/classes of data within the claim component are: Claim, Claimant, Line, Claim History, Resolution Activity, Reserve Item, and Reserve Item Change. Three entities are needed to implement negotiation: resolution activity, claim and claim history. There is also interaction between the Negotiation component and the Task Assistant, File Notes and Event Processor components.
The user interfaces needed for negotiation are:
-
- Negotiation: This window captures demand and offer data, including: amount, date, type and mode of communication. The target settlement range, lowest and highest, is captured, along with strengths and weaknesses of the case.
Supporting user interfaces, which are also part of the Claim Folder, include:
-
- Liability (claim level tab): This window is used to document liability factors in evaluating and pricing a claim. The liability factors include percent of liability for all involved parties; form of negligence that prevails for that jurisdiction; theories of liability that the claim handler believes to be applicable to the claim. Used prior to developing negotiation strategy.
- Damages (line level tab): This window provides the capability for pricing and evaluating a claim based on incurred and expected damages. Used prior to developing negotiation strategy.
User Interfaces
-
- Claim Folder—Line Level—Negotiation Tab
- Claim Folder—Claim Level—Liability Tab
- Claim Folder—Line Level—Damages Tab
Definition
In one embodiment of the organization component 1100, all employee records are kept in a common database 1102 so that they can be attached to the specific claims they work, located in a claim database 1104. The common information that is kept on the employee record includes name, location, phone, and some minimal organizational context information like office or division. This is the minimum required to support the tracking of performers on claims. The employee information 1102 is then linked 1106 to the claim information 1104 and the databases are updated 1108. Having linked the employees 1102 with the claims 1104 they are working on, the database can be searched by employee or claim 1110.
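A sketch of the linkage just described, with assumed table and column names, might be as simple as a performer link table joining the common employee database 1102 to the claim database 1104 (steps 1106 and 1110); this is illustrative only.

    ' Illustrative sketch only; table and column names are assumptions.
    Dim sLinkSQL As String
    Dim sSearchSQL As String

    ' Link an employee record to a claim (step 1106).
    sLinkSQL = "INSERT INTO claim_performer " & _
               "(claim_id, employee_id, role_cd) VALUES (?, ?, ?)"

    ' Search the linked data by employee or by claim (step 1110).
    sSearchSQL = "SELECT c.claim_id, e.employee_id, e.employee_name " & _
                 "FROM claim c, employee e, claim_performer cp " & _
                 "WHERE cp.claim_id = c.claim_id " & _
                 "AND cp.employee_id = e.employee_id " & _
                 "AND (e.employee_id = ? OR c.claim_id = ?)"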
However, this version of the organization component can be expanded to include organization relationships (specifically tracking where an employee falls in the organization structure), groups of individuals as performers for claim assignment, and claim allocation within the organization structure. These capabilities would support any notion of caseload analysis, management reporting, or automated assignment that would need to be included.
Value
By tracking common definitions of employees across claims, indexing capabilities are improved and performers on claims are accurately tracked.
Key Users
The primary users of the organization capabilities are the administrative personnel who set up performers, as well as the technicians who track who is working a claim.
Component Functionality
The design of the minimum scope of the organization component includes a search window to find employees in the organization and a detail window to see specific information on each employee.
User Interfaces
-
- Organization Entity Search
- Add/Edit Organization Entity
Definition
The participant component also allows linkages 1206 to be made between participants and various items on claims. A doctor can be linked to the claimant they treat and a driver can be linked to the damaged vehicle they were driving.
Once a participant has been added to a claim, additional information 1208 that is specific to that claim can be attached. This information includes injury, employment, and many other types of information that are specific to the role that a person or organization plays in a claim.
The business processes primarily supported by Participant 1200 are:
-
- Recording Involvement in a Claim
- There is a basic data capture requirement to keep track of individuals and organizations involved in a claim, and this is done most efficiently using the participant approach.
- Recording Role Specific Information
- Address Book 1202 stores information that can be reused across claims, but the Participant component 1200 needs to maintain the information that is specific to an individual or organization's involvement in a specific claim.
- Making Contact with Clients
- Because participant ties back to the common Address Book 1202, any contact information contained there can be quickly and easily obtained.
- Forms and Correspondence 1210
- Leveraging address information into letters provides an efficiency enablement to all users who don't need to look up name and address information.
- Categorizing History Information
- Participants are used to categorize history items like tasks and file notes so that information relating to a single participant on a claim can be easily retrieved.
- Claim Indexing
- Attaching participants to a claim allows the Index component to be more effective in the processing of claim inquiries.
Key Users
The primary users of the Participant components 1200 are those who work directly on processing claims. They are the ones who maintain the participant relationships.
Claims professionals who deal with injuries use the Participant tabs in the claim folder to track injuries and manage disabilities for a better result on the claim.
Value
Because the Participant component 1200 only seeks to define the roles that individuals and organization play across all claims, there is no redundant entry of name, address, and phone information. This is all stored in the Address Book 1202.
The number of potential participant roles that can be defined is virtually limitless, and therefore expandable, as the involvement of additional people and organizations needs to be captured.
Component Functionality
Most participant functionality is executed within the context of the Claim Folder. The Claim Folder contains participant levels in two ways. First, claimants are shown in the claim tree on the left-hand side of the window. Below this, other participants are shown in a list. Selecting any participant displays a set of participant information tabs that displays the following information:
-
- Participant Details—Basic information about the role that a participant plays in a claim and all the other participants that are associated with it.
- Contact Information—Information from the Address Book on names, addresses, and phone numbers.
- Injury—Specific information on the nature of injuries suffered by injured claimants.
- Disability Management—Information on injured claimants with disabilities.
Only the first two tabs will be consistently displayed for all participants. Other tabs can appear based on the role and characteristics of a participant's involvement in a claim.
Adding or editing participant role information is actually done through the Address Book 1202 search window. The process is as simple as finding the Address Book 1202 record for the intended participant and specifying the role the participant plays in the claim. Once this is done, the participant will be shown in the Claim Folder, and additional information can be added.
The notion of a participant is a generic concept that is not specific to claims alone. It is based on a design pattern that can be expanded as additional claims capabilities are built. Any involvement of an individual or an organization can be modeled this way.
User Interfaces
-
- Participant Level—Participant Details Tab
- Participant Level—Contact Information Tab
- Participant Level—Events Tab
- Participant Level—Injury Tab (Injury Component)
- Participant Level—Disability Management Tab (Injury Component)
- View Participant List
Definition
The Performer component allows organizational entities (individuals, groups, offices, etc.) to be assigned to various roles in handling the claim from report to resolution. The Performer component is utilized on a claim-by-claim basis.
A performer is defined as any individual or group that can be assigned to fulfill a role on a claim.
The Performer component supports the assignment processes within the claim handling process. This goes beyond the assignment of a claim at FNOL. This component allows the assignment of work (tasks) as well.
Some specific processes supported by Performer are:
-
- Assign claims
- identification of different roles on the claims in order to assign the claim (Initiate Claim—DC Process work)
- Keeps roles and relationships of performers within claims
- Assigning tasks
- Reassignments
- Supports Initiate claim process—assignment
- Search mechanism for employees, offices
- All performers should be in the Organization component
- Provides history of assignments
Value
The Performer component allows the assignment of roles or tasks to individuals or groups. The data about performers resides in a common repository: the Organization component.
The Performer component reduces the time required to find employees, teams or any potential performer, and ensures consistency of data.
Key Users
The primary users of the Performer component are those who work directly on processing claims. They are the ones who maintain the assignment of roles or tasks related to a claim.
Component Functionality
The Performer component supports an informational function and an assignment function.
-
- 1. View details for performers (employee, office, unit, etc.). These details may suggest organizational entity relationships but in no way define or maintain them.
- 2. View all performers assigned to a claim, currently and historically (includes individuals, groups, offices, etc.)
- 3. Assign performers to a claim—at the claim level, claimant, and supplement levels (including individuals, office, groups, etc.)
User Interfaces
-
- Assign Performer
- Performer Roles
- View Performer List
Definition
The Task Assistant is the cornerstone of a claim professional's working environment. It provides diary functions at a work step level that allow the management of complex claim events. It enables the consistent execution of claim best practices by assembling and re-assembling all of the tasks that need to be performed for a claim based on detailed claim characteristics. These characteristics come from regulatory compliance requirements, account servicing commitments, and best practices for handling all types of claims. The Task Assistant also provides mechanisms that automate a portion of or all of the work in performing a task to assist the claim professional in completing his or her work. Once a task is completed, the Task Assistant generates a historical record to document the claim handler's actions.
The Task Assistant is . . .
-
- A method for ensuring consistent execution of regulatory requirements, account servicing commitments and claim handling best practices
- A source of automated assistance for claim professionals
- An organization-wide communication tool within the context of a claim (it does not replace Lotus Notes).
- A mechanism for making claims strategy common practice and sharing corporate experience
- A diary application to keep track of claims
- A historical tracking tool
- A way to get a claim professional's or a team leader's attention
- A mechanism for making process changes in the organization quickly
Within the Task Assistant, claim professionals have the ultimate control to determine if and when tasks need to be completed. They also have the ability to add tasks to the list to represent work they do that is not reflected in standard definitions of tasks in the system. This supports a vision of the claim professional as a knowledgeable worker who spends most of his or her time focused on a successful result through investigation, evaluation, and negotiation of the best possible outcome.
Value
The Task Assistant reduces the time required to handle a claim by providing the claim professional with the automatic scheduling of claim activity. It helps the claim professional remember, perform and record tasks completed for every claim. Completed tasks are self-documenting and remain part of the claim history.
The Task Assistant also ensures the consistent handling of claims throughout the organization, and by doing so can significantly impact expenses and loss costs. Furthermore, it helps ensure regulatory compliance and the fulfillment of account promises. It supports the teamwork required in handling difficult claims as a structured communication mechanism.
The automated enablements for tasks reduce the amount of time claim professionals have to spend on low value-added activities such as writing correspondence. They can therefore spend a larger amount of time investigating, evaluating, and negotiating each claim.
Key Users
While claim professionals are the primary users of the Task Assistant, others use the application as well. The entire claims department utilizes the Task Assistant to structure work and communicate with one another. Team leaders use the Task Assistant to conduct file review and to guide the work of the claim professional. Administrative staff use the Task Assistant as a means to receive work and to communicate the completion of that work. Claim professionals use the Task Assistant to complete work and to request assistance from team leaders and specialty claim professionals.
The Task Assistant requires a new type of user to set-up and maintain the variety of tasks that are created. A task librarian maintains the task library, which contains the list of all the standardized tasks across the organization. The librarian defines rules which cause tasks to be placed on task lists based on claim characteristics, dates which define when tasks are due, and task enablement through other applications.
Component Functionality
The key user interfaces for this component are:
-
- The Task Assistant: This is the utility that supports the population, execution, and historical tracking of tasks. It allows users to perform tasks, complete tasks, and remove tasks that have been automatically added.
- The Task Workplan: This user interface allows the user to strategize the plan for a specific claim. It shows tasks attached to their respective levels of the claim, including lines, participants, and the claim itself.
- Task Enablement Windows: There are many windows that can be added to enable tasks with other applications such as telephone support, forms and correspondence, and file notes. The number of potential task enablements is virtually limitless.
- Task Entry: Allows a user to add new tasks that weren't automatically added to the task list, to cover situations where the claim handler wants to indicate work to be done that is not reflected by the standard task definitions in the task library.
Behind the functioning of the Task Assistant, the Task Engine continually evaluates messages sent from other components and determines, based on the rules established by the task librarian, which tasks should be populated on the Task Assistant. Messages are sent to the Task Assistant when something significant occurs in another component. The messages contain the characteristics the Task Engine needs to evaluate in order to place the proper tasks on the task list.
User Interfaces
-
- Task Assistant
- Reassign Task
- Edit/Add Task
- Clear Task
- Mark Task In Error
- Build Workplan
- Participant Search
- Participant Phone Number
- Phone Task
- Personal Profile
- Account Search
- Organization Search
- Performer Search
Definition
The only interface the user sees to these components is the task library 1500, which allows task librarians 1502 to define the tasks, and the rules that create them, which are used by the Task Engine 1404. Working with these components is almost entirely a function performed by specialists who understand the complexity of the rules involved in ensuring that events 1006 and tasks 1406 are handled properly.
The event processor 1400 also manages the communication and data synchronization between new claim components and LEGACY claim systems. This single point of contact effectively encapsulates the complex processes of translation and notification of events between the two systems.
Value
The automated determination of event responses provides enormous benefits to system users by reducing the maintenance they have to perform in ensuring the correct disposition of claims. Users trigger events by the data they enter and the system activities they perform, and the system automatically responds with appropriate automated activities like generating tasks.
The task generation rules defined in the Task Library provide an extremely flexible definition of claim handling processes limited only by the data available in the system on which task creation rules can be based. Process changes can be implemented quickly by task librarians, and enforced through the Task Assistant.
Key Users
Although all claim personnel directly benefit from the functioning of the event processor and task assistant, only specially trained users control the processing of these components. Task Librarians using the Task Library user interface handle the process of defining new tasks and the rules that trigger them in the Task Engine.
Event processing is handled by operations personnel, who ensure that all events are processed correctly and that the appropriate system resources are available to manage the throughput.
Component Functionality
As shown in
The Task Engine 1404 follows a process of evaluating events 1006, determining claim characteristics, and matching the claim's characteristics to tasks defined in the Task Library 1500.
The key user interface for the Task Engine 1404 is the Task Library 1500. The Task Library 1500 maintains the templates that contain the fields and values with which tasks are established. A task template might contain statements like “When event=litigation AND line of business=commercial auto, then . . . ” Templates also identify what a task's due date should be and how the task is enabled with other applications.
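For illustration only, the matching of an event's claim characteristics against the task templates might take the following Visual Basic-style form; the object, field and helper names are assumptions, not part of this embodiment.

    ' Illustrative sketch only; object, field and helper names are
    ' assumptions.
    Private Sub ProcessEvent(oEvent As Object, colTemplates As Collection)
        Dim oTemplate As Object

        For Each oTemplate In colTemplates
            ' A template fires when its triggering event and claim
            ' characteristics match, e.g. event = "LITIGATION" and
            ' line of business = "COMMERCIAL AUTO".
            If oTemplate.EventType = oEvent.EventType And _
               oTemplate.LineOfBusiness = oEvent.LineOfBusiness Then

                ' Insert the task with a due date calculated from the
                ' template's offset in days.
                InsertTask oEvent.ClaimId, oTemplate.TaskDescription, _
                           DateAdd("d", oTemplate.DaysUntilDue, Date)
            End If
        Next oTemplate
    End Sub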
User Interfaces
-
- Search Task Template
- Search Triggering Templates
- Task Template Details
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Claims
1. A computerized method for automatically generating tasks to be performed in an insurance organization, said method comprising the steps of:
- monitoring with a server a transaction database comprising information relating to an insurance transaction;
- identifying with a processor an event associated with a change in said information relating to the insurance transaction;
- retrieving rules stored in a rules database in response to said identified event, said retrieved rules being associated with said identified event;
- transmitting characteristics related to the insurance transaction;
- determining a task to be completed based on said retrieved rules and based on matching the transmitted characteristics;
- assigning said task to an employee for completion;
- providing said task to a client component accessible by the assigned employee;
- displaying information associated with said task on a user interface on a display device;
- identifying said task as completed;
- recording the completion of said task in said transaction database;
- receiving at least one new rule into a library rules interface on the display device; and
- storing said at least one new rule in said rules database;
- identifying with said processor a new event associated with another change in said information relating to the insurance transaction;
- retrieving the new rule stored in the rules database in response to the identified new event, said retrieved new rule being associated with said new identified event;
- transmitting characteristics related to the insurance transaction;
- determining a new task to be completed based on said retrieved new rule and based on matching the transmitted characteristics.
Type: Application
Filed: Jan 21, 2010
Publication Date: Aug 12, 2010
Applicant: Accenture LLP (San Jose, CA)
Inventors: George V. Guyan (Bethlehem, PA), Robert H. Pish (Minneapolis, MN)
Application Number: 12/691,515
International Classification: G06Q 10/00 (20060101); G06Q 40/00 (20060101);