Method and apparatus for compliance checking in a trust-management system
A method and apparatus are provided for compliance checking in a trust-management system. A request r, a policy assertion (ƒ0, POLICY), and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. Each assertion may be monotonic, authentic, and locally bounded. An acceptance record set S is initialized to {(Λ, Λ, R)}, where Λ represents a distinguished null string, and R represents the request r. Each assertion (ƒi, si), where i represents the integers from n−1 to 0, is run and the result is added to the acceptance record set S. This is repeated mn times, where m represents a number greater than 1, and an acceptance is output if any of the results in the acceptance record set S comprise an acceptance record (0, POLICY, R).
[0001] The present application claims the benefit of U.S. provisional patent application Ser. No. 60/074,848 entitled “Compliance Checking in the Policy Maker Trust Management System” to Matthew A. Blaze, Joan Feigenbaum and Martin J. Strauss and filed on Feb. 17, 1998.
FIELD OF THE INVENTION[0002] The invention relates to trust-management systems. More particularly, the invention relates to a method and apparatus for compliance checking in a trust-management system.
COPYRIGHT NOTICE[0003] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION[0004] Emerging electronic commerce services that use public-key cryptography on a mass-market scale require sophisticated mechanisms for managing trust. For example, a service that receives a signed request for action may need to answer a basic question: “is the key used to sign this request authorized to take this action?” In some services, the question may be more complicated, requiring techniques for formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Matt Blaze, Joan Feigenbaum and Jack Lacy, “Decentralized Trust Management,” Proc. IEEE Conference on Security and Privacy (May 1996) (hereinafter “Blaze, Feigenbaum and Lacy”), the entire contents of which are hereby incorporated by reference, discloses such a trust-management problem as a component of network services and describes a general tool for addressing it, the “PolicyMaker” trust-management system.
[0005] As will be explained, the heart of the trust-management system is an algorithm for compliance checking. The inputs to the compliance checker are a “request,” a “policy” and a set of “credentials.” The compliance checker returns a “yes” (acceptance) or a “no” (rejection), depending on whether the credentials constitute a proof that the request complies with the policy. Thus, a central challenge in trust management is to find an appropriate notion of “proof” and an efficient algorithm for checking proofs of compliance.
[0006] Unfortunately, the compliance-checking problem may be mathematically undecidable in its most general form. Moreover, the compliance-checking problem is still non-deterministic polynomial time (NP) hard even when restricted in several natural ways.
[0007] Blaze, Feigenbaum and Lacy discloses the trust-management problem as a distinct and important component of security in network services. Aspects of the trust-management problem include formulation of policies and credentials, deferral of trust to third parties, and a mechanism for “proving” that a request, supported by one or more credentials, complies with a policy. A comprehensive approach to trust management independent of the needs of any particular product or service is disclosed along with a trust-management system that embodies the approach.
[0008] In particular, the PolicyMaker system comprises policies, credentials, and trust relationships that are expressed as functions or programs (or parts of programs) in a “safe” programming language. A common language for policies, credentials, and relationships makes it possible for applications to handle security in a comprehensive, consistent, and largely transparent manner.
[0009] The PolicyMaker system is also expressive enough to support the complex trust relationships that can occur in large-scale network applications. At the same time, simple and standard policies, credentials, and relationships can be expressed succinctly and comprehensibly.
[0010] The PolicyMaker system provides local control of trust relationships. Each party in the network can decide in each transaction whether to accept the credential presented by a second party or, alternatively, which third party it should ask for additional credentials. Local control of trust relationships, as opposed to a top-down centralized approach, eliminates the need for the assumption of a globally known, monolithic hierarchy of “certifying authorities.” Such hierarchies do not scale easily beyond single “communities of interest” in which trust can be defined unconditionally from the top down.
[0011] The PolicyMaker mechanism for checking that a set of credentials proves that a requested action complies with local policy does not depend on the semantics of the application-specific request, credentials, or policy. This allows different applications with varying policy requirements to share a credential base and a trust-management infrastructure.
[0012] Three examples of application-specific requests, and local policies with which they may need to comply, will now be described. Although individually the examples are of limited complexity, collectively they demonstrate that an expressive, flexible notion of “proof of compliance” is needed.
[0013] As a first example, consider an e-mail system in which messages arrive with headers that include, among other things, the sender's name, the sender's public key, and a digital signature. When a recipient's e-mail reader processes an incoming message, it uses the public key to verify that the message and the signature go together (i.e., an adversary has not spliced a signature from another message onto this message). The recipient may also be concerned about whether the name and public key go together. In other words, could an adversary have taken a legitimate message-signature pair that he produced with his own signing key and then attached to it his public key and someone else's name? To address this concern, the recipient needs a policy that determines which name-key pairs are trustworthy. Because signed messages may regularly arrive from senders that he has never met, a simple private database of name-key pairs may not be sufficient. By way of example, a plausible policy might include the following:
[0014] (1) He maintains private copies of the name-key pairs (N1, PK1) and (N2, PK2). A reasonable interpretation of this part of the policy is that he knows the people named N1 and N2 personally and can get reliable copies of the public keys directly from them.
[0015] (2) He accepts “chains of trust” of length one or two. An arc in a chain of trust is a “certificate” of the form (PKi, (Nj, PKj), S). This is interpreted to mean that the owner Ni of PKi vouches for the binding between the name Nj and the public key PKj, and may also mean that Ni attests that Nj is trusted to provide certificates of this form. The party Ni signs (Nj, PKj) with his private key, producing the signature S.
[0016] (3) He insists that there be two disjoint chains of trust from the keys that he maintains privately to the name-key pair that arrives with a signed message.
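By way of illustration only, the following Python sketch shows how part (3) of this policy might be encoded. The names (TRUSTED_KEYS, chains_to, policy_approves) are hypothetical, a certificate is simplified to a pair (issuer key, name-key binding), and signature verification is assumed to have been performed already by the e-mail reader.

from itertools import combinations

# Hypothetical encoding of the e-mail policy above. A certificate is
# (issuer_pk, (name, pk)); signatures are assumed already verified.

TRUSTED_KEYS = {"PK1", "PK2"}   # public keys of N1 and N2, held privately

def chains_to(target, certs):
    """All chains of length one or two from a privately held key to
    `target`, each represented by the list of issuer keys it uses."""
    chains = []
    for issuer, binding in certs:
        if binding != target:
            continue
        if issuer in TRUSTED_KEYS:               # length-one chain
            chains.append([issuer])
        for issuer2, (_, key2) in certs:         # length-two chain
            if key2 == issuer and issuer2 in TRUSTED_KEYS:
                chains.append([issuer2, issuer])
    return chains

def policy_approves(target, certs):
    """Part (3): require two chains of trust sharing no issuer keys."""
    return any(set(c1).isdisjoint(c2)
               for c1, c2 in combinations(chains_to(target, certs), 2))

# Example: two independent direct vouchings for (N3, PK3) suffice.
print(policy_approves(("N3", "PK3"),
                      [("PK1", ("N3", "PK3")), ("PK2", ("N3", "PK3"))]))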
[0017] As a second example, consider a loan request submitted to an electronic banking system. Such a request might contain, among other things, the name of the requester and the amount requested. A plausible policy for approval of such loans might take the following form:
[0018] (1) Two approvals are needed for loans of less than $5,000. Three approvals are needed for loans of between $5,000 and $10,000. Loans of more than $10,000 are not handled by this automated loan-processing system.
[0019] (2) The head of the loan division must authorize approvers' public keys. The division head's public key is currently PK3. This key expires on Dec. 31, 1998.
[0020] As a third example, consider a typical request for action in a web-browsing system, such as “view URL http://www.research.att.com/.” In constructing a viewing policy, a user may decide what type of metadata, or labels, she wants documents to have before viewing them, and whom she trusts to label documents. The user may insist that documents be rated (S≦2, L≦2, V=0, N≦2) on the sex (S), language (L), violence (V) and nudity (N) scales, respectively. She may trust self-labeling by some companies or any labels approved by certain companies.
[0021] Previous work on “protection systems” is loosely related to the concept of a trust-management system. Recent work that is similarly related to the present invention can be found in, for example, T. Y. C. Woo and S. S. Lam, “Authorization in Distributed Systems: A New Approach,” Journal of Computer Security 2, pp. 107-136 (1993). In addition, protection systems, as described by D. Denning, Cryptography and Data Security, Addison-Wesley, Reading (1982), address a similar, but not identical, problem.
[0022] M. A. Harrison, W. L. Ruzzo and J. D. Ullman, “Protection in Operating Systems,” Communications of the ACM 19, pp. 461-71 (1976) analyze a general protection system based on the “access matrix” model. In a matrix A, indexed by subjects and objects, cell As,o records the rights of subject s over the object o; a set of transition rules describes the rights needed as preconditions to modify A and the specific ways in which A can be modified, by creating subjects and objects or by entering or deleting rights at a single cell. Harrison et al. showed that given (1) an initial state A0; (2) a set Δ of transition rules and (3) a right r, it is undecidable whether some sequence δi1, . . . , δit ∈ Δ transforms A0 such that δit enters r into a cell not previously containing r, i.e., whether it is possible for some subject, not having right r over some object, ever to gain that right. On the other hand, Harrison et al. identify several possible restrictions on Δ and give decision algorithms for input subject to one of these restrictions. One restriction they consider yields a PSPACE-complete problem.
[0023] Independently, A. K. Jones, R. J. Lipton and L. Snyder, “A Linear Time Algorithm for Deciding Security,” Proceedings of the Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, pp. 33-41 (1976) define and analyze “take-grant” directed-graph systems. Subjects and objects are nodes; an arc from node n1 to n2 is labeled by the set of rights n1 has over n2. If subject n1 has the “take” right over n2, and n2 has some right r over n3, then a legal transition is for n1 to take right r over n3. Similarly, if the subject n1 has the “grant” right over n2, and n1 has some right r over n3, then a legal transition is for n1 to grant right r over n3 to n2. Besides these transitions, subjects can create new nodes and remove their own rights over their immediate successors. Although rights are constrained to flow only via take-grant paths, take-grant systems do model non-trivial applications.
[0024] Jones et al. asked whether a right r over a node x possessed by n1, but not possessed by n2, could ever be acquired by n2. They showed that this question can be decided in time linear in the original graph by depth-first search. Thus, Denning concludes that although safety in protection systems is usually undecidable, the results in, for example, Jones et al. demonstrate that safety can be decided feasibly in systems with sets of transition rules from a restricted though non-trivial set. The related results on compliance-checking described herein provide additional support for Denning's conclusion.
[0025] Having reviewed the basics of “protection systems,” it can be seen why they address a similar but not identical problem to the one addressed by the compliance-checking algorithm described herein. In the protection system world, there is a relatively small set of potentially dangerous actions that could ever be performed, and this set is agreed upon in advance by all parties involved. A data structure, such as an access matrix, records which parties are allowed to take which actions. This data structure is pre-computed offline, and, as requests for action arrive, their legitimacy is decided via a lookup operation in this data structure. “Transition rules” that change the data structure are applied infrequently, and they are implemented by a different mechanism and in a separate system module from the ones that handle individual requests for action.
[0026] In the trust-management system world, the set of potentially dangerous actions is large, dynamic, and not known in advance. A system provides a general notion of “proof of compliance” for use by diverse applications that require trust policies. The users of these applications and the semantics of their actions and policies are not even known to the compliance-checking algorithm; hence it is not possible for all parties to agree in advance on a domain of discourse for all potentially dangerous actions. The compliance-checking question “is request r authorized by policy P and credential set C?” is analogous to the question “can subject S eventually obtain right r by transition rules Δ?” in the protection system world. However, a single instance of request processing, especially one that involves deferral of trust, can require a moderately complex computation and not just a lookup in a pre-computed data structure. Accordingly, an embodiment of the present invention formalizes the complexity of a general-purpose, working system for processing requests of this nature. In summary, a general-purpose trust-management system is, very roughly speaking, a meta-system in the protection system framework.
[0027] In addition, an application-independent notion of compliance checking can be useful and can enhance security. Any product or service that requires proof that a requested transaction complies with a policy could implement a special-purpose compliance checker from scratch. One important advantage of a general-purpose compliance checker, however, is the soundness and reliability of both its design and its implementation. Formalizing the notion of “credentials proving that a request complies with a policy” involves subtlety and detail. It is easy to get wrong, and an application developer who sets out to implement something simple to avoid the “overly complicated” syntax of a general-purpose compliance checker is likely to find that: (1) she has underestimated the complexity of the application's needs for expressiveness and proof or (2) her special-purpose compliance checker is not turning out to be so simple.
[0028] A general-purpose notion of proof of compliance can be explained, formalized, proven correct, and implemented in a standard package, to free developers of individual applications from the need to reinvent the system. Applications that use a standard compliance checker can be assured that the answer returned for any given input (such as a request, a policy, and a set of credentials) depends on the input, and not on any implicit policy decisions (or bugs) in the design or implementation of the compliance checker. As policies and credentials become more diverse and complex, the issue of assuring correctness will become even more important, and modularity of function (with a clean separation between the role of the application and the role of the compliance checker) will make further development more manageable.
[0029] Two important sources of complexity that are often underestimated are delegation and cryptography. Products and services that need a notion of “credential” almost always have some notion of “delegation” of the authority to issue credentials. The simplest case, unconditional delegation, is easily handled by a special-purpose mechanism. However, if the product or service grows in popularity and starts to be used in ways that were not foreseen when originally deployed, delegation can quickly become more complex, and a special-purpose language that restricts the types of conditional delegation that the service can use may become an impediment to widespread and imaginative use.
[0030] The general framework for compliance checking avoids this by letting delegation be described by ordinary programs. Similarly, an application such as a web browser can be designed to accommodate “safe surfing” policies configurable by parents, but may not initially involve cryptographic functions. If the application is subsequently integrated into the wider world of electronic commerce, however, cryptography may be desired and cryptographic credentials, such as public-key certificates, may need to be incorporated into the application's notion of proof of compliance. If the application already uses a general-purpose notion of proof of compliance, this can be done without having to rethink and re-code the compliance checker.
[0031] In addition, a general-purpose compliance checker can facilitate inter-operability. Requests, policies, and credentials, if originally written in the native language of a specific product or service, must be translated into a standard format understood by the compliance checker. Because a wide variety of applications will each have translators with the same target language, policies and credentials originally written for one application can be used by another. The fact that the compliance checker can serve as a locus of inter-operability may prove particularly useful in e-commerce applications and, more generally, in all settings in which public-key certificates are needed.
[0032] Another possible problem with a compliance-checking algorithm is the possibility of self-referencing assertions. For example, a digitally signed assertion by party A might represent “I approve this request if, and only if, party B approves this request” while an assertion by party B represents “I approve this request if, and only if, party A approves this request.” Although this request should perhaps be approved, a compliance-checking algorithm may not recognize this fact.
[0033] In view of the foregoing, it can be appreciated that a substantial need exists for a method, solvable in polynomial time and widely applicable, that checks the compliance of a request with a policy assertion based on credential assertions and solves the other problems discussed above.
SUMMARY OF THE INVENTION[0034] The disadvantages of the art are alleviated to a great extent by a method and apparatus for compliance checking in a trust-management system. A request r, a policy assertion (ƒ0, POLICY), and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. Each assertion may be monotonic, authentic, and locally bounded. An acceptance record set S is initialized to {(Λ, Λ, R)}, where Λ represents an empty portion of the acceptance record, and R represents the request r. Each assertion (ƒi, si), where i represents the integers from n−1 to 0, is run and the result is added to the acceptance record set S. This is repeated mn times, where m represents a number greater than 1, and an acceptance is output if any of the results in the acceptance record set S comprise an acceptance record (0, POLICY, R).
[0035] With these and other advantages and features of the invention that will become hereinafter apparent, the nature of the invention may be more clearly understood by reference to the following detailed description of the invention, the appended claims and to the several drawings attached herein.
BRIEF DESCRIPTION OF THE DRAWINGS[0036] FIG. 1 is a flow diagram of a method of compliance checking for a trust-management system according to an embodiment of the present invention.
[0037] FIG. 2 is a block diagram of a compliance checker for a trust-management system according to an embodiment of the present invention.
DETAILED DESCRIPTION[0038] The present invention is directed to a method and apparatus for compliance checking in a trust-management system. A general problem addressed by an embodiment of the present invention is Proof of Compliance (POC). The question is whether a “request” r complies with a “policy.” The policy is simply a function ƒ0 encoded in a programming system or language and labeled by, for example, a keyword such as “POLICY.” In addition to the request and the policy, a POC instance contains a set of “credentials,” which also include general functions. Policies and credentials are collectively referred to as “assertions.”
[0039] Credentials are issued by “sources.” Formally, a credential is a pair (ƒi, si) of function ƒi and source identifier (ID) si, which may be a string over some appropriate alphabet Π. Some examples of source IDs include public keys of credential issuers, URLs, names of people, and names of companies. In one embodiment of the present invention, with the exception of the keyword POLICY, the interpretation of source-IDs is part of the application-specific semantics of an assertion, and it is not the job of the compliance checker. From the compliance checker's point of view, the source-IDs are just strings, and the assertions encode a set of, possibly indirect and possibly conditional, trust relationships among the issuing sources. Associating each assertion with the correct source-ID is, according to this embodiment, the responsibility of the calling application and takes place before the POC instance is handed to the compliance checker.
[0040] The request r may be a string encoding an “action” for which the calling application seeks a proof of compliance. In the course of deciding whether the credentials (ƒ1, s1), . . . , (ƒn-1, sn-1) constitute a proof that r complies with the policy (ƒ0, POLICY), the compliance checker's domain of discourse may need to include other action strings. A request r may include, for example, a request to access or copy a data object, or to play a data object that contains, for example, audio content.
[0041] For example, if POLICY requires that r be approved by credential issuers s1 and s2, the credentials (ƒ1, s1) and (ƒ2, s2) may want a way to say that they approve r “conditionally,” where the condition is that the other credential also approve it. A convenient way to formalize this is to use strings R, R1 and R2 over some finite alphabet Σ. The string R corresponds to the requested action r. The strings R1 and R2 encode conditional versions of R that might be approved by s1 and s2 as intermediate results of the compliance-checking procedure.
[0042] More generally, for each request r and each assertion (ƒi, si), there is a set {Rij} of “action strings” that might arise in a compliance check. By convention, there is a distinguished string R that corresponds to the input request r. The range of assertion (ƒi, si) is made up of “acceptance records” of the form (i, si, Rij), the meaning of which is that, based on the information at its disposal, assertion number i, issued by source si, approves action Rij. A set of acceptance records is referred to as an “acceptance set.” It is by maintaining acceptance sets and making them available to assertions that the compliance checker manages “inter-assertion communication,” giving assertions the chance to make decisions based on conditional decisions by other assertions. The compliance checker starts with an “initial acceptance set” {(Λ, Λ, R)}, in which the one acceptance record means that the action string for which approval is sought is R and that no assertions have yet signed off on it or anything else. The checker runs the assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1) that it has received as input, not necessarily in that order and not necessarily once each, to determine which acceptance records are produced. Ultimately, the compliance checker approves the request r if the acceptance record (0, POLICY, R), which means “policy approves the initial action string,” is produced. Note that the use of the string “POLICY” herein is by way of example only, and any other information may of course be used instead.
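The data model just described can be sketched as follows, assuming acceptance records are encoded as plain triples and assertions as functions from acceptance sets to acceptance sets. The Python names are hypothetical, and the sketch is illustrative rather than the PolicyMaker implementation itself.

# An acceptance record is a triple (assertion index, source ID, action
# string); the distinguished initial record uses a null marker for the
# first two fields.

LAMBDA = ""          # stands in for the distinguished null string
POLICY = "POLICY"    # the distinguished source keyword

def initial_acceptance_set(R):
    """The checker starts from {(LAMBDA, LAMBDA, R)}: the requested
    action string is R and nothing has been approved yet."""
    return {(LAMBDA, LAMBDA, R)}

# An assertion is any callable from an acceptance set to an acceptance
# set. This one plays the role of (f1, s1) in the conditional-approval
# example above: it always offers the conditional string R1, and it
# approves R itself once s2 is seen to have approved R.
def assertion_f1(acceptance_set):
    out = {(1, "s1", "R1")}
    if any(src == "s2" and act == "R" for _, src, act in acceptance_set):
        out.add((1, "s1", "R"))
    return out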
[0043] Thus, abstractly, an assertion is a mapping from acceptance sets to acceptance sets. Assertion (ƒi, si) looks at an acceptance set A encoding the actions that have been approved so far, and the numbers and sources of the assertions that approved them. Based on this information about what the sources it trusts have approved, (ƒi, si) outputs another acceptance set A′.
[0044] The most general version of the compliance-checking problem, or “proof of compliance,” is: given as input a request r and a set of assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1), is there a finite sequence i1, i2, . . . , it of indices such that each ij is in {0, 1, . . . , n−1}, but the ij's are not necessarily distinct and not necessarily exhaustive of {0, 1, . . . , n−1}, and such that:
(0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}),
[0045] where R is the action string that corresponds to the request r?
[0046] This general version of the problem is mathematically undecidable. A compliance checker cannot even decide whether an arbitrary assertion (ƒi, si) halts when given an arbitrary acceptance set as input, much less whether some sequence containing (ƒi, si) produces the desired output. Therefore, various special cases of POC will now be described, including one that is both useful and computationally tractable.
[0047] The statement “{(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} contains a proof that r complies with POLICY,” means that (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}) is a “yes-instance” of this unconstrained, most general form of POC. If F is a, possibly proper, subset of {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} that contains all of the assertions that actually appear in the sequence (ƒit, sit) ∘ . . . ∘ (ƒi1, si1), then “F contains a proof that r complies with POLICY.”
[0048] In order to obtain a useful restricted version of POC, various pieces of information may be added to the problem instances. Specifically, the instance (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}) may be augmented in one or more of the following ways.
[0049] Global Run Time Bound
[0050] An instance may contain an integer d such that a sequence of assertions (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof that r complies with POLICY if the total amount of time that the compliance checker needs to compute (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}) is O(Nd). Here N is the length of the original problem instance, i.e., the number of bits needed to encode r, (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1), and d in some standard fashion.
[0051] Local Run Time Bound
[0052] An instance may contain an integer c such that (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof that r complies with POLICY if each (ƒij, sij) runs in time O(Nc). Here N is the length of the actual acceptance set that is input to (ƒij, sij) when it is run by the compliance checker. Note that the length of the input fed to an individual assertion (ƒij, sij) in the course of checking a proof may be considerably bigger than the length of the original problem instance (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c), because the running of assertions (ƒi1, si1), . . . , (ƒij-1, sij-1) may have caused the creation of many new acceptance records.
[0053] Bounded Number of Assertions in a Proof
[0054] An instance may contain an integer l such that (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof if t ≦ l.
[0055] Bounded Output Set
[0056] An instance may contain integers m and S such that an assertion (ƒi, si) can be part of a valid proof that r complies with POLICY if there is a set Oi = {Ri1, . . . , Rim} of m action strings, such that (ƒi, si)(A) ⊆ Oi for any input set A, and the maximum size of an acceptance record (i, si, Rij) is S. Intuitively, for any user-supplied request r, the meaningful “domain of discourse” for assertion (ƒi, si) is of size at most m; there are at most m actions that it would make sense for (ƒi, si) to sign off on, no matter what the other assertions in the instance say about r.
[0057] Monotonicity
[0058] Other variants of POC are obtained by restricting attention to instances in which the assertions have the following property: (ƒi, si) is “monotonic” if, for all acceptance sets A and B, A ⊆ B → (ƒi, si)(A) ⊆ (ƒi, si)(B). Thus, if (ƒi, si) approves action Rij when given a certain set of “evidence” that Rij is ok, it will also approve Rij when given a super-set of that evidence; it does not have a notion of “negative evidence.”
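The definition can be exercised by brute force over a small pool of records, as in the hypothetical sketch below. This merely illustrates the property on sampled acceptance sets; it is not a general verification procedure, which, as discussed later, is not possible for arbitrary programs.

from itertools import combinations

def is_monotonic_on(assertion, records):
    """Check that A ⊆ B implies assertion(A) ⊆ assertion(B) for every
    pair of subsets of a small record pool. Exponential in the pool
    size: a demonstration of the definition, not a verifier."""
    pool = list(records)
    subsets = [frozenset(c)
               for r in range(len(pool) + 1)
               for c in combinations(pool, r)]
    return all(assertion(set(A)) <= assertion(set(B))
               for A in subsets for B in subsets if A <= B)

For instance, applying is_monotonic_on to the assertion_f1 sketch above with the pool [(2, "s2", "R")] returns True, since adding the s2 record only adds approvals.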
[0059] Any of the parameters l, m, and S that are present in a particular instance may be written in unary so that they play an analogous role to n, the number of assertions, in the calculation of the total size of the instance. The parameters d and c are exponents in a run time bound and hence may be written in binary.
[0060] Any subset of the parameters d, c, l, m, and S may be present in a POC instance, and each subset defines a POC variant. Including a global run time bound d makes the POC problem decidable, as does including parameters c and l.
[0061] In stating and proving results about the complexity of POC, the notion of a “promise problem,” as discussed in S. Even, A. Selman and Y. Yacobi, “The Complexity of Promise Problems with Applications to Public-Key Cryptography,” Information and Control 61, pp. 159-174 (1984), may be used. In a standard decision problem, a language L is defined by a predicate R in that x ∈ L ⇔ R(x). In a promise problem, there are two predicates, the promise Q and the property R. A machine M solves the promise problem (Q, R) if, for all inputs for which the promise holds, the machine M halts and accepts x if and only if the property holds. Formally, ∀x[Q(x) → [M halts on x and M(x) accepts ⇔ R(x)]]. Note that M's behavior is unconstrained on inputs that do not satisfy the promise, and each set of choices for the behavior of M on these inputs determines a different solution. Thus, predicates Q and R define a family of languages, namely all L such that L = L(M) for some M that solves (Q, R).
[0062] The class NPP consists of all promise problems with at least one solution in NP. A promise problem is NP-hard if it has at least one solution and all of its solutions are NP-hard. To prove that a promise problem (Q, R) is NP-hard, it suffices to start with an NP-hard language L and construct a reduction whose target instances all satisfy the promise Q and satisfy the property R if and only if they are images of strings in L.
[0063] The following are POC variants that can be shown to be NP-hard, which is generally interpreted to mean that they are computationally intractable in the worst case.
[0064] Locally Bounded Proof of Compliance (LBPOC)
[0065] In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers c, l, m, and S. The “promise” is that each (ƒi, si) runs in time O(Nc) and that, on any input set that contains (Λ, Λ, R), where R is the action string corresponding to request r, each (ƒi, si) only produces output from a set Oi of at most m action strings; S is the maximum size of an acceptance record (i, si, Rij), where Rij ∈ Oi. Finally, the “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that:
[0066] 1. Each ij is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1};
[0067] 2. t ≦ l; and
[0068] 3. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)})?
[0069] Globally Bounded Proof of Compliance (GBPOC)
[0070] In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and an integer d. The “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that:
[0071] 1. Each ij is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1};
[0072] 2. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}), where R is the action string corresponding to request r; and
[0073] 3. The computation of (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}) runs in total time O(Nd)?
[0074] Monotonic Proof of Compliance (MPOC)
[0075] In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers l and c. The “promise” is that each assertion (ƒi, si) is monotonic and runs in time O(Nc). The “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that:
[0076] 1. Each ij is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1};
[0077] 2. t ≦ l; and
[0078] 3. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}), where R is the action string corresponding to request r?
[0079] Each version of POC may be defined using “agglomeration” (ƒ2, s2) ★ (ƒ1, s1) instead of composition (ƒ2, s2) ∘ (ƒ1, s1). The result of applying the sequence of assertions (ƒi1, si1), . . . , (ƒit, sit) agglomeratively to an acceptance set S0 is defined inductively as follows: S1 ≡ (ƒi1, si1)(S0) ∪ S0 and, for 2 ≦ j ≦ t, Sj ≡ (ƒij, sij)(Sj-1) ∪ Sj-1. Thus, for any acceptance set A, A ⊆ (ƒit, sit) ★ . . . ★ (ƒi1, si1)(A). The agglomerative versions of the decision problems are identical to the versions already given, except that the acceptance condition is “(0, POLICY, R) ∈ (ƒit, sit) ★ . . . ★ (ƒi1, si1)({(Λ, Λ, R)})?” As used herein, “agglomerative POC,” “agglomerative MPOC,” etc., refer to the version defined in terms of ★ instead of ∘.
[0080] A trust-management system that defines “proof of compliance” in terms of agglomeration can make it impossible for an assertion to “undo” an approval that it (or any other assertion) has already given to an action string during the course of constructing a proof. This definition of proof may make sense if the trust-management system should guard against a rogue credential-issuer's ability to thwart legitimate proofs. Note that the question of whether the compliance checker combines assertions using agglomeration or composition is separate from the question of whether the assertions themselves are monotonic.
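The contrast between ∘ and ★ can be made concrete in the same functional model, as in the hypothetical sketch below: with composition each assertion sees only its predecessor's output, while with agglomeration the acceptance set only ever grows, so no assertion can undo an earlier approval.

from functools import reduce

def compose(assertions, S0):
    """Plain composition: each assertion sees only the previous output,
    so an assertion can (intentionally or not) drop earlier records."""
    return reduce(lambda S, f: f(S), assertions, S0)

def agglomerate(assertions, S0):
    """Agglomeration (the ★ operator): Sj = f(Sj-1) ∪ Sj-1, so the
    acceptance set grows monotonically and S0 is contained in the
    result."""
    S = set(S0)
    for f in assertions:    # assertions listed in application order
        S = f(S) | S
    return S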
[0081] A compliance-checking algorithm according to a preferred embodiment of the present invention will now be described. A specific case of a POC problem associated with this embodiment will be explained. The promise that defines this special case includes some conditions that have already been discussed, namely monotonicity and bounds on the run time of assertions and on the total size of acceptance sets that assertions can produce. According to one embodiment of the present invention, however, another condition is considered, called “authenticity,” which could be ignored when proving hardness results. An authentic assertion (ƒi, si) produces acceptance records of the form (i, si, Rij). That is, it does not “impersonate” another assertion by producing an acceptance record of the form (i′, si′, Ri′j), for i′ not equal to i, or si′ not equal to si.
[0082] An embodiment of the present invention constructs proofs in an agglomerative fashion, and hence ★ is used in the following problem statement. Note that a variant of POC could be defined using ∘ as well.
[0083] Locally Bounded, Monotonic, and Authentic Proof of Compliance (LBMAPOC)
[0084] According to this embodiment of the present invention, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers c, m, and S. The “promise” is that each (ƒi, si) is monotonic, authentic, and runs in time O(Nc). On any input set that contains (Λ, Λ, R), where R is the action string corresponding to request r, for each (ƒi, si) there is a set Oi of at most m action strings such that (ƒi, si) produces output only from Oi. Moreover, S is the maximum size of an acceptance record (i, si, Rij), such that Rij ∈ Oi. Finally, the “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that each ij is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1}, and (0, POLICY, R) ∈ (ƒit, sit) ★ . . . ★ (ƒi1, si1)({(Λ, Λ, R)})?
[0085] Referring now in detail to the drawings wherein like parts are designated by like reference numerals throughout, there is illustrated in FIG. 1 a flow diagram of a method of compliance checking for a trust-management system according to an embodiment of the present invention. The flow chart in FIG. 1 is not meant to imply a fixed order to the steps; embodiments of the present invention can be practiced in any order that is practicable. At step 110, a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. In addition, an acceptance record set S is initialized to {(Λ, Λ, R)} at step 110, where Λ represents a distinguished “null string” and R represents the initial request, r.
[0086] At step 120, j is initialized to 1. At step 130 each assertion (ƒi, si), for integers i from 0 to n−1, is run and the result is added to the acceptance record set S. If j does not equal mn at step 140, where m is a number greater than 1, j is increased by 1 at step 150 and step 130 is repeated.
[0087] If j does equal mn at step 140, it is determined if acceptance set S contains an acceptance record, such as (0, POLICY, R), at step 160. If not, a rejection is output at step 170. If acceptance set S does contain the acceptance record, an acceptance is output at step 180.
[0088] The following pseudo-code demonstrates the algorithm according to one embodiment of the present invention, referred to herein as the “Compliance-Checking Algorithm version 1” (CCA1):
[0089] CCA1(r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, m):
[0090] {
[0091] S←{(Λ, Λ, R)}
[0092] I←{}
[0093] For j←1 to mn
[0094] {
[0095] For i←n−1 to 0
[0096] {
[0097] If (ƒi, si) ∉ I, Then S′←(ƒi, si)(S)
[0098] If IllFormed((ƒi, si)), Then I←I∪{(ƒi, si)}, Else S←S∪S′
[0099] }
[0100] }
[0101] If (0, POLICY, R) ∈ S, Then Output(Accept), Else Output(Reject)
[0102] }
[0103] Note that an assertion (ƒi, si) is “ill-formed” if it violates the promise. If CCA1 discovers that (ƒi, si) is ill-formed, the assertion is ignored for the remainder of the computation. An assertion (ƒi, si) may be undetectably ill-formed. For example, there may be sets A ⊆ B such that (ƒi, si)(A) ⊈ (ƒi, si)(B), but such that A and B do not arise in this run of the compliance checker. The CCA1 algorithm may check for violations of the promise every time it simulates an assertion. Detailed pseudo-code for these checks is not included in CCA1, because it would not illustrate the basic structure of the algorithm. Instead, the predicate IllFormed() indicates that the checks may be done for each simulation.
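For concreteness, the following is one possible transcription of CCA1 into Python under the same hypothetical record encoding used earlier. The pseudo-code above leaves the promise checks unspecified, so the ill_formed stub below spot-checks authenticity only; all names are illustrative.

LAMBDA = ""   # stands in for the distinguished null string

def ill_formed(i, s, produced):
    """Stub for the promise checks. Only authenticity is spot-checked
    here: every record produced by assertion number i must have the
    form (i, s, ...). Monotonicity and local-boundedness checks would
    be application-specific additions."""
    return any(rec[0] != i or rec[1] != s for rec in produced)

def cca1(R, assertions, m):
    """CCA1 transcribed from the pseudo-code above. `assertions` is
    [(f0, "POLICY"), (f1, s1), ..., (fn-1, sn-1)]; each fi maps an
    acceptance set to an acceptance set. Returns True on acceptance."""
    n = len(assertions)
    S = {(LAMBDA, LAMBDA, R)}           # S <- {(Λ, Λ, R)}
    I = set()                           # assertions found ill-formed
    for _ in range(m * n):              # mn iterations
        for i in range(n - 1, -1, -1):  # i from n-1 down to 0
            if i in I:
                continue                # discarded assertions are ignored
            f, s = assertions[i]
            S_new = f(S)
            if ill_formed(i, s, S_new):
                I.add(i)                # ignore for the rest of the run
            else:
                S = S | S_new           # agglomerate: never delete records
    return (0, "POLICY", R) in S

# Example: POLICY approves R once s1 has approved it.
f1 = lambda A: {(1, "s1", "R")}
f0 = lambda A: {(0, "POLICY", "R")} if (1, "s1", "R") in A else set()
print(cca1("R", [(f0, "POLICY"), (f1, "s1")], m=2))   # True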
[0104] Like the non-deterministic algorithms discussed above, CCA1 accepts if and only if the acceptance record (0, POLICY, R) is produced when it simulates the input assertions. Unlike the previous algorithms, however, it cannot non-deterministically guess an order in which to do the simulation. Instead, it uses an arbitrary order. CCA1 also ensures that, if a proper subset F of the input assertions contains a proof that r complies with POLICY and every (ƒi, si) ∈ F satisfies the promise, then the remaining assertions do not destroy all or part of the acceptance records produced by F during the simulation (and thereby destroy the proof), even if these remaining assertions do not satisfy the promise. CCA1 achieves this by maintaining one set of approved acceptance records, from which no records are ever deleted, i.e., by agglomerating, and by discarding assertions that it discovers are ill-formed.
[0105] Note that CCA1 does mn iterations of the sequence (ƒn-1, sn-1), . . . , (ƒ1, s1), (ƒ0, POLICY), for a total of mn2 assertion-simulations. Recall that a set F = {(ƒj1, sj1), . . . , (ƒjt, sjt)} ⊆ {(ƒ0, POLICY), . . . , (ƒn-1, sn-1)} “contains a proof that r complies with POLICY” if there is some sequence k1, . . . , ku of the indices j1, . . . , jt, not necessarily distinct and not necessarily exhaustive of j1, . . . , jt, such that (0, POLICY, R) ∈ (ƒku, sku) ★ . . . ★ (ƒk1, sk1)({(Λ, Λ, R)}).
[0106] Let (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, S) be an agglomerative LBMAPOC instance. As a result:
[0107] 1. Suppose that F ⊆ {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} contains a proof that r complies with POLICY and that every (ƒi, si) ∈ F satisfies the promise of LBMAPOC. Then CCA1 accepts (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, S).
[0108] 2. If {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} does not contain a proof that r complies with POLICY, then CCA1 rejects (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, S).
[0109] 3. CCA1 runs in time O(mn2(nmS)c).
[0110] The only non-trivial claim above is (1). Let F = {(ƒj1, sj1), . . . , (ƒjt, sjt)} be a set that satisfies the hypothesis of (1). Each assertion in F is monotonic, and, as CCA1 runs assertions agglomeratively, it never deletes acceptance records that have already been produced but rather just adds new ones. Therefore, it may be assumed without loss of generality that F contains all of the well-formed assertions in {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}.
[0111] Let k1, . . . , ku be a sequence of indices, each in {j1, . . . , jt}, but not necessarily distinct and not necessarily exhaustive of {j1, . . . , jt}, such that (0, POLICY, R) ∈ (ƒku, sku) ★ . . . ★ (ƒk1, sk1)({(Λ, Λ, R)}). Assume without loss of generality that no sequence of length less than u has this property. Let A1, . . . , Au be the acceptance sets produced by applying (ƒk1, sk1), . . . , (ƒku, sku). Because k1, . . . , ku is a shortest sequence that proves compliance using assertions in F, each set Ap must contain at least one action string that is not present in any of A1, . . . , Ap-1. Thus, u iterations of (ƒ0, POLICY) ★ (ƒ1, s1) ★ . . . ★ (ƒn-1, sn-1) would suffice for CCA1. At some point in the first iteration (ƒk1, sk1) would be run, and because CCA1 adds but never deletes acceptance records, A1 or some super-set of A1 would be produced. At some point during the second iteration, (ƒk2, sk2) would be run, and because A1 would be contained in its input, A2 or some super-set of A2 would be produced.
[0112] Each (ƒjh, sjh) ∈ F satisfies the local boundedness promise, producing at most m distinct action strings in any computation that begins with {(Λ, Λ, R)}, regardless of the behavior of other (even ill-formed) assertions. Because |F| = t ≦ n, at most mn distinct action strings can be produced by assertions in F, and at most mn sets Ap can be produced if each is to contain a record that is not contained in any earlier set. Thus, u ≦ mn, and mn iterations of (ƒ0, POLICY) ★ (ƒ1, s1) ★ . . . ★ (ƒn-1, sn-1) suffice.
[0113] Note that cases (1) and (2) do not cover all possible inputs to CCA1. There may be a subset F of the input assertions that does contain a proof that r complies with POLICY but that contains one or more ill-formed assertions. If CCA1 does not detect that any of these assertions is ill-formed, because their ill-formedness is exhibited on acceptance sets that do not occur in this computation, then CCA1 will accept the input. If it does detect ill-formedness, then, as specified here, CCA1 may or may not accept the input, perhaps depending on whether the record (0, POLICY, R) has already been produced at the time of detection. According to another embodiment of the present invention, CCA1 is modified to restart whenever ill-formedness is detected, after discarding the ill-formed assertion so that it is not used in the new computation. The point is simply that CCA1 should not be given a policy that trusts, directly or indirectly, a source of ill-formed assertions. Therefore, the policy author should know which sources to trust, and modify the policy if a trusted source issues ill-formed assertions.
[0114] FIG. 2 is a block diagram of a compliance checker for a trust-management system according to an embodiment of the present invention. An application 210 running on a user device 200 sends a request r to a trust management platform input port 410 through a communication network 300 such as, for example: a Local Area Network (LAN), the Public Switched Telephone Network (PSTN), an intranet, an extranet or the Internet. A compliance-checking unit 450 coupled to the input port 410 receives the request along with a policy assertion (ƒ0, POLICY) associated with the request and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion including a credential function ƒi and a credential source si. Note that the input port 410 may be a single physical input port, or several different input ports that may in turn be coupled to different networks or other devices. That is, the request, policy and credentials may not come from the same source or through the same channel.
[0115] The input port 410 is coupled to a compliance-checking unit 450, which may comprise, for example, the following (not shown in FIG. 2): a processing module with a Central Processing Unit (CPU); “memories” comprising a Random Access Memory (RAM) and a Read Only Memory (ROM); and a storage device. The memories and the storage device may store instructions adapted to be executed by the CPU to perform at least one embodiment of the method of the present invention. For the purposes of this application, the memories and storage device could include any medium capable of storing information and instructions adapted to be executed by a processor. Some examples of such media include, but are not limited to, floppy disks, CD-ROM, magnetic tape, hard drives, and any other device that can store digital information. In one embodiment, instructions are stored on the medium in a compressed and/or encrypted format. As used herein, the phrase “adapted to be executed by a processor” is meant to encompass instructions stored in a compressed and/or encrypted format, as well as instructions that have to be compiled or installed by an installer before being executed by the processor.
[0116] The compliance-checking unit 450 initializes an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r. The compliance-checking unit 450 runs assertion (ƒi, si) for integers i from 0 to n−1 and adds the result of each assertion (ƒi, si) to the acceptance record set S. This process is repeated mn times, where m represents a number greater than 1. The compliance-checking unit 450 may output an “acceptance,” such as through port 410, or some other communication port, if any of the results in the acceptance record set S comprise an acceptance record (0, POLICY, R). The compliance-checking unit 450 may instead, according to another embodiment of the present invention, perform the action R itself.
[0117] Thus, according to one embodiment of the present invention, the PolicyMaker system uses a notion of “proof that a request complies with a policy” that is amenable to definition and analysis. The choice of this notion of proof, however, is a subjective one and other notions of proof may also be used.
[0118] In deciding how a set of executable assertions can cooperate to produce a proof, a mechanism for “inter-assertion communication” of intermediate results may be used. For simplicity, assertions may communicate by outputting acceptance records that are input to other assertions. More sophisticated interactions, such as allowing assertions to call each other as subroutines, might be useful but may require a more complex execution environment. A trade-off might therefore exist between the cost of building and analyzing such an execution environment and the potential power to be gained by using more sophisticated interactions to construct proofs of compliance.
[0119] The choice of a simple communication mechanism implies that a part of constructing a proof of compliance is choosing an order in which to execute assertions. According to an embodiment of the present invention, the responsibility of choosing this order rests with the compliance checker and not, for example, the calling application. Although the compliance checker's job could be made easier by requiring the calling application to give it the correct order as an input, such a requirement may not be consistent with the system's overall goals. For example, applications may need to use credentials issued by diverse and far-flung sources without having to make assumptions about the order in which these credentials communicate via acceptance records. In an extreme case, the issuing sources may not be aware of each other's existence, and no such assumptions by the calling application would be valid. Although the most general version of the POC problem allows assertions to be arbitrary functions, the computationally tractable version may only be correct when all assertions are monotonic.
[0120] In particular, according to one embodiment of the present invention, monotonic policy assertions may produce a correct result, and this excludes certain types of policies that are used in practice, including those that use “negative credentials” such as revocation lists. Despite this restriction, the monotonicity requirement has certain advantages. Although the compliance checker may not handle all potentially desirable policies, it is at least analyzable and provably correct on a well-defined class of policies. Furthermore, the requirements of many non-monotonic policies can often be achieved by monotonic policies. For example, instead of requiring that an entity not appear on a revocation list, the system may require a “certificate of non-revocation.” The choice between these two approaches involves trade-offs among the (system-wide) costs of the two kinds of credentials and the benefits of a standard compliance checker with provable properties. Moreover, restriction to monotonic assertions encourages a conservative, prudent approach to security. In order to perform a potentially dangerous action, a user must present an adequate set of affirmative credentials. Potentially dangerous actions are not allowed “by default,” simply because of the absence of negative credentials.
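As a hypothetical illustration of this recasting, the policy sketched below never tests for the absence of a revocation record; instead it demands an affirmative non-revocation record, so enlarging the acceptance set can only enlarge the set of approvals, and the assertion stays monotonic.

def monotonic_policy(acceptance_set):
    """(f0, POLICY), hypothetical: approve R only when a certifying
    authority has approved it AND a revocation authority has
    affirmatively certified non-revocation. No absence test appears,
    so adding records can only add approvals."""
    ca_ok = any(src == "ca" and act == "R"
                for _, src, act in acceptance_set)
    not_revoked = any(src == "revocation-authority"
                      and act == "R-not-revoked"
                      for _, src, act in acceptance_set)
    return {(0, "POLICY", "R")} if ca_ok and not_revoked else set()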
[0121] According to an embodiment of the present invention, the POC problem has been formulated in a way that allows assertions to be as expressive as possible. As a result, well-formedness promises such as monotonicity and boundedness, while formal and precise, may not be verified. Each assertion that conditionally trusts an assertion source for application-specific expertise (such as suitability for a loan) must also trust that source to write bounded and monotonic assertions and to trust other similar sources of assertions. The resulting notion of soundness is that if there is no proof from a set of trusted, well-formed assertions, then CCA1 will not accept the input.
[0122] Full expressiveness, however, is just one goal of a trust-management system. Another goal is the clear separation of the trust relationships of assertions from programming details. To some extent, these goals are at odds—the compliance checker may not perform verifications on fully general programs, and thus an assertion writer may need to worry about some programming details.
[0123] Note that monotonic assertions may actually be written as, for example, AND-OR circuits and bounded assertions may actually “declare” the finite set from which they will produce output. A compliance-checking algorithm could then easily detect the ill-formed assertions and discard them. This would free assertion writers of the burden of deciding when another writer is trusted to write bounded and monotonic code, just as requiring assertions to be written in a safe (and therefore restricted) language frees the assertion writer from worrying about certain application-independent programming details. This verifiability comes at a price: listing a finite output set is relatively inexpensive, but there are monotonic functions that require exponentially bigger circuits to express over a basis of AND and OR than they require over a basis of AND, OR, and NOT. See E. Tardos, “The Gap Between Monotone and Non-monotone Circuit Complexity is Exponential,” Combinatorica 8, pp. 141-142 (1988). In some applications it may be cheaper, on average, to write assertions that are verifiably bounded and monotonic than to determine the set of sources trusted (even indirectly) by a given assertion and to judge whether they are trusted to be monotonic and bounded.
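One possible encoding of such verifiably monotonic assertions is sketched below: a circuit is a nested tuple of AND, OR, and leaf gates over acceptance records, and it is monotonic by construction because no NOT gate is available. The representation and names are hypothetical.

# A gate is ("leaf", record), ("and", g1, g2, ...) or ("or", g1, g2, ...).
def eval_circuit(gate, acceptance_set):
    kind = gate[0]
    if kind == "leaf":
        return gate[1] in acceptance_set
    if kind == "and":
        return all(eval_circuit(g, acceptance_set) for g in gate[1:])
    if kind == "or":
        return any(eval_circuit(g, acceptance_set) for g in gate[1:])
    raise ValueError("only leaf/and/or gates: no NOT, hence monotone")

# Example: approve if s1 and s2 both approved R, or if s3 approved R.
circuit = ("or",
           ("and", ("leaf", (1, "s1", "R")), ("leaf", (2, "s2", "R"))),
           ("leaf", (3, "s3", "R")))
print(eval_circuit(circuit, {(3, "s3", "R")}))   # True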
[0124] According to another embodiment of the present invention, the compliance checker makes the original code of an assertion that produced a record available to other assertions reading that acceptance record. A conservative policy then, before trusting assertions (ƒ1, s1) and (ƒ2, s2), could require and check that ƒ1 and ƒ2 be verifiably monotonic and bounded and that ƒ1 and ƒ2 each include specific standard code to check all assertions whose acceptance records (ƒ1, s1) and (ƒ2, s2) wish to trust. A complex monotonic assertion that needs to be written compactly using NOT gates can, if desired, still be used with the modified compliance algorithm.
[0125] Although various embodiments are specifically illustrated and described herein, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention. For example, although specific pseudo-code was used to describe one embodiment of the present invention, it will be understood that other compliance-checking algorithms will also fall within the scope of the invention.
Claims
1. A method of compliance checking in a trust-management system, comprising:
- a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1),..., (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
- b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents an empty portion of the acceptance record set S, and R represents the request r;
- c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
- d) repeating step (c) mn times, where m represents a number greater than 1; and
- e) determining if the acceptance record set S includes (0, POLICY, R).
2. The method of
- claim 1, further comprising:
- f) determining whether an assertion (ƒi, si) is ill-formed;
- wherein step (c) is only performed for assertions (ƒi, si) that are not ill-formed.
3. The method of
- claim 2, further comprising:
- g) initializing a set I to an empty set; and
- h) adding any ill-formed assertions (ƒi, si) to set I.
4. The method of
- claim 1, wherein a request r is a request to access a data object.
5. The method of
- claim 1, wherein a request r is a request to make a copy of a data object.
6. The method of
- claim 1, wherein a request r is a request to play a data object that includes audio content.
7. The method of
- claim 1, wherein a credential function includes a subject, an action, and an object.
8. The method of
- claim 1, wherein the request r is a string encoding an action for which a calling application seeks a proof of compliance.
9. The method of
- claim 1, wherein R represents an action string corresponding with the request r.
10. The method of
- claim 9, wherein the action string R includes a subject, an action and an object.
11. The method of
- claim 1, wherein a credential assertion includes one of a public key, a uniform resource locator and a name.
12. The method of
- claim 1, wherein credential function ƒi is correlated with a credential source si by cryptographically signing the credential function ƒi with a private cryptographic key belonging to credential source si.
13. The method of
- claim 1, wherein each assertion is monotonic, authentic, and locally bounded.
14. A method of compliance checking in a trust-management system, comprising:
- a) receiving a request;
- b) receiving a policy associated with the request;
- c) receiving a number of credentials, the policies and credentials comprising a number of monotonic, authentic, and locally bounded assertions; and
- d) deciding whether the credentials prove that the request complies with the policy.
15. The method of
- claim 14, wherein a monotonic assertion approves an action when provided with a set of evidence if the assertion would approve the action when provided with a subset of that evidence.
16. The method of
- claim 14, wherein an authentic assertion produces acceptance records that do not impersonate another assertion.
17. The method of
- claim 14, wherein a locally bounded assertion is bounded in terms of a maximum runtime and a maximum size of acceptance sets that can be produced.
18. The method of
- claim 14, wherein the policy comprises a function ƒ0 encoded in a programming system.
19. A method of compliance checking in a trust-management system, comprising:
- receiving (i) a request r to perform an action R and (ii) assertions (ƒ0, POLICY), (ƒ1, s1),..., (ƒn-1, sn-1);
- executing, mn times, assertion (ƒi, si) for each integer i from n−1 to 0, the execution being performed using any information generated by previously executed assertions, m representing a number greater than 1; and
- determining if (0, POLICY, R) has been generated.
20. An apparatus for compliance checking in a trust-management system, comprising:
- a processor; and
- a memory storing instructions adapted to be executed by said processor to receive a request R to perform an action and assertions (ƒ0, POLICY), (ƒ1, s1),..., (ƒn-1, sn-1), initialize an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string, iteratively run, mn times, assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and add the result of each assertion (ƒi, si) to the acceptance record set S, where m represents a number greater than 1, and determine if the acceptance record set S includes (0, POLICY, R).
21. A trust management platform, comprising:
- an input port configured to receive a request, a policy assertion (ƒ0, POLICY), and credential assertions (ƒ1, s1),..., (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si; and
- a compliance checking unit coupled to said input port and configured to:
- a) initialize an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents information corresponding with the request,
- b) run assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and add the result of each assertion (ƒi, si) to the acceptance record set S,
- c) repeat step (b) mn times, where m represents a number greater than 1, and
- d) determine if acceptance record set S includes an acceptance record (0, POLICY, R).
22. A trust-management system, comprising:
- means for receiving a request to perform an action r and a set of assertions (ƒ0, POLICY), (ƒ1, s1),..., (ƒn-1, sn-1); and
- means for proving that the request r is consistent with the set of assertions.
23. A medium storing instructions adapted to be executed by a processor to perform steps including:
- a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1),..., (ƒn-1, sn-1) each credential assertion comprising a credential function ƒi and a credential source si;
- b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r;
- c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
- d) repeating step (c) mn times, where m represents a number greater than 1; and
- e) determining whether the acceptance record set S includes (0, POLICY, R).
24. A method of compliance checking in a trust-management system, comprising:
- a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1),..., (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
- b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r;
- c) for each integer i from n−1 to 0:
- running assertion (ƒi, si) against the acceptance set S and adding the result to the acceptance record set S,
- determining if the acceptance record set includes (0, POLICY, R), and
- if the acceptance record set includes (0, POLICY, R), then stopping said method; and
- d) repeating step (c) mn times, where m represents a number greater than 1.
25. A method of compliance checking in a trust-management system, comprising:
- a) receiving credential assertions (ƒ1, s1),..., (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
- b) initializing an acceptance record set S to {(Λ, R)}, where Λ represents an empty portion of the acceptance record set S, and R represents a request;
- c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
- d) repeating step (c) mn times, where m represents a number greater than 1.
Type: Application
Filed: Feb 9, 2001
Publication Date: Aug 30, 2001
Inventors: Matthew A. Blaze (New York, NY), Joan Feigenbaum (New York, NY), Martin J. Strauss (Summit, NJ)
Application Number: 09780892
International Classification: G06F017/60;