VERONICA: Expressive and Precise Concurrent Information Flow Security (Extended Version with Technical Appendices)
Daniel Schoepe*, Toby Murray† and Andrei Sabelfeld*
*Chalmers University of Technology  †University of Melbourne and Data61

arXiv:2001.11142v1 [cs.LO] 30 Jan 2020

Abstract—Methods for proving that concurrent software does not leak its secrets have remained an active topic of research for at least the past four decades. Despite an impressive array of work, the present situation remains highly unsatisfactory. With contemporary compositional proof methods one is forced to choose between expressiveness (the ability to reason about a wide variety of security policies), on the one hand, and precision (the ability to reason about complex thread interactions and program behaviours), on the other. Achieving both is essential and, we argue, requires a new style of compositional reasoning. We present VERONICA, the first program logic for proving concurrent programs information flow secure that supports compositional, high-precision reasoning about a wide range of security policies and program behaviours (e.g. expressive declassification, value-dependent classification, secret-dependent branching). Just as importantly, VERONICA embodies a new approach for engineering such logics that can be re-used elsewhere, called decoupled functional correctness (DFC). DFC leads to a simple and clean logic, even while achieving this unprecedented combination of features. We demonstrate the virtues and versatility of VERONICA by verifying a range of example programs, beyond the reach of prior methods.

1. Introduction

Software guards our most precious secrets. More often than not, software systems are built as a collection of concurrently executing threads that cooperate to process data. In doing so, these threads collectively implement security policies in which the sensitivity of the data being processed is often data-dependent [1]–[10], and the rules about to whom it can be disclosed and under what conditions can be non-trivial [11]–[16]. The presence of concurrency greatly complicates reasoning, since a thread that behaves securely when run in isolation can be woefully insecure in the presence of interference from others [10], [17]–[19] or due to scheduling [20], [21].

For these reasons, being able to formally prove that concurrent software does not leak its secrets (to the wrong places at the wrong times) has been an active and open topic of research for at least the past four decades [22], [23]. Despite an impressive array of work over that time, the present situation remains highly unsatisfactory. With contemporary proof methods one is forced to choose between expressiveness (e.g. [24]–[27]), on the one hand, and precision (e.g. [10], [19], [28]–[33]), on the other.

By expressiveness, we mean the ability to reason about the enforcement of a wide variety of security policies and classes thereof, such as state-dependent secure declassification and data-dependent sensitivity. It is well established that, beyond simple noninterference [34] ("secret data should never be revealed in public outputs"), there is no universal solution to specifying information flow policies [13], and that different applications might have different interpretations of what adherence to a particular policy means.

By precision, we mean the ability to reason about complex thread interactions and program behaviours. This includes not just program behaviours like secret-dependent branching that are beyond the scope of many existing proof methods (e.g. [10], [19], [28]). Moreover, precision is aided by reasoning about each thread under local assumptions that it makes about the behaviour of the others [30], [35]. For instance [10], suppose thread B receives data from thread A, by acquiring a lock on a shared buffer and then checking the buffer contents. Thread B relies on thread A having appropriately labelled the buffer to indicate the (data-dependent) sensitivity of the data it contains and, while thread B holds the lock, it relies on all other threads to avoid modifying the buffer (to preserve the correctness of the sensitivity label). Precise reasoning here should take account of these kinds of assumptions when reasoning about thread B and, correspondingly, should prove that they are adhered to when reasoning about thread A.

Besides expressiveness and precision, another useful property for a proof method to have is compositionality. We say that a proof method is compositional [36]–[39] when it can be used to establish the security of the entire concurrent program by using it to prove each thread secure separately.

So far it has remained an open problem how to design a proof method (e.g. a security type system [40] or program logic [41]) that is (a) compositional, (b) supports proving a general enough definition of security to encode a variety of security policies, and (c) supports precise reasoning. We argue that achieving all three together requires a new style of program logic for information flow security.

In this paper, we present VERONICA. VERONICA is, to our knowledge, the first compositional program logic for proving concurrent programs information flow secure that supports high-precision reasoning about a wide range of security policies and program behaviours (e.g. expressive declassification, value-dependent classification, secret-dependent branching).
Just as importantly, VERONICA embodies a new approach for engineering such logics that can be re-used elsewhere. This approach we call decoupled functional correctness (DFC), which we have found leads to a simple and clean logic, even while achieving this unprecedented combination of features. Precision is supported by reasoning about a program's functional properties. However, the key insight of DFC is that this reasoning can and should be separated from reasoning about its information flow security. As we explain, DFC exploits compositional functional correctness as a common means to unify reasoning about various security concerns.

We provide an overview of VERONICA in Section 2. Section 3 then describes the general security property that it enforces, and so formally defines the threat model. Section 4 describes the programming language over which VERONICA has been developed. Section 5 then describes the VERONICA logic, whose virtues are further demonstrated in Section 6. Section 7 considers related work before Section 8 concludes. All results in this paper have been mechanised in the interactive theorem prover Isabelle/HOL [42]. Our Isabelle formalisation is available online [43].

2. An Overview of VERONICA

2.1. Decoupling Functional Correctness

Figure 1a depicts the data-flow architecture for a very simple, yet illustrative, example system. This example is inspired by a real world security-critical shared-memory concurrent program [10]. This example purposefully avoids some of VERONICA's features (e.g. secret-dependent branching and runtime state-dependent declassification policies), which we will meet later in Section 6. Verifying it requires highly precise reasoning, and the security policy it enforces involves both data-dependent sensitivity and delimited release style declassification [44], features that until now have never been reconciled.

The system comprises four threads, whose code appears in Figure 1 (simplified a little for presentation). The four threads make use of a shared buffer buf protected by a lock ℓ, which also protects the shared flag variable valid. The top-middle thread (Figure 1c) copies data into the shared buffer, from one of two input/output (IO) channels: ⊥ (a public channel whose contents is visible to the attacker) and ⊤ (a private channel, not visible to the attacker). The top-right thread (Figure 1e) reads data from the shared buffer buf and copies it to an appropriate output buffer (either ⊥buf for ⊥ data or ⊤buf for ⊤ data) for further processing by the remaining two output threads. Each of the bottom threads outputs from its respective output buffer to its respective channel; one for ⊤ data (Figure 1b) and the other for ⊥ data (Figure 1d).

The decision of the top-middle thread (Figure 1c, line 2), whether to input from the ⊥ channel or the ⊤ one, is dictated by the shared variable inmode. The valid variable (initially zero) is set to 1 by the top-middle thread once it has filled the buf variable, and is then tested by the top-right thread (Figure 1e, line 2) to ensure it doesn't consume data from buf before the top-middle thread has written to buf.

The top-right thread's decision (Figure 1e, line 3) about which output buffer it should copy the data in buf to is dictated by the outmode variable. When outmode indicates that the ⊤ buffer ⊤buf should be used, the top-right thread additionally performs a signature check (via the CK function, lines 7–8) on the data to decide if it is safe to declassify and copy additionally to the ⊥buf output buffer. This concurrent program implements a delimited release [44] style declassification policy, which states that ⊤ data that passes the signature check, plus the results of the signature check itself for all ⊤ data, are safe to declassify. The language of VERONICA includes a declassifying assignment command and a declassifying output command (hatted variants of := and !). Besides delimited release style declassification policies, we will see later in the examples of Section 6 that our security condition also supports stateful declassification policies.

Clearly, if inmode and outmode disagree, the concurrent execution of the threads might behave insecurely (e.g. the top-middle thread might place private ⊤ data into buf, which the top-right thread then copies to ⊥buf and is subsequently output on the public channel ⊥). Therefore, the security of this concurrent program rests on the shared data invariant that inmode and outmode agree (whenever lock ℓ is acquired and released). This is a functional correctness property. There are a number of other such functional properties, somewhat more implicit, on which the system's security relies, e.g. that neither thread will modify buf unless they hold the lock ℓ, and likewise for inmode and outmode, plus that only one thread can hold the lock ℓ at a time.

Similarly, the security of the declassification actions performed by the top-right thread rests on the fact that it only declassifies after successfully performing the signature check, in accordance with the delimited release policy. Thus one cannot reason about the security of this concurrent program in the absence of functional correctness. However, one of the fundamental insights of VERONICA is that functional correctness reasoning should be decoupled from security reasoning. This is in contrast to many recent logics for concurrent information flow security, notably [28], the COVERN logic of [10] and its antecedents [19], [30] as well as [31], [32], plus many prior logics for sequential programs [2], [5], [8], [9], [45], [46] and hardware designs [47].

VERONICA decouples functional correctness reasoning from security reasoning by performing the latter over programs that carry functional correctness annotations {Ai} on each program statement si. Thus program statements are of the form {Ai} si. Here, {Ai} should be thought of as akin to a Hoare logic precondition [48]. It states conditions that are known to be true whenever statement si is executed in the concurrent program. We call this resulting approach decoupled functional correctness (DFC).

The contents of each of the annotations in Figure 1c and Figure 1e have been omitted in the interests of brevity (they can be found in our Isabelle formalisation), and simply replaced by identifiers {Ai}.
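To make the co-operative protocol concrete, the following is a minimal Python sketch of the Figure 1 example (our own illustration, not VERONICA's formal language). The hypothetical CK function stands in for the signature check, and the lock is left implicit by running the two critical sections in sequence.

```python
# Illustrative sketch of the Figure 1 protocol. CK is a stand-in for the
# signature check; 0 means the signature is valid (matching line 8 of
# Figure 1e, which declassifies only when d = 0).

def CK(v):
    # Hypothetical signature check: even values "verify" in this sketch.
    return 0 if v % 2 == 0 else 1

def top_middle(state, lo_in, hi_in):
    # Figure 1c: fill buf from the channel selected by inmode, then set
    # valid ({A5} records which channel buf's contents came from).
    state["buf"] = lo_in if state["inmode"] == 0 else hi_in
    state["valid"] = 1

def top_right(state):
    # Figure 1e: route buf to the output buffer selected by outmode,
    # declassifying to lobuf only when the signature check passes.
    if state["valid"] == 1:
        if state["outmode"] == 0:
            state["lobuf"] = state["buf"]        # {A11}: buf holds lo data
        else:
            state["hibuf"] = state["buf"]        # {A12}: buf holds hi data
            d = CK(state["hibuf"])               # {A13}: declassifying assignment
            if d == 0:
                state["lobuf"] = state["hibuf"]  # {A15}: check passed

def run(inmode, outmode, lo_in, hi_in):
    state = {"inmode": inmode, "outmode": outmode,
             "buf": None, "lobuf": None, "hibuf": None, "valid": 0}
    top_middle(state, lo_in, hi_in)  # the lock serialises the two sections
    top_right(state)
    return state
```

When inmode and outmode agree on the secret channel, a secret failing the check never reaches the public buffer: run(1, 1, 5, 7)["lobuf"] stays None, whereas run(1, 1, 5, 8)["lobuf"] == 8 because CK(8) = 0 permits the declassification.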
For verifying the security of the top-right thread (Figure 1e), annotations {A11} through {A15} are most important: {A11} would imply that buf holds an input read from channel ⊥ (justifying why copying its contents to the ⊥ variable ⊥buf is secure), and {A12} would imply likewise for channel ⊤. {A13} would imply that ⊤buf holds ⊤ data and {A14} that d holds the result of the signature check. Finally, {A15} implies that the signature check passed, justifying why the declassifying assignment to ⊥buf is secure.

(a) Data flows (diagram): the ⊤ and ⊥ channels feed buf, which feeds ⊤buf and ⊥buf; dotted lines denote declassification.

(b) Outputting ⊤ data:
1 {A0} ⊤ ! ⊤buf

(c) Reading data into a shared buffer:
1 {A1} acquire(ℓ);
2 {A2} if inmode = 0
3 {A3}   buf ← ⊥
4      else
5 {A4}   buf ← ⊤
6      endif;
7 {A5} valid := 1;
8 {A6} release(ℓ)

(d) Outputting ⊥ data:
1 {A7} ⊥ ! ⊥buf

(e) Copying and declassifying data:
1  {A8}  acquire(ℓ);
2  {A9}  if valid = 1
3  {A10}   if outmode = 0
4  {A11}     ⊥buf := buf
5         else
6  {A12}     ⊤buf := buf;
7  {A13}     d :=̂ CK(⊤buf);
8  {A14}     if d = 0
9  {A15}       ⊥buf :=̂ ⊤buf
10        endif
11       endif
12     endif;
13 {A16} release(ℓ)

Figure 1: Co-operative Use of a Shared Buffer. Green {Ai} are functional correctness annotations (whose contents we omit).

The other annotations encode functional correctness information needed to justify the validity of the aforementioned annotations. For instance, annotation {A2} in Figure 1c implies that the thread holds the lock ℓ; {A3} that inmode is zero, while {A4} the opposite. Annotation {A5} on the other hand tracks information about the contents of buf, namely if inmode is zero then buf holds the last input read from channel ⊥, and it holds the last input read from channel ⊤ otherwise.¹

Thus the annotations {Ai} afford highly precise reasoning about the security of each thread, while decoupling the functional correctness reasoning.

The idea of using annotations {Ai} we repurpose from the Owicki-Gries proof technique [49] for concurrent programs. Indeed, there exist a range of standard techniques for inferring and proving the soundness of such annotations (i.e. for carrying out the functional correctness reasoning), from the past 40 years of research on concurrent program verification. VERONICA integrates multiple such techniques in the Isabelle/HOL theorem prover, each of which has been proved sound from first principles, thereby ensuring the correctness of its foundations. As we explain later, external program verifiers may also be used to verify functional correctness, giving up VERONICA's foundational guarantees in exchange for ease of verification.

Given a correctly annotated program, VERONICA then uses the information encoded in the annotations to prove expressive security policies, as we outline in the next section.

1. {A5} effectively encodes buf's (state-dependent) sensitivity, and takes the place of dependent security types and labels from prior systems.

2.2. Compositional Enforcement

How can we prove that the concurrent program of Figure 1 doesn't violate information flow security, i.e. that no ⊤ data is leaked, unless it has been declassified in accordance with the delimited release policy?

Doing so in general benefits from having a compositional reasoning method, namely one that reasons over each of the program's threads separately to deduce that the concurrent execution of those threads is secure.

Compositional methods for proving information flow properties of concurrent programs have been studied for decades [20], [21]. Initial methods required one to prove that each thread was secure ignorant of the behaviour of other threads [20], [21], [24]. Such reasoning is sound but necessarily imprecise: for instance when reasoning about the top-middle thread (Figure 1c) we wouldn't be allowed to assume that the top-right thread (Figure 1e) adheres to the locking protocol that protects buf.

Following Mantel et al. [30], more modern compositional methods have adopted ideas from rely-guarantee reasoning [35] and concurrent separation logic [50], to allow more precise reasoning about each thread under assumptions it makes about the behaviour of others (e.g. correct locking discipline) [10], [19], [28]. However, the precision of these methods comes at the price of expressiveness: specifically, their inability to reason about declassification. By decoupling functional correctness reasoning, VERONICA achieves both precision and expressiveness.

The VERONICA logic—VERONICA's compositional IFC proof method—has judgements of the form lvl_A ⊢ c, where lvl_A is a security level (e.g. ⊤ or ⊥ in the case of Figure 1) representing the level of the attacker and c is a fragment of program text (i.e. a program statement). This judgement holds if the program fragment c doesn't leak information to level lvl_A that lvl_A should not be allowed to observe. For the code of each thread t, one uses the rules of VERONICA's logic to prove that lvl_A ⊢ c holds, where lvl_A ranges over all possible security levels.
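The shape of this compositional check can be sketched in a few lines of Python (a toy of our own devising: the statement encoding, two-point lattice and simplified output rule are not VERONICA's actual rules). Each (attacker level, thread) pair is checked separately, and the program is deemed secure only if every check passes.

```python
# Toy sketch of compositional checking in the spirit of lvl_A |- c.
# Statements are ("out", channel, variable) tuples; L maps variables to
# levels in a two-point lattice lo <= hi.

LEVELS = ("lo", "hi")

def leq(a, b):
    return a == b or (a, b) == ("lo", "hi")

def stmt_ok(L, lvl_a, stmt):
    # Simplified output rule: an output of variable var on channel chan
    # is fine if var's label flows to chan, or if the attacker at lvl_a
    # cannot see chan at all.
    kind, chan, var = stmt
    assert kind == "out"
    return leq(L[var], chan) or not leq(chan, lvl_a)

def thread_ok(L, lvl_a, thread):
    return all(stmt_ok(L, lvl_a, s) for s in thread)

def program_ok(L, threads):
    # Compositional: threads are never examined together, and the
    # judgement is established for every attacker level.
    return all(thread_ok(L, lvl_a, t)
               for lvl_a in LEVELS for t in threads)
```

For example, with L = {"x": "lo", "s": "hi"}, a program that outputs x publicly and s secretly passes, while one that outputs s on the lo channel fails for the lo attacker.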
By doing so one establishes that the concurrent program is secure, under the assumption that the concurrent program is functionally correct (i.e. each of its annotations {Ai} hold when the concurrent program is executed). As mentioned, functional correctness can be proved using a range of well-established techniques that integrate into VERONICA.

Unlike recent compositional proof methods (cf. [10], [19], [28], [30]), the judgement of VERONICA has no need to track variable stability information (i.e. which variables won't be modified by other threads), nor any need for a flow-sensitive typing context to track the sensitivity of data in shared program variables, nor does it track constraints on the values of program variables. Instead, this information is provided via the annotations {Ai}.

For example, the annotation {A11} in Figure 1 (Figure 1e, line 4) states that: (1) when valid is 1, if inmode is 0 then buf contains the last input read from channel ⊥ and otherwise it contains the last ⊤ input; (2) the top-right thread holds the lock ℓ; (3) inmode and outmode agree; and (4) outmode is 0 and valid is 1. Condition (1) implicitly encodes sensitivity information about the data in the shared variable buf; (2) encodes stability information; while (3) and (4) are constraints on shared program variables.

To prove that the assignment on line 4 of Figure 1e is secure, VERONICA requires one to show that the sensitivity of the data contained in buf is at most ⊥ (the level of ⊥buf). However one gets to assume that the annotation at this point, {A11}, holds. In this case, the obligation is discharged straightforwardly from the annotation. The same is true for other parts of this concurrent program. In this way, VERONICA leans on the functional correctness annotations to establish security, and utilises compositional functional correctness to unify reasoning about various security concerns (e.g. declassification, state-dependent sensitivity, etc.).

2.3. Proving a Concurrent Program Secure

Figure 2 depicts the process of proving a concurrent program secure using VERONICA. The circled numbers indicate the main steps and their ordering.

Figure 2: Proving a program secure in VERONICA. (Diagram: the user supplies a security policy definition and per-thread annotations; compositional functional correctness proofs validate the annotations, the compositional security logic yields per-thread security proofs, and the soundness theorem combines these into a whole-program security proof.)

Step ①: Defining the Security Policy

The first step is to define the security policy that is to be enforced. This involves two tasks. The first is to choose an appropriate lattice of security levels [51] and then to assign security levels to shared variables (e.g. in the example of Figure 1, ⊥buf and d both have level ⊥, while ⊤buf has level ⊤). A variable's security level is given by the (user supplied) function L, which assigns levels to variables. For variable v, L(v) defines the maximum sensitivity of the data that v is allowed to hold at all times.

In VERONICA not all shared variables need be assigned a security level, meaning that L is allowed to be a partial function. For instance, in the example of Figure 1, buf has no level assigned (i.e. L(buf) is undefined). The security policy does not restrict the sensitivity of the data that such unlabelled variables are allowed to hold. This is useful for shared variables like buf that form the interface between two threads and whose sensitivity is governed by a data-dependent contract [10]. In the example, this allows buf (whenever valid is 1) to hold ⊥ data when inmode and outmode are both zero, and ⊤ data when inmode and outmode are both nonzero.

The second part of defining the security policy is to specify when and how declassification is allowed to occur. In order to maximise expressiveness, VERONICA supports dynamic, state-dependent declassification policies. Such policies are encoded via the (user supplied) predicate D. For a source security level lvl_src and destination security level lvl_dst, the program command c is allowed to declassify the lvl_src-sensitivity value v to level lvl_dst in system state σ precisely when D(lvl_src, lvl_dst, σ, v, c) holds. Note that the command c is either a declassifying assignment "{Ai} x :=̂ E" (in which case lvl_dst is the label L(x) assigned to the labelled variable x) or a declassifying output "{Ai} lvl_dst !̂ E". In either case, lvl_src is the security level of the expression E and v is the result of evaluating E in state σ. This style of declassification predicate is able to support various declassification policies, including delimited release style policies as in the example of Figure 1. We discuss precisely how delimited release policies are encoded as declassification predicates D later in Section 3.4. Other declassification policies are encountered in Section 6.

Step ②: Supply Annotations

Having defined the security policy, the second step to proving a concurrent program secure using VERONICA is to supply sufficient functional correctness annotations {Ai} for each thread. In the example of Figure 1, while their contents is not shown, these annotations are already present. However in practice, users of VERONICA will start with un-annotated programs for which functional correctness annotations {Ai} are then supplied to decorate statements c of each thread, encoding what facts are believed to be true about the state of the concurrent program whenever statement c executes. Note that, because these annotations will be verified later (in step ③), there is no need to trust the process that generates them.
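To make Step ① concrete, here is a minimal Python sketch (our own simplification) of its two ingredients for the Figure 1 example: a partial labelling L that deliberately leaves buf unlabelled, and a delimited-release-style declassification predicate in the shape D(lvl_src, lvl_dst, σ, v, c). The state and command encodings are hypothetical stand-ins for the paper's formal objects.

```python
# Sketch of a security policy in VERONICA's style: a partial labelling
# and a declassification predicate over (source level, destination level,
# state, value, command).

L = {"lobuf": "lo", "d": "lo", "hibuf": "hi"}   # L("buf") is undefined

def D(lvl_src, lvl_dst, sigma, v, c):
    # Downgrades from hi to lo are permitted only for the declassifying
    # copy out of hibuf, and only once the recorded signature-check
    # result d is 0 (the check passed).
    if lvl_src == "hi" and lvl_dst == "lo":
        return c == "lobuf := hibuf" and sigma.get("d") == 0
    # Flows that are not downgrades need no exemption.
    return True
```

With this D, the declassifying copy is accepted in states where the check has succeeded and rejected otherwise, mirroring the delimited release policy of Figure 1.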
The current Isabelle incarnation of VERONICA includes a proof-of-concept strongest-postcondition style annotation inference algorithm, whose results can then be manually tweaked by the user as necessary. Users are also free to employ external, automatic program analysis tools to infer functional correctness annotations, or to supply annotations manually, without fear of compromising the foundational guarantees of VERONICA.

Step ③: Verifying Functional Correctness

Having obtained the functional correctness annotations {Ai}, the next step is to prove their validity. This means proving that the concurrent program is functionally correct, for which there exist numerous compositional techniques [35], [49].

VERONICA incorporates two standard techniques in the Isabelle/HOL formalisation: the Owicki-Gries method [49] and Rely-Guarantee reasoning [35]. VERONICA's Owicki-Gries implementation is borrowed from the seminal work of Prensa Nieto [52], [53]. Using it to verify (correct) functional correctness annotations requires little effort for experienced Isabelle users, by guiding Isabelle's proof automation tactics. Like VERONICA's Owicki-Gries method, its Rely-Guarantee implementation is for verifying functional correctness annotations only, and ignores security (cf. [19], [30]). It requires the user to supply rely and guarantee conditions for each thread. Such conditions can be defined straightforwardly from the locking protocol of a concurrent program and in principle could be inferred; however, we leave that inference for future work.

If one wishes to forego the foundational assurance of Isabelle/HOL, one can also employ external verification tools to prove annotation validity. By doing so one may elide non-essential annotations, which are those on statements other than output statements, declassifications, assignments to labelled variables, and branches on unlabelled data. Our formalisation includes a proof-of-concept of this approach, in which the concurrent C program verifier VCC [54] is employed on a C translation of the example in Figure 1 to prove the functional correctness of its essential annotations, sufficient to guarantee its security.

Step ④: Verifying Security

With functional correctness proved, the user is then free to use the functional correctness annotations to compositionally prove the security of the concurrent program. To do this, the user applies the rules of the VERONICA logic to each of the program's threads. VERONICA exploits the functional correctness assertions to provide a simple logic resembling a flow-insensitive type system. Each statement is verified independently of its context. The logic is compositional and allows each thread to be verified in isolation. Rules for output statements require that the functional correctness annotations imply that the expression always evaluates to the same result given states that are not distinguishable to the attacker. Similarly, the rules for declassification statements require that the annotations are sufficient to imply that the declassification predicate holds. We defer a full presentation of the logic to Section 5.

Step ⑤: Whole Program Security Proof

With both functional correctness and security proved of each thread, the soundness theorem of the VERONICA logic can then be applied to derive a theorem stating that the whole concurrent program is secure. This theorem is stated formally in Section 5.3. However, intuitively it says that the whole concurrent program is secure if, for each thread t, t's functional correctness annotations are all valid (i.e. each holds whenever the corresponding statement of t is executed in the concurrent program)—step ③—and t is judged secure by the rules of the VERONICA logic—step ④.

3. Security Definition

VERONICA proves an information flow security property designed to capture a range of different security policies. To maximise generality, the security property is phrased in a knowledge-based (or epistemic) style, which as others have argued [13], [14], [55], [56] is preferable to traditional two-run formulations. Before introducing the security property and motivating the threat model that it formally encodes, we first explain the semantic model of concurrent program execution in which the property is defined.

Along the way, we highlight the assumptions encoded in that semantic model and in the formal security property. Following Murray and van Oorschot [57], we distinguish adversary expectations, which are assumptions about the attacker (e.g. their observational powers), from domain hypotheses, which are assumptions about the environment (e.g. the scheduler) in which the concurrent program executes.

3.1. Semantic Model

Concurrent programs comprise a finite collection of n threads, each of which is identified by a natural number: 0, …, n−1. Threads synchronise by acquiring and releasing locks and communicate by modifying shared memory. Additionally, threads may communicate with the environment outside the concurrent program by inputting and outputting values from/to IO channels. Without loss of generality, there is one channel for each security level (drawn from the user-supplied lattice of security levels).

Global States σ

Formally, the global states σ of the concurrent program are tuples (env_σ, mem_σ, locks_σ, tr_σ). The global state contains all resources that are shared between threads. We consider each in turn.

Channels and the Environment env_σ

env_σ captures the state of the external environment (i.e. the IO channels). For a security level lvl, env_σ(lvl) is the (infinite) stream of values yet to be consumed from the channel lvl in state σ.
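This view of env_σ can be sketched with one stream of pending inputs per security level; modelling the streams with Python iterators is our own choice of encoding, not the paper's.

```python
# Sketch of the channel environment env_sigma: a conceptually infinite
# stream of pending inputs per level. Consuming the head of a stream
# models an input that never blocks.
from itertools import count

def make_env():
    # Example streams: the lo channel supplies 0, 1, 2, ...;
    # the hi channel supplies 10, 11, 12, ...
    return {"lo": count(0), "hi": count(10)}

def read(env, lvl):
    # Reading returns the next value of the stream immediately.
    return next(env[lvl])
```

After env = make_env(), successive read(env, "lo") calls yield 0, 1, 2, …, which matches the non-blocking reading discipline assumed of channels.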
Domain Hypothesis: In this model of channels, reading from a channel never blocks and always returns the next value to be consumed from the infinite stream. This effectively assumes that all channel inputs are faithfully buffered and never dropped by the environment. Blocking can be simulated by repeatedly polling a channel.

Shared Memory mem_σ

mem_σ is simply a total mapping from variable names (excluding locks) to their corresponding values: mem_σ(v) denotes the value of variable v in state σ.

Locks locks_σ

locks_σ captures the lock state and is a partial function from lock names to thread ids (natural numbers in the range 0 . . . n−1): for a lock ℓ, locks_σ(ℓ) is defined iff lock ℓ is currently held in state σ, in which case its value is the id of the thread that holds the lock.

Events e and Traces tr_σ

For the sake of expressiveness, we store in the global state σ the entire history of events tr_σ that has been performed by the concurrent program up to this point. Each such history is called a trace, and is simply a finite list of events e. Events e comprise: input events in⟨lvl, v⟩, which record that value v was input from the channel lvl; output events out⟨lvl, v, E⟩, which record that value v, the result of evaluating expression E, was output on channel lvl; and declassification events d⟨lvl, v, E⟩, which record that the value v, the result of evaluating expression E, was declassified to level lvl. Expression E is included to help specify the security property (see e.g. Definition 3.2.7).

Ordinary (non-declassifying) output and input commands produce output and input events respectively. Declassifying assignments and declassifying outputs produce declassification events. As with much prior work on declassification properties [58], declassification actions produce distinguished declassification events that make them directly visible to the security property.

The Schedule sched

The schedule sched is an infinite list (stream) of thread ids i. Scheduling occurs by removing the first item i from the stream and then executing thread i for one step of execution. (Longer execution slices can of course be simulated by repeating i in the schedule.) This process is repeated ad infinitum. If thread i is stuck (e.g. because it is waiting on a lock or has terminated) then the system idles (i.e. does nothing) for an execution step, to mitigate scheduling leaks (e.g. as implemented in seL4 [59]).

Domain Hypothesis: VERONICA assumes deterministic, sequentially-consistent, instruction-based scheduling [60] (IBS) of threads against a fixed, public schedule.

Global Configurations and Concurrent Execution · → ·

A global configuration combines the shared global state σ with the schedule sched and the local state ls_i (the thread id and code) of each of the n threads. Thus a global configuration is a tuple: (ls_0, . . . , ls_{n−1}, σ, sched).

Concurrent execution, and the aforementioned scheduling model, is formally defined by the rules of Figure 7 (relegated to the appendix for brevity). These rules define a single-step relation · → · on global configurations. Zero- and multi-step execution is captured in the usual way by the reflexive, transitive closure of this relation, written · →∗ ·.

3.2. System Security Property and Threat Model

We now define VERONICA's formal security property, formalising the threat model and adversary expectations.

Attacker Observations

Adversary Expectation: Our security property considers a passive attacker observing the execution of the concurrent program. We assume that the attacker is able to observe outputs on certain channels and associated declassification events. Specifically, the attacker is associated with a security level lvl_A. The attacker is assumed to be able to observe outputs on all channels lvl ≤ lvl_A, and likewise all declassifications to levels lvl ≤ lvl_A.

Adversary Expectation: The attacker has no other means to interact with the concurrent program, e.g. by modifying its code. We additionally assume that the attacker does not have access to timing information.

The attacker's observational powers are formalised by defining a series of indistinguishability relations as follows.

Definition 3.2.1 (Event Visibility). We say that an input event in⟨lvl, v⟩ (respectively, output event out⟨lvl, v, E⟩ or declassification event d⟨lvl, v, E⟩) is visible to the attacker at level lvl_A iff lvl ≤ lvl_A. Letting e be the event, in this case we write visible_{lvl_A}(e).

Trace indistinguishability is then defined straightforwardly, noting that we write tr ↾ P to denote filtering from trace tr all events that do not satisfy the predicate P.

Definition 3.2.2 (Trace Indistinguishability). We say that two traces tr and tr′ are indistinguishable to the attacker at level lvl_A when tr ↾ visible_{lvl_A} = tr′ ↾ visible_{lvl_A}. In this case, we write tr ≈_{lvl_A} tr′.

Attacker Knowledge of Initial Global State

Besides defining what the attacker is assumed to observe (via the indistinguishability relation on traces), we also need to define what knowledge the attacker is assumed to have about the initial global state σ_init of the system.

Adversary Expectation: The attacker is assumed to know the contents that will be input from channels at levels lvl ≤ lvl_A and the initial values of all labelled variables v for which L(v) ≤ lvl_A.

This assumption is captured via an indistinguishability relation on global states σ. This relation is defined by first defining indistinguishability relations on each of σ's components.
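As a concrete illustration, the visibility and trace-indistinguishability definitions above can be rendered executably. The following Python sketch is our own, hypothetical rendering, not part of the paper's Isabelle formalisation; the event shapes and the two-point level lattice are invented for illustration:

```python
# Hypothetical sketch of Definitions 3.2.1 and 3.2.2: events carry a level,
# visibility compares levels, and two traces are indistinguishable at lvl_A
# when their visible projections coincide.
BOT, TOP = 0, 1                 # a two-point lattice; lvl <= lvl_A is integer <=

def visible(event, lvl_A):
    kind, lvl, value = event    # events as (kind, level, value) triples
    return lvl <= lvl_A

def indistinguishable(tr1, tr2, lvl_A):
    proj = lambda tr: [e for e in tr if visible(e, lvl_A)]
    return proj(tr1) == proj(tr2)

tr1 = [("out", BOT, 3), ("out", TOP, 7)]
tr2 = [("out", BOT, 3), ("out", TOP, 9)]   # differs only in a TOP-level output
print(indistinguishable(tr1, tr2, BOT))    # True: the BOT attacker cannot tell
print(indistinguishable(tr1, tr2, TOP))    # False: the TOP attacker can
```

The projection-then-compare shape mirrors the filter notation tr ↾ P used in the text.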
Definition 3.2.3 (Environment Indistinguishability). We say that two environments env and env′ are indistinguishable to the attacker at level lvl_A when all channels visible to the attacker have identical streams, i.e. iff

∀lvl ≤ lvl_A. env(lvl) = env′(lvl).

In this case we write env ≈_{lvl_A} env′.

Definition 3.2.4 (Memory Indistinguishability). We say that two memories mem and mem′ are indistinguishable to the attacker at level lvl_A when they agree on the values of all labelled variables v visible to the attacker, i.e. iff

∀v. L(v) ≤ lvl_A =⇒ mem(v) = mem′(v),

where L(v) ≤ lvl_A implies L(v) is defined. In this case, we write mem ≈_{lvl_A} mem′.

We can now define when two (initial) global states are indistinguishable to the attacker.

Definition 3.2.5 (Global State Indistinguishability). We say that two global states σ and σ′ are indistinguishable to the attacker at level lvl_A iff

env_σ ≈_{lvl_A} env_σ′ ∧ mem_σ ≈_{lvl_A} mem_σ′ ∧ locks_σ = locks_σ′ ∧ tr_σ ≈_{lvl_A} tr_σ′

In this case we write σ ≈_{lvl_A} σ′.

Domain Hypothesis: Under this definition, the attacker knows the entire initial lock state. Thus we assume that the initial lock state encodes no secret information.

Attacker Knowledge from Observations

Given the attacker's knowledge about the initial state σ_init and some observation arising from some trace tr being performed, we assume that the attacker will then attempt to refine their knowledge about σ_init.

Adversary Expectation: The attacker is assumed to know the schedule sched and the initial local state ls_i (i.e. the code and thread id i) of each thread.

Given that information, of all the possible initial states from which σ_init might have been drawn, perhaps only a subset can give rise to the observation of tr. We assume the attacker will perform this kind of knowledge inference, which we formalise following the epistemic style [55].

To define the attacker's knowledge, we define the attacker's uncertainty about the initial state σ_init (i.e. the attacker's belief about the set of all initial states from which σ_init might have been drawn) given the initial schedule sched and local thread states ls_0, . . . , ls_{n−1}, and the trace tr that the attacker has observed. Writing simply ls to abbreviate the list ls_0, . . . , ls_{n−1}, we denote this uncertainty_{lvl_A}(ls, σ_init, sched, tr) and define it as follows.

Definition 3.2.6 (Attacker Uncertainty). A global state σ belongs to the set uncertainty_{lvl_A}(ls, σ_init, sched, tr) iff it and σ_init are indistinguishable, given the attacker's knowledge about the initial state, and σ can give rise to a trace tr_σ′ that is indistinguishable from tr. Formally, uncertainty_{lvl_A}(ls, σ_init, sched, tr) is the set of σ where

σ ≈_{lvl_A} σ_init ∧ ∃ls′ σ′ sched′. (ls, σ, sched) →∗ (ls′, σ′, sched′) ∧ tr ≈_{lvl_A} tr_σ′

The Security Property

Finally, we can define the security property. This requires, roughly, that the attacker's uncertainty can decrease (i.e. they can refine their knowledge) only when declassification events occur, and that all such events must respect the declassification policy encoded by D. In other words, the guarantee provided by VERONICA under the threat model formalised herein is that:

Security Guarantee: The attacker is never able to learn any new information above what they knew initially, except from declassification events, which must always respect the user-supplied declassification policy.

This guarantee is formalised by defining a gradual release-style security property [55]. We first define when the occurrence of an event e is secure.

Definition 3.2.7 (Event Occurrence Security). Consider an execution beginning in some initial configuration (ls, σ, sched) that has executed to the intermediate configuration (ls′, σ′, sched′) from which the event e occurs. This occurrence is secure against the attacker at level lvl_A, written esec_{lvl_A}((ls, σ, sched), (ls′, σ′, sched′), e), iff:

• When e is a declassification event d⟨lvl_dst, v, E⟩ visible to the attacker (i.e. lvl_dst ≤ lvl_A), then D(L(E), lvl_dst, σ′, v, c) must hold, where c is the current program command whose execution produced e (i.e. the head program command of the currently executing thread in (ls′, σ′, sched′)). Here, L(E) is defined when L(v) is defined for all variables v mentioned in E, and in that case is the least upper bound of all such L(v); D(L(E), lvl_dst, σ′, v, c) is false when L(E) is not defined.

• Otherwise, if e is not a declassification event d⟨lvl_dst, v, E⟩ that is visible to the attacker, then the attacker's uncertainty cannot decrease by observing it, i.e. we require that

uncertainty_{lvl_A}(ls, σ, sched, tr_σ′) ⊆ uncertainty_{lvl_A}(ls, σ, sched, tr_σ′ · e)

Definition 3.2.8 (System Security). The concurrent program with initial local thread states ls = (ls_0, . . . , ls_{n−1}) is secure against an attacker at level lvl_A, written syssec_{lvl_A}(ls), iff, under all schedules sched, event occurrence security always holds during its execution from any initial starting state σ. Formally, we require that

∀sched σ ls′ σ′ sched′ ls″ σ″ sched″ e.
  (ls, σ, sched) →∗ (ls′, σ′, sched′) ∧ (ls′, σ′, sched′) → (ls″, σ″, sched″) ∧ tr_σ″ = tr_σ′ · e
  =⇒ esec_{lvl_A}((ls, σ, sched), (ls′, σ′, sched′), e)
3.3. Discussion

As with other gradual release-style properties, ours does not directly constrain what information the attacker might learn when a declassification event occurs, but merely ensures that those are the only events that can increase the attacker's knowledge. This means that, of the four semantic principles of declassification identified by Sabelfeld and Sands [61], our definition satisfies all but non-occlusion: "the presence of declassifications cannot mask other covert information leaks" [61]. Consider the following single-threaded program.

1 {A17} if birthYear > 2000
2 {A18}   ⊥ b! birthDay
3 else
4 {A19}   ⊥ b! birthMonth

Suppose the intent is to permit the unconditional release of a person's day and month of birth, but not their birth year. A naive encoding in the declassification policy D that checks whether the value being declassified is indeed either the value of birthDay or birthMonth would judge the above program as secure, when in fact it also leaks information about the birthYear.

Note also, since declassification events are directly visible to our security property, that programs that incorrectly declassify information but then never output it on a public channel can be judged by our security condition as insecure.

Finally, and crucially, note that our security condition allows for both extensional declassification policies, i.e. those that refer only to inputs and outputs of the program, as well as intensional policies that also refer to the program state. Section 6 demonstrates both kinds of policies. We now consider one class of extensional policies: delimited release.

3.4. Encoding Delimited Release Policies

The occlusion example demonstrates that programs that branch on secrets that are not allowed to be released, and then perform declassifications under that secret context, are likely to leak more information than that contained in the declassification events themselves, via implicit flows.

However, in the absence of such branching, our security condition can in fact place bounds on what information is released. Specifically, we show that it can soundly encode delimited release [44] policies as declassification predicates D for programs that do not branch on secrets that are not allowed to be declassified to the attacker.

We define an extensional delimited release-style security condition and show how to instantiate the declassification predicates D so that when system security (Definition 3.2.8) holds, then so does the delimited release condition.

3.4.1. Formalising Delimited Release

Delimited release [44] weakens traditional noninterference [34] by permitting certain secret information to be released to the attacker. Which secret information is allowed to be released is defined in terms of a set of escape hatches: expressions that denote values allowed to be released.

Delimited release then strengthens the indistinguishability relation on the initial state to require that any two states related under this relation also agree on the values of the escape hatch expressions. One way to understand delimited release as a weakening of noninterference is to observe that, in changing the relation in this way, it effectively encodes the assumption that the attacker might already know the secret information denoted by the escape hatch expressions.

To keep our formulation brief, we assume that the initial memory contains no secrets. Thus all secrets are contained only in the input streams (channels). Escape hatches then denote values that are allowed to be released, expressed as functions on lists vs of inputs (to be) consumed from a channel.

A delimited release policy E is a function that, given source and destination security levels lvl_src and lvl_dst, returns a set of escape hatches denoting the information that is allowed to be declassified from level lvl_src to level lvl_dst.

For example, to specify that the program is always allowed to declassify to ⊥ the average of the last five inputs read from the ⊤ channel, one could define E(⊤, ⊥) = {λvs. if len(vs) ≥ 5 then avg(take(5, rev(vs))) else 0}, where avg(xs) calculates the average of a list of values xs, take(n, xs) returns a new list containing the first n values from the list xs, and rev(xs) is the list reversal function.

To define delimited release, we need to define when two initial states σ agree under the escape hatches E. Since escape hatches apply only to the streams contained in the environment env_σ, we define when two such environments agree under E. As earlier, this agreement is defined relative to an attacker observing at level lvl_A, and requires that all escape hatches that yield values that the attacker is allowed to observe always evaluate identically under both environments.

Definition 3.4.1 (Environment Agreement under E). Two environments env and env′ agree under the delimited release policy E for an attacker at level lvl_A, written env ≈_{lvl_A, E} env′, iff, for all levels lvl_src, all levels lvl_dst ≤ lvl_A, and all escape hatches h ∈ E(lvl_src, lvl_dst), h applied to any finite prefix of env(lvl_src) yields the same value as when applied to an equal-length prefix of env′(lvl_src).

We then define when two initial states σ and σ′ agree for a delimited release policy E. The following definition is a slight simplification of the one in our Isabelle formalisation (see Definition A.1 in the appendix), which is more general because it considers arbitrary pairs of states in which some trace of events might have already been performed.

Definition 3.4.2 (State Agreement under E). States σ and σ′ agree under the delimited release policy E for an attacker at level lvl_A, written σ ≈_{lvl_A, E} σ′, iff (1) σ ≈_{lvl_A} σ′, (2) their memories agree on all variables, and (3) env_σ ≈_{lvl_A, E} env_σ′. Here, condition (2) encodes the simplifying assumption that the initial memories contain no secrets.
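The escape-hatch example above can be made executable. In this hypothetical Python sketch (the helper names avg_last5 and agree_under are ours), two input streams agree under the policy exactly when the hatch evaluates identically on all equal-length prefixes, in the spirit of Definition 3.4.1:

```python
# Hypothetical sketch: the hatch releases the average of the last five
# ⊤-inputs (zero until five inputs exist), and agreement quantifies over all
# equal-length prefixes of the two streams.
def avg_last5(vs):
    return sum(vs[-5:]) / 5 if len(vs) >= 5 else 0

def agree_under(hatch, stream_a, stream_b):
    assert len(stream_a) == len(stream_b)
    return all(hatch(stream_a[:n]) == hatch(stream_b[:n])
               for n in range(len(stream_a) + 1))

a = [9, 1, 7, 2, 6]        # sums to 25, so the hatch yields 5.0
b = [5, 5, 5, 5, 5]        # pointwise different, but the same average
print(agree_under(avg_last5, a, b))   # True: attacker may learn only the average
c = [5, 5, 5, 5, 9]        # different average of the five inputs
print(agree_under(avg_last5, a, c))   # False
```

Note how streams that differ pointwise can still agree: delimited release permits the attacker to learn the hatch's value and nothing more about the stream.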
Delimited release is then defined extensionally, in the style of a traditional two-run noninterference property.

Definition 3.4.3 (Delimited Release). The concurrent program with initial local thread states ls = (ls_0, . . . , ls_{n−1}) satisfies delimited release against an attacker at level lvl_A, written drsec_{lvl_A}(ls), iff

∀sched σ σ′ y. σ ≈_{lvl_A, E} σ′ ∧ (ls, σ, sched) →∗ y =⇒ (∃y′. (ls, σ′, sched) →∗ y′ ∧ tr_y ≈_{lvl_A} tr_y′)

where for a global configuration y = (ls_y, σ_y, sched_y) we write tr_y to abbreviate tr_{σ_y}, the trace executed so far.

3.4.2. Encoding Delimited Release in D

We now encode delimited release policies E via VERONICA's declassification predicates D(lvl_src, lvl_dst, σ, v, c) which, recall, judge whether command c declassifying value v from level lvl_src to level lvl_dst in state σ is permitted. Recall that c is either a declassifying assignment "{A_i} x :c= E" (in which case lvl_dst is the label L(x) assigned to the labelled variable x) or a declassifying output "{A_i} lvl_dst b! E". In either case, lvl_src is the security level of the expression E and v is the result of evaluating E in state σ.

To encode delimited release, we need to have D(lvl_src, lvl_dst, σ, v, c) decide whether there is an escape hatch h ∈ E(lvl_src, lvl_dst) that permits the declassification. Consider some h ∈ E(lvl_src, lvl_dst). What does it mean for h to permit the declassification? Perhaps surprisingly, it is not enough to check whether h evaluates to the value v being declassified in σ. Suppose h permits declassifying the average of the last five inputs from channel ⊤, and suppose that in σ this average is 42. An insecure program might declassify some other secret whose value just happens to be 42 in σ, but that declassification would be unlikely to satisfy delimited release if the two secrets are independent. Instead, to soundly encode delimited release, one needs to check whether the expression E being declassified is equal to the escape hatch in general.

To do this, we have D check that in all states in which this declassification c might be performed, the escape hatch h evaluates to the value of E in that state. We can overapproximate the set of all states in which c might execute by using its annotation {A_i}: all such states must satisfy the annotation, assuming the program is functionally correct (which VERONICA will prove). Thus we have D check that, in all states that satisfy the annotation, the escape hatch h evaluates to the expression E.

Definition 3.4.4 (Delimited Release Encoding). The encoding of policy E we denote D_E. D_E(lvl_src, lvl_dst, σ, v, c) holds always when c is not a declassification command. Otherwise, let A be c's annotation and E be the expression that c declassifies. Then D_E(lvl_src, lvl_dst, σ, v, c) holds iff there exists some h ∈ E(lvl_src, lvl_dst) such that, for all states σ′ that satisfy the annotation A, E evaluates in σ′ to the same value that h evaluates to when applied to the lvl_src inputs consumed so far in σ′.

Recall that this encoding is sound only for programs that do not branch on secrets that the policy E forbids from releasing. We define this condition semantically as a two-run property, relegating it to Definition A.2 in the appendix since its meaning is intuitively clear. We say that a program satisfying this condition is free of E-secret branching.

The example of Section 3.3 that leaks birthYear via occlusion is not free of E-secret branching. On the other hand, the program in Figure 1 is free of E-secret branching for the following E that defines its delimited release policy, since the only ⊤-value ever branched on (in Figure 1e, line 8) is the result of the signature check CK.

Definition 3.4.5 (Delimited Release policy for Figure 1). Allow to be declassified to ⊥ the results of the signature check CK always, plus any ⊤-input v when CK(v) = 0.

E(⊤, ⊥) = {λvs. if len(vs) ≠ 0 then CK(last(vs)) else 0} ∪ {λvs. if len(vs) ≠ 0 ∧ CK(last(vs)) = 0 then last(vs) else 0}

Indeed, VERONICA can be used to prove that Figure 1 satisfies this delimited release policy, by showing that it satisfies VERONICA's system security (Definition 3.2.8), under the following theorem that formally justifies why VERONICA can encode delimited release policies.

Theorem 3.4.1 (Delimited Release Embedding). Let lvl_A be an arbitrary security level and ls be the initial local thread states (i.e. thread ids and the code) of a concurrent program that (1) satisfies syssec_{lvl_A}(ls) with D defined according to Definition 3.4.4, (2) is free of E-secret branching, and (3) satisfies all of its functional correctness annotations. Then the program is delimited release secure, i.e. drsec_{lvl_A}(ls).

Thus VERONICA can soundly encode purely extensional security properties like Definition 3.4.3. The extensional form of the policy for the Figure 1 example is straightforward and relegated to the appendix (Definition A.3).

4. Annotated Programs in VERONICA

VERONICA reasons about the security of concurrent programs, each of whose threads is programmed in the language whose grammar is given in Figure 3.

Most of these commands are straightforward and appear in Figure 1. Loops "{A} while E inv {I} do c" carry a second invariant annotation (here "{I}") that specifies the loop invariant, which is key for proving their functional correctness [62]. The "stop" command halts the execution of the thread, and is an internal form used only to define the semantics of the language. The no-op command "{A} nop" is syntactic sugar for "{A} x := x", while "{A} if E c endif" is sugar for "{A} if E c else {A} nop endif".

The semantics for this sequential language is given in Figure 8, and is relegated to the appendix since it is straightforward. This semantics is defined as a small-step relation on local configurations (ls_i, σ), where ls_i = (i, c) is the local state (thread id i and code c) for a thread and σ = (env_σ, mem_σ, locks_σ, tr_σ) is the global state shared with all other threads.
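To make the shape of such a small-step semantics concrete, here is a deliberately tiny, hypothetical interpreter sketch in Python. It is ours, not the paper's (the real semantics is Figure 8 in the appendix); the command encodings, level names and state layout are invented for illustration:

```python
# Hypothetical sketch: a local configuration pairs a thread's remaining code
# with the shared state; stepping a command evaluates its expression
# atomically against memory and, for channel commands, appends an event to
# the trace stored in the state.
def step(code, state):
    """One small step of a command list; returns the new (code, state)."""
    kind, *args = code[0]
    mem, trace = state["mem"], state["trace"]
    if kind == "assign":                    # ("assign", x, expr)
        x, expr = args
        mem[x] = expr(mem)                  # atomic evaluation of E in mem
    elif kind == "out":                     # ("out", lvl, expr)
        lvl, expr = args
        trace.append(("out", lvl, expr(mem)))
    elif kind == "input":                   # ("input", x, lvl): take stream head
        x, lvl = args
        mem[x] = state["env"][lvl].pop(0)
        trace.append(("in", lvl, mem[x]))
    return code[1:], state

state = {"mem": {"x": 0}, "trace": [], "env": {"bot": [10, 20]}}
prog = [("input", "x", "bot"),
        ("assign", "y", lambda m: m["x"] + 1),
        ("out", "bot", lambda m: m["y"])]
while prog:
    prog, state = step(prog, state)
print(state["trace"])   # the input event, then the output event
```

Keeping the trace inside the state mirrors the paper's choice of recording the full event history in σ, which is what lets the security property speak about it directly.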
c ::= {A} x := E                 (assignment)
    | {A} x :c= E                (declassifying assignment)
    | {A} lvl ! E                (output to channel lvl)
    | {A} lvl b! E               (declassifying output)
    | {A} x ← lvl                (input from channel lvl)
    | {A} if E c else c endif    (conditional)
    | {A} while E inv {A} do c   (loop with invariant)
    | {A} acquire(ℓ)             (lock acquisition)
    | {A} release(ℓ)             (lock release)
    | c; c                       (sequencing)
    | stop                       (terminated thread)

Figure 3: Syntax of VERONICA threads.

⌊E⌋_{mem_σ} is the atomic evaluation of expression E in memory mem_σ. Notice that the semantics doesn't make use of the annotations {A}: they are merely decorations, used to decouple functional correctness reasoning.

5. The VERONICA Logic

The VERONICA logic defines a compositional method to prove when a concurrent program satisfies system security (Definition 3.2.8), VERONICA's security condition. Specifically, it defines a set of rules for reasoning over the program text of each thread of the concurrent program. A soundness theorem (Theorem 5.3.1) guarantees that programs that are functionally correct and whose threads are proved secure using the VERONICA logic satisfy system security.

The rules of the VERONICA logic appear in Figure 4. They define a judgement resembling that of a flow-insensitive security type system, which has the form lvl_A ⊢ c, where lvl_A is the attacker level and c is an annotated thread command (see Figure 3).

5.1. Precise Reasoning with Annotations

The rules of VERONICA explicitly make use of the annotations {A} on program commands to achieve highly precise reasoning, while still presenting a simple logic to the user. This is evident in the simplicity of many of the rules of Figure 4. To understand how annotations are used to achieve precise reasoning, consider the rule OutTy for outputting on channel lvl. When this output is visible to the attacker (lvl ≤ lvl_A), this rule uses the annotation A to reason about the sensitivity of the data contained in the expression E at this point in the program, specifically to check that this sensitivity is no higher than the attacker level lvl_A. This is captured by the predicate sensitivity(A, E, lvl_A).

For a security level lvl, annotation A and expression E, sensitivity(A, E, lvl) holds when, under A, the sensitivity of the data contained in E is not greater than lvl. This is not a policy statement about L(E) but, rather, uses A to over-approximate E's sensitivity at this point in the program:

sensitivity(A, E, lvl) ≡ ∀σ σ′. σ ⊨ A ∧ σ′ ⊨ A ∧ σ ≈_lvl σ′ =⇒ ⌊E⌋_{mem_σ} = ⌊E⌋_{mem_σ′}

Similarly, the declassification rules DAsgTy and DOutTy use the annotation A to reason precisely about whether D holds at this point during the program.

5.2. Secret-Dependent Branching

The rule WhileTy for loops does not allow the loop guard to depend on a secret. In contrast, the rule IfTy for reasoning about conditionals "{A} if E c1 else c2 endif" does. Its final premise applies when the sensitivity of the condition E exceeds that which can be observed by the attacker lvl_A, i.e. when the if-condition has branched on a secret that should not be revealed to the attacker. The premise requires proving that the two branches c1 and c2 are lvl_A-bisimilar, i.e. that the attacker cannot distinguish the execution of c1 from the execution of c2. The formal definition of bisimilarity (Definition A.7) appears in the appendix.

VERONICA includes a set of proof rules to determine whether two commands are lvl_A-bisimilar. These rules have been proved sound but, due to lack of space, we refer the reader to our Isabelle formalisation for the full details. Briefly, these rules check that both commands (1) perform the same number of execution steps, (2) modify no labelled variables x for which L(x) ≤ lvl_A, (3) never input from or output to channels lvl ≤ lvl_A, and (4) perform no declassifications. Thus the lvl_A-attacker cannot tell which command was executed, including via scheduling effects.

One is of course free to implement other analyses to determine bisimilarity. Hence, VERONICA provides a modular interface for reasoning about secret-dependent branching.

5.3. Soundness

Recall that the soundness theorem requires the concurrent program (with initial thread states) ls = (ls_0, . . . , ls_{n−1}) to satisfy all of its functional correctness annotations. When this is the case we write ⊨ ls.

Theorem 5.3.1 (Soundness). Let ls = ((0, c_0), . . . , (n−1, c_{n−1})) be the initial local thread states of a concurrent program. If ⊨ ls holds and lvl_A ⊢ c_i holds for all 0 ≤ i < n, then the program satisfies system security, i.e. syssec_{lvl_A}(ls).

In practice one applies the VERONICA logic for an arbitrary attacker security level lvl_A, meaning that system security will hold for attackers at all security levels. The condition ⊨ ls can be discharged using any of the techniques implemented in VERONICA (see Section 2.3; Step ③), or via any sound correctness verification method.

6. Further Examples

6.1. The Example of Figure 1

Recall that the concurrent program of Figure 1 implements an extensional delimited release-style policy E defined in Definition 3.4.5 (see also Definition A.3 in the appendix).

We add a fifth thread, which toggles inmode and outmode while ensuring they agree, and sets valid to zero.
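To convey the flavour of such syntax-directed checking, the following Python sketch is ours and deliberately crude: the command encodings, the two-point lattice and the stand-in for bisimilarity are all invented, and it captures only the spirit of the OutTy and IfTy rules, not the actual rules of Figure 4:

```python
# Hypothetical sketch of a syntax-directed security check: outputs on
# attacker-visible channels must carry data whose (annotation-derived)
# sensitivity is at most the attacker level; secret-guarded conditionals
# fall back to a bisimilarity obligation on the two branches.
LOW, HIGH = 0, 1

def check(cmd, lvl_A):
    kind = cmd[0]
    if kind == "seq":
        return all(check(c, lvl_A) for c in cmd[1:])
    if kind == "out":                      # ("out", channel_lvl, expr_sens)
        _, chan, sens = cmd
        return chan > lvl_A or sens <= lvl_A   # visible output => low data
    if kind == "if":                       # ("if", guard_sens, c1, c2)
        _, guard, c1, c2 = cmd
        if guard <= lvl_A:                 # public guard: check both branches
            return check(c1, lvl_A) and check(c2, lvl_A)
        return bisimilar(c1, c2, lvl_A)    # secret guard: branches must look alike
    return True                            # other commands: no output effects here

def bisimilar(c1, c2, lvl_A):
    # crude stand-in: syntactically identical, secure code is trivially bisimilar
    return c1 == c2 and check(c1, lvl_A)

prog_ok  = ("seq", ("out", LOW, LOW),
            ("if", HIGH, ("out", HIGH, HIGH), ("out", HIGH, HIGH)))
prog_bad = ("if", HIGH, ("out", LOW, LOW), ("out", HIGH, HIGH))
print(check(prog_ok, LOW), check(prog_bad, LOW))
```

The rejected program is exactly the occlusion shape from Section 3.3: a secret guard whose branches are distinguishable to a LOW observer.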
SeqTy:   lvl_A ⊢ c1  and  lvl_A ⊢ c2   imply   lvl_A ⊢ c1; c2
UAsgTy:  L(x) is undefined   implies   lvl_A ⊢ {A} x := E
LAsgTy:  sensitivity(A, E, lvl_E)  and  lvl_E ≤ L(x)   imply   lvl_A ⊢ {A} x := E
AcqTy:   lvl_A ⊢ {A} acquire(ℓ)   (no premises)
RelTy:   lvl_A ⊢ {A} release(ℓ)   (no premises)
DAsgTy:  ∀σ. σ ⊨ A =⇒ D(L(E), L(x), σ, ⌊E⌋_{mem_σ}, {A} x :c= E)   implies   lvl_A ⊢ {A} x :c= E
DOutTy:  ∀σ. σ ⊨ A =⇒ D(L(E), lvl, σ, ⌊E⌋_{mem_σ}, {A} lvl b! E)   implies   lvl_A ⊢ {A} lvl b! E
IfTy:    lvl_A ⊢ c1,  lvl_A ⊢ c2,  and  (¬sensitivity(A, E, lvl_A) =⇒ ∀i. (i, c1) ∼_{lvl_A} (i, c2))   imply   lvl_A ⊢ {A} if E c1 else c2 endif
OutTy:   (lvl ≤ lvl_A =⇒ sensitivity(A, E, lvl_A))   implies   lvl_A ⊢ {A} lvl ! E
UInTy:   L(x) is undefined   implies   lvl_A ⊢ {A} x ← lvl
LInTy:   lvl ≤ L(x)   implies   lvl_A ⊢ {A} x ← lvl
WhileTy: lvl_A ⊢ c,  sensitivity(A, E, lvl_A),  and  sensitivity(I, E, lvl_A)   imply   lvl_A ⊢ {A} while E inv {I} do c

Figure 4: Rules of the VERONICA logic.

1 {A20} acquire(ℓ);
2 {A21} valid := 0;
3 {A22} inmode := inmode + 1;
4 {A23} outmode := inmode;
5 {A24} release(ℓ)

Proving that this 5-thread program ls satisfies this policy is relatively straightforward using VERONICA. We employ VERONICA's Owicki-Gries implementation to prove that it satisfies its annotations: ⊨ ls. We then use the delimited release encoding (Definition 3.4.4) to generate the VERONICA declassification policy D that encodes the delimited release policy. Next, we use the rules of the VERONICA logic to compositionally prove that each thread ls_i is secure for an arbitrary security level lvl: lvl ⊢ ls_i. From this proof, since we never use the part of the IfTy rule for branching on secrets, it follows that the program is free of E-secret branching (we prove this result in general in our Isabelle formalisation). Then, by the soundness theorem (Theorem 5.3.1), the program satisfies VERONICA's system security property syssec_lvl(ls) for arbitrary lvl. Finally, by the delimited release embedding theorem (Theorem 3.4.1), it satisfies its delimited release policy E.

6.2. Confirmed Declassification

Besides delimited release-style policies, VERONICA is geared to verifying state-dependent declassification policies. Such policies are common in systems in which interactions with trusted users authorise declassification decisions. For example, in a sandboxed desktop operating system like Qubes OS [63], a user can copy sensitive files from a protected domain into a less protected one, via an explicit dialogue that requires the user to confirm the release of the sensitive information. Indeed, user interactions to make explicit (e.g. "Application X wants permission to access your microphone. . . ") or implicit [64] information access decisions are common in modern computer systems. Yet verifying that concurrent programs only allow information access after successful user confirmation has remained out of reach for prior logics. We show how VERONICA supports such policies by considering a modification to Figure 1.

Specifically, suppose the thread in Figure 1e is replaced by the one in Figure 5 (left). Instead of using the signature check function CK to decide whether to declassify the ⊤ input, it now asks the user, by first outputting the value to be declassified on channel ⊤ and then receiving from the user a boolean response on channel ⊥.

Naturally the user is trusted, so it is appropriate for their response to this question to be received on the ⊥ channel. Recall that ⊥ here means that the information has minimal secrecy, not minimal integrity. Also, since the threat model of Section 3.2 forbids the attacker from supplying channel inputs, we can safely trust the integrity of the user response.

The declassification policy is then specified (see Figure 5, right) as a VERONICA runtime state-dependent declassification predicate D. This predicate specifies that, at all times, the most recent output sent (to the user to confirm) on the ⊤ channel is allowed to be declassified precisely when the most recent input consumed from the ⊥ channel is 1.

A complete formal statement of the policy satisfied by this example is relegated to Definition A.4 in the appendix, since it is a trivial unfolding of Definition 3.2.8. The resulting property is purely extensional, since D above refers only to the program's input/output behaviour.

Proving the modified concurrent program secure proceeds similarly to Section 6.1. This example demonstrates VERONICA's advantages over contemporary logics like COVERN [10] and SECCSL [28], which can-
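As an illustration, a confirmation predicate of the shape just described can be sketched in a few lines of Python. This is our own toy encoding of traces and levels, not the paper's formal D:

```python
# Hypothetical sketch of a state-dependent declassification predicate: a
# value may be declassified to ⊥ precisely when it is the most recent output
# sent to the user on ⊤ and the user's most recent ⊥-input (the
# confirmation) was 1.
def last_event(trace, kind, lvl):
    for e in reversed(trace):
        if e[0] == kind and e[1] == lvl:
            return e[2]
    return None

def D_confirm(trace, value):
    return (last_event(trace, "out", "top") == value and
            last_event(trace, "in", "bot") == 1)

trace = [("out", "top", 99), ("in", "bot", 1)]   # asked the user; user confirmed
print(D_confirm(trace, 99))    # True: confirmed value may be released
print(D_confirm(trace, 42))    # False: some other value may not
```

Because the predicate consults only the trace, the resulting policy stays extensional, as the text notes.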