Spacetime Programming - arXiv
Spacetime Programming: A Synchronous Language for Composable Search Strategies

Pierre Talbot
Université de Nantes, Nantes, France
pierre.talbot@univ-nantes.fr

arXiv:1907.10922v1 [cs.PL] 25 Jul 2019
PPDP '19, October 07–09, 2019, Porto, Portugal. © 2019 Association for Computing Machinery.

ABSTRACT

Search strategies are crucial to efficiently solve constraint satisfaction problems. However, programming search strategies in existing constraint solvers is a daunting task, and constraint-based languages usually have compositionality issues. We propose spacetime programming, a paradigm extending the synchronous language Esterel and timed concurrent constraint programming with backtracking, for creating and composing search strategies. In this formalism, search strategies are composed in the same way as we compose concurrent processes. Our contributions include the design and behavioral semantics of spacetime programming, and proofs that spacetime programs are deterministic, reactive and extensive functions. Moreover, spacetime programming provides a bridge between the theoretical foundations of constraint-based concurrency and the practical aspects of constraint solving. We developed a prototype of the compiler that produces search strategies with a small overhead compared to hard-coded ones.

KEYWORDS

synchronous programming, concurrent constraint programming, constraint satisfaction problem, search strategy

1 INTRODUCTION

Constraint programming is a powerful paradigm for modelling problems in terms of constraints over variables. This declarative paradigm solves many practical problems including scheduling, vehicle routing and biology problems [33], as well as more unusual problems such as musical composition [55]. Constraint programming describes what the problem is, whereas procedural approaches describe how a problem is solved. The programmer declares the constraints of the problem and relies on a generic constraint solver to obtain a solution.

A constraint satisfaction problem (CSP) is a pair ⟨d, C⟩ where d is a function mapping variables to sets of values (the domains) and C is a set of constraints on these variables. The goal is to find a solution: a set of singleton domains such that every constraint is satisfied. For example, given the CSP ⟨{x ↦ {1, 2, 3}, y ↦ {1, 2, 3}}, {x > y, x ≠ 2}⟩, a solution is {x ↦ 3, y ↦ 1}.

The solving procedure usually interleaves two steps: propagation and search. Propagation removes values from the domains that do not satisfy at least one constraint. The search step makes a choice when propagation cannot infer more information, and backtracks to another choice if the former one did not lead to a solution. The successive interleaving of choices and backtracks leads to the construction of a search tree that can be explored with various search strategies. In this paper, the term "search strategy" takes the broad sense of any procedure that describes how a CSP is solved.

In order to attain reasonable efficiency, the programmer must often customize the search strategy per problem [3, 47, 54]. However, programming a search strategy in a constraint solver is a daunting task that requires expertise and a good understanding of the solver's internals. This is why various language abstractions have emerged to ease the development of search strategies [21, 26, 57, 60].

One of the remaining problems of search languages is the compositionality of search strategies: how can we easily combine two strategies to form a third one? Compositionality is important to build a collection of search strategies reusable across problems. To cope with this compositionality issue, we witness a growing number of proposals based on functional programming [41], constraint logic programming [40], and search combinators [42]. However, a recurring issue in these approaches is the difficulty of sharing information among strategies; we discuss this drawback and others in Section 8.

We propose spacetime programming (or "spacetime" for short) to tackle this compositionality issue. Spacetime is a language based on the imperative synchronous language Esterel [5] and timed concurrent constraint programming (TCC) [34, 35]. Spacetime extends the synchronous model of computation of Esterel with backtracking, and refines the interprocess communication mechanism of TCC with lattice-based variables. We introduce these features in the following two paragraphs.

Synchronous Programming with Backtracking. The synchronous paradigm [15] proposes a notion of logical time dividing the execution of a program into a sequence of discrete instants. A synchronous program is composed of processes that wait for one another before the end of each instant. Operationally, we can view a synchronous program as a coroutine: a function that can be called multiple times and that maintains its state between two successive calls. One call to this coroutine represents one elapsed instant. The main goal of logical time is to coordinate concurrent processes while avoiding typical issues of parallelism, such as deadlock or indeterminism [22].
Spacetime inherits most of the temporal statements of TCC, and more specifically those of the synchronous language Esterel [5], including the delay, sequence, parallel, loop and conditional statements. The novelty of spacetime is to connect the search tree generated by a CSP and the linear logical time of synchronous programming. Our proposal is captured in the following principle:

A node of the search tree is explored in exactly one logical instant.

A corollary to this first principle is:

A search strategy is a synchronous process.

These two principles are illustrated in Sections 4 and 6 with well-known search strategies.

Deterministic Interprocess Communication. The second characteristic of spacetime is inherited from concurrent constraint programming (CCP) [38]. CCP defines a shared memory as a global constraint store accumulating partial information. CCP processes communicate and synchronize through this constraint store with two primitives: tell(c) for adding a constraint c into the store, and ask(c) for asking whether c can be deduced from the store. Concurrency is treated by requiring the store to grow monotonically and extensively, which implies that removal of information is not permitted. An important result is that any CCP program is a closure operator over its constraint store (a function that is idempotent, extensive and monotone).

TCC embeds CCP in the synchronous paradigm [34, 35] such that an instant is guaranteed to be a closure operator over its store; however, information can be lost between two instants. There are two main differences between spacetime and TCC.

Firstly, instead of a central and shared constraint store, variables in spacetime are defined over lattice structures. The tell and ask operations are thus defined on lattices, where tell relies on the least upper bound operation and ask on the order of the lattice. In Section 3, we formalize a CSP as a lattice that we later manipulate as a variable in spacetime programs.

Secondly, unlike TCC programs, spacetime programs are not closure operators by construction. This stems from the negative ask statement (testing the absence of information), which is not monotone, and from the presence of external functions, which are not necessarily idempotent and monotone. As in Esterel, we focus instead on proving that the computation is deterministic and reactive. In addition, we also prove that spacetime programs are extensive functions within and across instants (Section 5.6).

Contributions. In summary, this paper includes the following contributions:

• We provide a language tackling the compositionality issue of search strategies. We illustrate this claim in Sections 4 and 6 by reconstructing and combining well-known search strategies.
• We extend the behavioral semantics of Esterel to backtracking and to variables defined over lattices, with proofs of determinism, reactivity and extensiveness (Section 5).
• We implement a prototype of the compiler¹ and integrate spacetime into the Java language (Section 7). The evaluation of the search strategies presented in this paper shows a small overhead compared to the hard-coded ones of Choco [31].
• Spacetime is the first language that unifies constraint-based concurrency, synchronous programming and backtracking. This unification bridges a gap between the theoretical foundations of CCP and the practical aspects of constraint solving.

¹ Open source compiler available at https://github.com/ptal/bonsai/tree/PPDP19.

2 DEFINITIONS

To keep this paper self-contained, we give the necessary definitions from lattice theory, which are then used to define constraint programming. Given an ordered set ⟨L, ≤⟩ and S ⊆ L, x ∈ L is a lower bound of S if ∀y ∈ S, x ≤ y. We denote the set of all the lower bounds of S by Sℓ. The element x ∈ L is the greatest lower bound of S if ∀y ∈ Sℓ, x ≥ y. The least upper bound is defined dually by reversing the order.

Definition 2.1 (Lattice). An ordered set ⟨L, ≤⟩ is a lattice if every pair of elements x, y ∈ L has both a least upper bound and a greatest lower bound. We write x ⊔ y (called join) for the least upper bound of the set {x, y} and x ⊓ y (called meet) for its greatest lower bound. A bounded lattice has a top element ⊤ ∈ L such that ∀x ∈ L, x ≤ ⊤ and a bottom element ⊥ ∈ L such that ∀x ∈ L, ⊥ ≤ x.

As a matter of convenience, and when no ambiguity arises, we simply write L instead of ⟨L, ≤⟩ when referring to ordered structures. Also, we refer to the ordering of the lattice L as ≤L, and similarly for any operation defined on L.

An example is the lattice LMax of increasing integers ⟨N, ≥, max⟩ where N ⊂ ℕ, ≥ is the natural order on N and max is the join operator. Dually, we also have LMin with the order ≤ and join min.

The Cartesian product P × Q is defined by the lattice ⟨{(x, y) | x ∈ P, y ∈ Q}, ≤ₓ⟩ such that (x1, y1) ≤ₓ (x2, y2) if x1 ≤P x2 ∧ y1 ≤Q y2. Given the lattice L1 × L2, it is useful to define the following projection functions: for i ∈ {1, 2}, we have πi((x1, x2)) = xi. For the sake of readability, we also extend the projection over any subset S ⊆ L1 × L2 as πi′(S) = {πi(x) | x ∈ S}.

Given a lattice ⟨L, ≤⟩, a function f : L → L is extensive if for all x ∈ L we have x ≤ f(x). This property is important in language semantics because it guarantees that a program does not lose information. More background on lattice theory can be found in [7, 11].
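To make these definitions concrete before they are used, the following Java sketch shows one possible shape for a lattice interface together with the LMax lattice. The names Lattice, join and entails are assumptions made for this illustration; they are not the actual interface required by the spacetime compiler introduced in Section 4.

    // A minimal sketch of a lattice interface (hypothetical names).
    interface Lattice<T extends Lattice<T>> {
      T join(T other);          // least upper bound x ⊔ y
      boolean entails(T other); // true if `other` can be deduced from `this`, i.e. other ≤ this
    }

    // LMax: increasing integers, ordered by ≥, with join = max.
    final class LMax implements Lattice<LMax> {
      private final int value;
      LMax(int value) { this.value = value; }

      public LMax join(LMax other) {          // join is the maximum
        return new LMax(Math.max(value, other.value));
      }
      public boolean entails(LMax other) {    // x entails y iff x carries at least as much information as y
        return other.value <= value;
      }
      public int get() { return value; }
    }

For example, new LMax(2).join(new LMax(5)) is the element 5, which entails both 2 and 5; the bottom element of LMax is its smallest representable integer.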
3 LATTICE VIEW OF CONSTRAINT PROGRAMMING

As we will see shortly, a spacetime program is a function exploring a state space defined over a lattice structure. To illustrate this paradigm, we choose in this paper to focus on the state space generated by constraint satisfaction problems (CSPs). Hence, we describe the lattice of CSPs and the lattice of its state space, called a search tree.

3.1 Lattice of CSPs

Following various works [1, 13, 28, 46], we introduce constraint programming through the prism of lattice theory. The main observation is that the hierarchical structure of constraint programming can be defined by a series of lifts. We incrementally construct the lattice of CSPs.

First of all, we define the domain of a variable as an element of a lattice structure. In the case of finite domains, an example is the powerset lattice ⟨P(N), ⊇⟩ with the finite set N ⊂ ℕ, ordered by superset inclusion. For instance, a variable x in {0, 1, 2} ∈ P(N) is less informative than a singleton domain {0}, i.e. {0, 1, 2} ≤ {0}. Other lattices can be used (see e.g. [13]), so we abstract the lattice of variables' domains as ⟨D, ≤⟩.

Let Loc be an unordered set of variables' names. We lift the lattice of domains D to the lattice of partial functions Loc ⇀ D. In operational terms, a partial function represents a store of variables.

Definition 3.1 (Store of variables). We write the set of all partial functions from Loc to D as [Loc ⇀ D]. Let σ, τ ∈ [Loc ⇀ D]. We write π1′(σ) for the subset of Loc on which σ is defined. The set of variable stores is a lattice defined as:

    SV = ⟨[Loc ⇀ D], τ ≤ σ if ∀ℓ ∈ π1′(τ), ∃ℓ′ ∈ π1′(σ), τ(ℓ) ≤D σ(ℓ′)⟩

This order is called the Smyth order [48].

We find it convenient to turn a partial function σ into a set, called its graph, defined by {(x, σ(x)) | x ∈ π1′(σ)}. Given a lattice L, the lattice Store(Loc, L) is the set of the graphs of all partial functions from Loc to L, also equipped with the Smyth order. In comparison to SV, we parametrize the lattice Store(Loc, L) by its set of locations Loc and its underlying lattice L, so we can reuse it later. Notice that Store(Loc, D) is isomorphic to SV.

We turn a logical constraint c ∈ C into an extensive function p : SV → SV, called a propagator, over the store of variables. For example, given the store d = {x ↦ {1, 2}, y ↦ {2, 3}} and the constraint x ≥ y, a propagator p≥ associated to ≥ gives p≥(d) = {x ↦ {2}, y ↦ {2, 3}}. We notice that this propagation step is extensive, e.g. d ≤ p≥(d). Beyond extensiveness, a propagator must also be sound, i.e. it does not remove solutions of the induced constraint, to guarantee the correctness of the solving algorithm.

We now define the lattice of all propagators SC = ⟨P(Prop), ⊆⟩ where Prop is the set of all propagators (extensive and sound functions). The order is given by set inclusion: additional propagators bring more information to the CSP. We call an element of this lattice a constraint store. The lattice of all CSPs—with propagators instead of logical constraints—is given by the Cartesian product CSP = SV × SC.

Given a CSP ⟨d, {p1, …, pn}⟩ ∈ CSP, the propagation step is realized by computing the fixpoint of p1(p2(…pn(d))). We write propagate : CSP → CSP for the function computing this fixpoint. In practice, this function is one crucial ingredient for obtaining good performance, and it is part of the theory of constraint propagation (e.g. see [1, 44, 51]). In the rest of this paper, we keep this propagation step abstract, and we delegate it to specialized solvers when needed.

Once propagation is at a fixpoint, and if the domain d is not yet a solution, a search step must be performed. Search consists in splitting the state space with a branching function branch : CSP → Store(ℕ, CSP) and exploring the created sub-problems successively. We call an element of the lattice Store(ℕ, CSP) the branches. The indices of the branches serve to order the child nodes. For instance, a standard branching function consists in selecting the first non-instantiated variable and dividing its domain into two halves—one explored in each sub-problem. If the branching strategy is strictly extensive (x < f(x)) over each branch bi ∈ branch(⟨d, P⟩), and does not add variables into d, then this solving procedure is guaranteed to terminate on finite domains. This solving algorithm is called propagate and search.
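As an illustration of the propagator concept, the sketch below implements a propagator for x ≥ y over finite set domains in Java. The class and method names are hypothetical and only mirror the definition above; note also that this version prunes both domains, which is slightly stronger than the p≥ shown in the example.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeSet;

    // A sketch of a propagator for the constraint x >= y over finite set domains.
    // It is extensive (it only removes values, i.e. moves up in <P(N), ⊇>)
    // and sound (it never removes a value belonging to a solution of x >= y).
    final class GeqPropagator {
      static Map<String, Set<Integer>> propagate(Map<String, Set<Integer>> d, String x, String y) {
        Map<String, Set<Integer>> out = new HashMap<>(d);
        int minY = d.get(y).stream().min(Integer::compare).orElseThrow();
        int maxX = d.get(x).stream().max(Integer::compare).orElseThrow();
        Set<Integer> dx = new TreeSet<>(d.get(x));
        dx.removeIf(v -> v < minY);   // values of x smaller than min(d(y)) cannot satisfy x >= y
        Set<Integer> dy = new TreeSet<>(d.get(y));
        dy.removeIf(v -> v > maxX);   // values of y greater than max(d(x)) cannot satisfy x >= y
        out.put(x, dx);
        out.put(y, dy);
        return out;
      }
    }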
3.2 Lattice of Search Trees

A novel aspect of this lattice framework is to view the search tree as a lattice as well. It relies on the antichain completion, which maps a lattice to the antichain subsets of its powerset.²

Definition 3.2 (Antichain completion). The antichain completion of a lattice L, written A(L), is a lattice defined as:

    A(L) = ⟨{S ⊆ P(L) | ∀x, y ∈ S, x ≤ y ⟹ x = y},  S ≤ Q if ∀x ∈ S, ∃y ∈ Q, x ≤L y⟩

It is equipped with the Smyth order.

The lattice of search trees is defined as ST = A(CSP). Intuitively, an element q ∈ ST represents the frontier of the search tree being explored. The antichain completion accurately models the fact that parent nodes are not stored in q. Operationally, we view q as a queue of nodes³, which is central to backtracking algorithms.

The missing piece to build and explore the CSP state space is the queueing strategy, which allows us to pop nodes from and push nodes onto the queue.

Definition 3.3 (Queueing strategy). Let L be a lattice and A(L) be its antichain completion. The pair of functions

    pop : A(L) → A(L) × L
    push : A(L) × Store(ℕ, L) → A(L)

is a queueing strategy if, for any extensive function f : A(L) × L → A(L) × Store(ℕ, L), the function composition push ∘ f ∘ pop is extensive over A(L).

In the context of CSP solving, we have L = CSP and A(L) = ST. This definition implies that we never remove information from the queue, which might seem unfortunate to the implementer. In practice, the nodes never used again in the future of the program can be safely removed—for example the non-solution leaves. As examples of queueing strategies, we have depth-first search (DFS), breadth-first search (BFS) and best-first search.

The solutions of a CSP ⟨d, P⟩ are computed as the fixpoint of the function solve({⟨d, P⟩}) which is defined as:

    solve : ST → ST
    solve = push ∘ (id × (branch ∘ propagate)) ∘ pop

This function formalizes the usual steps when solving a constraint problem: pop a node from the queue, propagate it, divide it into several sub-problems, and push these sub-problems onto the queue. The output type of each function matches the input type of the next one—notice that we use the identity function id to avoid passing the search tree to propagate and branch. Reaching a fixpoint on solve means that we have explored the full search tree, and found all solutions if there are any.

² The antichain completion of a lattice L is isomorphic to the set of ideals of L, as shown by Crampton and Loizou [10]. We prefer the antichain formulation because it is closer to the data structure of a queue.
³ Despite the name, this terminology of "queue" does not imply a particular queueing strategy, i.e. the order in which the nodes are explored.
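To connect these definitions with an implementation, here is a schematic propagate-and-search loop in Java. The types and method names (Csp, propagate, branch, and the Deque used as queue) are assumptions made for this sketch and do not correspond to a specific solver API.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // A schematic propagate-and-search loop: pop a node, propagate it,
    // branch on it, and push the sub-problems back onto the queue.
    final class PropagateAndSearch {
      interface Csp {
        Csp propagate();     // fixpoint of the propagators (extensive)
        List<Csp> branch();  // sub-problems; empty when solved or failed
        boolean isSolution();
      }

      // Depth-first exploration: the Deque is used as a stack (the queueing strategy).
      static void solve(Csp root, List<Csp> solutions) {
        Deque<Csp> queue = new ArrayDeque<>();
        queue.push(root);
        while (!queue.isEmpty()) {
          Csp node = queue.pop().propagate();
          if (node.isSolution()) { solutions.add(node); continue; }
          List<Csp> branches = node.branch();
          for (int i = branches.size() - 1; i >= 0; i--) { // push right branch first: explore left-to-right
            queue.push(branches.get(i));
          }
        }
      }
    }

Swapping the stack for a FIFO queue or a priority queue changes the queueing strategy (BFS or best-first search) without touching the rest of the loop, which is exactly the parametricity that Definition 3.3 captures.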
3.3 The Issue of Compositionality

The solve function is parametrized by a branching and a queueing strategy. However, this does not suffice to program every search strategy. For example, the depth-bounded search strategy—further developed in the next section—consists in exploring the search tree until a given depth is reached. To program this strategy in the current framework, we must extend the definition of a CSP with a depth counter defined over LMax (given in Section 2). The resulting search tree is defined as ST2 = A(CSP × LMax). We also extend solve with two functions: inc for increasing the counter of the child nodes, and prune for pruning the nodes at the given depth:

    solve2 : ST2 → ST2
    solve2 = push ∘ (id × (inc ∘ prune ∘ branch ∘ propagate)) ∘ pop

Although orthogonal to the depth counter, the types of the propagate and branch functions must be modified to work over CSP × LMax. Another solution would be to project elements of CSP × LMax with additional id functions. A more elaborate version of this idea, relying on monads to encapsulate data, is investigated in monadic constraint programming [41]. The search strategies defined in this framework require the users to have substantial knowledge of functional language theory. Similarly, constraint solving libraries are made extensible through software engineering techniques such as design patterns. In all cases, a drawback is that it complicates the code base, which becomes hard to understand and extend with new search strategies. Moreover, such software architecture varies substantially across solvers.

The problem is that we need to either modify existing structures or integrate the strategies into some predefined software architecture in order to program new search strategies. We call this problem the compositionality issue. Our proposal is to rely on language abstractions instead of software abstractions to program search strategies.

4 LANGUAGE OVERVIEW

We give a tour of the spacetime model of computation and syntax by incrementally building the iterative-deepening search strategy [19]. A key insight is that this search strategy is developed generically with regard to the state space.

4.1 Model of Computation

The model of computation of spacetime is inspired by those of (timed) concurrent constraint programming (CCP) and Esterel.

CCP model of computation. We view the structure of a CCP program as a lattice ⟨L, ⊨, ⊔⟩ where ⊨ is called the entailment. The entailment is the order of the lattice defined as a ⊨ b ≡ b ≤ a. Following Scott's information systems [45], CCP views the bottom element ⊥ as the lack of information, the top element ⊤ as all the information, the tell operator x ⊔ y as the join of the information in x and y, and the ask operator x ⊨ y as an expression that is true if we can deduce y from x.

CCP processes communicate through this lattice by querying for information with the entailment, or adding information with the join. For example, consider the following definitions of prune and inc:

    (when depth ⊨ 4 then "prune the subtree") || (depth = depth ⊔ (depth + 1))

with || the parallel composition. The first process is suspended on depth ⊨ 4 until depth becomes greater than or equal to 4. Hence, the second process is completed first if we initially have depth < 4. The limitation of CCP is that it is not possible to write a process for the statement "prune the subtree". This is because a CCP process computes over a fixed lattice, such as CSP, but it is not possible to compute over its antichain completion, which is necessary for creating and exploring its state space.

Space component of spacetime. The approach envisioned with the spacetime paradigm is to view a search algorithm as a set of concurrent processes exploring a state space collaboratively. In this model, we rewrite solve2 as a parallel composition of processes:

    solve2 = push ∘ (inc || prune || branch || propagate) ∘ pop

where the processes read and write the variables depth ∈ LMax and ⟨d, P⟩ ∈ CSP, and produce branches ∈ Store(ℕ, CSP × LMax). Firstly, we pop a node from the queue, which contains the variables depth and ⟨d, P⟩. Then, similarly to CCP, the processes communicate by reading and writing into these variables. The Cartesian product of the variables, called the space of the program, is automatically synthesised by the spacetime semantics. This is reflected in the type CSP × LMax of branches. The processes only manipulate branches through dedicated statements, namely space and prune (introduced below).

Time component of spacetime. One remaining question is how to synchronize processes so that every process waits for the others before the next node is popped. Our proposal is to rely on the notion of synchronous instants of Esterel. During each instant, a process is executed until it encounters a special statement called pause.⁴ Once pause is reached, the process waits for all other processes to be paused or terminated. The next instant is then started.

The novelty in spacetime is to connect the passing of time to the expansion of the search tree. Concretely, an instant consists in performing three consecutive steps: pop a node, execute the processes until they are all paused, and push the resulting branches onto the queue. We repeat these steps until the queue is empty or all processes are terminated.

We now detail this model of computation through several examples, notably by programming the inc and prune processes. We delay the presentation of propagate and branch to Section 6.

⁴ To ensure cooperative behavior among processes, the amount of work to perform during an instant must be bounded in time.
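The following Java sketch illustrates how a runtime might drive these three steps for a single (possibly composite) process. It is a simplification for exposition, with hypothetical Node and Process types, and is not the actual SpaceMachine engine presented in Section 4.4.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // One logical instant = pop a node, run the process until it pauses,
    // push the branches it produced. Node and Process are hypothetical types.
    final class InstantLoop {
      interface Node { }
      interface Process {
        List<Node> runInstant(Node current);  // branches produced during this instant
        boolean isTerminated();
      }

      static void run(Deque<Node> queue, Process p) {
        while (!queue.isEmpty() && !p.isTerminated()) {
          Node node = queue.pop();                   // step 1: pop a node
          List<Node> branches = p.runInstant(node);  // step 2: run until pause or termination
          for (int i = branches.size() - 1; i >= 0; i--) {
            queue.push(branches.get(i));             // step 3: push the branches (stack => DFS, left-to-right)
          }
        }
      }
    }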
4.2 Binary Search Tree

A spacetime program is a set of Java classes augmented with spacetime class fields (prefixed by the single_space, world_line or single_time keywords) and processes (prefixed by the proc or flow keywords). The type of a spacetime field or local variable is a Java class that implements a lattice interface providing the entailment and join operators. A process does not return a value; it acts as a coroutine mutating the spacetime variables in each instant. In contrast, Java method calls are viewed as atomic operations in a spacetime process.

One of the simplest processes in spacetime generates an infinite binary search tree:

    class Tree {
      public proc binary =
        loop
          space nothing end;
          space nothing end;
          pause;
        end
    }

This process generates a binary tree in which every node is empty; we will decorate these nodes with data later. A branch is created with the statement space p end where the process p describes the differences between the current node and the child node. In the example, the difference is given by nothing, which is the empty process terminating immediately without effect, thus all generated nodes will be the same.

In each instant, four actions are realized (we connect these actions to the model of computation in parentheses):

(1) A node is popped from the queue (function pop).
(2) The process is executed until we reach a pause statement (process between pop and push).
(3) We retrieve the sequence of branches, duplicate the backtrackable state⁵ for each space p end statement, and execute each p on a distinct copy of the state to obtain the child nodes (writing into the variable branches).
(4) The child nodes are pushed onto the queue (function push).

These actions are repeated in the statement loop. Since the process binary never terminates and the queue is never empty, the generated state space is infinite. In summary, a process generates a sequence of branches during an instant, and a search tree across instants.

⁵ The backtrackable state is the Cartesian product of the variables prefixed by world_line (see below).

Now, we illustrate the use of spacetime variables by introducing node and depth counters:

    class Node {
      public single_space LMax node = new LMax(0);
      public flow count = readwrite node.inc()
    }
    class Depth {
      public world_line LMax depth = new LMax(-1);
      public flow count = readwrite depth.inc()
    }

A flow process executes its body p in each instant; the keyword flow is syntactic sugar for loop p; pause end. Both classes work similarly: we increase their counters by one in each instant with the method inc on LMax. We discuss two kinds of annotations appearing in these examples: read/write annotations and spacetime annotations.

Read/write annotations indicate how a variable is manipulated inside a host function. They come in three flavors: read x indicates that x is only read by the function, write x that the function only writes more information into x without reading it, and readwrite x that the value written into x depends on the initial value of x. Every write into x must respect its lattice order, and this verification is left to the programmer of the lattice. For example, the method x.inc() is defined as x = x + 1, and thus x must be annotated by readwrite. These attributes are essential to ensure determinism when variables are shared among processes, and for correctly scheduling processes.

Spacetime annotations indicate how a variable evolves in memory through time. For this purpose, a spacetime program has three distinct memories in which the variables can be stored:

(i) Global memory (keyword single_space) for variables evolving globally to the search tree. A single_space variable has a unique location in memory throughout the execution. For example, the counter node is a single_space variable: since we explore one node in every instant, we increase its value by one in each instant.
(ii) Backtrackable memory (keyword world_line) for variables local to a path in the search tree. The queue of nodes is the backtrackable memory. For example, the value of the counter depth must be restored on backtracking in the search tree.
(iii) Local memory (keyword single_time) for variables local to an instant and reallocated in each node. A single_time variable only exists in one instant. We will encounter this last annotation later on.

Another feature of interest is the support of modular programming by assembling processes defined in different classes. As an example, we combine Tree.binary and Depth.count with the parallel statement:

    public proc binary_stats =
      module Tree generator = new Tree();
      module Depth depth = new Depth();
      par run generator.binary() || run depth.count() end
    end

The variables generator and depth are annotated with module to distinguish them from spacetime variables. We use the keyword run to disambiguate between process calls and method calls.

Last but not least, the disjunctive parallel statement par p || q end executes two processes in lockstep. It terminates once both processes have terminated. Dually, we have the conjunctive parallel statement par p <> q end which terminates (i) in the next instant if one of p or q terminates, or (ii) in the current instant if both p and q terminate. Condition (i) implements a form of weak preemption. An instant terminates once every process is paused or terminated. In this respect, pause can be seen as a synchronization barrier among processes.
4.3 Depth-bounded Search

Now we are ready to program a search strategy in spacetime. We consider the strategy BoundedDepth which bounds the exploration of the search tree to a depth limit:

    public class BoundedDepth {
      single_space LMax limit;
      public BoundedDepth(LMax limit) { this.limit = limit; }
      public proc bound_depth =
        module Depth counter = new Depth();
        par
          run counter.count()
        <>
          flow
            when counter.depth |= limit then prune end
          end
        end
      end
    }

Whenever depth is greater than or equal to limit, we prune the remaining search subtree. The construction of the search tree through time is illustrated in Figure 1 with limit set to 2. The black dots are the nodes already visited, the large one is the one currently being visited, and the white ones are those pushed onto the queue.

[Figure 1: Progression of bounded-depth search in each instant with maximum depth equal to 2; the figure shows snapshots of the search tree at instants t1, t2, t3, t6 and t7.]

The disjunctive parallel composes two search trees by union, whereas the conjunctive parallel composes them by intersection. For example, if we have binary() || bound_depth(), the search tree obtained is exactly the one of binary(), while binary() <> bound_depth() prunes the search tree at some depth limit. Over two branches, the statement prune || space p creates a single branch space p, while prune <> space p creates a pruned branch. This is made clear in Section 5.2, where we formalize these composition rules.
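The next subsection assumes a class BoundedTree combining Tree and BoundedDepth, whose definition is not shown in the paper. A plausible sketch, following the pattern of binary_stats and using the conjunctive parallel to intersect the two search trees, could look as follows; the exact class shape (in particular passing the limit to the module) is an assumption of this sketch.

    public class BoundedTree {
      single_space LMax limit;
      public BoundedTree(LMax limit) { this.limit = limit; }
      public proc search =
        module Tree tree = new Tree();
        module BoundedDepth bound = new BoundedDepth(limit);
        par run tree.binary() <> run bound.bound_depth() end
      end
    }

With the disjunctive parallel || instead of <>, the pruning process would have no effect, since the union of the two search trees is the unbounded binary tree.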
4.4 A Glimpse of the Runtime

The class Tree is processed by the spacetime compiler, which compiles every process into a regular Java method. For example, the process binary is compiled into the following Java method:

    public Statement binary() {
      return new Loop(
        new Sequence(Arrays.asList(
          new SpaceStmt(new Nothing()),
          new SpaceStmt(new Nothing()),
          new Delay(CompletionCode.PAUSE))));
    }

The compiled method returns the abstract syntax tree (AST) of the process. This AST is then interpreted by the runtime engine SpaceMachine:

    public static void main(String[] args) {
      Tree tree = new Tree();
      StackLR queue = new StackLR();
      SpaceMachine machine = new SpaceMachine(tree.binary(), queue);
      machine.execute();
    }

We parametrize the runtime engine by the queue StackLR: a traditional stack exploring the tree in depth-first search from left to right. Importantly, it means that the spacetime program is generic with regard to the queueing strategy. The method execute returns either when the spacetime program terminates, when the queue becomes empty or when we reach a stop statement. This last statement offers a way to stop and resume a spacetime program outside of the spacetime world, which is handy for interacting with the external world. In contrast, a pause statement is resumed automatically by the runtime engine as long as the queue is not empty.

Being aware of the runtime mechanism is helpful to extend BoundedDepth to the restart-based strategy iterative deepening depth-first search (IDS) [19]. IDS successively restarts the exploration of the same search tree while increasing the depth limit. This strategy combines the advantages of breadth-first search (diversifying the search) and depth-first search (low memory consumption). Assuming we have a class BoundedTree combining BoundedDepth and Tree (as sketched above), we program IDS in the host language as follows:

    public static void main(String[] args) {
      for (int limit = 0; limit < max_depth(); limit++) {
        BoundedTree tree = new BoundedTree(new LMax(limit));
        StackLR queue = new StackLR();
        SpaceMachine machine = new SpaceMachine(tree.search(), queue);
        machine.execute();
      }
    }

We introduce additional examples of search strategies in Section 6, and show how to combine two restart-based strategies in spacetime.
5 SEMANTICS OF SPACETIME

We develop the semantics of spacetime independently from the host language (Java in the previous section). To achieve that, we suppose the program is flattened: every module definition and process call is inlined, and no recursion is allowed in processes. We obtain a lighter abstract syntax of the spacetime statements, formalized as follows (p, q are processes, x, y are identifiers, and T is a host type):

    ⟨p, q⟩ ::= T x^{→ | ⋄ | ↓}
            | when x |= y then p else q
            | f(x1^{w|r|rw}, …, xn^{w|r|rw})
            | nothing | pause | stop | loop p | p ; q | p || q | p <> q
            | space p | prune

Spacetime annotations are shortened as follows: → stands for single_space, ⋄ for single_time and ↓ for world_line.⁶ Read/write annotations are given by w for write, r for read and rw for readwrite. Without loss of generality, we encapsulate the interactions between spacetime and its host language in function calls.

⁶ These symbols reflect how the variables evolve in the search tree. For example, ↓ depicts an evolution from the root to a leaf of the tree along a path.

5.1 Behavioral Semantics

The semantics of spacetime is inspired by the logical behavioral semantics of Esterel, a big-step semantics, as defined in [6, 29]. The semantic rules of spacetime defining the control flow of processes (for example loop or pause) are similar to those in Esterel. We adapt these rules to match the two novel aspects of spacetime:

(i) Storing lattice-based variables in one of the three memories (instead of Esterel's Boolean signals).
(ii) Defining a structure to collect and compose the (pruned) branches created during an instant.

The rules proper to spacetime are specific to either (i) or (ii). Given the set of outputs produced by a program, a derivation in the behavioral semantics is a proof that a program transition is valid. The behavioral transition rule is given as:

    Q, L ⊢ p —[O′ / I ⊔ O]→ p′

(the output O′ labels the top of the transition arrow and the input/output I ⊔ O its bottom), where the program p is rewritten into the program p′ under (i) the queue Q equipped with a queueing strategy (pop, push), (ii) the set of locations L ⊂ Loc providing a unique identifier to every declaration of a variable, (iii) the input I, and (iv) the outputs O and O′. We denote the set of syntactic variable names (as appearing in the source code) by Name, such that Name ∩ Loc = ∅. We write L ∪̇ {ℓ} for the disjoint union, which is useful to extract a fresh location ℓ from L.

The goal of the behavioral semantics is not to compute an output O but to prove that a transition is valid if we already know O. We obtain a valid derivation if the output O′ derived by the semantics is equal to the provided output O. Conceptually, the behavioral semantics allows processes to instantaneously broadcast information. In the following, we call the input and output structures universes, and we write U′ for the output O′, and U = I ⊔ O for the input/output provided.

Space structure. The variable environment of a program, called its space, stores the spacetime variables. The spacetime annotations are given by the set spacetime = {→, ⋄, ↓}. The set of values of a variable is given by its type in the host language, which must be a lattice structure. From the spacetime perspective, we erase the types in the set Value, which is the disjoint union of all types, and we delegate typing issues to the host language. Putting all the pieces together, the set of spacetime variables Var is the poset {⊤} ∪ (spacetime × Value). We need a distinct top element ⊤ for representing variables that are merged with a different spacetime or type—this can be checked at compile time.

Given a set of locations Loc, the lattice of the spaces of the program is defined as Space = Store(Loc, Var). The element ⊥ is the empty space. Given a space S ∈ Space, we define the subset of the single_space variables as S^→, the single_time variables as S^⋄ and the world_line variables as S^↓. In addition, given a variable (st, v) ∈ S(ℓ) at location ℓ, we define the projections S^st(ℓ) = st and S^V(ℓ) = v to respectively extract the spacetime and the value of the variable. S^V(ℓ) maps to ⊥ if ℓ is undefined in S.

Universe structure. A universe incorporates all the information produced during an instant, including the space, the completion code and the sequence of branches. The completion code models the state of a process at the end of an instant: normally terminated (code 0), paused in the current instant with pause (code 1), or stopped in the user environment with stop (code 2). We denote the set of completion codes by Compl = ⟨{0, 1, 2}, ≤ℕ⟩. We describe the sequence of branches B* in the next section. The universe structure is defined as follows:

    Universe = Space × Compl × B*

Given U ∈ Universe, we define the projections U^S, U^k and U^B respectively mapping to the space, the completion code and the sequence of branches. We also write U^V for (U^S)^V, U^→ for (U^S)^→, and similarly for ⋄ and ↓.

5.2 Search Semantics

In this section, we use the following relevant subset of spacetime:

    ⟨p, q⟩ ::= p ; q | p || q | p <> q | space p | prune | α

where p, q ∈ Proc, with Proc the set of all processes, and α is an atomic statement which is not composed of other statements. We can extend the definitions given below to the full spacetime language without compositional issues.

We give the semantics of the search tree statements with a branch algebra. The set of all branches is defined as B = {space w | w ∈ Space^↓} ∪ {prune}. That is to say, a branch is either labelled by a world_line space or pruned.

Definition 5.1 (Branch algebra). The branch algebra is defined over sequences of branches ⟨B*, ◦, ∨, ∧⟩ where all operators are associative, ◦ is noncommutative, and ∨ and ∧ are commutative. The empty sequence ⟨⟩ is the identity element of the three operators.

The operators ◦, ∨ and ∧ match the commutative and associative laws of the semantics of the operators ;, || and <> respectively.

Sequence composition. Given bi, bj′ ∈ B with 1 ≤ i ≤ n and 1 ≤ j ≤ m, the sequence operator ◦ performs the concatenation of two sequences of branches as follows:

    ⟨b1, …, bn⟩ ◦ ⟨b1′, …, bm′⟩ = ⟨b1, …, bn, b1′, …, bm′⟩

Parallel compositions. We define the operators ∨¹ and ∧¹ to combine two branches, and then lift these operators to sequences of branches. Two sequences of branches are combined by repeating the last element of the shortest sequence when the sizes differ. Given w, w′ ∈ Space^↓ and b ∈ B, we define the disjunctive parallel operators ∨¹ between two branches and ∨ between two sequences of branches as follows:

    b ∨¹ prune = b
    space w ∨¹ space w′ = space (w ⊔ w′)

    ⟨b1, …, bn⟩ ∨ ⟨b1′, …, bm′⟩ =
      ⟨b1 ∨¹ b1′, …, bn−1 ∨¹ bm−1′, bn ∨¹ bm′⟩   if n = m
      ⟨b1 ∨¹ b1′, …, bn−1 ∨¹ bm′, bn ∨¹ bm′⟩     if n > m

The case where m > n is handled by the commutativity of ∨. The conjunctive parallel operators ∧¹ and ∧ are defined similarly, except for prune:

    b ∧¹ prune = prune

This algebra allows us to delete, replace or increase the information in a branch. For example, given a process p:

• p <> (space nothing ; prune) deletes every branch created by p but the first.
• p <> (space nothing ; prune ; space nothing) deletes the second branch.
• p || (prune ; space q ; prune) increases the information in the second branch by q.

We can also obtain any permutation of a sequence of branches with a suited push function. The only operation not supported is weakening the information of one branch. We have yet to find a use-case for such an operation.
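A small executable reading of this algebra is given below in Java; the representation of a branch (a record with a flag and an opaque store) and the method names are choices made for this sketch, not part of the formal development.

    import java.util.ArrayList;
    import java.util.List;

    // A sketch of the branch algebra: a branch is either PRUNE or a space
    // labelled by a world_line store (represented here by an opaque String).
    final class Branches {
      record Branch(boolean pruned, String space) {
        static final Branch PRUNE = new Branch(true, null);
        static Branch space(String w) { return new Branch(false, w); }
      }

      // Disjunctive combination ∨¹: prune is the identity; two spaces are joined.
      static Branch or1(Branch a, Branch b) {
        if (a.pruned()) return b;
        if (b.pruned()) return a;
        return Branch.space(a.space() + " ⊔ " + b.space());
      }

      // Conjunctive combination ∧¹: prune is absorbing; two spaces are joined.
      static Branch and1(Branch a, Branch b) {
        if (a.pruned() || b.pruned()) return Branch.PRUNE;
        return Branch.space(a.space() + " ⊔ " + b.space());
      }

      // Lift a pairwise operator to sequences, repeating the last element
      // of the shorter sequence; the empty sequence is the identity.
      interface Op { Branch apply(Branch a, Branch b); }
      static List<Branch> zip(List<Branch> xs, List<Branch> ys, Op op) {
        if (xs.isEmpty()) return new ArrayList<>(ys);
        if (ys.isEmpty()) return new ArrayList<>(xs);
        int n = Math.max(xs.size(), ys.size());
        List<Branch> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
          Branch a = xs.get(Math.min(i, xs.size() - 1));
          Branch b = ys.get(Math.min(i, ys.size() - 1));
          out.add(op.apply(a, b));
        }
        return out;
      }
    }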
[Figure 2: Behavioral semantics rules of spacetime. The figure gives the rules nothing, pause, stop, hcall, loop, when-true, when-false, var-decl⋄, start-var-decl→↓, resume-var-decl→↓, prune, space, space-pruned, enter-seq, next-seq, par∨, par∧ and exit-par∧; the inference rules themselves are not reproduced here.]

5.3 Semantics Rules

The semantics rules of spacetime are given in Figure 2. We isolate host computations by relying on the host transition rule e —[H′ / H]→ v, which reduces the expression e into the value v with the input/output host environment H and the output environment H′. The interface between spacetime and the host language is realized by a pair of functions (host, space) such that host maps the space S into the host environment H and vice versa. We write e ։ v when the space of the program is not modified. We explain each fragment of the semantics in the following paragraphs.

The axioms nothing, pause and stop set the completion code respectively to terminated, paused and stopped. We leave the output space and branches empty.

The main interaction with the host language is given by the rule hcall. The function f and its arguments are evaluated in the host version of the input/output space, written host(U^S). The properties guaranteed by the spacetime semantics depend on the properties fulfilled by the host functions.

The rule loop simulates an iteration of the loop by extracting and executing the body p outside of the loop. We guarantee that p is not instantaneous by forbidding the completion code k to be equal to 0.
The conditional rules when-true and when-false evaluate the entailment result of x ⊨ y to execute either p or q. In case the entailment status is unknown, which happens if x and y are not ordered, we promote unknown to false. This is reminiscent of the closed-world assumption in logic programming: "what we do not know is false".

5.3.1 Semantics of spacetime variables. The variable declaration rules register the variables in the space or queue memory. A variable's name x must be substituted by a unique location ℓ. Locations are necessary to distinguish variables with the same name in the space and queue—this is possible if the scope of the variable is re-entered several times during⁷ and across instants. In the rules var-decl⋄ and start-var-decl→↓, we extract a fresh location ℓ from L and substitute x for ℓ in the program p, which is written p[x → ℓ].⁸ The substitution function is defined inductively over the structure of the program p. We give its two most important rules:

    y[x → ℓ] ↦ ℓ                           if x = y
    y[x → ℓ] ↦ y                           if x ≠ y
    (T y^st ; p)[x → ℓ] ↦ T y^st ; p            if x = y
    (T y^st ; p)[x → ℓ] ↦ T y^st ; p[x → ℓ]     if x ≠ y

It replaces any identifier equal to x by ℓ, and stops when it reaches a variable declaration with the same name.

For single_time variables, we create a new location in each instant (var-decl⋄). For single_space and world_line variables, we create a new location only during the first instant of the statement (start-var-decl→↓), and the next instants reuse the same location (resume-var-decl→↓). In the first instant, the values are initialized to the bottom element ⊥T of the lattice T. In the next instants, we retrieve the value of a world_line variable in the queue by popping one node and then extracting the value at location ℓ from that node. The values of single_space variables are transferred from one instant to the next by the reaction rules introduced in the next section.

5.3.2 Semantics of search statements. The statement prune is an axiom creating a single pruned branch. For space p, we have two cases: either we execute p under the input/output branch ⟨space W⟩ (rule space), or, if another process prunes this branch, we avoid executing p (rule space-pruned). The execution of the space statement does not impact the variables in the current instant, which is materialized by setting the space to ⊥ in the output universe. In addition, we require that p terminates instantaneously, only writes into world_line variables and does not create nested branches.

To specify the sequential and parallel statements, we extend the join over Universe with a branch operator. We have (S, k, B) ⊔∧ (S′, k′, B′) equal to (S ⊔ S′, k ⊔ k′, B ∧ B′), and similarly for ◦ and ∨.

To formalize the sequence p ; q, we have the rule enter-seq, which tackles the case where p does not terminate during the current instant, and the rule next-seq, where p terminates and q is executed. The disjunctive parallel statement p || q derives p and q concurrently and merges their output universes with ⊔∨ (rule par∨). Finally, the conjunctive parallel statement p <> q is similar to || when neither p nor q terminates (rule par∧). However, if one process terminates, we rewrite the statement to nothing, which prevents this statement from being executed in future instants (rule exit-par∧). Note that the semantics of composition in space of || and <> match their respective semantics of composition in time.

5.3.3 An example of derivation. We illustrate the mechanics of the behavioral semantics with a short example:

    LMax x↓ ; ((when x |= 1 then space inc(x^rw)) <> inc(x^rw))

Two processes communicate over the variable x. The first creates a branch incrementing x by one if x is greater than or equal to 1, while the second increments x in the current instant. To derive this process in the behavioral semantics, we set the input/output universe to U = ({(ℓ0, (↓, 1))}, 0, ⟨space {(ℓ0, (↓, 2))}⟩) and attempt to prove that the output universe (the structure above the arrow) is equal to U. For clarity, we set S1 = {(ℓ0, (↓, 1))} and S2 = {(ℓ0, (↓, 2))}. The derivation is given in Figure 3. We notice that the statement space is derived with the input/output space S2 instead of S1. Operationally, it implies that the branch must be evaluated at the end of the current instant.

[Figure 3: An example of derivation in the behavioral semantics; the derivation tree is not reproduced here.]

⁷ This is a problem known as reincarnation in Esterel [6].
⁸ The variable declaration must be evaluated with regard to its body, which is why the body p follows the declaration. We can transform any variable declaration Type x^st which is not followed by any statement into Type x^st ; nothing.
5.4 Semantics Across Instants

A spacetime program is automatically executed until it terminates, stops or its queue of nodes becomes empty. Therefore, we must lift the transition rule to a succession of instants, which gives the following reaction rule:

    i, L ⊢ ⟨Q, p⟩ ↪—[H′ / H]→ ⟨Q′, p′⟩

where the state ⟨Q, p⟩ is rewritten into the state ⟨Q′, p′⟩ with Q a queue equipped with a queueing strategy (pop, push), and p a process. In addition, we have: (i) a counter of instants i ∈ ℕ, (ii) a sequence of sets of locations L ∈ Store(ℕ, Loc) where Li ∈ L is the set of locations at instant i, (iii) the sequence of input/output universes H ∈ Store(ℕ, Universe) where Hi is the input/output at instant i, and (iv) the sequence of output universes H′ ∈ Store(ℕ, Universe). The lifting to sequences of universes is inspired by ReactiveML [24]. The reaction rules are defined in Figure 4.

[Figure 4: Reaction rules of spacetime (rules react and exit-react); the inference rules are not reproduced here.]

The rule react models the passing of time from one paused instant to the next. Of interest, we notice that the values of the single_space variables are joined into all of the future universes. We also observe that the two rules react and exit-react are exclusive on the termination condition. We now discuss the side condition causal(p), which performs the causality analysis of the program in each instant.

5.5 Causality Analysis

Causality analysis is crucial to prove that spacetime programs are reactive, deterministic and extensive functions. An example of a non-reactive program is when x |= y then f(write y) end. The problem is that if we add information into y, the condition x |= y might not be entailed anymore, which means that no derivation in the behavioral semantics is possible. This is similar to emitting a signal in Esterel after we tested its absence. Due to the lattice order on variables, we can however write into a variable after an entailment condition; consider for example when x |= y then f(write x) end. Whenever x |= y is entailed, it will stay entailed even if we later write additional information into x, so this program should be accepted.

The causality analysis symbolically executes an instant of a process, yielding the set of all symbolic paths reachable in an instant. It also symbolically executes the paths of all branches generated in each instant. For space reasons, we only show the most important part of the causality analysis: the properties that a path must fulfil to be causal. A path is a sequence of atomic statements ⟨a1, …, an⟩ where ai is defined as:

    ⟨atom⟩ ::= x |= y | f(x1^{w|r|rw}, …, xn^{w|r|rw})

For example, the process when x |= y then f(x^r) else g(x^r) generates two paths: ⟨x ⊨ y, f(x^r)⟩ for the then-branch, and ⟨x ⊭ y, g(x^r)⟩ for the else-branch. A path p is causal if for all atoms ai ∈ p the following two conditions hold.

First, for each entailment atom ai = x ⊨ y we require:

    ∀z^b ∈ Vars(p_{i+1..|p|}), z = y ⟹ b = r      (1)

with Vars(p) the set of all variables in the path p. It ensures that all remaining accesses on y are read-only.

Second, for each function call ai = f(x1^{b1}, …, xn^{bn}) and each argument xk^{bk} of f we require:

    ∀z^b ∈ Vars(p_{i+1..|p|}), xk = z ∧ (bk = r ∨ bk = rw) ⟹ b = r      (2)

Whenever a variable is accessed with read or readwrite, it can only be read afterwards. A consequence is that a variable cannot be accessed by two readwrite during the same instant.

Definition 5.2 (Causal process). A process is causal if, for all its instants i, every path p in instant i is causal (i.e. (1) and (2) hold).
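To make conditions (1) and (2) concrete, here is a small Java sketch that checks them on one symbolic path. The Atom representation is invented for this illustration and is not the compiler's actual data structure; it treats entailment atoms as read-only accesses of their operands.

    import java.util.List;
    import java.util.Map;

    // A sketch of the causality check on one symbolic path.
    // An atom is either an entailment test `x |= y` (entailedVar = y) or a
    // host call whose arguments carry their read/write annotations.
    final class CausalityCheck {
      enum Access { R, W, RW }
      record Atom(String entailedVar, Map<String, Access> callArgs) { }

      static boolean isCausal(List<Atom> path) {
        for (int i = 0; i < path.size(); i++) {
          Atom a = path.get(i);
          List<Atom> rest = path.subList(i + 1, path.size());
          if (a.entailedVar() != null) {
            // Condition (1): after `x |= y`, y may only be read.
            if (!onlyReadAfter(rest, a.entailedVar())) return false;
          } else {
            // Condition (2): after a read or readwrite access to xk, xk may only be read.
            for (Map.Entry<String, Access> arg : a.callArgs().entrySet()) {
              if (arg.getValue() != Access.W && !onlyReadAfter(rest, arg.getKey())) return false;
            }
          }
        }
        return true;
      }

      private static boolean onlyReadAfter(List<Atom> rest, String var) {
        for (Atom a : rest) {
          if (a.callArgs() != null) {
            Access acc = a.callArgs().get(var);
            if (acc != null && acc != Access.R) return false;
          }
          // Entailment atoms only read their operands, so they never violate the condition.
        }
        return true;
      }
    }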
5.6 Reactivity, Determinism and Extensiveness

We now only consider causal spacetime programs. In this section, we sketch the proofs that the semantics of spacetime is deterministic, reactive and an extensive function during and across instants. Importantly, these properties only hold if the underlying host functions meet the same properties. The first two properties are typical of the synchronous paradigm and are defined as follows.

Definition 5.3 (Determinism and reactivity). For any state ⟨Q, p⟩, the derivation

    0, L ⊢ ⟨Q, p⟩ ↪—[H′ / H]→ ⟨Q′, p′⟩

is deterministic (resp. reactive) if there is at most (resp. at least) one proof tree of the derivation.

Lemma 5.4. The semantics of spacetime is reactive and deterministic.

The proofs are given in Appendices A.1 and A.2. They essentially verify the completeness and disjointness of the rules.

Lemma 5.5. The semantics of spacetime is extensive over its space during an instant.

Proof. Any write into the space is done through a variable declaration or a host function. The declaration rules only add more information into the space by using the join operator ⊔. Otherwise, this property depends on the extensiveness of the host functions.