An Iterative Proper Orthogonal Decomposition (I-POD) Technique with Application to the Filtering of Partial Differential Equations
D. Yu, S. Chakravorty

D. Yu is a Graduate Student Researcher, Department of Aerospace Engineering, Texas A&M University, College Station. S. Chakravorty is an Associate Professor of Aerospace Engineering, Texas A&M University, College Station.

Abstract—In this paper, we consider the filtering of systems governed by partial differential equations (PDE). We adopt a reduced order model (ROM) based strategy to solve the problem. We propose an iterative version of the snapshot proper orthogonal decomposition (POD) technique, termed I-POD, to construct ROMs for PDEs that are capable of capturing their global behaviour. Further, the technique is entirely data based, and is applicable to forced as well as unforced systems. We apply the ROM generated using the I-POD technique to construct reduced order Kalman filters to solve the filtering problem. The methodology is tested on several one-dimensional PDEs of interest, including the heat equation, the wave equation and randomly generated systems.

Keywords: Proper Orthogonal Decomposition (POD), Filtering/Data Assimilation, Partial Differential Equations.

I. INTRODUCTION

In this paper, we are interested in filtering/data assimilation for systems that are governed by partial differential equations (PDE). We take a reduced order model (ROM) based approach to the problem. We propose an iterative version of the snapshot proper orthogonal decomposition (POD) technique that allows us to form a ROM of the PDE of interest in terms of the eigenfunctions of the PDE operator. We then apply this ROM, along with the Kalman filtering technique, to form a reduced order filter for the PDE. The filter is constructed in an offline-online fashion where the expensive computations for the ROM construction are accomplished offline, while the online part consists of the reduced order Kalman filter, which is much more computationally tractable than the full problem. The technique is applied to several one-dimensional partial differential equations.

The problem of estimating dynamic spatially distributed processes is typically solved using the Ensemble Kalman Filter (EnKF), which has been used extensively in the Geophysics literature [1], [2] and, more recently, in Dynamic Data Driven Application Systems (DDDAS) and traffic flow problems [3]–[9]. The EnKF is a particle based Kalman filter that maintains an ensemble of possible realizations of the dynamic map. The Kalman prediction and measurement update steps are performed using ensemble operations instead of the traditional matrix operations. A primary issue with the EnKF is the choice of the ensemble realizations and their number, which is almost always done in a heuristic fashion. Also, the prediction stage requires expensive forward simulations of the realizations using a solver, which can take a significant amount of time. Thus, real time operation is an issue. In contrast, all the expensive computations for our ROM based technique, namely the POD basis and ROM generation, are done offline, and hence real time operation is never an issue given the offline computations. Thus, we may think of our approach as a computationally tractable alternative to the EnKF algorithm. Historically, there has been a lot of theoretical research in the Control Systems community on the estimation and control of systems driven by PDEs [10]–[19]. In fact, it is well known that for linear PDEs an infinite dimensional version of the Kalman filter exists, which involves the solution of an operator Riccati equation [20]. This can be very computationally intensive and may be unsuitable for online implementation. In contrast, the major computational complexity of the ROM based technique that we propose is offline, and the online computations are essentially trivial, thereby making the technique very suitable for online implementation.

As has been noted before, we take a ROM based approach to solving the problem of filtering in PDEs. In particular, we rely on the so-called proper orthogonal decomposition (POD), more precisely the snapshot POD technique, to construct ROMs for the PDE of interest. The POD technique has been used extensively in the Fluids community to produce ROMs of fluid physics phenomena such as turbulence and fluid-structure interaction [21]–[24]. There has also been recent work on so-called balanced POD models that better approximate outputs of interest through an amalgam of the snapshot POD and the balanced model reduction paradigm of control theory [25], resulting in computationally efficient balanced POD models of the physical phenomena of interest [26], [27]. More recently, there has been work on obtaining information regarding the eigenfunctions of a system based on the snapshot POD, called the dynamic mode decomposition (DMD) [28], [29]. However, a couple of issues are central to the construction of the snapshot POD technique: 1) at what times do we take snapshots of the process, and 2) the snapshot POD essentially provides a reduced basis approximation of the localized behaviour of a system; is there a constructive way to infer the global behaviour of a system from the snapshot POD?
We propose a randomly perturbed iterative approach to the snapshot POD (I-POD) which iteratively allows us to sample the process of interest at various different time scales, and with different initial conditions, thereby answering question 1 above. Further, we show that this process allows us to theoretically reconstruct all the eigenfunctions of the original system using either data from experiments or from numerical simulations (similar to the DMD approach), thereby allowing us to infer the global behaviour of the system. To the best of our knowledge, our work is the first that proves that, under certain assumptions, the snapshot POD does extract the eigenfunctions of the underlying operator (though ample empirical evidence is provided, this is not proved in the DMD paper [28]). The I-POD approach is sequential and involves solving a sequence of small eigenvalue problems to get a global description of a large scale system. This is in contrast to techniques such as the balanced POD and the Eigensystem Realization Algorithm (ERA) [30], which require the solution of a very large Hankel singular value decomposition problem. However, in the case ...

We only have access to measurements of the field at some sparse set of spatial locations in the domain of the process, given by

Y(t_k, x_j) = CX(t_k) + V_k^{(j)},   (2)

where X(t_k) represents the state at the discrete time instant t_k, Y(t_k, x_j) represents a localized measurement of the state variable at the sparse set of locations given by x_j, j = 1, ···, m, and V_k^{(j)} is a discrete time white noise process corrupting the measurements at the spatial location x_j. We assume that the differential operator A is self adjoint with a compact resolvent, and thus A has a discrete spectrum with a full set of eigenvectors that forms an orthonormal basis for the Hilbert space H. Further, we assume that the operator A generates a stable semigroup. In Section IV, we extend the results to non self-adjoint operators.

Given the above assumptions, we can discretize the PDE above using computational techniques such as finite elements (FE)/finite differences (FD) to obtain a discretized version of the operator equations in Euclidean space, ẋ = Ax, where x ∈ ℜ^N is the discretized state and A is the resulting N × N system matrix.
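As an illustration of the kind of discretization meant here, the following is a minimal sketch (not taken from the development above) of a finite-difference discretization of a one-dimensional heat equation with homogeneous Dirichlet boundaries, together with a sparse measurement operator C of the form in Eq. 2; the grid size, diffusivity and sensor locations are illustrative choices.

```python
import numpy as np

def discretize_heat_1d(N=100, L=1.0, kappa=1.0):
    """Finite-difference discretization of u_t = kappa * u_xx on (0, L)
    with zero Dirichlet boundaries, giving the linear system x_dot = A x."""
    dx = L / (N + 1)
    main = -2.0 * np.ones(N)
    off = np.ones(N - 1)
    A = kappa / dx**2 * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    return A

def sparse_measurement_matrix(N, sensor_indices):
    """C picks off the state at a sparse set of grid points, as in Eq. 2."""
    C = np.zeros((len(sensor_indices), N))
    for row, idx in enumerate(sensor_indices):
        C[row, idx] = 1.0
    return C

A = discretize_heat_1d()
C = sparse_measurement_matrix(100, [19, 39, 59, 79, 99])   # illustrative sensor locations
```

The resulting A is symmetric (self adjoint) and stable, and so fits the setting assumed in the development that follows.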
A 1. We assume that there is a unique null vector corresponding to A and that the matrix A is Hurwitz, i.e., the system is stable.

Suppose that we choose some arbitrary initial condition x(0) and take M snapshots of the system's trajectory at the time instants t_1 < t_2 < ··· < t_M, where these snapshots need not be equi-spaced. Let us denote the data matrix of the stacked snapshots by X, i.e.,

X = [x_1, x_2, ···, x_M],

where x_i = x(t_i). Suppose now that the number of snapshots is much smaller than the dimension of the system, i.e., M ≪ N.

A 2. Assume that "a" eigenvectors of the matrix A are active in the snapshot ensemble X, i.e.,

x_i = ∑_{j=1}^{a} α_{ji} v_j,

where a ≤ M and, without loss of generality, it is assumed that the active eigenvectors consist of the first "a" eigenvectors. This assumption essentially implies that the number of modes active within the snapshots is less than the number of snapshots in the ensemble.

The following result (Proposition 1) is then true: under assumptions A1 and A2, the eigenvalues of the ROM Ã = T′AT are eigenvalues of A, and the corresponding eigenvectors of A are recovered as Tβ, where T = XV_pΣ_p^{−1/2} is the POD transformation formed from the eigendecomposition X′X = V_pΣ_pV_p′ and β contains the eigenvectors of Ã.
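A minimal illustrative sketch of this snapshot POD construction is the following, assuming the snapshots are generated by simulating a known stable symmetric matrix A with a matrix exponential (in practice the snapshots may come directly from simulation or experimental data); the function name, snapshot spacing and rank tolerance are illustrative choices, not values prescribed above.

```python
import numpy as np
from scipy.linalg import expm, eigh, eig

def snapshot_pod_rom(A, snapshot_times, x0):
    """Collect snapshots X, eigendecompose X'X, form T = X Vp Sp^{-1/2},
    and build the ROM A_tilde = T' A T whose spectrum approximates that of A."""
    X = np.column_stack([expm(A * t) @ x0 for t in snapshot_times])
    Sp, Vp = eigh(X.T @ X)                      # POD eigenvalues/eigenvectors of X'X
    keep = Sp > 1e-10 * Sp.max()                # drop (near-)zero POD eigenvalues
    T = X @ Vp[:, keep] @ np.diag(Sp[keep] ** -0.5)
    A_tilde = T.T @ A @ T                       # reduced order model
    lam, beta = eig(A_tilde)                    # eigenvalues/eigenvectors of the ROM
    return lam, T @ beta, T, A_tilde            # T @ beta: candidate eigenvectors of A

# illustrative usage on a small random stable symmetric system
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(-rng.uniform(0.1, 10.0, size=50)) @ Q.T    # symmetric, Hurwitz
lam, vecs, T, A_tilde = snapshot_pod_rom(A, np.linspace(1.0, 3.0, 10), rng.standard_normal(50))
```

Under assumptions A1 and A2, the eigenvalues returned in this way should coincide with (a subset of) the eigenvalues of A.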
When the number of active eigenvectors is smaller than the number of snapshots (a < M), the snapshot correlation matrix X′X is rank deficient, and we retain only the eigenvectors corresponding to the non-zero eigenvalues. This implies that X′X = α′α = Ṽ_pΣ̃_pṼ_p′, where Σ̃_p contains the non-zero POD eigenvalues and Ṽ_p contains the corresponding eigenvectors. Further, T = XṼ_pΣ̃_p^{−1/2}, and hence P̃, as defined in Eq. 13, is now a × a. Thus, the analysis above still applies.

Consider now the case a > M, i.e., the number of active eigenvectors is greater than the number of snapshots. Without loss of generality, let a = N. Then

Ã = T′AT = Σ_p^{−1/2}V_p′X′AXV_pΣ_p^{−1/2}
  = (Σ_p^{−1/2}V_p′α′) V′AV (αV_pΣ_p^{−1/2})
  = (Σ_p^{−1/2}V_p′α′) Λ (αV_pΣ_p^{−1/2})    [factors of size M × N, N × N and N × M]
  = β′ Γ β,    [each factor of size M × M]

where (β, Γ) represents the eigenvalue decomposition of the ROM matrix Ã. Note that now, owing to the fact that N > M, we can no longer use the uniqueness of the similarity transformation of Ã to conclude that the transformation Tβ contains the eigenvectors of A. In fact, some of them might be the same as the eigenvectors of A; however, it is not necessary. In particular, theoretically, we cannot conclude anything regarding the relationship of Tβ to the eigenfunctions of A.

The above proposition and the remark above suggest a technique through which eigenvectors of the system matrix A can be extracted up to any time-scale. First, we make the following assumption.

A 3. We assume that there are K characteristic timescales embedded in the matrix A, namely T_1, ···, T_K. Let the eigenvalues corresponding to timescale T_j be {λ_1^{(j)}, ···, λ_{M_j}^{(j)}} and let the corresponding eigenvectors be [v_1^{(j)}, ···, v_{M_j}^{(j)}] ≡ V^{(j)}. Further, we assume that the timescales are well-separated, i.e., if for some t, e^{λ_k^{(j)} t} ≠ 0, then e^{λ_l^{(i)} t} ≈ 0 for all i > j.

The above assumption essentially implies that all the eigenvectors corresponding to timescales below a given timescale decay well before the eigenvectors at the given timescale decay. At this point, we also need to make sure that all possible eigenfunctions corresponding to any timescale are excited. The following result assures us of this:

Proposition 2. Let the initial condition to the linear system in Eq. 5 be chosen according to a Gaussian distribution N(0, σ²I). Let the j-th such trajectory be denoted by X^{(j)}. Then, every eigenfunction of A is excited almost surely, i.e., given any eigenfunction, there is at least one trajectory X^{(j)} such that the eigenfunction is active within the ensemble as j → ∞.

Proof: Due to the eigenvalue decomposition of A, we may write:

x(t) = ∑_{i=1}^{N} e^{λ_i t} (x(0), v_i) v_i,

where (·, ·) denotes the inner product in ℜ^N. Thus, the i-th component of the system trajectory, i.e., the contribution of the i-th eigenvector, is a Gaussian random variable with zero mean and a variance that exponentially decays in time as shown above. Thus, the i-th mode is bound to be active for at least one among the ensemble of trajectories. In fact, owing to the Gaussian nature of the component, it is true that its absolute value will be above any given threshold, at any given time, with a finite probability.

Given the results above and assumption A3, we are in a position to outline a procedure that allows us to isolate all the eigenfunctions at any given timescale. Suppose without loss of generality that T_1 > T_2 > ··· > T_K. Suppose now that we are interested in isolating all the eigenfunctions corresponding to the timescale T_1. We choose an initial time t_0^{(1)} and subsequent snapshot times t_n^{(1)}, n = 1 : M, such that M > M_1 and such that the initial time t_0^{(1)} ≫ T_2. Thus, the snapshot timing assures us that all the eigenvectors at the timescales below T_1 will have decayed by the snapshot times of interest, and thus the only participating modes are the eigenfunctions corresponding to timescale T_1. Then, using Propositions 1 and 2, we know that we can isolate all the eigenfunctions at the timescale T_1 given enough snapshot ensembles. In particular, suppose that X_j^{(1)} is the j-th snapshot ensemble at timescale T_1. Due to Proposition 2, as j → ∞, we know that every eigenfunction in the set V^{(1)} is bound to be excited. Further, due to the fact that M > M_1, it follows using Proposition 1 that the eigenfunctions of the ROM are the same as the eigenfunctions of A. Thus, every snapshot ensemble gives us some of the eigenvectors v ∈ V^{(1)} and, as j → ∞, we are assured that all possible v ∈ V^{(1)} are recovered.

Given that we have recovered all the eigenfunctions V^{(1)} corresponding to the longest timescale T_1, we can now iteratively recover all the eigenfunctions at all the subsequent timescales as follows. Given V^{(1)}, we randomly choose an initial condition x(0) and form the snapshot ensemble X at snapshot times t_0^{(2)}, ···, t_M^{(2)}, such that the number of snapshots M > M_2 and the initial time of the snapshot t_0^{(2)} ≫ T_3, i.e., such that all eigenfunctions at timescales shorter than T_2 are absent in the ensemble. Given the snapshot ensemble X, we "clean" the snapshots by subtracting the contributions of
the eigenfunctions from V^{(1)}, i.e.,

x̃(t_j^{(2)}) = x(t_j^{(2)}) − ∑_{k=1}^{M_1} e^{λ_k^{(1)} t_j^{(2)}} (x_0, v_k^{(1)}) v_k^{(1)}.   (18)

Consider the "cleaned" snapshot ensemble X̃ consisting of the cleaned snapshots from above. It follows that X̃, by construction, only contains eigenfunctions from the set V^{(2)} and thus, following the randomly perturbed POD procedure outlined previously, we can recover all the eigenfunctions in V^{(2)}. Given V^{(1)} and V^{(2)}, we can repeat the cleaning and randomly perturbed POD procedure to recursively obtain all the sets V^{(n)} up to any desired timescale T_n. The above development can be summarized in the following algorithm:

Algorithm 1 Algorithm I-POD
1) Given timescales T_1, ···, T_K
2) Set i = 1, V^{(0)} = ∅
3) WHILE i ≤ K DO
   a) Choose snapshot times t_0^{(i)}, ···, t_M^{(i)}, such that t_0^{(i)} ≫ T_{i+1} and M > M_i
   b) Set j = 1
      i) Choose x_{0,j}^{(i)}, the initial condition of the j-th snapshot ensemble at time scale T_i, from N(0, σ²I) and generate the j-th snapshot ensemble X_j^{(i)}
      ii) Clean all the slower eigenfunctions from the snapshot ensemble using Eq. 18 and the previously extracted eigenfunctions from the sets V^{(1)}, V^{(2)}, ···, V^{(i−1)}
      iii) Isolate the eigenfunctions at timescale T_i, V^{(i)}, using the snapshot POD. Set j = j + 1
      iv) If all eigenfunctions in V^{(i)} have been obtained, go to step (c), else go to step (i)
   c) Set i = i + 1
4) Output the eigenfunctions in the sets V^{(1)}, ···, V^{(K)}

The development above and the I-POD algorithm can be summarized in the following result.

Proposition 3. Under assumptions A1-A3, the I-POD algorithm can extract all eigenfunctions V^{(i)} corresponding to any given time scale T^{(i)}.

Remark 2. The timescales T_1, ···, T_K depend on the Physics and can be inferred from physical insight or simulations. The number of snapshots that are required to extract the eigenfunctions has to be "large enough". Of course, it might not be possible to know a priori when M is large enough. However, some amount of trial and error can tell us what a suitable number for M is. In fact, a good heuristic measure is to wait long enough before the first snapshot, such that most modes have decayed and we have fewer modes participating than the number of snapshots. In fact, this is a heuristic that is often used in the POD literature [28]. This can easily be construed from the eigenvalue decomposition of the snapshot ensemble by checking for zero eigenvalues and eigenvectors, i.e., rank deficiency of the snapshot ensemble.
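The following is a compact, illustrative sketch of Algorithm 1, under the simplifying assumption that the matrix A is available so that trajectories can be simulated with a matrix exponential; in practice the snapshots may come from experiments and, as noted below, the ROM can be formed purely from data. The function names, snapshot spacing, ensemble counts and tolerances are all illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, eigh, eig

def clean_snapshots(X, times, x0, slow_vals, slow_vecs):
    """Eq. 18: subtract the contribution of already-extracted (slower) eigenmodes."""
    Xc = X.copy()
    for lam, v in zip(slow_vals, slow_vecs):
        Xc -= np.outer(v, (v @ x0) * np.exp(lam * times))   # (x0, v); assumes orthonormal modes
    return Xc

def ipod(A, timescales, M=20, n_ensembles=5, sigma=1.0, seed=0):
    """Sketch of Algorithm 1: for each timescale (longest first), wait until the
    faster modes have decayed, clean out the slower modes already found, and
    apply the snapshot POD to what remains."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    vals, vecs = [], []
    for i, Ti in enumerate(timescales):
        t0 = 10.0 * timescales[i + 1] if i + 1 < len(timescales) else 0.0   # t0 >> T_{i+1}
        for _ in range(n_ensembles):
            x0 = rng.normal(0.0, sigma, N)                    # random excitation (Proposition 2)
            times = t0 + Ti * np.linspace(0.1, 1.0, M)
            X = np.column_stack([expm(A * t) @ x0 for t in times])
            X = clean_snapshots(X, times, x0, vals, vecs)
            Sp, Vp = eigh(X.T @ X)
            keep = Sp > 1e-8 * Sp.max()
            if not keep.any():
                continue
            T = X @ Vp[:, keep] @ np.diag(Sp[keep] ** -0.5)   # POD transformation
            lam, beta = eig(T.T @ A @ T)                      # ROM eigendecomposition
            for l, b in zip(lam.real, (T @ beta).T.real):
                if not any(np.isclose(l, v0, rtol=1e-3) for v0 in vals):
                    vals.append(l)
                    vecs.append(b / np.linalg.norm(b))
    return np.array(vals), np.column_stack(vecs)
```

Run on a matrix with two well-separated clusters of eigenvalues (e.g., ipod(A, timescales=[10.0, 0.1])), such a sketch should recover the slow cluster first and then, after the cleaning step of Eq. 18, the faster one.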
We also note that, though theoretically we can extract eigenfunctions at all time scales, due to small numerical errors that accumulate, practically we can only extract the eigenfunctions corresponding to the first few timescales. In most applications, these first few timescales are typically enough to form a good ROM.

The I-POD technique is a completely data based technique and does not need knowledge of the system matrix A. Note that ultimately the ROM Ã = T′AT contains all the information regarding the eigenfunctions of the operator A under the assumptions above. Again note that T = XV_pΣ_p^{−1/2}, and thus it follows that Ã = T′AXV_pΣ_p^{−1/2} = T′X̃V_pΣ_p^{−1/2}, where X̃ is the one-time-step advanced version of the snapshot ensemble X (in the discrete time case), and can be obtained directly from simulation or experimental data. In the continuous time case, X̃ may be obtained as follows: advance the snapshots by a very short time δT to obtain δX = X⁺ − X, where X⁺ is the short-time advanced ensemble, and then obtain X̃ = AX ≈ δX/δT. Hence, the I-POD technique is truly data based (this is similar to the DMD technique [28]).

It should also be noted that Proposition 1 does not distinguish between forced and unforced systems, since Assumption 2, under which the result is valid, only asks for certain sufficient conditions on the active eigenfunctions of the system in the snapshot ensemble. Since the forced response of a linear system is also expressed in terms of the eigenfunctions, the I-POD procedure is valid for forced systems as well, as long as Assumption 2 is valid. Hence, the procedure can be used on experimental data, where the system response may be forced. Of course, the issue is that Assumption 2 underlying Proposition 1 may not be satisfied for forced systems. However, in our experiments we do see that this assumption is indeed satisfied and that we can actually extract the eigenfunctions of the forced system using the I-POD procedure (this can also be seen from the results in [28]).

Representative results from our experiments are shown in Figure 1. In Figures 1(a) and 1(b), we compare the actual eigenvalues of a randomly generated 100 × 100 system with those obtained by the I-POD procedure, for an unforced as well as a forced (constant forcing) symmetric system. The results show that the I-POD eigenvalues agree very well with the actual eigenvalues.

IV. EXTENSION TO NON SELF-ADJOINT OPERATORS

In the following, we show how the I-POD technique can be extended to non self-adjoint operators. Again, we restrict our attention to a suitably fine discretization of the underlying infinite dimensional system given by a large scale matrix A. Suppose that X is the snapshot data from the simulation
Fig. 1. Eigenvalue extraction results using I-POD: comparison of the actual eigenvalues (blue) with those obtained using I-POD (red) for (a) a symmetric unforced, (b) a symmetric forced, (c) a non-symmetric unforced, and (d) a non-symmetric forced 100 × 100 system.

of the response (forced/unforced) of the matrix A. Let V_p and Σ_p be the eigenvectors and eigenvalues resulting from the eigendecomposition of the snapshot data X′X, and let T = XV_pΣ_p^{−1/2} be the corresponding POD transformation. The reduced order model is then given by Ã = T′AT. Assuming that the ROM can be diagonalized, we write the similarity transformation for the ROM as Ã = PΛP^{−1}. Given that Assumption 2 holds (the number of snapshots is greater than the number of active eigenvectors), we have the following result.

Proposition 4. The eigenvalues of the ROM Ã, given by the diagonal matrix Λ, are also eigenvalues of the full order model A, and the corresponding eigenvectors are given by the transformation TP.

Proof: Let X = Vα, as before, where V denotes the active eigenvectors of A in the snapshots, and α is the coefficient matrix of the eigenvectors for all the snapshots. Let the number of active eigenvectors be equal to the number of snapshots. Then

Ã = Σ_p^{−1/2}V_p′α′ V′AV αV_pΣ_p^{−1/2}
  = [Σ_p^{−1/2}V_p′α′(V′V)] Λ̃ [αV_pΣ_p^{−1/2}]
  = PΛP^{−1},   (19)

where Λ̃ are the eigenvalues of A corresponding to the eigenvectors V, and the two bracketed factors are denoted P̃′(V′V) and P̃, respectively. Thus, if we show that P̃ is the inverse of P̃′(V′V), then, due to the uniqueness of the similarity transformation of Ã, it follows that P̃ = P^{−1} and Λ = Λ̃. To show this, note that:

P̃′(V′V)P̃ = Σ_p^{−1/2}V_p′α′(V′V)αV_pΣ_p^{−1/2}.   (20)

Here α′(V′V)α = X′X = V_pΣ_pV_p′, and therefore, using the orthogonality of the columns of V_p, it follows that

P̃′(V′V)P̃ = Σ_p^{−1/2}V_p′ V_pΣ_pV_p′ V_pΣ_p^{−1/2} = I.   (21)

Hence, P̃ and P̃′(V′V) are inverses of each other. It can also be easily shown that TP = V. Further, the case when the number of snapshots is greater than the number of active eigenvectors can be treated identically to the symmetric case considered in Proposition 1.

In the non-symmetric case, P^{−1}T′ does not contain the left eigenvectors of A, as was the case for symmetric systems. In fact, P^{−1}T′ is the pseudo-inverse of TP = V, i.e., P^{−1}T′ = (V′V)^{−1}V′. Thus, even though we know the right eigenvectors through the POD transformation TP, we do not have knowledge of the left eigenvectors from POD.

In order to gain knowledge of the left eigenvectors, we
need to generate data Y from the adjoint simulations, i.e., using the matrix A′. Using this adjoint simulation data Y, the left eigenvectors of A, which are the same as the right eigenvectors of A′ up to a multiplicative constant, can be found using the I-POD procedure. In other words, the POD is used to get the right eigenvectors of A′ from the adjoint simulation data Y, thereby providing us knowledge of the left eigenvectors of A. Further, random initial conditions can be used to generate the eigenvalues, as well as the left and right eigenvectors, using the simulation data from A and its adjoint A′.

V. APPLICATION OF I-POD TO FILTERING OF PARTIAL DIFFERENTIAL EQUATIONS

Consider now the continuous-discrete filtering of the distributed parameter system in the high dimensional discretized setting:

ẋ = Ax + w,   (23)
y(t_k) = Cx(t_k) + v(t_k),   (24)

where recall that y(t_k) ∈ ℜ^m.
Suppose that the first N_r eigenmodes of A are retained in the ROM, and let e(t) denote the resulting truncation error. Here, u_i are the left eigenvectors while v_i are the right eigenvectors. Then, it follows that:

E||e(t)||² = ∑_i e^{2λ_i t} E|(x(0), u_i)|² + ∑_i E|Δ_i^w(t)|²,   (26)

where, for notational ease, the subscript i is used to denote the summation from N_r + 1 to N. Then,

|Δ_i^w(t)|² = ∫_0^t ∫_0^t e^{λ_i(t−τ)} e^{λ_i(t−s)} c_i^w(τ) c_i^w(s) dτ ds.   (27)

The reduced order filtering problem is then posed as follows. Define the transform V_r = [v_1, ···, v_{N_r}] consisting of the retained right eigenmodes and V_l = [u_1, ···, u_{N_r}], the retained left eigenmodes. The filtering ROM is the following:

ψ̇ = (V_l′AV_r)ψ + V_l′Bw,   ψ_i(0) = (x(0), u_i),
y(t_k) = (CV_r)ψ + v_k,   (30)

where ψ_i represents the i-th component of the ROM state ψ. The above system now results in an N_r × N_r filtering problem with N_r ≪ N.
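As an illustration, the following is a minimal sketch of a reduced order Kalman filter built on the filtering ROM of Eq. 30, assuming the retained left/right eigenvector matrices V_l, V_r, the full matrices A, B, C, and noise covariances Q, R are given; the zero-order-hold time discretization and the one-step noise discretization are simplifying choices for the sketch, not the exact implementation used in the experiments.

```python
import numpy as np
from scipy.linalg import expm

def rom_kalman_filter(A, B, C, Vl, Vr, y_seq, dt, Q, R, psi0, P0):
    """Reduced order Kalman filter on the ROM of Eq. 30:
    psi_dot = (Vl' A Vr) psi + Vl' B w,   y_k = (C Vr) psi_k + v_k."""
    Ar = Vl.T @ A @ Vr                       # Nr x Nr reduced dynamics
    Br = Vl.T @ B                            # reduced process-noise input
    Cr = C @ Vr                              # reduced measurement map
    F = expm(Ar * dt)                        # zero-order-hold transition over one step
    Qd = Br @ Q @ Br.T * dt                  # crude one-step process-noise covariance (illustrative)
    psi, P = psi0.copy(), P0.copy()
    estimates = []
    for y in y_seq:
        psi = F @ psi                        # prediction step
        P = F @ P @ F.T + Qd
        S = Cr @ P @ Cr.T + R                # measurement update
        K = P @ Cr.T @ np.linalg.inv(S)
        psi = psi + K @ (y - Cr @ psi)
        P = (np.eye(len(psi)) - K @ Cr) @ P
        estimates.append(Vr @ psi)           # lift the ROM estimate back to the full state
    return np.array(estimates)
```

In the self adjoint case V_l = V_r, while in the non self adjoint case V_l is obtained from the adjoint simulations described in Section IV.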
Fig. 2. Filter results for the heat equation: (a) comparison of the ROM filter (blue) with the actual heat profile (red); (b)–(d) error and 3σ error bounds at 0.3L, 0.5L and 0.7L.

Fig. 3. Filter results for the wave equation: (a) comparison of the ROM filter (blue) with the actual wave profile (red) at three time steps; (b)–(d) error and 3σ error bounds at 0.3L, 0.5L and 0.7L.

The length of the string is 1 m and it is divided into M = 100 grid points. The variable c is the wave speed, and takes the value 1 here.

The boundary conditions for a fixed-end string are:

u|_{x=0} = 0,   u|_{x=L} = 0.   (33)

Displacements are measured at five equispaced points along the string. A random initial condition is used for generating the reduced order model, while the initial condition of the string for the simulation is:

u_0 = 20 sin(2πx/M),
∂u_0/∂t = x/(4M) if x ≤ M/2,   4 − x/(4M) if x ≥ M/2.   (34)

In Figure 3(a), we show the comparison between the real wave profiles and the reduced model filter at three different time steps. The real wave profiles are in red, while the filter results are shown in blue. Also, in Figures 3(b)–(d), the errors and the 3σ bounds for the reduced model at three different chosen locations are shown.

Again, these results show that the ROM based filter is capable of getting a good estimate of the wave profile based on the noisy measurements.
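For reference, the following is a sketch (not taken from the text above) of how the fixed-end wave equation in this example can be written in the first-order form ẋ = Ax used by the ROM filter, with the state stacking displacement and velocity; the grid size and wave speed follow the text, while everything else is an illustrative choice.

```python
import numpy as np

def discretize_wave_1d(M=100, L=1.0, c=1.0):
    """Fixed-end wave equation u_tt = c^2 u_xx on (0, L) (boundaries as in Eq. 33),
    written in first-order form d/dt [u; u_t] = A [u; u_t] on M interior grid points."""
    dx = L / (M + 1)
    D2 = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
          + np.diag(np.ones(M - 1), -1)) / dx**2           # second-difference operator
    A = np.block([[np.zeros((M, M)), np.eye(M)],
                  [c**2 * D2,        np.zeros((M, M))]])
    return A

A_wave = discretize_wave_1d()    # 200 x 200 non-symmetric first-order system
```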
C. Randomly Generated Systems

We randomly generated 100 × 100 system matrices, symmetric as well as non-symmetric. We assumed that there are ten dominant eigenvalues. The full order system is

Ẋ = AX + Bw,
y = CX + v,   (35)

where w is process noise and v is measurement noise. B is an identity matrix, and the measurements are taken at five equispaced state components. The time domain T is discretized into 2000 time steps. The filter results are presented in Figure 4 for symmetric A, and in Figure 5 for non-symmetric A. The measurements are taken at five equispaced points, and the initial condition is randomly generated for both the simulation and the reduced order system. In Figure 4, the comparison between the real profiles and the reduced model filter at two different time steps, and the errors and 3σ bounds for the reduced model at three different state components, are shown for symmetric A. The same quantities are shown in Figure 5 for non-symmetric A.

VI. CONCLUSION

In this paper, we have introduced an iterative snapshot POD (I-POD) based approach to form global ROMs for large scale linear systems. We have used the I-POD based ROM to form a reduced order Kalman filter for application to the filtering of linear partial differential equations. We have shown the application of the I-POD ROM based filtering technique to the heat and wave equations. As can be seen from the results, the linear ROM based filtering performed very well. The most pressing need at this point is to extend the I-POD technique to nonlinear distributed parameter systems. Further, we will also apply the technique to more realistic 2- and 3-dimensional partial differential equations that may be encountered in practice, such as pollutant transport and wildfires, and, more ambitiously, 3-D fluid flow problems.
Fig. 4. Filter results for a randomly generated symmetric system: (a) comparison of the ROM filter (blue) with the actual profile (red) at t = 10 and t = 100; (b)–(d) error and 3σ error bounds at state components 25, 45 and 75.

Fig. 5. Filter results for a randomly generated non-symmetric system: (a) comparison of the ROM filter (blue) with the actual profile (red) at t = 10 and t = 2000; (b)–(d) error and 3σ error bounds at state components 25, 50 and 75.

REFERENCES

[1] L. Bertino, G. Evensen, and H. Wackernagel, "Combining geostatistics and Kalman filtering for data assimilation in an estuarine system," Inverse Problems, vol. 18, pp. 1–23, 2002.
[2] P. L. Houtekamer and H. L. Mitchell, "A sequential ensemble filter for atmospheric data assimilation," Mon. Weather Rev., vol. 129, pp. 123–137, 1999.
[3] G. Evensen, "Inverse methods and data assimilation in nonlinear ocean models," Physica D, vol. 77, pp. 108–129, 1994.
[4] ——, "Advanced data assimilation for strongly nonlinear dynamics," Mon. Weather Rev., vol. 125, pp. 1342–1354, 1997.
[5] G. Burgers et al., "Analysis scheme in the ensemble Kalman filter," Mon. Weather Rev., vol. 126, pp. 1719–1724, 1998.
[6] G. Evensen and P. J. van Leeuwen, "An ensemble Kalman smoother for nonlinear dynamics," Mon. Weather Rev., vol. 128, pp. 1852–1867, 2000.
[7] G. Evensen, "The ensemble Kalman filter: theoretical formulation and practical implementation," Ocean Dynamics, vol. 53, pp. 343–367, 2003.
[8] J. Mandel et al., "A dynamic data driven wildland fire model," in Springer Lecture Notes in Computer Science (LNCS) 4487, 2007.
[9] D. B. Work et al., "An ensemble Kalman filtering approach to highway traffic estimation using GPS enabled mobile devices," in Proc. IEEE Conf. Decision and Control, 2008.
[10] A. Bensoussan et al., Representation and Control of Infinite Dimensional Systems, Vols. I and II. Birkhauser, 1992.
[11] A. Y. Khapalov, "Optimal measurement trajectories for distributed parameter systems," Systems and Control Letters, vol. 18, pp. 467–477, 1992.
[12] ——, "Observability of hyperbolic systems with interior moving sensors," Lecture Notes in Control and Information Sciences, vol. 185, pp. 489–499, 1993.
[13] ——, "L∞ exact observability of the heat equation with scanning pointwise sensor," SIAM J. Control and Optim., pp. 1037–1051, 1994.
[14] J. A. Burns et al., "A distributed parameter control approach to optimal filtering and smoothing with mobile sensor networks," in Proc. Mediterranean Control Conf., 2009.
[15] J. A. Burns and B. King, "Optimal sensor location for robust control of distributed parameter systems," in Proc. IEEE Conf. Decision and Control, 1993, pp. 3697–3972.
[16] A. E. Jai and A. J. Pritchard, Sensors and Actuators in Analysis of Distributed Parameter Systems. New York, NY: John Wiley, 1988.
[17] Y. Sakawa, "Observability and related problems for partial differential equations of parabolic type," SIAM J. Control, vol. 13, pp. 14–27, 1975.
[18] M. Krstic and A. Smyshlyaev, Boundary Control of PDEs. Philadelphia, PA: Advances in Design and Control, SIAM, 2008.
[19] R. Vazquez and M. Krstic, Control of Turbulent and Magnetohydrodynamic Flows. Boston, MA: Systems and Control: Foundations and Applications, Birkhauser, 2007.
[20] H. T. Banks and K. Kunisch, Estimation Techniques for Distributed Parameter Systems. Boston, MA: Systems and Control: Foundations and Applications, Birkhauser, 1989.
[21] M. Loeve, Probability Theory. New York, NY: Van Nostrand, 1955.
[22] G. Berkooz et al., "The proper orthogonal decomposition in the analysis of turbulent flows," Ann. Rev. Fluid Mech., vol. 25, pp. 539–575, 1993.
[23] K. C. Hall et al., "Proper orthogonal decomposition technique for transonic unsteady aerodynamic flows," AIAA Journal, vol. 38, pp. 1853–1862, 2000.
[24] L. Sirovich, "Turbulence and the dynamics of coherent structures. Part I: Coherent structures," Quarterly of Applied Mathematics, vol. 45, pp. 561–571, 1987.
[25] B. C. Moore, "Principal component analysis in linear systems: Controllability, observability, and model reduction," IEEE Transactions on Automatic Control, vol. 26, pp. 17–32, 1981.
[26] K. Willcox and J. Peraire, "Balanced model reduction via the proper orthogonal decomposition," AIAA Journal, vol. 40, pp. 2323–2330, 2002.
[27] C. W. Rowley, "Model reduction for fluids using balanced proper orthogonal decomposition," International Journal of Bifurcation and Chaos, vol. 15, pp. 997–1013, 2005.
[28] P. J. Schmid, "Dynamic mode decomposition of numerical and experimental data," Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010.
[29] C. W. Rowley et al., "Spectral analysis of nonlinear flows," Journal of Fluid Mechanics, vol. 641, pp. 115–127, 2009.
[30] J.-N. Juang, Applied System Identification. Englewood Cliffs, NJ: Prentice Hall, 1994.
[31] S. Ahuja and C. W. Rowley, "Feedback control of unstable steady states of flow past a flat plate using reduced-order estimators," Journal of Fluid Mechanics, vol. 645, pp. 447–478, 2010.