THE ADVANCED TOKAMAK MODELING ENVIRONMENT (ATOM) FOR FUSION PLASMAS
The Advanced Tokamak Modeling Environment (AToM) for Fusion Plasmas
J. Candy (General Atomics, San Diego, CA), on behalf of the AToM team
Presented at the 2018 SciDAC-4 PI Meeting, Rockville, MD, 23-24 July 2018
Candy/SciDAC-PI/July 2018
AToM Modeling Scope and Vision
Present-day tokamaks (DIII-D), the upcoming burning plasma (ITER), and future reactor design (DEMO)
AToM (2017-2022) Research Thrusts
• AToM0 was a 3-year SciDAC-3 project (2014-2017)
• AToM is a new 5-year SciDAC-4 project (2017-2022)
• The scope of AToM is broad, with six research thrusts (scidac.github.io/atom/):
1 AToM environment, performance and packaging
2 Physics component integration
3 Validation and uncertainty quantification
4 Physics and scenario exploration
5 Data and metadata management
6 Liaisons to SciDAC partnerships
AToM Team
Institutional Principal Investigators (FES):
• Jeff Candy, General Atomics
• Mikhail Dorf, Lawrence Livermore National Laboratory
• David Green, Oak Ridge National Laboratory
• Chris Holland, University of California, San Diego
• Charles Kessel, Princeton Plasma Physics Laboratory
Institutional Principal Investigators (ASCR):
• David Bernholdt, Oak Ridge National Laboratory
• Milo Dorr, Lawrence Livermore National Laboratory
• David Schissel, General Atomics
AToM Team
Funded collaborators (subcontractors indicated in green on the original slide):
• GA: O. Meneghini, S. Smith, P. Snyder, D. Eldon, E. Belli, M. Kostuk
• ORNL: W. Elwasif, M. Cianciosa, J.M. Park, G. Fann, K. Law, D. Batchelor
• MIT: N. Howard, P. Bonoli
• UCSD: D. Orlov
• PPPL: J. Sachdev
• LLNL: M. Umansky
• UC Boulder: Y. Chen
• Kalling Software: R. Kalling
• Tech-X: A. Pankin
AToM Conceptual Structure
1 Access to experimental data
2 Outreach (liaisons) to other SciDACs
3 Verification and validation, UQ, machine learning
4 Support HPC components
5 Framework provides glue
Adapted from Fig. 24 of the Report of the Workshop on Integrated Simulations for Magnetic Fusion Energy Sciences (June 2-4, 2015)
Tokamak physics spans multiple space/timescales
Core-edge-SOL (CESOL) region coupling
[Figure: CESOL schematic showing core, edge, and SOL regions, with a profile plotted against poloidal flux Ψ]
Fidelity Hierarchy (Pyramid)
Range of models all the way up to leadership codes:
• Physics development: one-off heroic simulations; leadership-class computing for the highest-fidelity simulations
• Physics validation: reduced models for validation, calibrated and informed by the tier above
• Physics application: machine-learning models for optimization and real-time control, trained and informed by the tier above
Strive for true WDM capability
Core-edge-SOL (CESOL) region coupling
• Iterative solution procedure to match boundary conditions between regions
• 15 components (equilibrium, transport, heating) coupled
• Please visit posters by Park and Meneghini
[Figure: coupled 2-D profiles of Te, Ti, ne, and nn over the (R, Z) plane]
AToM supports two core-edge integrated workflows: OMFIT-TGYRO and IPS-FASTRAN
• OMFIT-based core-edge (FAST) workflow:
− Workflow manager with flexible tree-based data handling/exchange
− Can use NN-accelerated models for EPED/NEO/TGLF
− Transport solver based on TGYRO+TGLF
• IPS-based core-edge-SOL (HPC) workflow:
− Framework/component architecture using existing codes
− File-based communication (plasma state)
− Multi-level (HPC) parallelism
− Transport solver based on FASTRAN+TGLF
• 72 users in the NERSC atom repository
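Conceptually, the iterative procedure that matches boundary conditions between regions is a fixed-point iteration on the shared interface values. A minimal sketch of the pattern, with toy stand-in models rather than the actual TGYRO or FASTRAN solvers (the function names, toy models, and relaxation factor are all illustrative assumptions):

```python
def couple_core_edge(core_solve, edge_solve, t_ped0, tol=1e-6, max_iter=100):
    """Fixed-point iteration that matches the core/edge interface value.

    core_solve(t_ped) -> core heat flux at the interface for a given
                         pedestal-top temperature
    edge_solve(flux)  -> updated pedestal-top temperature for that flux
    """
    t_ped = t_ped0
    for i in range(max_iter):
        flux = core_solve(t_ped)       # advance core with current boundary value
        t_new = edge_solve(flux)       # re-evaluate pedestal/edge model
        if abs(t_new - t_ped) < tol:   # interface value has stopped changing
            return t_new, i + 1
        t_ped = 0.5 * (t_ped + t_new)  # under-relax the update for stability
    raise RuntimeError("core-edge iteration did not converge")

# Toy stand-in models with an analytic fixed point at t_ped = 1.5:
core = lambda t_ped: 2.0 * t_ped + 1.0   # core flux rises with pedestal temperature
edge = lambda flux: 0.25 * flux + 0.5    # pedestal temperature responds to flux

t_star, n_iter = couple_core_edge(core, edge, t_ped0=1.0)
```

In the real workflows each "solve" is itself a multi-code pipeline, and the under-relaxation keeps the outer loop stable when the component models respond stiffly to boundary changes.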
(1) OMFIT-TGYRO: fast, self-consistent, stationary whole-device modeling
• Exploration/initialization: GA system code, MODEL-PROFILES (core profiles and pedestal structure)
• Turbulent transport: TGYRO with TGLF-NN
• Pedestal structure: EPED1-NN
• Impurity transport: STRAHL
• Neoclassical transport: NEO-NN
• Bootstrap current: NEOjbs-NN
• Stability: GPEC/GATO/ELITE
• Current and power sources: NBEAMS/FREYA/TORBEAM
• Closed-boundary equilibrium: EFIT/VMOMS/TEQ
(2) IPS-FASTRAN
• File-based plasma state exchanged among components: FASTRAN, EFIT, NUBEAM, C2, GTNEUT, ADAS/PREACT (2-D plasma state)
• Plasma transport: TGLF, NCLASS; stability: DCON, GATO; heating: TORAY, GENRAY; neutral transport and atomic reactions: GTNEUT, ADAS/PREACT
• Drivers: iteration to d/dt = 0 (steady state) or time stepping
• EPED state: TOQ, ELITE, BALOO, with a driver that root-finds for growth rate > 1
(2) IPS-FASTRAN: DIII-D high-βN discharge (FASTRAN/TGLF, EPED1, C2)
• Manages execution of 15 component codes: FASTRAN+TGLF+NCLASS+EPED(ELITE+TOQ)+NUBEAM+TORAY+EFIT+C2+GTNEUT+CARRE+C2MESH+CHEASE+DCON+PEST3
• Iterative coupling of core, edge, SOL (AToM CESOL workflow)
• Self-consistent heating and current drive (NUBEAM, TORAY, GENRAY)
• Theory-based except for D/χ in the SOL, and Zeff and rotation at the pedestal top
• Accuracy highly dependent on TGLF and EPED
[Figure: ne (10^19 m^-3), Te (keV), and Ti (keV) profiles vs outer-midplane R (1.7-2.4 m)]
Application: present-day tokamaks, DIII-D (San Diego)
Core-edge impurity profile prediction (OMFIT-based)
[Figure: Te, Ti, ne profiles and gradient scale lengths (a/LTe, a/LTi, a/Lne) vs ρ, plus carbon VI impurity density (10^13 cm^-3) vs ρ, comparing experiment with TGLF-NN and TGLF-NN + EPED1-NN predictions at Zeff,ped = 4.02, 2.68, 1.34; DIII-D 168830 @ 3500 ms]
Upcoming burning plasma: ITER (Provence, France)
ITER steady-state hybrid scenario modeling (IPS-based)
Future reactor design: DEMO
C-AT DEMO reactor modeling (IPS-based)
Create EPED1-NN neural net from the EPED1 model
• 10 inputs → 12 outputs
• Database of 20K EPED1 runs (2M CPU hours): DIII-D (3K), KSTAR (700), JET (200), ITER (15K), CFETR (1.2K)
• Covers both the normal H-mode solution and the Super-H-mode solution
• EPED1-NN tightly coupled in TGYRO
[Figure: scatter plots comparing EPED1-NN to EPED1 predictions of pedestal height P and width w for the H and Super-H solution branches]
Create TGLF-NN neural net from the TGLF reduced model
• 23 inputs → 4 outputs
• Each dataset has 500K cases from 2300 multi-machine discharges
• Trained with TENSORFLOW
• Must be retrained as the TGLF model is updated
• TGLF itself derived from HPC CGYRO simulation
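As a shape illustration of the surrogate approach (the real TGLF-NN architecture and trained weights are not reproduced here; the layer sizes and random weights below are placeholders), a feed-forward network mapping 23 local plasma parameters to 4 transport fluxes reduces each evaluation to a few small matrix multiplies, versus roughly a second for a TGLF call:

```python
import numpy as np

def mlp_surrogate(x, layers):
    """Evaluate a small fully connected surrogate network (tanh hidden units)."""
    h = x
    for W, b in layers[:-1]:
        h = np.tanh(W @ h + b)   # hidden layers
    W, b = layers[-1]
    return W @ h + b             # linear output layer: predicted fluxes

# Illustrative architecture: 23 inputs -> 64 -> 64 -> 4 outputs.
# Random weights stand in for the trained TGLF-NN parameters.
rng = np.random.default_rng(0)
dims = [23, 64, 64, 4]
layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(dims[:-1], dims[1:])]

x = rng.standard_normal(23)      # normalized local plasma parameters
fluxes = mlp_surrogate(x, layers)
```

The "must be retrained" bullet follows directly from this structure: the network only interpolates the TGLF dataset it was fit to, so any change to TGLF invalidates the stored weights.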
TGLF: centerpiece of all AToM predictive modeling workflows
• Reduced model of nonlinear gyrokinetic flux (1 second at 1 radial point)
• Determines quality of profile prediction
• TGLF is the heart of AToM profile-prediction capability
− linear gyro-Landau-fluid eigenvalue solver
− coupled with sophisticated saturation rule
− evaluates quasilinear fluxes over the range 0.1 < kθρi < 24
• Saturated potential intensity
− derived from a database of nonlinear GYRO simulations
− database resolves only long-wavelength turbulence: kθρi < 1
• 10 million to one billion times faster than nonlinear gyrokinetics
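The quasilinear evaluation can be caricatured as a weighted sum over the resolved wavenumber spectrum, with the saturation rule supplying the potential intensity at each wavenumber. The spectral shapes below are toy functions chosen only to illustrate the structure, not the actual TGLF quasilinear weights or saturation rule:

```python
import numpy as np

def quasilinear_flux(weight, intensity):
    """Sum a quasilinear flux spectrum over the resolved wavenumbers."""
    return float(np.sum(weight * intensity))

# Wavenumber grid spanning the TGLF evaluation range 0.1 < k_theta*rho_i < 24.
ky = np.geomspace(0.1, 24.0, 32)

# Toy spectral shapes (NOT the TGLF saturation rule): a quasilinear weight
# from the linear modes, and a saturated intensity peaked at long wavelength.
weight = ky / (1.0 + ky**2)
intensity = 1.0 / (1.0 + ky**4)

Q = quasilinear_flux(weight, intensity)
```

The long-wavelength peak of the toy intensity mirrors the point in the slide: the GYRO database behind the real saturation rule resolves only kθρi < 1, even though the flux sum extends to kθρi = 24.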
TGLF: ongoing calibration with CGYRO leadership simulations
• Theory-based approach; must be calibrated with nonlinear simulations
• Predictions validated with the ITPA database
• Discrepancies: L-mode edge, EM saturation
• CGYRO multiscale simulations needed
CGYRO: new nonlocal spectral solver for the collisional plasma edge
• New coordinates, discretization, array distribution
− Pseudospectral velocity space (ξ, v)
− Fluid limit recovered as νe → ∞ (Hallatschek)
− 5th-order conservative upwind in θ
• Extended physics for edge plasma
− Sugama collision operator (numerically self-adjoint)
− Sonic rotation including modified Grad-Shafranov
• Arbitrary-wavelength formulation targets the multiscale regime
• Wavenumber advection scheme (profile shear/nonlocality)
• Targets petascale and exascale architectures (GPU/multicore)
− cuFFT/FFTW
− GPUDirect MPI on compatible systems
− All kernels hybrid OpenACC/OpenMP
• Will generate a future database for TGLF edge calibration
Carefully optimized for leadership systems

                 Cori             Stampede2        Skylake          Titan            Piz Daint
Architecture     CPU              CPU              CPU              CPU/GPU          CPU/GPU
CPU Model        Xeon Phi 7250    Xeon Phi 7250    Xeon Plat 8160   Opteron 6274     Xeon ES-2690 v3
GPU Model        --               --               --               Tesla K20X 6GB   Tesla P100 16GB
Threads/node     272 (128 used)   272 (128 used)   96               16/2688          12/3584
TFLOP/node       3.0              3.0              3.5              1.5 (0.2+1.3)    4.5 (0.5+4.0)
Nodes            9668             4200             1736             18688            5320
Measuring performance versus advertised peak
Kernel timing (left) and strong scaling (right)
• Kernels: str = field-line streaming, nl = nonlinear Poisson bracket, field = field solve, coll = collisions (implicit)
• At equal 1.6 PFLOP, lower wallclock time is better (closer to advertised peak)
• Xeon (Stampede2 Skylake) performing well
[Figure: per-kernel wallclock time on Stampede2, Cori, Titan, Piz Daint, and Skylake; strong scaling of wallclock time vs peak PFLOPS]
Excellent OpenMP performance
• Results for NERSC Cori KNL (128 threads per node used)
• Almost perfect tradeoff between MPI tasks and OpenMP threads
[Figure: strong scaling of wallclock time vs total threads (OMP = 8), and wallclock time vs OpenMP threads per MPI task at fixed MPI = 512]
GPUDirect MPI recently implemented on General Atomics Power9+V100 nodes
Arbitrary-wavelength formulation for multiscale
• Experimental DIII-D ITER-baseline discharge reproduced
• Traditional ion-scale domain shown in blue
[Figure: fractional Qe spectrum vs ky ρs over 0 to 30]
COGENT: direct kinetic (Eulerian) edge simulation
Provide future theory-based transport fluxes in the SOL
• Kinetic cross-separatrix transport computed by COGENT
• Includes 2D potential and Fokker-Planck ion-ion collisions
[Fig. 1: magnetic flux data and a sample flux-aligned COGENT grid: (a) original EFIT data; (b) RBF least-squares fit, which ignores the original data below z = -1.3; (c) smoothed flux computed by RBF interpolation; (d) a sample COGENT grid]
[Fig. 2: illustrative results of COGENT simulations of cross-separatrix plasma transport including Fokker-Planck ion-ion collisions, anomalous transport, and self-consistent potential variations: eφ/T0 (300 eV), Ti (300 eV), ni (5 x 10^19 m^-3), D,χ (1.7 m^2/s), Er (20 kV/m)]
AToM Use Cases: entry point for collaboration with AToM (UCSD)
• Validation and scenario modeling will be organized around benchmark use cases
− datasets describing key plasma discharges for component and workflow validation
− effective way to benchmark models, track improvements, assess performance
• Each use case will include
− magnetic equilibria and profile data in an accessible format
− a repository of calculated quantities (code results)
− provenance documentation (shots/publications/models)
• Candidate use cases
1 DIII-D L-mode shortfall, ITER baseline, steady-state discharges
2 Alcator C-Mod LOC/SOC plasmas, EDA H-mode toroidal field scan
3 ITER inductive, hybrid, and steady-state scenarios
4 ARIES ACT-1/ACT-2 reactor scenarios
• Key concept for AToM interaction with other SciDACs
Compliance with the ITER IMAS data model (https://gafusion.github.io/omas)
• Transfer data between components using OMAS (Python)
• API stores data in a format compatible with the IMAS data model
• Can use storage systems other than native IMAS
[Diagram: OMAS linking OMFITprofiles (experimental profiles), TGYRO transport + pedestal (TGLF-NN, EPED1-NN), ONETWO sources and current evolution, STRAHL impurity evolution, EFIT equilibrium, stability, and controller/optimizer components to IMAS]
IMAS is a set of codes, an execution framework, a data schema, and data storage infrastructure to support ITER plasma operations and research
• We confirmed that IMAS has several functional shortcomings
− issues with speed, stability, portability, usability
• OMAS solution:
− store data according to the IMAS schema
− do not use the IMAS infrastructure itself
− facilitate data translation to/from the IMAS schema
− lightweight Python library
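The OMAS design can be illustrated without the library itself: data are addressed by hierarchical IMAS-schema-style paths but stored in an ordinary backend. Here a plain nested dict stands in for the storage layer, and the specific paths are illustrative rather than quoted from the IMAS data dictionary:

```python
def set_path(store, path, value):
    """Store a value under a '.'-separated hierarchical path (IMAS style)."""
    *parents, leaf = path.split(".")
    node = store
    for key in parents:
        node = node.setdefault(key, {})  # create intermediate levels on demand
    node[leaf] = value

def get_path(store, path):
    """Retrieve a value by its hierarchical path."""
    node = store
    for key in path.split("."):
        node = node[key]
    return node

# Data addressed by IMAS-schema-style paths, stored in a plain-dict backend:
ods = {}
set_path(ods, "equilibrium.time_slice.0.global_quantities.ip", 1.2e6)
set_path(ods, "core_profiles.profiles_1d.0.electrons.temperature", [3100.0, 2400.0])

ip = get_path(ods, "equilibrium.time_slice.0.global_quantities.ip")
```

Because only the addressing scheme is fixed, the same data can be serialized to whichever storage system is convenient and translated to native IMAS when needed, which is the decoupling the bullets above describe.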
AToM Environment: dependency specification
Managing the zoo of physics codes
• Component challenge
− deal with a zoo of physics codes
− legacy/modern, different languages, compiled/interpreted, serial/HPC/leadership
• AToM approach
− add new dependencies in a single location
− generate recipes/specs/etc. and build installer packages
− upload packages to package managers (Spack, Conda, Pip, MacPorts) and build Docker images
− targets: run environments, HPC installs, local installs
AToM HPC Environment: Spack
AToM components installable from the AToM Spack repository
• Spack manages installation of dependencies
− list available packages: $ spack list -t atom
− install a package: $ spack install [package]
− install the AToM tier1 package: $ spack install atom-tier1
• CONDA for local install and distribution of pre-built environments
• PIP/MACPORTS provide options for Python/OSX
AToM Environment: Docker
Deploy without building → up and running quickly
• Single monolithic Linux image containing AToM tutorials, examples, test data & benchmarks, plus AToM components, frameworks, dependencies, and the application environment
• Deploys on MacOS, Linux, and Windows
• Common user environment across multiple platforms
• Enables users on nontarget platforms to run components locally
• OMFIT runtime environment currently available as a Docker image