In numerical analysis, adaptive mesh refinement is a technique that dynamically adjusts the resolution of the computational grid during a simulation. Central to any Eulerian method is the manner in which it discretizes the continuous domain of interest into a grid of many individual elements (Garcia-Menendez and Odman, 2011). This grid may be static (e.g., a nested grid), established once and for all at the beginning of the computation, or it may be dynamic, tracking the features of the result as the computation progresses. If the computation has features one wants to track that are much smaller than the overall scale of the problem, and which move in time, then one must either include many more static grids to cover the region of interest, or adopt a dynamic scheme.
Nested-grid modelling techniques have been and still are very popular in atmospheric modelling. In static grid nesting, finer grids are placed or “nested” inside coarser grids. The domain and resolution of each nest are specified prior to the simulation and remain fixed throughout. The nested grid method targets higher resolution for domain features that remain stationary (e.g., terrain, coastline, the location of a power plant or urban area).
The objective of an adaptive grid method is to increase solution accuracy by providing dynamic refinement at regions and instances where accuracy is most dependent on resolution. This can be achieved by restructuring the grid on which solution fields are estimated to better fit the needs of the system being numerically described. Adaptive gridding techniques can be classified as h-refinement or r-refinement depending on the type of grid restructuring employed. The advantages of a dynamic gridding scheme are: (i) Increased computational savings over a static grid approach; (ii) Increased storage savings over a static grid approach; (iii) Complete control of grid resolution, compared to the fixed resolution of a static grid approach, or the Lagrangian-based adaptivity of smoothed particle hydrodynamics.
H-refinement relies on increasing the total number of grid elements (e.g., nodes or cells) within a base grid for which the original structure remains fixed. The technique, also known as mesh enrichment or local refinement, modifies the grid at regions tagged for increased resolution. Frequently, the method is carried out by subdividing grid elements into smaller self-similar components. In Figure 11.1, the example depicting h-refinement shows a single refinement level. A second level of refinement could involve subdividing the refined cells at the center of the domain into four even smaller cells. Generally, a maximum number of refinement levels allowed must be defined.
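The subdivision step that h-refinement relies on can be sketched in a few lines. The following is an illustrative toy in Python, not the mesh code used by the model described here; all function names are invented.

```python
# Toy sketch of one h-refinement pass: each flagged triangle is replaced
# by four self-similar children obtained by connecting its edge midpoints.
def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def subdivide(tri):
    """Split a triangle (three vertex tuples) into four congruent children."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def refine(mesh, flagged):
    """One refinement level: replace flagged triangles by their children."""
    out = []
    for tri in mesh:
        out.extend(subdivide(tri) if tri in flagged else [tri])
    return out

mesh = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
        ((1.0, 0.0), (1.0, 1.0), (0.0, 1.0))]
refined = refine(mesh, flagged={mesh[0]})
print(len(refined))  # 5: four children plus the untouched neighbour
```

Repeated application of `refine` produces further levels, subject to the maximum-level cap mentioned above. A real mesh library must additionally keep the mesh conforming (no hanging nodes) and track parent–child relations so that refined triangles can later be coalesced during coarsening.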
R-refinement techniques, commonly referred to as mesh moving or global refinement, relocate mesh nodes to regions warranting increased resolution and subsequently increase grid element concentration at the areas with the greatest inaccuracies. However, the total number of grid points is maintained constant. Unlike h-refinement, r-refinement around a region is necessarily accompanied by coarsening at another. Figure 11.1 shows simple schematics of h-refinement and r-refinement at the center of a simple nine element grid.
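The node-relocation idea behind r-refinement can be illustrated in one dimension via equidistribution of a monitor function (an error surrogate); this is a generic sketch under assumed monitor and node counts, not the scheme of any particular model.

```python
import numpy as np

# 1-D sketch of r-refinement: a fixed number of nodes is relocated so that
# the integral of a monitor function is equal between neighbouring nodes.
def equidistribute(x, monitor, n_nodes):
    """Return n_nodes new positions with equal monitor 'mass' per interval."""
    w = monitor(x)
    # cumulative monitor integral on the old grid (trapezoidal rule)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x)

x_old = np.linspace(0.0, 1.0, 201)
steep = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)  # feature at 0.5
x_new = equidistribute(x_old, steep, 21)
# node count is unchanged; spacing contracts near x = 0.5 and widens elsewhere
print(min(np.diff(x_new)) < max(np.diff(x_new)))
```

Note the defining property of r-refinement is visible here: the total number of nodes stays fixed, so the contraction near the feature is paid for by coarsening away from it.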
The objective of increasing solution accuracy through adaptive gridding can only be met if adaptation is driven by an efficient indicator of the solution error in a spatial field. The concept of error equidistribution has been used to describe the adaptive grid process; grids are reconfigured to result in an equal amount of error for all grid elements. However, directly quantifying error is not a straightforward task. Quantitative analysis of resolution-induced error in advection algorithms has been previously investigated. For advection schemes, a Fourier method can estimate error (i.e., numerical diffusion) as a function of grid resolution. The estimation becomes much more complex after integrating other physical and chemical processes into the system, particularly nonlinear transformations. Additionally, exact error quantification cannot be used as a refinement driver a priori, as the error itself depends on grid structure after adaptation.
Previous EUROTRAC and EUROTRAC-2 investigations have shown that ozone is increasing on both a local and regional scale in the European boundary layer and that some of the highest regional ozone concentrations in Europe can be observed in Central Europe, including Hungary. During summer ozone episodes, ozone concentrations can exceed legislative standards in Central Europe (see e.g., http://www.emep.int). Ozone and other photo-oxidants can cause damage to human health and to natural and agricultural vegetation. Therefore, an important strategic goal is to develop reliable tools to help us estimate these short- and long-term impacts. Computational models form one set of tools that can be usefully employed to evaluate past episodes and possible future trends in photochemical oxidants. For the computational study of this phenomenon in Hungary, an Eulerian photochemical air pollution model has been developed. This model fully utilizes adaptive gridding methods for modelling chemical transport from multi-scale sources. The model has been elaborated within a flexible framework where both area and point pollution sources can be taken into account with variable resolution, and the chemical transformations can be described by a mechanism of arbitrary complexity. Operational use of the model has revealed that the large cities in the region, Budapest and Vienna, emit significant amounts of ozone precursors and highly influence the photo-oxidant concentrations in this region. Budapest, the capital of Hungary, is one of the major sources. This paper reports the latest developments of the model and focuses on the effects of the plume of Budapest on the concentrations of ozone in the surrounding region for a high ozone episode.
Although previous studies have used an Eulerian modelling framework to study high ozone concentrations within Europe, this study is the first to describe the photochemical air pollution formation in Hungary in a detailed way, focusing on the emissions of Budapest at high resolution. An emission inventory for Budapest at 1 km resolution and an adaptive grid technique are used to predict the ozone levels in Hungary for the first time.
The model describes the spread of reactive air pollutants in four layers of the troposphere over the Central European region. The vertical mixing of pollutants is approximated by a parameterised description of exchange between the layers. The horizontal dispersion of species is described within an unstructured triangular Eulerian grid framework. The horizontal grid is adaptive, i.e. continuously changes in space and time to minimize the numerical errors. Transient refinement and de-refinement is invoked as necessary throughout the model run according to the estimated spatial errors. The modelled area is a 980 km × 920 km region of Central Europe with Hungary in the centre. The model describes the horizontal domain using a Cartesian coordinate system through the stereographic polar projection of the curved Earth surface onto a plane. The dispersion of species in the horizontal domain is described by the atmospheric transport–reaction equation in two space dimensions. For n chemical species, an n dimensional set of partial differential equations is formed describing the change of concentrations over time and space. These equations are coupled through the non-linear chemical reaction term.
The four horizontal layers of the model are the surface layer (from surface to 50 m), the mixing layer, the reservoir layer and the free troposphere layer. At night, the mixing layer extends to the height determined by the midnight radiosonde data. During the daytime, the height of the mixing layer is assumed to rise smoothly to the height determined by the noon radiosonde data. In the evening, it collapses to the nighttime level. The reservoir layer, if it exists, extends from the top of the mixing layer to an altitude of 1000 m. Vertical mixing and deposition are parameterised according to the vertical stratification of the atmosphere. The species exchange between the layers (i.e. the vertical transport) is modelled in two ways. Exchange between the mixing and the surface layers due to turbulent diffusion and fumigation at the top of the mixing layer are described by ordinary differential equations. These equations have been defined in our recent article. In this way, species exchange takes place between the mixing layer and the reservoir layer or the upper layer if the reservoir layer does not exist.
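The inter-layer exchange described by ordinary differential equations can be sketched as follows; the exchange velocity and layer depths below are illustrative assumptions, not the model's actual parameterisation.

```python
# Hypothetical sketch of turbulent exchange between two layers as coupled
# ODEs. The exchange velocity k_t and the layer depths are made-up values.
def exchange_rhs(c_surf, c_mix, k_t=0.02, h_surf=50.0, h_mix=950.0):
    """d(concentration)/dt for two coupled layers; k_t in m/s, depths in m."""
    flux = k_t * (c_mix - c_surf)        # downward-positive flux per unit area
    return flux / h_surf, -flux / h_mix  # divide by layer depth

# simple forward-Euler integration toward the well-mixed equilibrium
c_s, c_m = 10.0, 40.0
dt = 10.0  # s
for _ in range(20000):
    dcs, dcm = exchange_rhs(c_s, c_m)
    c_s, c_m = c_s + dt * dcs, c_m + dt * dcm
print(round(c_s, 2), round(c_m, 2))  # the two layers converge
```

Because the fluxes are equal and opposite per unit area, the column burden h_surf·c_s + h_mix·c_m is conserved exactly by this formulation, which is the property any such exchange scheme must preserve.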
The wind speed and direction, relative humidity, temperature, and cloud coverage were determined by the meteorological model ALADIN, which is the numerical weather forecasting model of the Hungarian Meteorological Service. The ALADIN model is a hydrostatic, spectral, limited area model using 24 layers for vertical resolution where initial and boundary conditions are determined from a larger scale weather prediction model ARPEGE (Horányi et al., 1996). The model domain for ALADIN covers the Central European region from latitude 43.1°N to 52.0°N and from longitude 10.35°E to 25.1°E. The time resolution of data is 6 hours and the spatial resolution is 0.10 × 0.15 degrees (approximately 10 km × 10 km). In our model, conservative interpolation methods were used to obtain data relevant to a given spatial point on the unstructured grid from the regularly gridded ALADIN meteorological data.
The dry deposition velocity was calculated using the resistance method, which is based on the parameterisation of the surface resistance, the boundary layer resistance and the aerodynamic resistance. The model calculated the Monin–Obukhov length from the data of the ALADIN meteorological model.
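The resistance analogy treats the three resistances as acting in series, so the deposition velocity is their inverted sum. The resistance values in this sketch are placeholders, not the model's parameterisation.

```python
# Resistance method for dry deposition: v_d = 1 / (r_a + r_b + r_c),
# where r_a, r_b, r_c are the aerodynamic, quasi-laminar boundary layer
# and surface resistances. Values below are illustrative only.
def deposition_velocity(r_a, r_b, r_c):
    """Resistances in s/m; returns v_d in m/s."""
    return 1.0 / (r_a + r_b + r_c)

v_d = deposition_velocity(r_a=60.0, r_b=20.0, r_c=120.0)
print(v_d)  # 0.005 m/s
```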
For Budapest, the emission inventories for CO, NO_{x} and VOCs were provided by the local authorities with a spatial resolution of 1 km × 1 km and also include the most significant 63 emission point sources. For Hungary, the National Emission Inventory of spatial resolution 20 km × 20 km was applied which included both area and point sources. Figure 1 shows the emission inventories of NO_{x} for Budapest and Hungary. Outside Hungary, the emission inventory of EMEP for CO, NO_{x} and VOCs was used, having a spatial resolution of 50 km × 50 km. The emissions data had to be interpolated onto the unstructured grid following each change to the mesh during refinement. This was achieved using the mass conservative method of overlapping triangles. Point sources are averaged into the appropriate grid cell for their location and hence when the grid is refined the definition of point sources improves.
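The overlapping-areas idea behind mass-conservative remapping can be sketched by clipping a mesh triangle against a rectangular inventory cell and weighting the cell's emission by the overlap area. This is an illustrative sketch (using Sutherland–Hodgman polygon clipping), not the model's actual remapping code.

```python
# Assign an inventory cell's emission to a triangle in proportion to
# their overlap area, computed by clipping the triangle to the cell.
def clip(polygon, inside, intersect):
    """One Sutherland-Hodgman pass against a single half-plane."""
    out = []
    for i, cur in enumerate(polygon):
        prev = polygon[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def clip_to_rect(poly, xmin, xmax, ymin, ymax):
    """Clip a convex polygon to an axis-aligned rectangle."""
    def cut(poly, axis, bound, keep_less):
        def inside(p):
            return p[axis] <= bound if keep_less else p[axis] >= bound
        def intersect(p, q):
            t = (bound - p[axis]) / (q[axis] - p[axis])
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        return clip(poly, inside, intersect)
    for axis, bound, keep_less in ((0, xmax, True), (0, xmin, False),
                                   (1, ymax, True), (1, ymin, False)):
        poly = cut(poly, axis, bound, keep_less)
        if not poly:
            return []
    return poly

def area(poly):
    """Shoelace formula for polygon area."""
    return abs(sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
                   for i in range(len(poly)))) / 2.0

# a triangle covering half of a unit inventory cell receives half
# of that cell's emission
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
overlap = area(clip_to_rect(tri, 0.0, 1.0, 0.0, 1.0))
cell_emission = 10.0
print(cell_emission * overlap / 1.0)  # 5.0
```

Because every unit of emitted mass is apportioned by geometric overlap, summing the contributions over all triangles recovers the inventory total exactly, which is the conservation property the text refers to.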
In the present simulations, the GRS chemical scheme was used, although the model allows the utilization of any other reaction scheme (see Table 11.1). The GRS scheme is a reduced mechanism that was created using a semi-empirical approach; it contains 7 reactions of 7 species. It was developed and evaluated by comparison with smog chamber data and with predictions from more detailed chemical schemes. Previous studies have shown that the scheme performs well for the prediction of ozone in polluted conditions, although it can overpredict ozone concentrations in rural locations. The scheme has been selected in the current application for its computational efficiency and because its accuracy can be assumed to be reasonable in the region of interest, i.e., downwind of major NO_{x} sources. The rate constants were calculated as described by Derwent and Jenkin and were expressed as mth-order rate constants with units (molecule cm^{–3})^{1–m} s^{–1}. The photolysis rates were parameterized by the following function:
J_{q} = (1 – 0.75N^{3.4}) a_{q} exp(–b_{q} sec Θ),   (11.1)
where Θ is the solar zenith angle, N is the cloud coverage, and a_{q}, b_{q} are the rate parameters of reaction q. Temperature dependent rate constants were represented by standard Arrhenius expressions.
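Equation (11.1) transcribes directly into code. The rate parameters used in the example below are those tabulated for the NO_{2} photolysis reaction in Table 11.1; for other reactions a_q and b_q would differ.

```python
import math

# Eq. (11.1): photolysis rate as a function of solar zenith angle (theta)
# and fractional cloud cover N.
def photolysis_rate(a_q, b_q, zenith_deg, cloud_cover):
    """J_q = (1 - 0.75 N^3.4) a_q exp(-b_q sec(theta))."""
    sec_theta = 1.0 / math.cos(math.radians(zenith_deg))
    return (1.0 - 0.75 * cloud_cover ** 3.4) * a_q * math.exp(-b_q * sec_theta)

# a_q, b_q taken from the NO2 photolysis entry of Table 11.1
j_clear = photolysis_rate(a_q=1.45e-2, b_q=0.4, zenith_deg=30.0, cloud_cover=0.0)
j_cloudy = photolysis_rate(a_q=1.45e-2, b_q=0.4, zenith_deg=30.0, cloud_cover=0.8)
print(j_cloudy < j_clear)  # cloud cover reduces the photolysis rate
```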
The basis of the numerical method is the spatial discretisation of the partial differential equations derived from the atmospheric transport–reaction equation on unstructured triangular meshes. This approach, known as the ‘method of lines’, reduces the set of partial differential equations to a system of ordinary differential equations of one independent variable, time. The model uses a flux-limited, cell-centred finite volume scheme. The system of ordinary differential equations is integrated by the code SPRINT2D (Hart et al., 1998; Lagzi et al., 2009). Operator splitting is carried out at the level of the non-linear equations by approximating the Jacobian matrix.
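The method-of-lines-plus-splitting structure can be sketched in one dimension; this is a schematic toy (first-order upwind transport, a simple first-order chemical loss), not the SPRINT2D discretisation, and all parameter values are invented.

```python
import numpy as np

# Method of lines with operator splitting: spatial discretisation turns the
# PDE into ODEs in time, advanced by alternating a transport sub-step with
# a chemistry sub-step (here a first-order loss with rate k_chem).
def transport_step(c, u, dx, dt):
    """First-order upwind advection on a periodic 1-D grid (u > 0)."""
    return c - u * dt / dx * (c - np.roll(c, 1))

def chemistry_step(c, k_chem, dt):
    """Exact solution of dc/dt = -k_chem * c over one sub-step."""
    return c * np.exp(-k_chem * dt)

nx, dx, u, dt, k_chem = 100, 1.0, 1.0, 0.5, 0.01
c = np.zeros(nx)
c[10] = 1.0                      # unit release in one cell
for _ in range(80):              # total time 40 s
    c = chemistry_step(transport_step(c, u, dx, dt), k_chem, dt)
# transport conserves mass, so the total equals the chemical decay factor
print(abs(c.sum() - np.exp(-0.4)) < 1e-9)
```

In the real model the chemistry sub-step is a stiff nonlinear ODE system solved implicitly, which is where the Jacobian approximation mentioned above enters.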
The initial unstructured meshes are created from a geometry description using the Geompack mesh generator. These meshes are then refined and coarsened by the Triad adaptivity module. Low and high order solutions are obtained for each species and the difference between them gives a measure of the spatial error. The algorithm identifies the regions of large error by comparison with a user-defined tolerance for the concentration of one or several species. For the ith PDE component on the jth triangle, a local error estimate e_{i,j} (t) is calculated from the difference between the solution using a first order and a second order method. For time dependent PDEs this estimate shows how the spatial error grows locally over a time step. A refinement indicator for the jth triangle is defined by an average scaled error serr_{j} that is considered over all npde PDEs using user supplied absolute and relative tolerances:
serr_{j} = (1/npde) Σ_{i=1}^{npde} e_{i,j}(t) / (atol/A_{j} + rtol c_{i,j}),   (11.2)
where atol and rtol are the absolute and relative error tolerances, e_{i,j}(t) is the local error estimate of species i over element j, c_{i,j} is the concentration of species i over triangle j and A_{j} is the area of jth triangle. This formulation for the scaled error provides a flexible way to weight the refinement towards any PDE error. In the calculations presented, a combination of errors in species NO and NO_{2} were used as a refinement indicator, because these are primary species and also because their concentrations are very closely related to ozone production. Estimation of the local spatial error of ozone concentration is not an efficient choice, because it would be too late to make refinement decisions on the basis of the detection of a large error in the concentration of a secondary pollutant. On the other hand, concentrations of the VOCs are locally dominated by emissions, and since the available emissions inventory for VOCs has a coarse resolution (50 km × 50 km), the use of VOC concentration as an error indicator is not appropriate. Each triangle that is flagged for refinement is split into four similar triangles. Refined triangles may later be coalesced into the parent triangle when coarsening the mesh.
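The refinement decision of Eq. (11.2) can be sketched as follows, using made-up per-triangle error estimates and concentrations for two species (e.g. NO and NO_{2}) on a two-triangle mesh.

```python
# Average scaled spatial error for one triangle, following the definitions
# given for Eq. (11.2); all numerical values below are illustrative.
def scaled_error(e, c, area, atol, rtol):
    """e, c: per-species error estimates and concentrations on one triangle."""
    npde = len(e)
    return sum(e[i] / (atol / area + rtol * c[i]) for i in range(npde)) / npde

triangles = [
    {"e": [1e-4, 2e-4], "c": [5.0, 8.0], "area": 100.0},  # low spatial error
    {"e": [5e-2, 8e-2], "c": [5.0, 8.0], "area": 100.0},  # high spatial error
]
flags = [scaled_error(t["e"], t["c"], t["area"], atol=1e-3, rtol=1e-3) > 1.0
         for t in triangles]
print(flags)  # only the high-error triangle is flagged for refinement
```

A triangle is flagged when its scaled error exceeds unity, i.e. when the estimated error exceeds the user-supplied tolerance; flagged triangles are then split into four as described above.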
Table 11.1: The GRS mechanism (T: temperature, Θ: solar zenith angle)

Reactions | Reaction rate constants |
ROC + hv → RP + ROC | k_{1} = 1000 exp(–4710/T) J_{3} | [R11.1]
RP + NO → NO_{2} | k_{2} = 3.7098×10^{–12} exp(242/T) | [R11.2]
NO_{2} + hv → NO + O_{3} | J_{3} = 1.45×10^{–2} exp(–0.4 sec Θ) | [R11.3]
NO + O_{3} → NO_{2} | k_{4} = 1.7886×10^{–12} exp(–1370/T) | [R11.4]
RP + RP → RP | k_{5} = 6.7673×10^{–12} | [R11.5]
RP + NO_{2} → SGN | k_{6} = 1.00×10^{–13} | [R11.6]
RP + NO_{2} → SNGN | k_{7} = 1.00×10^{–13} | [R11.7]
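The mechanism of Table 11.1 can be transcribed directly into an ODE right-hand side. The sketch below uses the tabulated rate expressions (assuming the negative exponents of the tabulated constants); the temperature, zenith angle and concentrations are illustrative values, not model inputs.

```python
import math

# GRS mechanism of Table 11.1 as an ODE right-hand side.
# Concentrations in molecule/cm^3.
def grs_rates(T, sec_theta):
    J3 = 1.45e-2 * math.exp(-0.4 * sec_theta)       # NO2 photolysis [R11.3]
    return {
        "k1": 1000.0 * math.exp(-4710.0 / T) * J3,  # [R11.1]
        "k2": 3.7098e-12 * math.exp(242.0 / T),     # [R11.2]
        "J3": J3,
        "k4": 1.7886e-12 * math.exp(-1370.0 / T),   # [R11.4]
        "k5": 6.7673e-12,                           # [R11.5]
        "k6": 1.00e-13,                             # [R11.6]
        "k7": 1.00e-13,                             # [R11.7]
    }

def grs_rhs(c, k):
    """c: dict of species concentrations; returns dc/dt for each species."""
    r1 = k["k1"] * c["ROC"]
    r2 = k["k2"] * c["RP"] * c["NO"]
    r3 = k["J3"] * c["NO2"]
    r4 = k["k4"] * c["NO"] * c["O3"]
    r5 = k["k5"] * c["RP"] ** 2
    r6 = k["k6"] * c["RP"] * c["NO2"]
    r7 = k["k7"] * c["RP"] * c["NO2"]
    return {
        "ROC": 0.0,                      # ROC is regenerated in R11.1
        "RP": r1 - r2 - r5 - r6 - r7,    # R11.5 consumes two RP, makes one
        "NO": -r2 + r3 - r4,
        "NO2": r2 - r3 + r4 - r6 - r7,
        "O3": r3 - r4,
        "SGN": r6,
        "SNGN": r7,
    }

k = grs_rates(T=298.0, sec_theta=1.2)
c0 = {"ROC": 1e11, "RP": 1e8, "NO": 5e10, "NO2": 1e10,
      "O3": 2e12, "SGN": 0.0, "SNGN": 0.0}
dcdt = grs_rhs(c0, k)
print(dcdt["O3"] != 0.0)  # photostationary balance not yet reached
```

A useful check on any such transcription is nitrogen conservation: the tendencies of NO, NO_{2}, SGN and SNGN must sum to zero, since the mechanism neither creates nor destroys nitrogen.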
Table 11.2: Comparisons of CPU times and number of grid cells for each meshing strategy

Grid type | Relative CPU time | Number of grid cells
Coarse | 1 | 353
Adaptive | 8.8 | 353–1276
Fine nested | 13.2 | 2298
The simulated period was the beginning of August, 1998. During almost the whole period, low cloud coverage, low wind speeds and high temperatures resulted in high photo-oxidant levels in most of Europe. In Hungary, high ozone concentrations were measured at the K-puszta (46°58'N, 19°33'E) and Hortobágy (47°29'N, 20°56'E) monitoring stations of the Hungarian Meteorological Service. Three simulations, corresponding to three different spatial grid structures, were carried out. The first grid was a coarse fixed one that covered a part of Central Europe, as seen in Figure 11.3 (a). The resolution of this coarse grid (level 0) was characterized by an edge length of 70 km. The second grid was a fixed fine nested grid (level 2) over Hungary, which had an edge size of 17.5 km (see Figure 11.3 (b)). In the adaptive grid simulations, the refinement was restricted to the area of the nested grid in Figure 11.3 (b) and limited to 2 levels. Therefore, the minimum grid size was identical to that of the fine grid in the nested grid calculations. The initial concentrations of the major species were 0.4 ppb for NO_{2}, 2.0 ppb for NO, 89.3 ppb for O_{3}, and 4.1 ppb for VOC. The initial concentrations were equal in each layer across the whole simulated domain.
The simulation period from noon on the 31^{st} of July to midnight on the 3^{rd} of August, 1998, was chosen. Figure 11.4 illustrates the evolution of the adaptive grid in time. The adaptive grid was initially refined around Budapest, which is the main emission source of the primary pollutants in Hungary. High spatial gradients in NO_{x} concentrations are therefore likely to have formed close to the Budapest region, leading to an increase in spatial errors and therefore mesh refinement. This is in part due to the high resolution inventory used in the simulations and raises interesting issues with regard to the influence of mesh resolution within an Eulerian framework. Clearly, where a coarse emissions inventory is used, a certain level of averaging has already taken place, perhaps resulting in a lower sensitivity to solution grid resolution below a certain level, although meteorological events may still lead to steep spatial concentration gradients under certain conditions. The finer the resolution of the emissions inventory, the larger the spatial concentration gradients will be at the edges of the downwind plumes formed. The representation of large point sources poses a particular challenge to Eulerian models. As emissions inventories become finer, therefore, the influence of grid resolution on solution accuracy may become more apparent, requiring the use of techniques such as transient grid adaptation.
After one and a half simulation days the whole western region of the domain became refined, because this region is more industrialized and emits higher levels of ozone precursors than the other regions of the simulated area. Figure 11.5 shows the calculated ozone concentrations after 3 days of simulation, on the 3^{rd} of August, 1998 at 17.00, for the three simulations, each using a different type of grid. During the simulated period, southerly winds transported the ozone precursors northwards from Budapest. The simulated ozone concentrations using the fine (Figure 11.5 a) and the adaptive grids (Figure 11.5 b) are very similar. Both simulations show high ozone concentrations in a wide north and north-west region around Budapest, but in the city the ozone concentration is much lower due to the high local NO emissions. In these simulations the plume is bent, follows the direction of the wind and extends up to about 150 km from the city at 17.00. There are significant differences in the predicted peak ozone concentrations between the coarse grid (Figure 11.5 c) and the fine (and adaptive) grid simulations. In general, the simulations using higher resolution grids predict higher peak ozone concentrations than the low resolution ones. The simulations using the coarse grid predict an “ozone ring” around the city and smooth out the concentration peaks due to numerical diffusion. The detailed structure of the plume in the south-west region of the country is lost in the low resolution simulations.
The comparisons of integrated ozone concentrations across the simulation period are displayed in Figure 11.6. We obtain very similar integrated ozone patterns using the fixed nested fine and adaptive grids. In both cases a plume-shaped region of lowered ozone concentrations is present to the north-east of Budapest, with a peak of much higher integrated concentrations at the end of the plume close to the Slovakian border. The ability to model this depletion of ozone results from the use of a high-resolution emissions inventory for Budapest coupled with a grid resolution that is high enough to resolve the resulting plumes. This enables the resolution of nitrogen oxide concentrations in this region, leading to the depletion of ozone. In the coarse grid simulation, the integrated ozone concentrations are smoothed in comparison and the plume-shaped structure is not present. Hence, although the fixed coarse grid model is the fastest to simulate, it misses key features related to integrated concentrations and would therefore be unsuitable for estimating the long-term impact of ozone.
The relative CPU time requirements and numbers of grid cells for the coarse, adaptive and fine grid models are shown in Table 11.2. The fine and the adaptive models provided similar results, but the former required 1.5 times more computer time and a significantly higher number of grid cells. Therefore, the adaptive grid model provides an efficient method for the prediction of secondary air pollution formation at the regional scale. Such a model could be successfully used for operational purposes due to lower CPU costs.
An adaptive grid model describing the formation and transformation of photochemical oxidants based on triangular unstructured grids has been developed and applied to the simulation of photochemical oxidant formation in Hungary. The model contains a high-resolution emissions inventory for the Budapest region and during the simulation automatically places a finer resolution grid in regions characterized by high concentration gradients, and therefore by higher numerical error. Using the adaptive method, grid resolutions of the order of 10 km can be achieved in regional air pollution models without excessive computational effort. The overheads in using such a transient adaptive scheme stem from the need to interpolate emissions and meteorological data, as well as modelled concentrations, onto the new grid structure following grid refinement or de-refinement. However, such overheads can be minimized if the grid refinement procedure is not performed at each simulation time-point but is limited to a given time interval, which in the current application was 5 minutes. Figure 11.4 demonstrates that in the current application, large parts of the nested domain did not require refinement using the chosen tolerances. If refinement overheads are limited, the application therefore shows that transient adaptation can provide an efficient alternative to traditional nested grid approaches.
The simulation of a photochemical episode that occurred during August 1998 demonstrates the influence of precursor emissions from Budapest on down-wind ozone concentrations up to 150 km from the city. The comparison of different grid strategies demonstrates that the use of a coarse grid has a tendency to smooth out key features in both local and integrated ozone concentrations due to numerical diffusion. This results in the underestimation of both ozone depletion in high NO_{x} regions, and peak ozone concentrations. The adaptive model predicts similar features to the fixed fine grid model using less CPU time and grid cells. The results therefore indicate the potential for using adaptive models in an operational context for assessing the long-term impact of ozone within Europe.
Following the Chernobyl accident most countries in Europe developed national nuclear dispersion or accidental release models linked to weather prediction models of varying types and resolutions. The performance of a number of these models was evaluated against the ETEX European tracer experiments with no overall modelling strategy emerging from this evaluation as being statistically better than another. A comparative discussion of the available model types will be given in the next section. The second ETEX experiment proved to be a challenge for all the codes, since they failed to predict the location of the tracer plume accurately. Clearly therefore, issues still remain as to what is an appropriate strategy for predicting the impact of an accidental release. Sensitivity tests from the ETEX comparisons showed significant impact of both input data (e.g. three dimensional wind-fields and boundary layer descriptions) and model structure, such as the choice of numerical solution method and model resolution on output predictions. Models are not only used for predictive purposes such as exposure assessment, but are also often used to test our understanding of the underlying science that forms the foundation of the models. Improvements in our understanding are often tested by comparison of the models with measured data. If this is the case then one must be sure that changes made to input data to improve predictions are not compensating for errors that actually arise from the numerical solution of the problem. It follows that models used within the decision making and environmental management context must use not only the best available scientific knowledge and data, but also the best available numerical techniques. By this we mean techniques that minimize the errors induced by the choice of numerical solution method rather than the structure of the physical model. 
Because of the historical development of environmental predictive models and the investment time required to implement new computational solution strategies, unfortunately the most appropriate numerical method is not always used. Therefore, the aim of this work is not to elaborate yet another numerical dispersion simulation code, but to present the practical application of improved numerical methods which may provide useful developments to some existing modelling strategies. The paper presents the use of adaptive Eulerian grid simulations to accidental release problems and it will be shown by comparison with some current numerical computational methods that improvements in accuracy can be made without significant extra computational expense. We present the use of the model for the prediction of hypothetical accidents from the Paks Nuclear Power Plant (NPP) in Central Hungary and we include a discussion of the comparison of the present method with more traditional modelling approaches.
The Chernobyl release provided a large impetus for the development of accidental release models, and several inter-comparisons between different model types have been made. The predominant model types are Lagrangian and Eulerian. The former trace air masses, particles with assigned mass, or Gaussian-shaped puffs of pollutants along trajectories determined by the wind-field structures. Lagrangian models have the advantage that they can afford to use high spatial resolution, although they rely on the interpolation of meteorological data. Their potential disadvantages are that in some cases they neglect important physical processes and often experience problems when strongly diverging flows lead to uncertainties in long-range trajectories. One example of this is discussed by Baklanov et al. (2002), who modelled the potential transport of radionuclides across Europe and Scandinavia from a hypothetical accident in Northern Russia using an isentropic trajectory model. In their ‘winter’ case studies the isentropic trajectory model reproduced much of the deposition predicted by a high-resolution meteorological/dispersion model. In some cases, however, additional deposition was predicted by the dispersion model in locations not predicted by the isentropic trajectory model, due to the splitting of atmospheric trajectories.
There are several types of Lagrangian trajectory models. One example of a Gaussian puff model is the DERMA model, which uses a multi-puff diffusion parameterization. A Gaussian profile is assumed for the puff in the horizontal direction, with complete mixing in the vertical direction within the boundary layer and a Gaussian profile above it. The UK Met Office NAME model and the Norwegian SNAP model use a Lagrangian particle formulation, which resolves the trajectories of a large number of released particles with assigned masses. The NAME model is capable of following a 3D mean wind-field plus a turbulent component, with a variety of turbulence parameterizations available, ranging from simple eddy diffusion terms to complex random walk descriptions, although the turbulent parameterizations are based on measurements from a single location. Because it does not use a Gaussian puff formulation, it can resolve varying wind speeds and directions, skewed turbulence and varying stabilities. The disadvantage of this approach is its computational expense, since a large number of particles must be released when compared to the Gaussian puff approach.
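The simplest form of the Lagrangian particle idea can be sketched as a mean wind plus an uncorrelated random-walk turbulent component; this is a toy, not NAME or SNAP, and the wind speed and diffusivity values are illustrative assumptions.

```python
import math
import random

# Toy 2-D Lagrangian particle model: each particle follows the mean wind
# (u, v) plus a random-walk step with std sqrt(2 K dt) per direction,
# where K is a horizontal eddy diffusivity (values are made up).
def advect_particles(n, steps, dt, u=5.0, v=1.0, K=50.0, seed=1):
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * K * dt)
    particles = [(0.0, 0.0)] * n
    for _ in range(steps):
        particles = [(x + u * dt + rng.gauss(0.0, sigma),
                      y + v * dt + rng.gauss(0.0, sigma))
                     for x, y in particles]
    return particles

cloud = advect_particles(n=2000, steps=60, dt=60.0)   # one hour, 1-min steps
mean_x = sum(x for x, _ in cloud) / len(cloud)
print(abs(mean_x - 5.0 * 3600.0) < 2000.0)  # centroid tracks the mean wind
```

The computational-expense point in the text is visible here: concentration estimates require binning many particles per grid volume, so the particle count must grow substantially as the required statistical accuracy increases.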
Eulerian models use grid based methods and have the advantage that they may take into account fully 3D descriptions of the meteorological fields rather than single trajectories. However, when used traditionally with fixed meshes, Eulerian models show difficulty in resolving steep gradients. This causes particular problems for resolving dispersion from a single point source, which will create very large gradients near the release. If a coarse Eulerian mesh is used then the release is immediately averaged into a large area, which smears out the steep gradients and creates a large amount of numerical diffusion. The result will be to underpredict the maximum concentrations within the near-field plume and to over estimate the plume width. Close to the source the problem could be addressed by nesting a finer resolution grid to better resolve steep gradients. This need to resolve accurately both near and far field dispersion has been noted previously by for example Brandt et al. (1996) who used a combined approach of a Lagrangian mesoscale model coupled with a long range Eulerian transport model in the development of the DREAM model. The approach requires some kind of interpolation procedure between the two grids. A similar approach was also employed through the point source initialization scheme in the Swedish Eulerian model MATCH. The MATCH model uses a Lagrangian particle model for the horizontal transport of the first 10 h after the point source release with vertical transport being described by an Eulerian framework during this time. Quite a large number of particles need to be released per hour to reduce errors in predicting the vertical transport, thus adding to the computational cost. These multi-scale modelling approaches showed significant improvements in predictions close to the release due to the higher resolution Lagrangian models. 
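The smearing of a point release by a coarse Eulerian grid can be demonstrated with a one-dimensional toy experiment: the same release advected by a first-order upwind scheme on a coarse and a fine grid. All parameter values are illustrative.

```python
import numpy as np

# Grid-induced numerical diffusion: advect a single-cell release with a
# first-order upwind scheme and compare peak concentrations on two grids.
def advect_release(nx, domain=100.0, u=1.0, t_end=40.0):
    dx = domain / nx
    dt = 0.4 * dx / u                    # fixed CFL number of 0.4
    c = np.zeros(nx)
    c[int(0.1 * nx)] = 1.0 / dx          # unit mass released in one cell
    for _ in range(int(t_end / dt)):
        c = c - u * dt / dx * (c - np.roll(c, 1))  # upwind, periodic
    return c.max()

peak_coarse = advect_release(nx=50)
peak_fine = advect_release(nx=400)
print(peak_coarse < peak_fine)  # the coarse grid underpredicts the maximum
```

The effective numerical diffusivity of this scheme scales with the cell size, so the coarse grid both lowers the peak and widens the plume, which is precisely the behaviour described above for fixed-mesh Eulerian models near a point source.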
However, as with the nested Eulerian modelling approach, they still suffer from the drawback that the plume is averaged into the larger Eulerian grid as soon as it leaves the near-source region. Brandt et al. (1996) argue that once the plume has left the near-source region the gradients will become sufficiently smooth as to lead to small errors due to numerical diffusion. This ignores, however, the fact that steep gradients may persist at plume edges for large distances from the source due to mesoscale processes. Previous studies have shown that high contamination levels or deposition can exist over small areas several hundred kilometres from the source. Brandt et al. (1996) argue that long-range Eulerian models are not suitable for resolving single source releases alone and demonstrate this through the standard Molenkamp test of a rotating passive puff release in a circular wind-field. More recently, however, it has been shown that adaptive Eulerian methods are very capable of resolving the advection of such single source releases, since they can automatically refine the mesh in regions where steep gradients exist. Moreover, they can be more efficient than, say, nested models, since they refine only where high spatial numerical errors are found and not in regions where grid refinement is unnecessary for solution accuracy. This means that for the same computational run-time they may allow higher resolution of the grid in important areas. Eulerian models also provide a natural framework for the description of mixing processes and non-linear chemical reaction terms. This study will therefore show that adaptive grid methods provide a consistent approach to coupling near-field and long-range simulations for single source accidental releases.
It is clear that one of the most crucial inputs to any dispersion model for point source releases is the underlying meteorological data, such as wind-field and boundary layer descriptions. The importance of the horizontal resolution of meteorological data has been demonstrated by many of the simulations of the first ETEX experiment (Ryall and Maryon, 1998). The ETEX experiment was an international tracer campaign during which a passive tracer was tracked across Europe over several days from its release in France by monitoring at a large number of meteorological stations. Many numerical simulations were carried out which demonstrated the need for mesoscale weather modelling. Nasstrom and Pace (1998), for example, showed that events resolved by a meteorological model at 45 km but not at 225 km resolution had a significant impact on even the long-range dispersion of the ETEX plume. Sorensen (1998) showed that the double structure of the first ETEX plume was picked up by their model when using mesoscale weather predictions but not when using coarser resolution ECMWF data. The difference was attributed to a mesoscale horizontal anti-cyclonic eddy that was not resolved within the ECMWF data. The importance of resolving the vertical structure of wind speeds and directions was demonstrated by the second ETEX experiment, where evidence of decoupling of an upper cloud of pollution from the boundary layer plume was observed. In this case significant concentrations were measured behind the path of the plume predicted by most of the models tested. This behaviour has been attributed to the vertical lofting of the plume by the passage of a front, followed by the transport of the pollution cloud in upper levels of the atmosphere at a lower wind speed than in the boundary layer, or even perhaps in a different wind direction.
The Met Office NAME model in this case showed a significant amount of mass above the boundary layer for the second ETEX experiment, in contrast to the first one, where particles were well distributed throughout the boundary layer. Previous simulations point to several important issues relating to the development of an accidental release model. Firstly, a mesoscale meteorological model must be used in order to capture small spatial scale effects such as frontal passages and small scale horizontal eddies. Secondly, the dispersion model used must be capable of representing, in some way, the possible vertical variations in wind speed and direction, rather than using a single boundary layer description with varying mixing layer height but a single wind-field. Thirdly, the dispersion model must be capable of resolving potentially steep concentration gradients that may be caused by either the single source release or features that may arise due to mesoscale meteorological events. This study will therefore discuss the practical application of an adaptive Eulerian dispersion model when coupled to data from a high resolution mesoscale meteorological model.
This model has been developed within a flexible framework and can therefore simulate both single source accidental releases and photochemical air pollution, where the pollution sources are both area and point sources and the chemical transformations are highly non-linear (Lagzi et al., 2004). A slightly modified version of the model is used here to demonstrate its capability for nuclear dispersion calculations. In the model, the horizontal dispersion of radionuclides is described within an unstructured triangular Eulerian grid framework. The vertical mixing of radionuclides is approximated by a parameterized description of mixing between four layers representing the surface, mixing and reservoir layers and the free troposphere (upper) layer. The horizontal grid is adaptive, i.e., it continuously changes in space and time to minimize the numerical errors. The transformation of radionuclides is described within each grid cell by the equations of nuclear decay. Transient refinement and de-refinement are then further invoked as necessary throughout the model run according to spatial errors and chosen refinement criteria. The model domain is represented by an unstructured mesh of triangular elements surrounding each grid point, thus forming a small volume over which the solution is averaged. The model therefore falls into the category of Eulerian models described above, although it does not use the standard Cartesian mesh approach. The use of adaptivity, however, allows the model to overcome the usual problems of the Eulerian approach, since a fine mesh can be used where needed, leading to averaging over small volumes. The term ‘unstructured’ reflects the fact that each node in the mesh may be surrounded by any number of triangles, whereas in a structured mesh such as a Cartesian mesh the number of surrounding grid points would be fixed.
The use of an unstructured mesh easily enables the adequate resolution of complex solution structures that may be found following point source releases and which may not fall naturally into Cartesian geometrical descriptions. For example, following release, the plume from a point source may be stretched and folded in space due to advection and turbulent mixing. The unstructured triangular mesh then provides an efficient way of adapting around this complex geometry. The initial unstructured triangular meshes used in the model are created from a geometry description using the Geompack mesh generator. The complex nature of atmospheric dispersion problems makes the pre-specification of grid densities very difficult, particularly for forecasting codes where the wind-field is not known in advance. Many complex processes can take place far from the source due to advection, mixing, chemical reactions and deposition, affecting the geometry of the plume and spatial concentration profiles. To this end the model presented here utilizes adaptive gridding techniques, which quantitatively evaluate the accuracy of the numerical solution in space and time and then automatically refine or de-refine the mesh where necessary. As described above, the accuracy of Eulerian models tends to become degraded in regions where the concentration of the pollutant changes steeply in space. The use of transient adaptivity allows us to overcome this problem. It is achieved by using a tree-like data structure with a method of refinement based on the regular subdivision of triangles (local h-refinement). Here an original triangle is split into four similar triangles as shown in Figure 11.7 by connecting the midpoints of the edges.
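The regular subdivision step described above can be sketched in a few lines. The following is an illustrative Python sketch, not the model's actual code; the point and triangle representations are assumptions made for this example.

```python
def midpoint(p, q):
    """Midpoint of two 2-D points given as (x, y) tuples."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def refine_triangle(a, b, c):
    """Split triangle (a, b, c) into four similar triangles by
    connecting the midpoints of its edges, as in Figure 11.7."""
    mab, mbc, mca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [
        (mab, mbc, mca),  # central (inverted) child triangle
        (a, mab, mca),    # corner child at vertex a
        (mab, b, mbc),    # corner child at vertex b
        (mca, mbc, c),    # corner child at vertex c
    ]

# One level of refinement on a unit right triangle:
children = refine_triangle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Each child has half the edge length (and a quarter of the area) of its parent, so successive refinement levels give the factor-of-two reduction in grid size that the tree-like data structure records.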
Hanging or unconnected nodes are removed by joining them with nearby vertices. This finer grid may later be coalesced back into the parent triangle to coarsen the mesh if the solution accuracy no longer requires refinement – for example, when the plume has passed or concentration gradients in space are reduced. The use of unstructured meshes allows the model to go from very fine resolution to coarse resolution within small spatial regions. Once a method of refinement has been implemented, a suitable criterion for the application of transient adaptivity must be chosen. The technique used here is based on the calculation of spatial error estimates, allowing a certain amount of automation to be applied to the refinement process. This is achieved by first calculating some measure of numerical error in each species over each triangle. A reliable method for determining the numerical error is to examine the difference between the solutions gained using a high accuracy and a low accuracy numerical method. Over regions of high spatial gradient in concentrations the difference between the high- and low-order solutions will be greater than in regions of relatively smooth solution, and refinement generally takes place in these regions. The use of absolute as well as relative tolerances allows the user to define a species concentration below which high relative accuracy is not required. For the current example the choice of this tolerance may be driven, for example, by a minimum concentration of concern from a health point of view. An integer refinement level indicator is calculated from the scaled error to give the number of times the triangle should be refined or de-refined. The choice of tolerances will therefore reflect, to a certain extent, a balance between desired accuracy and available computational resources, since tighter tolerances usually lead to a higher number of grid cells.
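As a concrete illustration, a scaled-error indicator of this kind might be computed as below. This is a hypothetical sketch: the way the absolute and relative tolerances are combined, and the logarithmic mapping from scaled error to refinement level, are assumptions made for illustration rather than the model's exact formulae.

```python
import math

def refinement_level(c_high, c_low, atol, rtol, max_levels=4):
    """Integer refinement indicator for one triangle and one species:
    > 0 means refine that many times, 0 means leave (or coarsen)."""
    # Error measure: difference between high- and low-accuracy solutions.
    err = abs(c_high - c_low)
    # Scale by absolute and relative tolerances so that concentrations
    # below atol (e.g. below a health-concern level) never trigger
    # refinement, however large their relative error.
    scaled = err / (atol + rtol * abs(c_high))
    if scaled <= 1.0:
        return 0  # solution already within tolerance here
    # Each level halves the edge length; for a second-order scheme the
    # error shrinks roughly 4x per level, hence the base-4 logarithm.
    return min(int(math.ceil(math.log(scaled, 4.0))), max_levels)
```

Tighter `atol`/`rtol` values push the scaled error up and hence produce more refinement, which is exactly the accuracy/cost trade-off described above.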
It is also possible within the code for the user to control the maximum number of levels of adaptivity, thus limiting the minimum grid size in regions of very steep gradients, i.e. close to the point source. Since the error estimate is only available at the end of a time-step, it is too late at that point to make refinement decisions for the step just computed. Methods are therefore used for predicting the growth of the spatial error using quadratic interpolants. The predicted spatial error can then be used to decide within which regions the grid should be refined or coarsened for the next time-step, in order to give good spatial accuracy with the minimum computational resource. The application of adaptive rectangular meshes would also be possible, but would be less effective in terms of the number of nodes required in order to achieve high levels of adaptivity. Although the data structures resulting from an unstructured mesh are somewhat more complicated than those for a regular Cartesian mesh, problems with hanging nodes at boundaries between refinement regions are avoided. The use of a flexible discretization stencil also allows for an arbitrary degree of refinement, which is more difficult to achieve on structured meshes.
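One simple way to realize such a predictive step is to extrapolate a triangle's error history with a quadratic interpolant. The sketch below fits a Lagrange quadratic through the last three error samples; treating the history in exactly this way is an assumption for illustration, not necessarily the model's precise procedure.

```python
def predict_error(t_hist, e_hist, t_next):
    """Extrapolate the spatial error at time t_next from three earlier
    samples (t0, e0), (t1, e1), (t2, e2) using a Lagrange quadratic."""
    (t0, t1, t2), (e0, e1, e2) = t_hist, e_hist
    # Lagrange basis polynomials evaluated at t_next.
    l0 = (t_next - t1) * (t_next - t2) / ((t0 - t1) * (t0 - t2))
    l1 = (t_next - t0) * (t_next - t2) / ((t1 - t0) * (t1 - t2))
    l2 = (t_next - t0) * (t_next - t1) / ((t2 - t0) * (t2 - t1))
    return e0 * l0 + e1 * l1 + e2 * l2

# If the error grew quadratically over the last three steps, the
# interpolant reproduces that growth when extrapolating one step ahead:
predicted = predict_error((0.0, 1.0, 2.0), (0.0, 1.0, 4.0), 3.0)
```

The predicted error, rather than the one already incurred, then drives the refinement decision for the next time-step.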
The model domain covers Central Europe including Hungary, with a domain size of 1550 × 1500 km. The model describes the domain using a Cartesian coordinate system through the stereographic polar projection of the curved surface onto a flat plane. Global coordinates are transformed by projecting the surface of the Earth, from the opposite pole, onto a flat plane located at the North Pole and perpendicular to the Earth’s axis. Due to the orientation of the projection plane, this transformation places the Cartesian origin at the North Pole. The model includes four vertical atmospheric layers: a surface layer, a mixing layer, a reservoir layer and the free troposphere (upper) layer. The surface layer extends from ground level to 50 m altitude. Above the surface layer is the mixing layer, whose height is determined at 0.00 and 12.00 UTC from radiosonde measurements in Budapest. The night-time values are assumed to be identical to the midnight value, while heights between 6.30 and 15.00 UTC were linearly interpolated between the night-time value and the highest daytime value. The reservoir layer, if it exists, extends from the top of the mixing layer to an altitude of 1000 m. Different wind-fields are represented for each layer, and vertical mixing and deposition are also parameterized. The local wind speed and direction were treated as functions of space and time. These data were obtained from the mesoscale meteorological model ALADIN, which provides data with a time resolution of 6 h and a spatial resolution of 0.10 × 0.15°. The ALADIN model is a hydrostatic, spectral limited area model using 24 vertical layers, where initial and boundary conditions are determined from larger scale ECMWF data. The model domain for ALADIN covers the Central European region from latitude 43.1° to 52.0° and longitude 10.35° to 25.1°. The data from ALADIN were interpolated using mass conservative methods to obtain values relevant to a given space and time point on the adaptive model grid.
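The projection just described can be written down compactly. The sketch below maps latitude/longitude to plane coordinates by projecting from the South Pole onto a plane tangent at the North Pole; the Earth radius, the spherical-Earth simplification and the axis orientation are illustrative assumptions, not values taken from the text.

```python
import math

R_EARTH_KM = 6371.0  # assumed mean Earth radius (spherical Earth)

def polar_stereographic(lat_deg, lon_deg):
    """Map (latitude, longitude) in degrees to (x, y) in km on a plane
    tangent at the North Pole, projected from the opposite (South) pole."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Distance from the pole on the projection plane: 0 at the North
    # Pole, 2R at the equator, growing towards the South Pole.
    r = 2.0 * R_EARTH_KM * math.tan((math.pi / 2.0 - lat) / 2.0)
    # Axis convention (an assumption here): x increases eastwards at
    # lon = 90 degrees, y points away from the lon = 0 meridian.
    return (r * math.sin(lon), -r * math.cos(lon))
```

By construction the Cartesian origin sits at the North Pole, matching the orientation of the projection plane described above.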
The surface temperature, cloud coverage, relative humidity and wind field for each layer were determined from the ALADIN database. The eddy diffusivity coefficients for the x and y directions were set at 50 m^{2} s^{−1} for all species.
The features of the model are illustrated by the simulation of a hypothetical nuclear accident at the Paks nuclear power plant starting on 2 August 1998, at time 0.00. A release of 2.985 kg of the ^{131}I isotope is assumed at a height of 10 m over 12 h. This isotope decays to stable ^{131}Xe with a half-life of 6.948×10^{5} s. In the current simulations, only the radioactive decay of ^{131}I was calculated and the change of activity of this isotope was simulated. The model is capable of the simultaneous simulation of the spread of several hundred isotopes, taking into account all radioactive decays.
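For this single-isotope case the decay calculation carried out per grid cell reduces to the exponential decay law, A(t) = A0 exp(-λt) with λ = ln 2 / t_half. The sketch below uses the half-life quoted above; deriving the initial activity from the released mass via Avogadro's number is an illustrative addition not spelled out in the text.

```python
import math

HALF_LIFE_S = 6.948e5              # 131I half-life in seconds (from the text)
DECAY_CONST = math.log(2.0) / HALF_LIFE_S
AVOGADRO = 6.022e23                # atoms per mole
MOLAR_MASS_G = 131.0               # approximate molar mass of 131I, g/mol

def initial_activity_bq(mass_kg):
    """Activity (Bq) of a freshly released mass of 131I: A = lambda * N."""
    n_atoms = (mass_kg * 1000.0 / MOLAR_MASS_G) * AVOGADRO
    return DECAY_CONST * n_atoms

def activity_after(a0_bq, t_s):
    """Remaining activity after t_s seconds: A(t) = A0 * exp(-lambda * t)."""
    return a0_bq * math.exp(-DECAY_CONST * t_s)

# Activity of the 2.985 kg release assumed in the scenario above:
a0 = initial_activity_bq(2.985)
```

After one half-life (roughly 8 days) the activity halves; in a multi-isotope run the same cell-local update would be applied along each decay chain.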
Figures 11.8 and 11.9 show the initial grids for the adaptive and coarse grid calculations. The typical length of a triangle edge is 106 km, and a somewhat finer resolution initial grid was used around the Paks NPP. The figures show that the modelled area includes Hungary and covers the neighbouring countries to within about 600 km from the border in all directions. Wind-field maps corresponding to this period indicated that the wind at all heights from the surface to 3000 m blew from the south-east during the first 36 h. After this, the wind direction changed to northwest over the Czech Republic and Poland, but was almost unchanged in the other areas. The application of adaptive gridding methods was compared to the application of fixed grids for the hypothetical release described above. Three different fixed grid schemes were tested: (i) the initial grid (shown in Figure 11.8) was not refined during the calculations; (ii) a high resolution (triangle edge length 6.6 km) nested grid was placed around the source within a 250 km × 250 km area as shown in Figure 11.9; (iii) a high resolution (triangle edge length 6.6 km) fixed grid was used over the whole area.
Figures 11.10 e–h show the simulated surface layer activity of isotope ^{131}I using adaptive gridding. A continuous release was assumed in the first 12 h and therefore there is a continuous plume after 12 h (Figure 11.10 e). After 12 h, the cloud separates from the source and travels towards the northwest (t_{0}+24 h, Figure 11.10 f). After 36 h (Figures 11.10 g, h) it starts to drift back towards the southeast. Figures 11.10 a–d show that the region of increased grid resolution continuously follows the path of the contaminated air. The typical grid size in the non-contaminated area remained at ca 106 km, but in the highly contaminated area it was automatically reduced to 6.6 km (the minimum edge length allowed in the simulation) by the transient adaptation routine, allowing better spatial resolution in critical areas.
Figures 11.11–11.14 compare the simulation results using the three fixed grid schemes at simulation times 12, 24, 36 and 48 h after the accident. The fine grid calculation has the lowest numerical error and therefore these results are the basis of comparison for the other mesh strategies. The coarse grid calculations show high numerical diffusion at all times. The result is that the initial plume is smeared over a much wider area than in the fine grid simulation, as shown in Figure 11.11 a. The nested grid calculation results agree well with the fine grid results while the contamination is in the highly resolved area; however, high numerical diffusion appears when the contamination leaves the nested grid. After 24 h of simulation the predictions from the nested grid calculation are more similar to those of the coarse grid calculation than the fine one (Figures 11.12–11.14), showing that the initial advantage of using a nested grid can quickly be lost. The agreement between the adaptive grid solutions (Figure 11.10 e–h) and the fine grid results (Figures 11.11 c, 11.12 c, 11.13 c and 11.14 c) is very good at t_{0}+12 and t_{0}+24 h and acceptable at t_{0}+36 and t_{0}+48 h simulation times. The shape of the isotope cloud is well predicted, with some smearing occurring at the edges. The adaptive grid simulation is significantly closer to the fine grid calculation than both the coarse and nested grid predictions even after 48 h, as shown by comparing Figures 11.10 and 11.14.
Nuclear dispersion models have to predict accurately both the maximum level of contamination and the expected arrival time of the contamination peak. The four models presented above can be compared not only by using activity maps at fixed times, but also by plotting the time history of activity at a given location. Three cities, all northwest of Paks, were selected: Székesfehérvár (located in Hungary, 101 km from Paks), Bratislava (Slovakia, 247 km) and Ostrava (Czech Republic, 415 km). Figure 11.15 shows the time dependence of the surface layer activity of ^{131}I in these three cities, calculated by the four gridding schemes. Again, the fine grid calculations were assumed to be the most accurate. At Székesfehérvár, the fine and the nested grid results are identical, and the adaptive grid predicted only a slightly higher peak at a slightly earlier arrival time. The coarse grid calculation predicted a much higher double peak arriving much earlier. Bratislava is not affected directly by the plume, as is clearly visible from both the fine and the adaptive grid results shown here and in Figures 11.10 e and 11.11 c. Because of the smearing of the plume due to numerical diffusion, however, the coarse and the nested grid models erroneously predict the appearance of the contamination in Bratislava. In a real situation such a prediction might result in the unnecessary evacuation of the population. In Ostrava, the fine and the adaptive models both predict high contamination at similar arrival times. The nested and the coarse models forecast much lower contamination, which arrives later than predicted by the fine grid simulations. Based on the latter models, the population might not be evacuated in a dangerous situation. Additionally, the fine and adaptive models predict the return of the polluted cloud with very good agreement, but such a feature is not present in the coarse and nested simulation results.
In all cases the fine and adaptive simulation results were in good qualitative agreement, while the coarse and nested grid simulations would have suggested the wrong protective measures at the chosen locations. The ratio of time requirements is 1:30:19:532 for the coarse, adaptive, nested and fine grid models, respectively. The adaptive simulation is therefore only slightly more computationally expensive than the nested grid whilst providing a step change in accuracy. Note that using the time and space dependent adaptive grid is more time consuming than the application of the fixed nested grid due to the use of the adaptivity algorithm, even though the nested method uses more grid points. There is therefore some overhead in calculating the refinement criteria and interpolating the solution onto the new mesh each time the mesh is updated. The fine and the adaptive models provided similar results, but the application of the former required 17.5 times more computer time. The implication is that, given limited computer resources, the adaptive model provides reliable results quickly, thus allowing ample time for emergency response. The fine grid model would provide similar results, but possibly too late.