SG++-Doxygen-Documentation
This tutorial contains examples with increasing complexity to introduce you to the combigrid module.
The combigrid module is largely independent of the other modules; it only depends on the base module for classes such as DataVector and DataMatrix. First, we need some includes.
The first thing we need is a function to evaluate. This function will be evaluated on the domain \([0, 1]^d\). This particular function can be used with any number of dimensions.
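Such a function might look as follows. This is a sketch in plain C++ using `std::vector` instead of SG++'s `DataVector`, and the concrete integrand \(f(x) = \prod_i \exp(x_i)\) is an assumption made for illustration, not necessarily the function used in the original tutorial code:

```cpp
#include <cmath>
#include <vector>

// Illustrative objective function on [0, 1]^d: f(x) = prod_i exp(x_i).
// It works for any dimension because it simply loops over all entries.
// (Sketch with std::vector; the real tutorial code would take a
// sgpp::base::DataVector instead.)
double f(const std::vector<double>& x) {
  double product = 1.0;
  for (double xi : x) {
    product *= std::exp(xi);
  }
  return product;
}
```

The only requirement on such a function is that it accepts a point in \([0, 1]^d\) and returns a double.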
Here comes the first and very simple example.
Let's increase the number of points by two for each level.
Now create the operation object that handles the evaluation. The evaluation mode is quadrature, so it will approximate the integral of f over [0, 1]^d. It uses Leja points with 1 + 2*l points at level l. Levels start at zero; a higher level means a finer grid. Slower growth of the number of points per level makes it easier to control the total number of points used.
Now, we compute the result. The parameter 2 means that grids at level multi-indices with a 1-norm (i.e. sum of entries) of at most 2 are used. In our 3D case, these are exactly the levels (0, 0, 0), (1, 0, 0), (2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 0), (0, 2, 0), (0, 1, 1), (0, 0, 1) and (0, 0, 2).
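To make the level selection concrete, here is a small self-contained sketch (plain C++, not SG++ API) that enumerates the 3D level multi-indices with 1-norm at most q, together with the Leja point count 1 + 2*l per level described above:

```cpp
#include <vector>

// Enumerate all 3D level multi-indices (l0, l1, l2) whose 1-norm
// l0 + l1 + l2 is at most q. For q = 2 this yields exactly the ten
// levels listed in the text.
std::vector<std::vector<int>> levelsWithNormAtMost(int q) {
  std::vector<std::vector<int>> levels;
  for (int l0 = 0; l0 <= q; ++l0)
    for (int l1 = 0; l0 + l1 <= q; ++l1)
      for (int l2 = 0; l0 + l1 + l2 <= q; ++l2)
        levels.push_back({l0, l1, l2});
  return levels;
}

// Number of Leja points used at level l under the growth 1 + 2*l.
int numLejaPoints(int l) { return 1 + 2 * l; }
```

For q = 2 the enumeration returns 10 levels; the finest 1D component, level 2, uses 1 + 2*2 = 5 points.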
Now compare the result to the analytical solution:
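Assuming the illustrative integrand \(f(x) = \prod_i \exp(x_i)\) (an assumption for this sketch; the tutorial's actual function may differ), the analytical solution factorizes into a product of 1D integrals:

```cpp
#include <cmath>

// Exact integral of f(x) = prod_i exp(x_i) over [0, 1]^d:
// each factor contributes int_0^1 exp(x) dx = e - 1, so the
// d-dimensional result is (e - 1)^d. (The integrand is a
// hypothetical choice made for illustration.)
double analyticalIntegral(int d) {
  return std::pow(std::exp(1.0) - 1.0, d);
}
```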
We can also find out how many function evaluations have been used:
The next example uses interpolation.
This time, we use Clenshaw-Curtis points with an exponentially growing number of points per level. For Clenshaw-Curtis points, exponential growth is what makes them nested. Nested means that the set of grid points at one level is a subset of the set of grid points at the next level. Nesting can drastically reduce the number of needed function evaluations. Using these grid points, we will do polynomial interpolation at a single point.
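The nesting property can be checked directly. The following standalone sketch (plain C++, not SG++ code) generates Clenshaw-Curtis points on [0, 1] under the common exponential-growth convention of one point at level 0 and \(2^l + 1\) points at level \(l \geq 1\), and tests whether a coarse level reappears inside a finer one:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Clenshaw-Curtis points on [0, 1]: one point at level 0,
// and 2^l + 1 cosine-spaced points at level l >= 1.
std::vector<double> clenshawCurtis(int level) {
  if (level == 0) return {0.5};
  int n = (1 << level) + 1;
  const double pi = 3.14159265358979323846;
  std::vector<double> pts(n);
  for (int i = 0; i < n; ++i)
    pts[i] = 0.5 * (1.0 - std::cos(pi * i / (n - 1)));
  return pts;
}

// Nesting check: every point of the coarser level must reappear
// (up to floating-point tolerance) on the finer level.
bool isNested(const std::vector<double>& coarse,
              const std::vector<double>& fine) {
  for (double c : coarse) {
    bool found = false;
    for (double x : fine)
      if (std::abs(c - x) < 1e-12) found = true;
    if (!found) return false;
  }
  return true;
}
```

With this growth, each level's point set contains the previous one, so refining a level reuses all previously computed function values.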
Now create a point where to evaluate the interpolated function:
We can now evaluate the interpolation at this point (using 3 as a bound for the 1-norm of the level multi-index):
Now compare the result to the actual function value:
Again, print the number of function evaluations:
Now, let's do another (more sophisticated) evaluation at a different point, so we change the point and set the parameter again. Setting a new parameter automatically clears all intermediate values that have been computed internally so far.
The level manager provides more options for combigrid evaluation, so let's get it:
We can add regular levels like before:
The result can be fetched from the CombigridOperation:
We can also add more points in a regular structure, using at most 50 new function evaluations. Most level-adding variants of levelManager also have a parallelized version. This version executes the calls to func in parallel with a specified number of threads, which is okay here since func supports parallel evaluations. Since func takes very little time to evaluate and the parallelization only concerns function evaluations and not the computations on the resulting function values, parallel evaluation is not actually useful in this case. We will use 4 threads for the function evaluations.
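The idea of parallelizing only the function evaluations can be illustrated with a minimal `std::thread` sketch (plain C++; the actual SG++ parallelization machinery is more elaborate and may differ in detail):

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Evaluate func at many points, distributing the points over a fixed
// number of threads. Each thread writes to disjoint result slots, so
// no synchronization is needed. (Sketch only, not SG++ code.)
std::vector<double> evaluateParallel(
    const std::function<double(double)>& func,
    const std::vector<double>& points, int numThreads) {
  std::vector<double> results(points.size());
  std::vector<std::thread> threads;
  for (int t = 0; t < numThreads; ++t) {
    threads.emplace_back([&, t]() {
      // Thread t handles every numThreads-th point.
      for (std::size_t i = t; i < points.size(); i += numThreads)
        results[i] = func(points[i]);
    });
  }
  for (auto& th : threads) th.join();
  return results;
}
```

As the text notes, this only pays off when each evaluation of func is expensive; for a cheap function the thread overhead dominates.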
We can also use adaptive level generation. The adaptation strategy depends on the subclass of LevelManager that is used. If you do not want to use the default LevelManager, you can specify your own LevelManager:
It was necessary to use setLevelManager(), because this links the LevelManager to the computation. Now, let's add at most 80 more function evaluations adaptively. Note that the adaptation here is only based on the result at our single evaluation point, which might give inaccurate results. The same holds for quadrature. In practice, you should probably do an interpolation at a lot of Monte Carlo points via CombigridMultiOperation (cf. Example 3) and then transfer the generated level structure to another CombigridOperation or CombigridMultiOperation for your actual evaluation (cf. Example 4).
Now, we want to do interpolation at multiple evaluation points efficiently.
Unlike in example 2, we use Leja points here, and a CombigridMultiOperation for evaluation at multiple points.
One method to pass the data is via a std::vector<sgpp::base::DataVector>. We will use 2 interpolation points.
Let's use the simple interface for this example and measure the runtime:
We can also set the parameters via a DataMatrix containing the vectors as columns:
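The column-wise layout can be illustrated with a small helper. This is a hypothetical sketch in plain C++ with a flat row-major array standing in for DataMatrix, just to show how the k points of dimension d end up as the k columns of a d-by-k matrix:

```cpp
#include <cstddef>
#include <vector>

// Pack k interpolation points of dimension d into a d x k matrix
// (stored row-major in a flat vector): column j holds point j.
// (Illustration only; DataMatrix has its own interface.)
std::vector<double> pointsToColumnMatrix(
    const std::vector<std::vector<double>>& points) {
  std::size_t k = points.size();
  std::size_t d = (k == 0) ? 0 : points[0].size();
  std::vector<double> mat(d * k);
  for (std::size_t j = 0; j < k; ++j)    // column j = point j
    for (std::size_t i = 0; i < d; ++i)  // row i = dimension i
      mat[i * k + j] = points[j][i];
  return mat;
}
```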
This example shows how to store and retrieve computed function values.
First, we create a function that prints a string if it is called:
Next, we create a FunctionLookupTable. This will cache the function values by their DataVector parameter. Note, however, that even slightly differing DataVectors will lead to separate function evaluations.
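A minimal cache in the spirit of FunctionLookupTable might look like the following sketch (plain C++, not the actual SG++ class). Note how the exact key comparison is what makes slightly differing parameter vectors trigger fresh evaluations:

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Caches function values by their parameter vector. Keys are compared
// exactly, so inputs that differ even in the last bit are evaluated
// anew -- the same caveat the text gives for FunctionLookupTable.
class LookupTable {
 public:
  explicit LookupTable(std::function<double(const std::vector<double>&)> f)
      : func_(std::move(f)) {}

  double operator()(const std::vector<double>& x) {
    auto it = cache_.find(x);
    if (it != cache_.end()) return it->second;  // cache hit, no evaluation
    double y = func_(x);
    cache_[x] = y;
    ++numEvaluations_;
    return y;
  }

  std::size_t numEvaluations() const { return numEvaluations_; }

 private:
  std::function<double(const std::vector<double>&)> func_;
  std::map<std::vector<double>, double> cache_;
  std::size_t numEvaluations_ = 0;
};
```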
Do a normal computation...
The first (and most convenient) way to store the data is to serialize the lookup table. The serialization is not compressed and uses roughly 60 bytes per entry. If you have lots of data, you might consider compressing it.
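To see why an uncompressed serialization needs tens of bytes per entry, consider a hypothetical plain-text format (this is an illustration, not the actual SG++ serialization format):

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical text serialization of cached function values: one
// "x1 x2 ... xd -> value" line per entry. Full double precision
// needs up to 17 significant digits per number, so a d-dimensional
// entry easily occupies tens of bytes.
std::string serializeTable(
    const std::map<std::vector<double>, double>& table) {
  std::ostringstream out;
  out.precision(17);  // keep doubles round-trip safe
  for (const auto& entry : table) {
    for (double xi : entry.first) out << xi << ' ';
    out << "-> " << entry.second << '\n';
  }
  return out.str();
}
```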
It is also possible to store which levels have been evaluated:
Restore the data into another lookup table. The function is still needed for new evaluations.
A new evaluation with the same levels does not require new function evaluations:
Another less general way of storing the data is directly serializing the storage underlying the operation. This means that retrieval is faster, but it only works if the same grid is used again. For demonstration purposes, we use loggingFunc directly this time without a lookup table:
This example shows how to apply different operators in different dimensions.
First, we want to configure which grid points to use in which dimension. We use Chebyshev points in the 0th dimension. To make them nested, we have to use at least \(n = 3^l\) points at level \(l\); this is why the method name carries the prefix exp. CombiHierarchies provides some matching configurations for grid points. If you need your own configuration, or you want to know which growth strategy and ordering fit which point distribution, look at the implementation details in CombiHierarchies; it is not difficult to implement your own configuration.
Our next set of grid points are Leja points with linear growth (\(n = 1 + 3l\)). For the last dimension, we use equidistant points with boundary. These are suited for linear interpolation. To make them nested, again the slowest possible exponential growth is selected by the CombiHierarchies class.
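The nesting claims for both point families can be verified with a short standalone sketch (plain C++, not SG++ code). For Chebyshev points we assume \(n = 3^l\) points at level \(l\); for equidistant boundary points we assume the common convention \(n = 2^l + 1\) points at level \(l\):

```cpp
#include <cmath>
#include <vector>

// Chebyshev (interior) points on [0, 1] with n = 3^l points at level l.
// Tripling the point count makes consecutive levels nested.
std::vector<double> chebyshev(int level) {
  int n = 1;
  for (int i = 0; i < level; ++i) n *= 3;  // n = 3^l
  const double pi = 3.14159265358979323846;
  std::vector<double> pts(n);
  for (int i = 0; i < n; ++i)
    pts[i] = 0.5 * (1.0 - std::cos((2.0 * i + 1.0) * pi / (2.0 * n)));
  return pts;
}

// Equidistant points including the boundary, here with n = 2^l + 1
// points at level l (one common growth convention).
std::vector<double> equidistantBoundary(int level) {
  int n = (1 << level) + 1;
  std::vector<double> pts(n);
  for (int i = 0; i < n; ++i) pts[i] = double(i) / (n - 1);
  return pts;
}

// True if x occurs in pts up to floating-point tolerance.
bool contains(const std::vector<double>& pts, double x) {
  for (double p : pts)
    if (std::abs(p - x) < 1e-12) return true;
  return false;
}
```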
The next thing we have to configure is the linear operation that is performed in each direction. We will use polynomial interpolation in the 0th dimension, quadrature in the 1st dimension and linear interpolation in the 2nd dimension. Roughly speaking, this means that a quadrature is performed on the 1D function obtained by interpolating with the two other parameters fixed. Since these operators "commute", the result is invariant under the order in which the operations are applied. The CombiEvaluators class also provides analogous methods and typedefs for the multi-evaluation case.
To create a CombigridOperation object with our own configuration, we have to provide a LevelManager as well:
The two interpolations need a parameter \((x, z)\). If \(\tilde{f}\) is the interpolated function, the operation approximates the result of \(\int_0^1 \tilde{f}(x, y, z) \,dy\).
This example shows how to use a callback function that is evaluated on full grids instead of at single points.
In some applications, you might not want to have a callback function that is called at single points, but on a full grid. One of these applications is solving PDEs. This example provides a simple framework where a PDE solver can be included. It is also suited for other tasks. The core part is a function that computes grid values on a full grid.
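Such a full-grid callback can be sketched as follows (plain C++, hypothetical signature; a real PDE solver would compute its solution values here instead of calling a pointwise function):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Given one list of grid points per dimension, evaluate func on the
// full tensor grid and return the values in lexicographic order
// (last dimension varies fastest). A PDE solver would replace the
// pointwise func call with one solve over the whole grid.
std::vector<double> evaluateOnFullGrid(
    const std::function<double(const std::vector<double>&)>& func,
    const std::vector<std::vector<double>>& points1D) {
  std::size_t d = points1D.size();
  std::size_t total = 1;
  for (const auto& pts : points1D) total *= pts.size();
  std::vector<double> values;
  values.reserve(total);
  std::vector<std::size_t> idx(d, 0);
  for (std::size_t n = 0; n < total; ++n) {
    std::vector<double> x(d);
    for (std::size_t k = 0; k < d; ++k) x[k] = points1D[k][idx[k]];
    values.push_back(func(x));
    // Advance the multi-index like an odometer.
    for (std::size_t k = d; k-- > 0;) {
      if (++idx[k] < points1D[k].size()) break;
      idx[k] = 0;
    }
  }
  return values;
}
```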
To create a CombigridOperation, we currently have to use the longer way as in example 5.
We have to specify whether the function always produces the same value for the same grid points. This can make the storage smaller if the grid points are nested. For our function here this would be true, but it would be false in the PDE case, so we set it to false for demonstration purposes.
Now create an operation as usual and evaluate the interpolation with a test parameter.
Now create a point where to evaluate the interpolated function:
We can now evaluate the interpolation at this point (using 3 as a bound for the 1-norm of the level multi-index):
Now compare the result to the actual function value:
This example shows how to use the variance refinement method that uses the PCE transformation for variance computation on each subspace.