 SG++-Doxygen-Documentation
gettingStarted.py (Start Here)

This tutorial contains examples with increasing complexity to introduce you to the combigrid module.

The combigrid module is largely independent of the other modules. It only uses the base module for basic data structures such as DataVector and DataMatrix. At the beginning of the program, we have to import the pysgpp library.

    from itertools import product

    from pysgpp.extensions.datadriven.uq.dists import J, Beta, Uniform
    from pysgpp.extensions.datadriven.uq.plot.colors import initialize_plotting_style, \
        load_color, load_font_properties, savefig
    from pysgpp.extensions.datadriven.uq.plot.plot3d import plotSG3d
    import math
    import pysgpp

    from matplotlib.patches import Rectangle
    from mpl_toolkits.mplot3d import Axes3D
    import matplotlib.pyplot as plt
    import numpy as np


The first thing we need is a function to evaluate. This function will be evaluated on the domain $$[0, 1]^d$$ and can be used with any number of dimensions. The input parameter of the function is of type pysgpp.DataVector, so do not treat it like a Python list. The return type is float.

    def f(x):
        product = 1.0
        for i in range(x.getSize()):
            product *= math.exp(-x[i])
        return product


    def g(x):
        return np.prod([4 * xi * (1 - xi) for xi in x.array()])


We have to wrap the objective function in a pysgpp.MultiFunction object; here we use g:

    func = pysgpp.multiFunc(g)

Let's use a 3D function:

    d = 3


# Example 1: Leja quadrature with linear growth of grid points

Here comes the first and very simple example.

    def example1():

Let's increase the number of points by two for each level.

        growthFactor = 2

Now create the operation object that handles the evaluation. The evaluation mode is quadrature, i.e. it approximates the integral of f over $$[0, 1]^d$$. It uses Leja points with $$1 + 2l$$ points at level $$l$$. The level starts at zero; a higher level means a finer grid. A slower growth of the number of points per level lets you control the total number of points more precisely.

        operation = pysgpp.CombigridOperation.createLinearLejaQuadrature(d, func, growthFactor)

Now, we compute the result. The parameter 2 means that grids at level multi-indices with a 1-norm (i.e. sum of entries) less than or equal to 2 are used. In our 3D case, these are exactly the levels (0, 0, 0), (1, 0, 0), (2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 0), (0, 2, 0), (0, 1, 1), (0, 0, 1) and (0, 0, 2).

        result = operation.evaluate(2)
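As a sanity check on that count, the admissible level multi-indices can be enumerated with plain itertools, independently of pysgpp (the helper name is ours):

```python
from itertools import product

def levels_with_bounded_norm(d, q):
    """Enumerate all level multi-indices with 1-norm (sum of entries) <= q."""
    return [lvl for lvl in product(range(q + 1), repeat=d) if sum(lvl) <= q]

# for d = 3 and q = 2 this yields exactly the 10 levels listed above
print(len(levels_with_bounded_norm(3, 2)))  # 10
```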

Now compare the result to the analytical solution:

        print("Quadrature result: " + str(result) + ", analytical solution: " + str(math.pow(1.0 - 1.0 / math.e, d)))
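The analytical value follows because the integrand factorizes: each dimension contributes $$\int_0^1 e^{-x}\,dx = 1 - 1/e$$. A quick standalone Monte-Carlo cross-check (plain Python, not pysgpp):

```python
import math
import random

d = 3
exact = (1.0 - 1.0 / math.e) ** d  # product of the d one-dimensional integrals

random.seed(42)
n = 200000
mc = sum(math.exp(-sum(random.random() for _ in range(d)))
         for _ in range(n)) / n
print(abs(mc - exact))  # small, since the MC error decays like 1/sqrt(n)
```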

We can also find out how many function evaluations have been used by accessing the storage which stores computed function values:

        print("Number of function evaluations: " + str(operation.numGridPoints()))

# Example 2: Polynomial interpolation on nested Clenshaw Curtis grids

The next example uses interpolation.

    def example2():

This time, we use Clenshaw-Curtis points with an exponentially growing number of points per level; for CC points, this growth is what makes them nested. Nested means that the set of grid points at one level is a subset of the set of grid points at the next level. Nesting can drastically reduce the number of needed function evaluations. Using these grid points, we will do polynomial interpolation at a single point. (Note that the code below actually uses the nested L2-Leja variant; the Clenshaw-Curtis version is kept as a comment.)

        #operation = pysgpp.CombigridOperation.createExpClenshawCurtisPolynomialInterpolation(d, func)
        operation = pysgpp.CombigridOperation.createExpL2LejaPolynomialInterpolation(d, func)
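Why exponential growth yields nestedness can be checked with the standard Clenshaw-Curtis formula on [0, 1] (a standalone sketch, not pysgpp code):

```python
import math

def clenshaw_curtis(level):
    """Clenshaw-Curtis points on [0, 1]: 2^level + 1 points (one point at level 0)."""
    if level == 0:
        return [0.5]
    n = 2 ** level
    return [(1.0 - math.cos(math.pi * i / n)) / 2.0 for i in range(n + 1)]

coarse, fine = clenshaw_curtis(2), clenshaw_curtis(3)
# every point of the coarse level reappears on the finer level
print(all(any(abs(c - f) < 1e-12 for f in fine) for c in coarse))  # True
```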

Now create a point where to evaluate the interpolated function:

        evaluationPoint = pysgpp.DataVector([0.1572, 0.6627, 0.2378])

We can now evaluate the interpolation at this point (using 3 as a bound for the 1-norm of the level multi-index):

        result = operation.evaluate(3, evaluationPoint)

Now compare the result to the actual function value:

        print("Interpolation result: " + str(result) + ", function value: " + str(func(evaluationPoint)))

Again, print the number of function evaluations:

        print("Function evaluations: " + str(operation.numGridPoints()))

Now, let's do another (more sophisticated) evaluation at a different point, so we change the point and re-set the parameters. setParameters() automatically clears all intermediate values that have been computed internally so far.

        evaluationPoint[0] = 0.4444
        print("Target function value: " + str(func(evaluationPoint)))
        operation.setParameters(evaluationPoint)

The level manager provides more options for combigrid evaluation, so let's get it:

        levelManager = operation.getLevelManager()

We can add regular levels like before:

        levelManager.addRegularLevels(3)

The result can be fetched from the CombigridOperation:

        print("Regular result 1: " + str(operation.getResult()))
        print("Total function evaluations: " + str(operation.numGridPoints()))

We can also add more points in a regular structure, using at most 50 new function evaluations. Most level-adding variants of levelManager also have a parallelized version that executes the calls to func with a specified number of threads; this is safe here because func supports parallel evaluation. However, since func is very cheap to evaluate and the parallelization only covers the function evaluations (not the computations on the resulting function values), parallel evaluation does not actually pay off in this case. We use 4 threads for the function evaluations anyway:

        levelManager.addRegularLevelsByNumPointsParallel(50, 4)
        print("Regular result 2: " + str(operation.getResult()))
        print("Total function evaluations: " + str(operation.numGridPoints()))

We can also use adaptive level generation. The adaption strategy depends on the subclass of LevelManager that is used. If you do not want to use the default LevelManager, you can specify your own LevelManager:

        operation.setLevelManager(pysgpp.AveragingLevelManager())
        levelManager = operation.getLevelManager()

It was necessary to use setLevelManager(), because this links the LevelManager to the computation. Now, let's add at most 60 more function evaluations adaptively. Note that the adaption here is only based on the result at our single evaluation point, which might give inaccurate results. The same holds for quadrature. In practice, you should probably do an interpolation at a lot of Monte-Carlo points via CombigridMultiOperation (cf. Example 3) and then transfer the generated level structure to another CombigridOperation or CombigridMultiOperation for your actual evaluation (cf. Example 4).

        levelManager.addLevelsAdaptive(60)
        print("Adaptive result: " + str(operation.getResult()))
        print("Total function evaluations: " + str(operation.numGridPoints()))

We can also fetch the used grid points and plot the grid:

        grid = levelManager.getGridPointMatrix()
        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        gridList = [[grid.get(r, c) for c in range(grid.getNcols())]
                    for r in range(grid.getNrows())]

        ax.scatter(gridList[0], gridList[1], gridList[2], c='r', marker='o')
        ax.set_xlabel('x')
        ax.set_ylabel('y')
        ax.set_zlabel('z')
        plt.show()


# Example 3: Evaluation at multiple points

Now, we want to do interpolation at multiple evaluation points efficiently.

    def example3():

Unlike in example 2, we use Leja points here, and a CombigridMultiOperation for evaluation at multiple points.

        operation = pysgpp.CombigridMultiOperation.createLinearLejaPolynomialInterpolation(d, func)

We slightly deviate from the C++ example here and pass the interpolation points via a DataMatrix. We will use 2 interpolation points. IMPORTANT: in Python, the parameters matrix needs to be transposed, i.e. it must contain one column per evaluation point.

        firstParam = [0.2, 0.6, 0.7]
        secondParam = [0.3, 0.9, 1.0]

        params = np.array([firstParam, secondParam])
        parameters = pysgpp.DataMatrix(params.transpose())
        print(parameters)
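The shape convention is easy to get wrong: in the NumPy array each row is one evaluation point, while the DataMatrix expects one column per point. A minimal illustration of the transpose, using the two points from above:

```python
import numpy as np

points = np.array([[0.2, 0.6, 0.7],   # first evaluation point
                   [0.3, 0.9, 1.0]])  # second evaluation point
# after transposing, each column is one point: shape (d, numPoints)
print(points.T.shape)  # (3, 2)
```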

Let's use the simple interface for this example and time the evaluation:

        stopwatch = pysgpp.Stopwatch()
        result = operation.evaluate(3, parameters)
        stopwatch.log()
        print("First result: " + str(result[0]) + ", function value: " + str(func(pysgpp.DataVector(firstParam))))
        print("Second result: " + str(result[1]) + ", function value: " + str(func(pysgpp.DataVector(secondParam))))

# Example 4: Serialization and lookup tables

This example shows how to store and retrieve computed function values. First, we create a function that prints a message whenever it is called, so that we can see when it is (and is not) evaluated.

    def loggingF(x):
        print("call function")
        return x[0]  # MultiFunction expects a float return value


    def example4():

After wrapping our new function into a pysgpp.MultiFunction, we create a FunctionLookupTable. This will cache the function values by their DataVector parameter and use cached values if available. Note, however, that even slightly differing DataVectors will lead to separate function evaluations.

        loggingFunc = pysgpp.multiFunc(loggingF)
        lookupTable = pysgpp.FunctionLookupTable(loggingFunc)
        operation = pysgpp.CombigridOperation.createLinearLejaQuadrature(d, lookupTable.toMultiFunction())
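The caching behavior can be mimicked with a plain-Python stand-in (a sketch, not the pysgpp implementation): values are keyed by the exact parameter tuple, so even a tiny perturbation triggers a fresh evaluation.

```python
def make_caching(f):
    """Cache f's results by the exact parameter tuple, counting real calls."""
    cache, calls = {}, {"n": 0}

    def wrapped(x):
        key = tuple(x)
        if key not in cache:
            calls["n"] += 1
            cache[key] = f(x)
        return cache[key]

    return wrapped, calls

cached, calls = make_caching(lambda x: sum(x))
cached((0.1, 0.2))
cached((0.1, 0.2))        # cache hit, no new evaluation
cached((0.1, 0.2000001))  # slightly different point -> new evaluation
print(calls["n"])  # 2
```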

Do a normal computation...

        result = operation.evaluate(2)
        print("Result computed: " + str(result))

The first (and most convenient) possibility to store the data is serializing the lookup table. The serialization is not compressed and uses roughly 60 bytes per entry. If you have lots of data, you might consider compressing it.

        pysgpp.writeToFile("lookupTable.log", lookupTable.serialize())
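If size becomes an issue, the serialized text can be compressed with the standard library before writing it to disk (a sketch; the repetitive stand-in string only imitates what lookupTable.serialize() might return):

```python
import gzip

# stand-in for lookupTable.serialize(): repetitive "point -> value" lines
serialized = "".join("0.5 0.25 0.125 -> %.6f\n" % (0.1 * i) for i in range(1000))
raw = serialized.encode("utf-8")
compressed = gzip.compress(raw)

# round-trip check; repetitive text compresses very well
assert gzip.decompress(compressed).decode("utf-8") == serialized
print(len(compressed) < len(raw))  # True
```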

It is also possible to store which levels have been evaluated:

        pysgpp.writeToFile("levels.log", operation.getLevelManager().getSerializedLevelStructure())

Restore the data into another lookup table. The function is still needed for new evaluations.

        restoredLookupTable = pysgpp.FunctionLookupTable(loggingFunc)
        restoredLookupTable.deserialize(pysgpp.readFromFile("lookupTable.log"))
        operation2 = pysgpp.CombigridOperation.createLinearLejaQuadrature(d, restoredLookupTable.toMultiFunction())

A new evaluation with the same levels does not require new function evaluations:

        operation2.getLevelManager().addLevelsFromSerializedStructure(pysgpp.readFromFile("levels.log"))
        result = operation2.getResult()
        print("Result computed (2nd time): " + str(result))

Another less general way of storing the data is directly serializing the storage underlying the operation. This means that retrieval is faster, but it only works if the same grid is used again. For demonstration purposes, we use loggingFunc directly this time without a lookup table:

        pysgpp.writeToFile("storage.log", operation.getStorage().serialize())
        operation3 = pysgpp.CombigridOperation.createLinearLejaQuadrature(d, loggingFunc)
        operation3.getStorage().deserialize(pysgpp.readFromFile("storage.log"))
        result = operation3.evaluate(2)
        print("Result computed (3rd time): " + str(result))

# Example 5: Using different operations in each dimension

This example shows how to apply different operators in different dimensions.

    def example5():

First, we want to configure which grid points to use in which dimension. We use Chebyshev points in the 0th dimension. To make them nested, we have to use at least $$n = 3^l$$ points at level $$l$$, which is why the corresponding method carries the prefix exp. CombiHierarchies provides matching configurations for common point distributions. If you need your own configuration, or want to know which growth strategy and ordering fit which point distribution, look at the implementation in CombiHierarchies; it is not difficult to implement your own configuration.

        grids = pysgpp.AbstractPointHierarchyVector()
        grids.push_back(pysgpp.CombiHierarchies.expChebyshev())
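That $$n = 3^l$$ keeps Chebyshev points nested can be verified directly with the standard Chebyshev formula on [0, 1] (a standalone sketch, not pysgpp code):

```python
import math

def chebyshev(level):
    """Chebyshev points on [0, 1] with n = 3^level points."""
    n = 3 ** level
    return [(1.0 - math.cos(math.pi * (2 * i + 1) / (2 * n))) / 2.0
            for i in range(n)]

coarse, fine = chebyshev(1), chebyshev(2)
# tripling the number of points preserves all previous points
print(all(any(abs(c - f) < 1e-12 for f in fine) for c in coarse))  # True
```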

Our next set of grid points are Leja points with linear growth ( $$n = 1 + 3l$$). For the last dimension, we use equidistant points with boundary. These are suited for linear interpolation. To make them nested, again the slowest possible exponential growth is selected by the CombiHierarchies class.

        grids.push_back(pysgpp.CombiHierarchies.linearLeja(3))
        grids.push_back(pysgpp.CombiHierarchies.expUniformBoundary())

The next thing we have to configure is the linear operation that is performed in each direction. We will use polynomial interpolation in the 0th dimension, quadrature in the 1st dimension, and linear interpolation in the 2nd dimension. Roughly speaking, this means that a quadrature is performed on the 1D function that results from interpolating with the two remaining parameters fixed. But since these operators commute, the result is invariant under the order in which the operations are applied. The CombiEvaluators class also provides analogous methods and typedefs for the multi-evaluation case.

        evaluators = pysgpp.FloatScalarAbstractLinearEvaluatorVector()
        evaluators.push_back(pysgpp.CombiEvaluators.polynomialInterpolation())
        evaluators.push_back(pysgpp.CombiEvaluators.quadrature())
        evaluators.push_back(pysgpp.CombiEvaluators.linearInterpolation())
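That the 1D operators commute is easy to see for a product function: quadrature in one dimension and point evaluation in another can be applied in either order. A standalone illustration with a simple trapezoidal rule (not pysgpp code):

```python
import math

def trapezoid(f, n=1000):
    """Composite trapezoidal rule for the integral of f over [0, 1]."""
    h = 1.0 / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(1.0))

a = lambda x: math.exp(-x)
b = lambda y: math.exp(-y)
x0 = 0.777

r1 = a(x0) * trapezoid(b)               # evaluate in x first, then integrate over y
r2 = trapezoid(lambda y: a(x0) * b(y))  # integrate over y first, same result
print(abs(r1 - r2) < 1e-9)  # True
```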

To create a CombigridOperation object with our own configuration, we have to provide a LevelManager as well:

        levelManager = pysgpp.WeightedRatioLevelManager()
        operation = pysgpp.CombigridOperation(grids, evaluators, levelManager, func)

The two interpolations need a parameter $$(x, z)$$. If $$\tilde{f}$$ is the interpolated function, the operation approximates the result of $$\int_0^1 \tilde{f}(x, y, z) \,dy$$.

        parameters = pysgpp.DataVector([0.777, 0.14159])
        result = operation.evaluate(2, parameters)
        print("Result: " + str(result))


# Example 6: Using a function operating on grids

In some applications, you might not want a callback function that is called at single points, but one that operates on a full grid. One such application is solving PDEs. This example provides a simple framework into which a PDE solver can be plugged; it is also suited for other tasks. The core part is a function that computes the grid values on a full grid.

    def gf(grid):

We store the results for each grid point, encoded by a MultiIndex, in a TreeStorage:

        result = pysgpp.DoubleTreeStorage(d)

Create an iterator that yields the multi-indices of all grid points in the grid:

        it = pysgpp.MultiIndexIterator(grid.numPoints())

        while it.isValid():

Customize this computation for your algorithm:

            value = func(grid.getGridPoint(it.getMultiIndex()))

Store the result at the multi-index encoding the grid point:

            result.set(it.getMultiIndex(), value)
            it.moveToNext()

        return result

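Conceptually, MultiIndexIterator walks the Cartesian product of the per-dimension index ranges; a plain-Python analogue with a hypothetical point count:

```python
from itertools import product

# full grid with, e.g., numPoints = (3, 2, 2) points per dimension
numPoints = (3, 2, 2)
multi_indices = list(product(*(range(n) for n in numPoints)))

print(len(multi_indices))  # 12 = 3 * 2 * 2
print(multi_indices[0], multi_indices[-1])  # (0, 0, 0) (2, 1, 1)
```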

    def example6():

To create a CombigridOperation, we currently have to use the longer way as in example 5.

        grids = pysgpp.AbstractPointHierarchyVector(d, pysgpp.CombiHierarchies.expUniformBoundary())
        evaluators = pysgpp.FloatScalarAbstractLinearEvaluatorVector(d, pysgpp.CombiEvaluators.cubicSplineInterpolation())
        levelManager = pysgpp.WeightedRatioLevelManager()

We have to specify whether the function always produces the same value for the same grid points. This can make the storage smaller if the grid points are nested. For our implementation this is true; however, it would be false in the PDE case, so we set it to false here.

        exploitNesting = False

Now create an operation as usual and evaluate the interpolation with a test parameter.

        operation = pysgpp.CombigridOperation(
            grids, evaluators, levelManager, pysgpp.gridFunc(gf), exploitNesting)

        parameter = pysgpp.DataVector([0.1, 0.2, 0.3])

        result = operation.evaluate(4, parameter)

        print("Target function value: " + str(func(parameter)))
        print("Numerical result: " + str(result))


# Example 7: Polynomial interpolation on nested Clenshaw Curtis grids

This example repeats the interpolation setup of example 2 for different point distributions and visualizes the resulting sparse grids.

    def example7(dtype="uniform", maxLevel=2):

Depending on the dtype parameter, we use Clenshaw-Curtis, L2-Leja, or uniform points, each with an exponentially growing number of points per level. Exponential growth makes the point sets nested, i.e. the grid points of one level are a subset of those of the next level, which can drastically reduce the number of needed function evaluations.

        if dtype == "cc":
            operation = pysgpp.CombigridOperation.createExpClenshawCurtisPolynomialInterpolation(2, func)
        elif dtype == "l2leja":
            operation = pysgpp.CombigridOperation.createExpL2LejaPolynomialInterpolation(2, func)
        else:
            operation = pysgpp.CombigridOperation.createExpUniformLinearInterpolation(2, func)

The level manager provides more options for combigrid evaluation, so let's get it:

        levelManager = operation.getLevelManager()

We can add regular levels like before:

        levelManager.addRegularLevels(maxLevel)

We can also fetch the used grid points and plot the grid:

        grid = levelManager.getGridPointMatrix()
        gridList = np.array([[grid.get(r, c) for c in range(grid.getNcols())]
                             for r in range(grid.getNrows())])

        def g(x, y):
            evaluationPoint = pysgpp.DataVector([x, y])
            return operation.evaluate(maxLevel, evaluationPoint)

        fig, ax, _ = plotSG3d(g=g, contour_xy=False)
        ax.scatter(gridList[0, :], gridList[1, :], np.zeros(gridList.shape[1]),
                   marker='o', s=20)


        ax.set_axis_off()
        ax.set_xlabel(r"$x$")
        ax.set_ylabel(r"$y$")
        ax.set_xticks([0, 0.5, 1])
        ax.set_yticks([0, 0.5, 1])
        ax.set_zticks([0, 0.5, 1])
        ax.xaxis.labelpad = 13
        ax.yaxis.labelpad = 13
        ax.set_title(r"$f(x,y) = 16 x(1-x)y(1-y)$",
                     fontproperties=load_font_properties())
        savefig(fig, "/home/franzefn/Desktop/Mario/normal_parabola", mpl3d=True)

        fig = plt.figure()
        plt.plot(gridList[0, :], gridList[1, :], " ",
                 color=load_color(0),
                 marker='o', markersize=10)
        plt.axis('off')
        currentAxis = plt.gca()
        currentAxis.add_patch(Rectangle((0, 0), 1, 1, fill=None, alpha=1, linewidth=2))
        plt.xlim(0, 1)
        plt.ylim(0, 1)
        if dtype == "uniform":
            plt.title(r"Sparse Grid $\ell=%i$" % (maxLevel + 1,),
                      fontproperties=load_font_properties())
        else:
            plt.title(r"Sparse Grid $\ell=%i$ (stretched)" % (maxLevel + 1,),
                      fontproperties=load_font_properties())

        savefig(fig, "/home/franzefn/Desktop/tmp/sparse_grid_%s" % dtype,
                mpl3d=True)

        maxLevel = 1
        for tr in ["fg", "ct"]:

Finally, plot the grid points of the individual level multi-indices:

            fig, axarr = plt.subplots(maxLevel + 1, maxLevel + 1,
                                      sharex=True, sharey=True, squeeze=True)

            levels = []
            for level in product(list(range(maxLevel + 1)), repeat=2):
                levels.append(level)
                ax = axarr[level[0], level[1]]
                ax.axis('off')

            for level in levels:
                print((tr, level))
                if tr == "ct" and np.sum(level) > maxLevel:
                    print("skip %s" % (level,))
                    continue

                ax = axarr[level[0], level[1]]
                if level[0] == 0:
                    xs = np.array([gridList[0, 1]])
                else:
                    xs = gridList[0, :]

                if level[1] == 0:
                    ys = np.array([gridList[1, 1]])
                else:
                    ys = gridList[1, :]

                xv, yv = np.meshgrid(xs, ys, sparse=False, indexing='xy')

                for i in range(len(xs)):
                    for j in range(len(ys)):
                        ax.plot(yv[j, i], xv[j, i], color=load_color(0),
                                marker="o", markersize=10)
                ax.set_title(r"$(%i, %i)$" % (level[0] + 1, level[1] + 1),
                             fontproperties=load_font_properties())
                ax.add_patch(Rectangle((0, 0), 1, 1, fill=None, alpha=1, linewidth=1))

                plt.xlim(0, 1)
                plt.ylim(0, 1)

            fig.set_size_inches(6, 6, forward=True)
            savefig(fig, "/home/franzefn/Desktop/tmp/tableau_%s_%s_l%i" % (dtype, tr, maxLevel),
                    mpl3d=True)


# Example 8: UQ setting with variance refinement

This example shows how to use the variance refinement method that uses the PCE transformation for variance computation on each subspace.
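The reason the PCE transformation is convenient: for an expansion $$u = \sum_k c_k \varphi_k$$ in an orthonormal basis with $$\varphi_0 \equiv 1$$, the mean is simply $$c_0$$ and the variance is the sum of the squared remaining coefficients. A sketch with hypothetical coefficients:

```python
import math

coeffs = [1.2, 0.5, -0.3, 0.1]  # hypothetical PCE coefficients, c_0 first

mean = coeffs[0]
variance = sum(c * c for c in coeffs[1:])
std = math.sqrt(variance)

print(mean, round(variance, 10))  # 1.2 0.35
```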

    def example8(dist_type="uniform"):
        operation = pysgpp.CombigridOperation.createExpClenshawCurtisPolynomialInterpolation(d, func)

        config = pysgpp.OrthogonalPolynomialBasis1DConfiguration()

        if dist_type == "beta":
            config.polyParameters.type_ = pysgpp.OrthogonalPolynomialBasisType_JACOBI
            config.polyParameters.alpha_ = 5
            config.polyParameters.beta_ = 4

            U = J([Beta(config.polyParameters.alpha_,
                        config.polyParameters.beta_)] * d)
        else:
            config.polyParameters.type_ = pysgpp.OrthogonalPolynomialBasisType_LEGENDRE
            U = J([Uniform(0, 1)] * d)

        basisFunction = pysgpp.OrthogonalPolynomialBasis1D(config)
        basisFunctions = pysgpp.OrthogonalPolynomialBasis1DVector(d, basisFunction)

        q = 3
        operation.getLevelManager().addRegularLevels(q)
        print("Total function evaluations: %i" % operation.numGridPoints())

Compute the variance of the interpolant:

        surrogateConfig = pysgpp.CombigridSurrogateModelConfiguration()
        surrogateConfig.type = pysgpp.CombigridSurrogateModelsType_POLYNOMIAL_CHAOS_EXPANSION
        surrogateConfig.loadFromCombigridOperation(operation)
        surrogateConfig.basisFunction = basisFunction
        pce = pysgpp.createCombigridSurrogateModel(surrogateConfig)

        n = 10000
        values = [g(pysgpp.DataVector(xi)) for xi in U.rvs(n)]
        print("E(u) = %g ~ %g" % (np.mean(values), pce.mean()))
        print("Var(u) = %g ~ %g" % (np.var(values), pce.variance()))


Finally, call the examples:

    #print("Example 1:")
    example1()

    #print("\nExample 2:")
    example2()

    #print("\nExample 3:")
    example3()

    #print("\nExample 4:")
    example4()

    #print("\nExample 5:")
    example5()

    #print("\nExample 6:")
    example6()

    print("\nExample 7:")
    example7(dtype="cc")
    example7(dtype="uniform")
    example7(dtype="l2leja")


    print("\nExample 8:")
    example8(dist_type="beta")
    print("-" * 80)
    example8(dist_type="uniform")