SG++-Doxygen-Documentation
gridConverter.py

This tutorial contains examples of how to convert sparse grids with a hierarchical basis to sparse grids defined on a combination of anisotropic full grids (combination technique).

It covers only grids without boundary points. We distinguish between methods that convert (1) the anisotropic full grids to the hierarchical grids and (2) vice versa:

  • Converting the levels from the combination technique to the hierarchical version is always possible -> convertCombigridToHierarchicalSparseGrid
  • For spatially adaptive sparse grids, some levels may be only partially filled. Partially filled levels are not allowed in the combination technique. We therefore distinguish two cases where we
    • Add all levels for which at least one grid point exists in the hierarchical version -> convertHierarchicalSparseGridToCombigrid with conversion type GridConversionTypes_ALLSUBSPACES
    • Add only those levels for which all grid points exist in the hierarchical version -> convertHierarchicalSparseGridToCombigrid with conversion type GridConversionTypes_COMPLETESUBSPACES
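For orientation (this is not part of the example code), recall that the classical combination technique of level n in d dimensions combines the anisotropic full grids with level sums |l|_1 = n - q, q = 0, ..., d-1, using the weights (-1)^q * binom(d-1, q). A small plain-Python sketch of these weights, independent of pysgpp:

```python
from math import comb

def combi_coefficient(d, q):
    # weight of every anisotropic full grid with level sum |l|_1 = n - q
    return (-1) ** q * comb(d - 1, q)

def total_weight(d, n):
    # the number of anisotropic grids with l_i >= 1 and |l|_1 = n - q
    # is C(n - q - 1, d - 1); the weighted counts always sum to 1,
    # i.e. constant functions are reproduced exactly
    return sum(combi_coefficient(d, q) * comb(n - q - 1, d - 1)
               for q in range(d))

print([combi_coefficient(2, q) for q in range(2)])  # [1, -1]
print(total_weight(3, 5))                           # 1
```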

First, we import the methods/classes we need for this example...

import numpy as np
from argparse import ArgumentParser
import matplotlib.pyplot as plt

from pysgpp import CombigridOperation, multiFunc, DataVector, \
    CombigridMultiOperation, DataMatrix, SurplusRefinementFunctor, \
    Grid, convertCombigridToHierarchicalSparseGrid, \
    convertHierarchicalSparseGridToCombigrid, \
    GridConversionTypes_ALLSUBSPACES, GridConversionTypes_COMPLETESUBSPACES, \
    createOperationHierarchisation, createOperationMultipleEval

... and define the function we want to interpolate. It is a parabola, which is zero for any x_i = 0 or x_i = 1 and evaluates to 1 at x = (0.5, ..., 0.5)^T.

def f(x):
    return np.prod([4 * xi * (1 - xi) for xi in x.array()])
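To see the claimed behaviour, here is a plain-list variant of f (no pysgpp DataVector needed; `f_plain` is our own illustrative name, not part of the example):

```python
import numpy as np

def f_plain(x):
    # same product parabola as f, but taking a plain Python list
    return float(np.prod([4 * xi * (1 - xi) for xi in x]))

print(f_plain([0.0, 0.7]))        # 0.0: vanishes whenever a coordinate is 0 or 1
print(f_plain([0.5, 0.5, 0.5]))   # 1.0: peaks at the center of the unit hypercube
```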

Helper functions

We first define a few functions that remove boilerplate code from the actual examples.

def interpolate(grid, f):
    """
    This helper function computes the coefficients of a sparse grid
    function for a given function.

    Arguments:
    grid -- Grid sparse grid from pysgpp
    f -- function to be interpolated

    Return DataVector coefficients of the sparse grid function
    """
    gs = grid.getStorage()
    alpha = DataVector(gs.getSize())
    p = DataVector(gs.getDimension())
    for i in range(gs.getSize()):
        gs.getCoordinates(gs.getPoint(i), p)
        alpha[i] = f(p)
    # hierarchize the nodal values to obtain the surplus coefficients
    createOperationHierarchisation(grid).doHierarchisation(alpha)
    return alpha

def refineGrid(grid, alpha, f, refnums):
    """
    This function refines a sparse grid function refnums times.

    Arguments:
    grid -- Grid sparse grid from pysgpp
    alpha -- DataVector coefficient vector
    f -- function to be interpolated
    refnums -- int number of refinement steps

    Return nothing
    """
    gs = grid.getStorage()
    gridGen = grid.getGenerator()
    x = DataVector(gs.getDimension())
    for _ in range(refnums):
        # refine a single grid point each time
        gridGen.refine(SurplusRefinementFunctor(alpha, 1))

        # extend alpha vector (new entries uninitialized)
        alpha.resizeZero(gs.getSize())

        # set function values in alpha
        for i in range(gs.getSize()):
            gs.getCoordinates(gs.getPoint(i), x)
            alpha[i] = f(x)

        # hierarchize
        createOperationHierarchisation(grid).doHierarchisation(alpha)

Regular sparse grids to regular combination technique and back

In this example we define a regular sparse grid function with a piecewise $d$-linear hat basis that interpolates the normal parabola. Then, we transform the sparse grid to the corresponding anisotropic grids in the combination technique and vice versa. We evaluate the resulting surrogates at numSamples randomly chosen samples on the unit hypercube and make sure that all of the surrogates we obtain by conversion are equal.

def regularGridToRegularGrid(numDims,
                             level,
                             f,
                             numSamples=1000,
                             plot=False,
                             verbose=False):
    """
    Converts a regular sparse grid function to a sparse grid in the
    combination technique and back.

    Arguments:
    numDims -- int number of dimensions
    level -- int level of the sparse grid
    f -- function to be interpolated
    numSamples -- int number of random samples at which we evaluate the
                  different sparse grid functions to validate the grid conversion
    plot -- bool whether the sparse grid functions are plotted (just for numDims=1)
    verbose -- bool verbosity
    """

We generate iid uniform samples, which we will use to validate the grid conversion.

    x = np.random.rand(numSamples, numDims)
    parameters = DataMatrix(x)

We create a regular sparse grid as usual.

    grid = Grid.createLinearGrid(numDims)
    grid.getGenerator().regular(level)
    alpha = interpolate(grid, f)

We now apply both methods of the grid conversion.

    treeStorage_all = convertHierarchicalSparseGridToCombigrid(
        grid.getStorage(), GridConversionTypes_ALLSUBSPACES)
    treeStorage_complete = convertHierarchicalSparseGridToCombigrid(
        grid.getStorage(), GridConversionTypes_COMPLETESUBSPACES)

Note that we do the conversion based on the grid points alone. With this approach you can easily use different basis function types on the same grid. We initialize the CombigridMultiOperation on a grid that spans the same function space as the original hierarchical sparse grid: hat basis on equidistant grids without boundary points.

    func = multiFunc(f)
    opt_all = CombigridMultiOperation.createExpUniformLinearInterpolation(numDims, func)
    opt_complete = CombigridMultiOperation.createExpUniformLinearInterpolation(numDims, func)

The CombigridMultiOperation expects the points at which you want to evaluate the interpolant as a DataMatrix of shape (numDims x numSamples). We therefore transpose the samples and initialize the multi operation with them. To set the level structure, we initialize the level manager of the operation with the storage we obtained from the conversion.

    parameters.transpose()
    opt_all.setParameters(parameters)
    opt_all.getLevelManager().addLevelsFromStructure(treeStorage_all)
    opt_complete.setParameters(parameters)
    opt_complete.getLevelManager().addLevelsFromStructure(treeStorage_complete)
    # transpose back to (numSamples x numDims) for later use with
    # createOperationMultipleEval
    parameters.transpose()
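The shape convention behind the two transposes can be illustrated with plain numpy (hypothetical sizes, independent of pysgpp):

```python
import numpy as np

numSamples, numDims = 1000, 4
x = np.random.rand(numSamples, numDims)   # samples are stored row-wise

# the combigrid multi operation expects one *column* per evaluation point,
# hence the transpose before setParameters ...
assert x.T.shape == (numDims, numSamples)

# ... and the second transpose restores the row-wise layout that
# createOperationMultipleEval expects later on
assert x.T.T.shape == (numSamples, numDims)
print("shapes ok")
```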

If you want, you can examine the levels of the combination technique...

    if verbose:
        print("-" * 80)
        print("just full levels:")
        print(opt_complete.getLevelManager().getSerializedLevelStructure())
        print("-" * 80)
        print("all levels:")
        print(opt_all.getLevelManager().getSerializedLevelStructure())
        print("-" * 80)

We now transform the grids from the combination technique back to their hierarchical formulation. We again create a grid with a piecewise $d$-linear basis and initialize the grid points in its storage with the ones available in the levels of the combination technique. We do this first for the combination grids that contain only those levels where the original sparse grid had complete subspaces...

    grid_complete = Grid.createLinearGrid(numDims)
    treeStorage_complete = opt_complete.getLevelManager().getLevelStructure()
    convertCombigridToHierarchicalSparseGrid(treeStorage_complete, grid_complete.getStorage())

... and do the same for the version where we considered all subspaces where at least one grid point was located.

    grid_all = Grid.createLinearGrid(numDims)
    treeStorage_all = opt_all.getLevelManager().getLevelStructure()
    convertCombigridToHierarchicalSparseGrid(treeStorage_all, grid_all.getStorage())

We now interpolate f on the new grids and...

    alpha_complete = interpolate(grid_complete, f)
    alpha_all = interpolate(grid_all, f)

... evaluate all the surrogate functions we have so far

    y_sg_regular = DataVector(numSamples)
    createOperationMultipleEval(grid, parameters).eval(alpha, y_sg_regular)

    y_sg_all = DataVector(numSamples)
    createOperationMultipleEval(grid_all, parameters).eval(alpha_all, y_sg_all)

    y_sg_complete = DataVector(numSamples)
    createOperationMultipleEval(grid_complete, parameters).eval(alpha_complete, y_sg_complete)

    y_ct_all = opt_all.getResult()
    y_ct_complete = opt_complete.getResult()

For convenience we use flattened numpy arrays to test if the function values at each point are the same.

    y_sg_regular = y_sg_regular.array().flatten()
    y_ct_all = y_ct_all.array().flatten()
    y_ct_complete = y_ct_complete.array().flatten()
    y_sg_all = y_sg_all.array().flatten()
    y_sg_complete = y_sg_complete.array().flatten()

If you want, you can plot the results if the problem is one-dimensional.

    if plot and numDims == 1:
        x = x.flatten()
        ixs = np.argsort(x)
        plt.figure()
        plt.plot(x[ixs], y_sg_regular[ixs], label="sg regular")
        plt.plot(x[ixs], y_sg_all[ixs], label="sg all")
        plt.plot(x[ixs], y_ct_complete[ixs], label="ct full")
        plt.plot(x[ixs], y_ct_all[ixs], label="ct all")
        plt.legend()
        plt.show()

All the function values should be equal and...

    assert np.sum((y_ct_complete - y_ct_all) ** 2) < 1e-14
    assert np.sum((y_ct_complete - y_sg_regular) ** 2) < 1e-14
    assert np.sum((y_sg_regular - y_sg_all) ** 2) < 1e-14
    assert np.sum((y_sg_regular - y_sg_complete) ** 2) < 1e-14

... the grid sizes should be equal as well.

    assert grid_complete.getSize() == grid.getSize()
    assert grid_all.getSize() == grid.getSize()

Spatially adaptive sparse grids to regular combination technique and back

In this example we define a spatially adaptive sparse grid function with a piecewise d-linear hat basis that interpolates the normal parabola. Then, we transform the sparse grid to the corresponding anisotropic grids in the combination technique and vice versa. We evaluate the resulting surrogates at numSamples randomly chosen samples on the unit hypercube. This time, the results differ depending on which conversion algorithm we use and on the completeness of the subspaces.

def adaptiveGridToRegularGrid(numDims,
                              level,
                              refnums,
                              f,
                              numSamples=1000,
                              plot=False,
                              verbose=False):
    """
    Converts a spatially adaptive sparse grid function to a sparse grid
    in the combination technique and back.

    Arguments:
    numDims -- int number of dimensions
    level -- int level of the sparse grid
    refnums -- int number of refinement steps
    f -- function to be interpolated
    numSamples -- int number of random samples at which we evaluate the
                  different sparse grid functions to validate the grid conversion
    plot -- bool whether the sparse grid functions are plotted (just for numDims=1)
    verbose -- bool verbosity
    """

We generate iid uniform samples, which we will use to validate the grid conversion.

    x = np.random.rand(numSamples, numDims)
    parameters = DataMatrix(x)

We create a regular sparse grid as usual and...

    grid = Grid.createLinearGrid(numDims)
    grid.getGenerator().regular(level)
    alpha = interpolate(grid, f)

... refine it adaptively

    grid_adaptive = grid.clone()
    alpha_adaptive = DataVector(alpha)
    refineGrid(grid_adaptive, alpha_adaptive, f, refnums)

We now apply both methods of the grid conversion to the adaptively refined grid. The first conversion considers all levels where at least one sparse grid point exists, while the second one considers just complete subspaces.

    treeStorage_all = convertHierarchicalSparseGridToCombigrid(
        grid_adaptive.getStorage(), GridConversionTypes_ALLSUBSPACES)
    treeStorage_complete = convertHierarchicalSparseGridToCombigrid(
        grid_adaptive.getStorage(), GridConversionTypes_COMPLETESUBSPACES)

We initialize the CombigridMultiOperation on a grid that spans the same function space as the original hierarchical sparse grid: hat basis on equidistant grids without boundary points.

    func = multiFunc(f)
    opt_all = CombigridMultiOperation.createExpUniformLinearInterpolation(numDims, func)
    opt_complete = CombigridMultiOperation.createExpUniformLinearInterpolation(numDims, func)

The CombigridMultiOperation expects the points at which you want to evaluate the interpolant as a DataMatrix of shape (numDims x numSamples). We therefore transpose the samples and initialize the multi operation with them. To set the level structure, we initialize the level manager of the operation with the storage we obtained from the conversion.

    parameters.transpose()
    opt_all.setParameters(parameters)
    opt_all.getLevelManager().addLevelsFromStructure(treeStorage_all)
    opt_complete.setParameters(parameters)
    opt_complete.getLevelManager().addLevelsFromStructure(treeStorage_complete)
    # transpose back to (numSamples x numDims) for later use with
    # createOperationMultipleEval
    parameters.transpose()

If you want, you can examine the levels of the combination technique...

    if verbose:
        print("-" * 80)
        print("just full levels:")
        print(opt_complete.getLevelManager().getSerializedLevelStructure())
        print("-" * 80)
        print("all levels:")
        print(opt_all.getLevelManager().getSerializedLevelStructure())
        print("-" * 80)

We now transform the grids from the combination technique back to their hierarchical formulation. We again create a grid with a piecewise d-linear basis and initialize the grid points in its storage with the ones available in the levels of the combination technique. We do this first for the combination grids that contain only those levels where the original sparse grid had complete subspaces...

    grid_complete = Grid.createLinearGrid(numDims)
    treeStorage_complete = opt_complete.getLevelManager().getLevelStructure()
    convertCombigridToHierarchicalSparseGrid(treeStorage_complete, grid_complete.getStorage())

... and do the same for the version where we considered all subspaces where at least one grid point was located.

    grid_all = Grid.createLinearGrid(numDims)
    treeStorage_all = opt_all.getLevelManager().getLevelStructure()
    convertCombigridToHierarchicalSparseGrid(treeStorage_all, grid_all.getStorage())

We now interpolate f on the new grids and...

    alpha_complete = interpolate(grid_complete, f)
    alpha_all = interpolate(grid_all, f)

... evaluate all the surrogate functions we have so far

    y_sg_regular = DataVector(numSamples)
    createOperationMultipleEval(grid, parameters).eval(alpha, y_sg_regular)

    y_sg_adaptive = DataVector(numSamples)
    createOperationMultipleEval(grid_adaptive, parameters).eval(alpha_adaptive, y_sg_adaptive)

    y_sg_all = DataVector(numSamples)
    createOperationMultipleEval(grid_all, parameters).eval(alpha_all, y_sg_all)

    y_sg_complete = DataVector(numSamples)
    createOperationMultipleEval(grid_complete, parameters).eval(alpha_complete, y_sg_complete)

    y_ct_all = opt_all.getResult()
    y_ct_complete = opt_complete.getResult()

For convenience we use flattened numpy arrays to test if the function values at each point are the same.

    y_sg_regular = y_sg_regular.array().flatten()
    y_sg_adaptive = y_sg_adaptive.array().flatten()
    y_ct_all = y_ct_all.array().flatten()
    y_ct_complete = y_ct_complete.array().flatten()
    y_sg_all = y_sg_all.array().flatten()
    y_sg_complete = y_sg_complete.array().flatten()

If you want, you can plot the results if the problem is one-dimensional.

    if plot and numDims == 1:
        x = x.flatten()
        ixs = np.argsort(x)
        plt.figure()
        plt.plot(x[ixs], y_sg_regular[ixs], label="sg regular")
        plt.plot(x[ixs], y_sg_adaptive[ixs], label="sg adaptive")
        plt.plot(x[ixs], y_ct_complete[ixs], label="ct full")
        plt.plot(x[ixs], y_ct_all[ixs], label="ct all")
        plt.plot(x[ixs], y_sg_complete[ixs], label="sg full")
        plt.plot(x[ixs], y_sg_all[ixs], label="sg all")
        plt.legend()
        plt.show()

The function values should no longer all be equal if...

    if grid_complete.getSize() < grid_all.getSize():
        assert np.sum((y_ct_complete - y_ct_all) ** 2) > 1e-14
        assert np.sum((y_sg_regular - y_sg_all) ** 2) > 1e-14

and should be equivalent if...

    if grid_complete.getSize() == grid.getSize():
        assert np.sum((y_ct_complete - y_sg_regular) ** 2) < 1e-14
        assert np.sum((y_sg_complete - y_sg_regular) ** 2) < 1e-14

For the grid sizes it must hold that

    assert grid_adaptive.getSize() > grid.getSize()
    assert grid_complete.getSize() <= grid_adaptive.getSize()
    assert grid_all.getSize() >= grid.getSize()

How to start the examples

You simply specify the parameters via the command line and run the example:

if __name__ == '__main__':
    parser = ArgumentParser(description='Get a program and run it with input')
    parser.add_argument('--version', action='version', version='%(prog)s 1.0')
    parser.add_argument('--numDims', default=4, type=int,
                        help='number of dimensions')
    parser.add_argument('--level', default=4, type=int,
                        help='sparse grid level')
    parser.add_argument('--refnums', default=0, type=int,
                        help='number of refinement steps')
    parser.add_argument('--plot', default=False, action='store_true',
                        help='plot stuff')
    parser.add_argument('--verbose', default=False, action='store_true',
                        help='verbosity')
    args = parser.parse_args()

    # select the right conversion method based on the input parameters
    if args.refnums == 0:
        regularGridToRegularGrid(args.numDims,
                                 args.level,
                                 f,
                                 plot=args.plot,
                                 verbose=args.verbose)
    else:
        adaptiveGridToRegularGrid(args.numDims,
                                  args.level,
                                  args.refnums,
                                  f,
                                  plot=args.plot,
                                  verbose=args.verbose)