SG++-Doxygen-Documentation

We compute the sparse grid interpolant of the function $$f(x_0, x_1) = \sin(\pi x_0).$$ We perform spatially-dimension-adaptive refinement of the sparse grid model, which means we refine a particular grid point (locality) only in some dimensions (dimensionality).

For details on spatially-dimension-adaptive refinement see

  V. Khakhutskyy and M. Hegland: Spatially-Dimension-Adaptive Sparse Grids for Online Learning.
In J. Garcke and D. Pflüger (eds.), Sparse Grids and Applications - Stuttgart 2014, Volume 109 of LNCSE, pp. 133-162. Springer International Publishing, March 2016.

The example can be found in the file predictiveRefinement.py.

# import modules
import sys
import math

from pysgpp import *
import matplotlib.pyplot as plotter
from mpl_toolkits.mplot3d import Axes3D

Spatially-dimension-adaptive refinement uses the squared prediction error on a dataset to compute refinement indicators. Hence, we first define a function that computes these squared errors.

def calculateError(dataSet, f, grid, alpha, error):
    print("calculating error")
    # traverse dataSet
    vec = DataVector(2)
    opEval = createOperationEval(grid)
    for i in range(dataSet.getNrows()):
        dataSet.getRow(i, vec)
        error[i] = pow(f(dataSet.get(i, 0), dataSet.get(i, 1)) - opEval.eval(alpha, vec), 2)
    return error
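The same computation can be sketched without pysgpp in a few lines; here squared_errors and model are illustrative names (model stands in for opEval.eval(alpha, vec)), not part of the SG++ API:

```python
import math

# Sketch of the squared-error indicator: for each data point, compare the
# true function value with the model's prediction and square the difference.
def squared_errors(points, f, model):
    return [(f(x0, x1) - model(x0, x1)) ** 2 for (x0, x1) in points]

f = lambda x0, x1: math.sin(math.pi * x0)
model = lambda x0, x1: 0.0  # e.g. the interpolant before any grid points exist
errs = squared_errors([(0.5, 0.5), (0.25, 0.75)], f, model)
# errs[0] = (sin(pi/2) - 0)^2 = 1.0; errs[1] = (sin(pi/4) - 0)^2 = 0.5
```

Points with large squared error are exactly those where refinement pays off most, which is why the indicator below is driven by this error vector.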

We define the function $$f(x_0, x_1) = \sin(\pi x_0)$$ to interpolate.

f = lambda x0, x1: math.sin(x0*math.pi)

Create a two-dimensional grid with modified piecewise linear basis functions (ModLinearGrid).

dim = 2
grid = Grid.createModLinearGrid(dim)
HashGridStorage = grid.getStorage()
print("dimensionality: {}".format(dim))

# create regular grid of level 1
level = 1
gridGen = grid.getGenerator()
gridGen.regular(level)
print("number of initial grid points: {}".format(HashGridStorage.getSize()))

To create a dataset, we use points on a regular 2D grid with step sizes of 1/rows and 1/cols.

rows = 100
cols = 100

dataSet = DataMatrix(rows*cols, dim)
vals = DataVector(rows*cols)

for i in range(rows):
    for j in range(cols):
        # x coordinate
        dataSet.set(i*cols + j, 0, i*1.0/rows)
        # y coordinate
        dataSet.set(i*cols + j, 1, j*1.0/cols)
        vals[i*cols + j] = f(i*1.0/rows, j*1.0/cols)

We refine adaptively 20 times. In every step we recompute the vector of surpluses alpha and the vector of squared errors on the dataset errorVector, and then call the refinement routines.

# create coefficient vector
alpha = DataVector(HashGridStorage.getSize())
print("length of alpha vector: {}".format(alpha.getSize()))

# now refine adaptively 20 times
for refnum in range(20):

Step 1: calculate the surplus vector alpha. In data mining we do this by solving a regression problem, as shown in the Classification Example. Here, however, the function can be evaluated at any point. Hence, we simply evaluate it at the coordinates of the grid points to obtain the nodal values. Then we use hierarchization to obtain the surplus values.

    for i in range(HashGridStorage.getSize()):
        gp = HashGridStorage.getPoint(i)
        alpha[i] = f(gp.getStandardCoordinate(0), gp.getStandardCoordinate(1))

    # hierarchize
    createOperationHierarchisation(grid).doHierarchisation(alpha)
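What hierarchization does can be illustrated in one dimension without pysgpp: each surplus is the nodal value minus the contribution of all coarser-level hat functions at that point. The following is a minimal sketch with illustrative names (hat, hierarchize), not the SG++ implementation, for a standard linear basis without boundary points:

```python
import math

def hat(l, i, x):
    # standard hat basis function of level l centered at i * 2^(-l)
    return max(0.0, 1.0 - abs(x * (2 ** l) - i))

def hierarchize(f, max_level):
    # surplus = nodal value minus interpolant of all coarser levels at that point
    surpluses = {}  # (level, index) -> surplus
    for l in range(1, max_level + 1):
        for i in range(1, 2 ** l, 2):
            x = i * 2.0 ** (-l)
            coarse = sum(v * hat(lp, ip, x) for (lp, ip), v in surpluses.items())
            surpluses[(l, i)] = f(x) - coarse
    return surpluses

f = lambda x: math.sin(math.pi * x)
s = hierarchize(f, 2)
# s[(1, 1)] = f(0.5) = 1.0; s[(2, 1)] = f(0.25) - 0.5 * f(0.5)
```

For smooth functions these surpluses decay quickly with the level, which is what makes surplus-based and error-based refinement indicators work.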

Step 2: calculate squared errors.

    errorVector = DataVector(dataSet.getNrows())
    calculateError(dataSet, f, grid, alpha, errorVector)

Step 3: call the refinement routines. PredictiveRefinement implements the decorator pattern and extends the functionality of HashRefinement. PredictiveRefinement requires a special kind of refinement functor, PredictiveRefinementIndicator, that can access the dataset and the error vector. The refinement itself is performed by calling free_refine(), just as for normal refinement with HashRefinement.

    # refinement stuff
    refinement = HashRefinement()
    decorator = PredictiveRefinement(refinement)
    # refine a single grid point each time
    print("Error over all = %s" % errorVector.sum())
    indicator = PredictiveRefinementIndicator(grid, dataSet, errorVector, 1)
    decorator.free_refine(HashGridStorage, indicator)

    print("Refinement step %d, new grid size: %d" % (refnum+1, HashGridStorage.getSize()))

    # extend alpha vector (new entries are set to zero)
    alpha.resizeZero(HashGridStorage.getSize())
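The decorator pattern mentioned above can be sketched in plain Python; the class and method names here are illustrative stand-ins, not the pysgpp API. The decorator exposes the same free_refine interface as the object it wraps, so it can adjust behavior (e.g. how candidates are scored) while delegating the actual refinement:

```python
class BaseRefinement:
    # stand-in for HashRefinement: refine the best-scored candidate
    def free_refine(self, storage, indicator):
        best = max(storage, key=indicator)
        storage.append(best + 1)  # stand-in for inserting children grid points

class RefinementDecorator:
    # stand-in for PredictiveRefinement: same interface, wraps another refiner
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def free_refine(self, storage, indicator):
        # a real decorator could pre-process the indicator here, then delegate
        self._wrapped.free_refine(storage, indicator)

storage = [1, 2, 3]
RefinementDecorator(BaseRefinement()).free_refine(storage, lambda p: p)
# storage is now [1, 2, 3, 4]
```

This is why decorator.free_refine(...) in the listing above behaves like HashRefinement's free_refine, only with the error-based candidate scoring supplied by PredictiveRefinementIndicator.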

The output of the program should look like this:

 dimensionality:                   2
number of initial grid points:    1
length of alpha vector:           1
calculating error
Error over all = 2268.65176743
Refinement step 1, new grid size: 3
calculating error
Error over all = 264.089889373
Refinement step 2, new grid size: 5
calculating error
Error over all = 125.377807448
Refinement step 3, new grid size: 7
calculating error
Error over all = 3.48358931549
Refinement step 4, new grid size: 9
calculating error
Error over all = 1.99756786008
Refinement step 5, new grid size: 11
calculating error
Error over all = 0.845349319849
Refinement step 6, new grid size: 13
calculating error
Error over all = 0.464096272026
Refinement step 7, new grid size: 15
calculating error
Error over all = 0.0828432242032
Refinement step 8, new grid size: 17
calculating error
Error over all = 0.0828432242032
Refinement step 9, new grid size: 19
calculating error
Error over all = 0.0689760107187
Refinement step 10, new grid size: 21
calculating error
Error over all = 0.0551671985084
Refinement step 11, new grid size: 23
calculating error
Error over all = 0.0413583862982
Refinement step 12, new grid size: 25
calculating error
Error over all = 0.0330228853
Refinement step 13, new grid size: 27
calculating error
Error over all = 0.0230577647698
Refinement step 14, new grid size: 29
calculating error
Error over all = 0.0130926442396
Refinement step 15, new grid size: 31
calculating error
Error over all = 0.00856834486343
Refinement step 16, new grid size: 33
calculating error
Error over all = 0.00404404548722
Refinement step 17, new grid size: 35
calculating error
Error over all = 0.00404404548722
Refinement step 18, new grid size: 37
calculating error
Error over all = 0.00404404548722
Refinement step 19, new grid size: 41
calculating error
Error over all = 0.00404404548722
Refinement step 20, new grid size: 45