Thank you very much for the additional details. While your request about support of the 2D case is valid and noted for further analysis, let me provide more details about the 1D case.

As you mention, the performance advantages of the Intel(R) MKL Data Fitting routines are observed in the vector case, when the number of interpolation sites is at least, say, several hundred.

We are also aware of the importance of the scalar case for applications, e.g., when the number of interpolation sites is 1 and the number of functions is 1.
For those reasons, in Intel(R) MKL 11.0 Update 2 we improved the performance of the interpolation routines for the scalar case, in particular the speed of search and polynomial computations. Because the interface overheads related to checking of function parameters are visible at small problem dimensions, we also added support of the parameter DF_CHECK_FLAG in the editor dfiEditVal. When this flag is disabled with the editor, the extra parameter checks are skipped; by default the flag is enabled. Please use this flag carefully, only after you have completed development and debugging of your code. See additional details in the Release Notes, http://software.intel.com/en-us/articles/intel-mkl-110-release-notes/, and in the Data Fitting chapter of the Intel(R) MKL Manual. Even so, in the scalar case some interface overhead may still be visible.
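
For illustration only, here is a minimal sketch of how the checks could be switched off once a task has been created and the code has been debugged (the task and status variables are assumed to already exist in your code):

/* Skip the run-time parameter checks in subsequent Data Fitting calls;
   do this only after development and debugging are complete. */
status = dfiEditVal( task, DF_CHECK_FLAG, DF_DISABLE_CHECK_FLAG );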

Have you tried the latest Intel(R) MKL 11.0 to test the performance of the interpolation functions in the scalar case? If yes, did it help?

The API of the Data Fitting component supports different use cases, including parallelization at the level of the user's application.
As your application uses 900 independent splines, I wonder whether you apply parallelization in your interpolation code?

Something like this:

double breaks[900*11];   /* 900 sets of 11 break points, stored contiguously */
double sites[900];       /* one interpolation site per spline                */

#pragma omp parallel for
for ( int i = 0; i < 900; i++ )
{
    /* Each thread creates, uses, and deletes its own task */
    DFTaskPtr task;
    int status;

    /* xhint, ny, y, yhint as in your existing setup */
    status = dfdNewTask1D( &task, 11, &breaks[i*11], xhint, ny, y, yhint );
    status = dfdEditPPSpline1D( ... );
    status = dfiEditVal( task, DF_CHECK_FLAG, DF_DISABLE_CHECK_FLAG );
    status = dfdConstruct1D( task, DF_PP_SPLINE, DF_METHOD_STD );
    status = dfdInterpolate1D( task, DF_INTERP, DF_METHOD_PP, 1, &sites[i], ... );
    status = dfDeleteTask( &task );
}

If this loop is executed inside another loop, the construction (and destruction) of the Data Fitting tasks can be done once, outside of the inner loop. In the inner loop you either provide pointers to the new input parameters (e.g., breaks and/or function values) using the relevant Data Fitting editors, or copy the new values of the input parameters into the same memory as before. A sketch of this approach is shown below.
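
To make this concrete, here is a minimal sketch of that reuse pattern, not a drop-in implementation: it assumes an outer loop of nsteps iterations, cubic splines configured with DF_PP_CUBIC / DF_PP_NATURAL / DF_BC_NOT_A_KNOT (adjust to your actual spline type), and a hypothetical helper get_new_values() standing in for whatever produces your new function values. The names NSPLINES, NX, NY, nsteps, results, and interpolate_all are placeholders, not part of your code. The tasks are created once, and in the inner loop the new values are simply copied into the same memory the tasks already reference, which is the second of the two options above.

#include "mkl.h"

#define NSPLINES 900   /* independent splines, as in your case */
#define NX        11   /* break points per spline              */
#define NY         1   /* one function per spline              */

/* Hypothetical helper: fills the NY*NX new function values of spline i
   at outer iteration t. Replace it with whatever produces your data.  */
void get_new_values( int t, int i, double *y );

void interpolate_all( int nsteps,
                      double breaks[NSPLINES][NX],
                      double sites[NSPLINES],
                      double results[NSPLINES] )
{
    /* Buffers the tasks keep pointing to for their whole lifetime */
    static double y[NSPLINES][NY*NX];
    static double scoeff[NSPLINES][NY*(NX-1)*DF_PP_CUBIC];

    DFTaskPtr task[NSPLINES];
    MKL_INT dorder[1] = { 1 };   /* compute function values only */
    int i, t, status;

    /* Create and configure each task once, outside of the inner loop */
    for ( i = 0; i < NSPLINES; i++ )
    {
        status = dfdNewTask1D( &task[i], NX, breaks[i], DF_NO_HINT,
                               NY, y[i], DF_NO_HINT );
        status = dfdEditPPSpline1D( task[i], DF_PP_CUBIC, DF_PP_NATURAL,
                                    DF_BC_NOT_A_KNOT, 0, DF_NO_IC, 0,
                                    scoeff[i], DF_NO_HINT );
        status = dfiEditVal( task[i], DF_CHECK_FLAG, DF_DISABLE_CHECK_FLAG );
    }

    for ( t = 0; t < nsteps; t++ )           /* outer loop                   */
    {
        #pragma omp parallel for private(status)
        for ( i = 0; i < NSPLINES; i++ )     /* inner loop: no task creation */
        {                                    /* or deletion here             */
            /* Copy the new function values into the same memory as before */
            get_new_values( t, i, y[i] );

            status = dfdConstruct1D( task[i], DF_PP_SPLINE, DF_METHOD_STD );
            status = dfdInterpolate1D( task[i], DF_INTERP, DF_METHOD_PP,
                                       1, &sites[i], DF_NO_HINT,
                                       1, dorder, 0, &results[i],
                                       DF_NO_HINT, 0 );
        }
    }

    for ( i = 0; i < NSPLINES; i++ )
        status = dfDeleteTask( &task[i] );
}

Error checking of the returned status codes is omitted here for brevity; in real code each status should be verified, and the spline type and boundary conditions should match what your application actually uses.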

Please let me know if this helps.
Andrey

