A new module named poly has been added to matfaust in version 3.1.x. As its name suggests, it is dedicated to Fausts defined as series of polynomials.

In this notebook we'll see how to use the main functions of this module, then we'll introduce one concrete use case: computing the action of the matrix exponential on a vector/matrix.

NOTE: all the functions introduced in this notebook are available on GPU by passing the 'dev', 'gpu' key-value argument pair.

Firstly, the poly module allows you to define a polynomial basis (aka a FaustPoly) using the function matfaust.poly.basis. Currently, only Chebyshev polynomials are supported, but others are yet to come. Below is the prototype of the function:

basis(L, K, basis_name, varargin)

NOTE: varargin contains the optional T0 argument; we'll come back to this point later.

Next, we'll go through a simple example, but feel free to type help matfaust.poly.basis to get more details (alternatively, see the online documentation).

For instance, if you want to instantiate a basis Faust of dimension K+1 (in the example below, K=5) on L, which must be a square sparse matrix (and most likely symmetric positive definite in most use cases), you'll make this call to the function:

import matfaust.poly.basis

d = 128;

L = sprand(d, d, .2);

L = L*L';

K = 5;

F = basis(L, K, 'chebyshev')

As you can see, the last factor is displayed with the mention "identity matrix flag", meaning that this factor is the identity matrix. This is not surprising, because the 0-degree Chebyshev polynomial is the identity. However, note that the optional T0 argument of the function lets you substitute another matrix for the identity. Even if the result might then not be a proper basis, this can be useful if you want to apply the basis to a vector or a matrix (in that case you set T0 to this vector/matrix instead of multiplying the basis by it).
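As a sketch of this T0 trick (assuming T0 is passed as a key-value pair, as in the varargin mentioned above; check help matfaust.poly.basis for the exact convention), one could build the basis already applied to a vector x:

```matlab
import matfaust.poly.basis

d = 128;
L = sprand(d, d, .2);
L = L*L';
x = rand(d, 1);
% replace the default identity T0 by the (sparse) vector x: the resulting
% object then represents the basis already applied to x, so no extra
% basis-times-vector multiplication is needed afterwards
Fx = basis(L, 5, 'chebyshev', 'T0', sparse(x));
size(Fx) % (5+1)*d rows, 1 column
```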

So how should we understand this Faust? You can see it as a vertical concatenation of polynomials. Indeed, the basis is composed of K+1 polynomials, the 0-degree polynomial is at the top of the stack (i.e. F(1:d,:) is the identity):

full(F(1:d, :))

This first 0-degree polynomial is followed by the next degree polynomials: hence F(d+1:d*2, :) is the 1-degree polynomial, F(d*2+1:d*3, :) is the 2-degree polynomial and so on...

For details about the Chebyshev polynomials, including their definition by a recurrence relationship (that is used here behind the scenes), you can look at this Wikipedia article.
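To make the recurrence concrete, here is a sketch that rebuilds the polynomial blocks manually, assuming the plain three-term recurrence T0 = I, T1 = L, T_{k+1} = 2*L*T_k - T_{k-1} on L without rescaling (if the implementation normalizes L internally, the comparison would differ):

```matlab
import matfaust.poly.basis

d = 128;
L = sprand(d, d, .2);
L = L*L';
K = 5;
F = basis(L, K, 'chebyshev');
% rebuild the Chebyshev polynomials with the recurrence
T = {speye(d), L}; % T0 = I, T1 = L
for k=3:K+1
    T{k} = 2*L*T{k-1} - T{k-2}; % T_{k+1} = 2*L*T_k - T_{k-1}
end
% compare each manually computed polynomial to the matching block of F
for k=1:K+1
    err = norm(full(F(d*(k-1)+1:d*k, :)) - full(T{k}))/norm(full(T{k}));
    fprintf('degree %d relative error: %g\n', k-1, err);
end
```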

One of the most important things to note about a polynomial basis Faust is that multiplication by a vector or a matrix is specialized to outperform the generic Faust-vector/matrix multiplication. Indeed, due to the particular structure of the polynomial basis Faust, the multiplication can be optimized.

Let's verify that this is true! In the code below, F is copied to a classic Faust G and the time of the multiplication by a matrix is measured in both cases (with F, the basis Faust, and G, its classic Faust copy). Note that the matfaust.Faust.clone function is not used because in this case it would return a polynomial basis Faust (after all, that's the role of clone: to preserve the Faust properties!). To get a classic Faust, the only way is to copy the Faust factor by factor.

facs = {};

for i=1:numfactors(F)

facs = [facs {factors(F,i)}];

end

G = matfaust.Faust(facs);

X = rand(size(F, 2), 100);

timeit(@() F*X)

ans = 0.0132

timeit(@() G*X)

ans = 0.0221

Now let's verify the multiplication result is accurate:

err = norm(F*X-G*X)/norm(F*X)

err = 2.7722e-17

As you can see, the error is negligible.

The second function of the matfaust.poly module is poly. This function's purpose is to compute a linear combination of the polynomials composing a polynomial basis / FaustPoly (it can also be viewed as a way to evaluate a polynomial). So, passing the FaustPoly F and the linear combination coefficients (one per polynomial, in ascending degree order), you'll obtain:

import matfaust.poly.poly

coeffs = rand(K+1, 1)*100;

lc_F = poly(coeffs, F)

To be explicit about lc_F, let's show how to rebuild it manually using G (which, again, is a classic Faust equal to F).

lc_G = coeffs(1)*G(1:d,:)

for i=2:K+1

lc_G = lc_G+coeffs(i)*G(d*(i-1)+1:d*i, :);

end

error = norm(full(lc_F)-full(lc_G))/norm(full(lc_G))

error = 1.0137e-16

Here again, the FaustPoly operation is optimized compared to the plain Faust one. Speaking of which, there are ways to optimize even further, because the poly function is somewhat agnostic to the type of its basis argument: precisely, it accepts either a FaustPoly or a full array. With the latter, an optimized implementation is used whose memory footprint is smaller than the one consumed with a FaustPoly. This can be particularly efficient when the use case (as we'll see in part 3) consists of applying a linear combination of F to a vector x, as shown below.

x = rand(size(F, 2), 1); % a vector (note the second argument: rand(n) alone would return an n-by-n matrix)

way1 = @() poly(coeffs, F)*x; % first way to do as we've done above

way2 = @() poly(coeffs, F*x); % second way to do (quicker than way1)

way3 = @() poly(coeffs, F, 'X', x); % third way to do (quicker than way1 too)

timeit(way1)

ans = 0.0337

timeit(way2)

ans = 0.0130

timeit(way3)

ans = 0.0255

Depending on the situation, way2 or way3 might be the quickest, but both are always quicker than way1.

Just in case, let's verify that all three ways give the same result.

err_way2 = norm(poly(coeffs,F)*x-poly(coeffs,F*x))/norm(poly(coeffs, F)*x)

err_way2 = 2.6314e-17

err_way3 = norm(poly(coeffs, F)*x-poly(coeffs, F, 'X', x))/norm(poly(coeffs, F)*x)

err_way3 = 2.6314e-17

Everything looks good! We shall now introduce a use case of Chebyshev polynomial series in FAµST that allows us to obtain interesting results. One last thing to note before moving on to part 3: the poly function is a little more complicated than it looks; for more details, I invite you to consult the API documentation.

Next, we'll see how to compute the action of the matrix exponential on a vector x. This function was originally developed in pyfaust, inspired by the scipy.sparse.linalg.expm_multiply function. For a computation time comparison, please consult the pyfaust Jupyter notebook.

Recall that this consists of computing exp(tA)x without computing the exponential directly; let's use the function.

import matfaust.poly.expm_multiply

S = sprand(1024, 1024, .002);

A = S*S'; % A must be symmetric positive definite

X = rand(size(A, 2), 1);

pts = linspace(-.5, -.2, 1000); % all t must be negative

y = expm_multiply(A, X, pts)

For more details about the use and the limitations of expm_multiply please see the API documentation.
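As a quick sanity check (a sketch on a small problem, assuming expm_multiply accepts a single time point; otherwise pass a one-element vector), one can compare the result against MATLAB's dense expm, which is affordable at this size:

```matlab
import matfaust.poly.expm_multiply

d = 64;
S = sprand(d, d, .1);
A = S*S'; % symmetric positive (semi)definite
x = rand(d, 1);
t = -.5; % remember: t must be negative
y = expm_multiply(A, x, t);
% dense reference: compute the matrix exponential explicitly
y_ref = expm(t*full(A))*x;
relative_error = norm(y(:) - y_ref)/norm(y_ref)
```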

This live script has been executed using the following version of matfaust:

matfaust.version()