A new module has been added in pyfaust version 3.1.x: `poly`. As its name suggests, it is dedicated to Fausts defined as series of polynomials.

In this notebook we'll see how to use the main functions of this module, then we'll introduce one specific use case: the action of the matrix exponential on a vector or a matrix.

**NOTE**: all the functions introduced in this notebook are available on GPU, using the `dev='gpu'` argument.

First, the `poly` module allows you to define a polynomial basis (a `FaustPoly`) using the function `pyfaust.poly.basis`. Currently only Chebyshev polynomials are supported, but others are yet to come. Below is the prototype of the function:

`basis(L, K, basis_name, dev='cpu', T0=None)`

Next we'll look at a simple example, but feel free to consult the documentation by typing `help(pyfaust.poly.basis)` for more details.

For instance, if you want to instantiate a basis Faust of dimension K+1 (in the example below, K=5) on L, which, by the way, must be a square `scipy.sparse.csr_matrix` (and most likely symmetric positive definite in many use cases), you make this call to the function:

In [1]:

```
from pyfaust.poly import basis
from scipy.sparse import random
d = 128
L = random(d, d, .2, format='csr')
L = L@L.T
K = 5
F = basis(L, K=K, basis_name='chebyshev')
F
```

Out[1]:

Faust size 768x128, density 0.855713, nnz_sum 84120, 6 factor(s):
- FACTOR 0 (real) SPARSE, size 768x640, density 0.0347493, nnz 17080
- FACTOR 1 (real) SPARSE, size 640x512, density 0.0517334, nnz 16952
- FACTOR 2 (real) SPARSE, size 512x384, density 0.0855713, nnz 16824
- FACTOR 3 (real) SPARSE, size 384x256, density 0.16984, nnz 16696
- FACTOR 4 (real) SPARSE, size 256x128, density 0.501709, nnz 16440
- FACTOR 5 (real) SPARSE, size 128x128, density 0.0078125, nnz 128 identity matrix flag

As you can see, the last factor is followed by the mention `identity matrix flag`, which means that this factor is the identity matrix. This is not surprising, because the 0-degree Chebyshev polynomial is the identity. However, note that the optional `T0` argument of the function lets you seed the basis with a matrix other than the identity. Even though the result might then not be a proper basis, this can be useful if you want to apply the basis to a vector or a matrix (you set `T0` to that vector/matrix instead of multiplying the basis by it).

So how should we understand this Faust? You can see it as a vertical concatenation of polynomials: the basis is composed of K+1 polynomials, with the 0-degree polynomial at the top of the stack (i.e. `F[:d,:]` is the identity):

In [2]:

```
F[:d,:].toarray()
```

Out[2]:

```
array([[1., 0., 0., ..., 0., 0., 0.],
       [0., 1., 0., ..., 0., 0., 0.],
       [0., 0., 1., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 1., 0., 0.],
       [0., 0., 0., ..., 0., 1., 0.],
       [0., 0., 0., ..., 0., 0., 1.]])
```

This 0-degree polynomial is followed by the polynomials of increasing degree: `F[d:d*2, :]` is the 1-degree polynomial, `F[d*2:d*3, :]` is the 2-degree polynomial, and so on.

For details about the Chebyshev polynomials, including their definition by a recurrence relation (which is used here behind the scenes), you can look at the Wikipedia article on Chebyshev polynomials.
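To make the recurrence concrete, here is a minimal NumPy sketch, independent of pyfaust and only an illustration of the principle: it builds the stacked basis $T_0(L), \ldots, T_K(L)$ from $T_0 = I$, $T_1 = L$ and $T_k = 2 L T_{k-1} - T_{k-2}$.

```python
import numpy as np

def chebyshev_stack(L, K):
    """Stack T_0(L), ..., T_K(L) vertically, via the Chebyshev recurrence
    T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2}."""
    d = L.shape[0]
    T = [np.eye(d), L.copy()]              # T_0 and T_1
    for _ in range(2, K + 1):
        T.append(2 * L @ T[-1] - T[-2])    # recurrence step
    return np.vstack(T[:K + 1])

# Tiny check on a 3x3 symmetric matrix: T_2(L) = 2 L^2 - I
L = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])
S = chebyshev_stack(L, 2)
assert np.allclose(S[:3], np.eye(3))                # T_0 = I
assert np.allclose(S[3:6], L)                       # T_1 = L
assert np.allclose(S[6:9], 2 * L @ L - np.eye(3))   # T_2 = 2 L^2 - I
```

pyfaust performs the same kind of computation but keeps the factors sparse, which is what makes the Faust representation compact.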

One of the most important things to note about a polynomial basis Faust is that multiplication by a vector or a matrix is specialized in order to outperform the generic Faust-vector/matrix multiplication. Indeed, the particular structure of the polynomial basis Faust makes an optimized multiplication possible.

Let's verify that this is true! In the code below, F is copied into a classic Faust G, and the time of a multiplication by a matrix is measured in both cases (with F, the basis Faust, and with G, its classic Faust copy). Note that the `Faust.clone` function is not used here, because it would return another polynomial basis Faust (after all, the role of `clone` is to preserve the Faust properties!). To get a classic Faust, the only way is to copy the Faust factor by factor.

In [3]:

```
from numpy.random import rand
from pyfaust import Faust
factors = [F.factors(i) for i in range(F.numfactors())]
G = Faust(factors)
X = rand(F.shape[1],100)
%timeit F@X
%timeit G@X
```

Now let's verify the multiplication result is accurate:

In [4]:

```
from numpy.linalg import norm
print("err=", norm(F@X-G@X)/norm(F@X))
```

err= 1.226853939966969e-16

As you can see, it's all right.

The second function of the `pyfaust.poly` module is `poly`. Its purpose is to compute a linear combination of the polynomials composing a `FaustPoly` (it can also be viewed as a way to evaluate a polynomial). So, passing the `FaustPoly` and the linear combination coefficients (one per polynomial, in ascending degree order), you'll obtain:

In [5]:

```
from pyfaust.poly import poly
from numpy import array
coeffs = rand(K+1)*100
lc_F = poly(coeffs, F)
lc_F
```

Out[5]:

Faust size 128x128, density 5.18115, nnz_sum 84888, 7 factor(s):
- FACTOR 0 (real) SPARSE, size 128x768, density 0.0078125, nnz 768
- FACTOR 1 (real) SPARSE, size 768x640, density 0.0347493, nnz 17080
- FACTOR 2 (real) SPARSE, size 640x512, density 0.0517334, nnz 16952
- FACTOR 3 (real) SPARSE, size 512x384, density 0.0855713, nnz 16824
- FACTOR 4 (real) SPARSE, size 384x256, density 0.16984, nnz 16696
- FACTOR 5 (real) SPARSE, size 256x128, density 0.501709, nnz 16440
- FACTOR 6 (real) SPARSE, size 128x128, density 0.0078125, nnz 128 identity matrix flag

To be explicit about `lc_F`, let's show how to rebuild it manually using G (which, again, is a classic Faust equal to F).

In [6]:

```
lc_G = coeffs[0]*G[:d,:]
for i in range(1, K+1):
    lc_G += coeffs[i]*G[d*i:d*(i+1),:]
print("error lc_G/lc_F:", (lc_F-lc_G).norm()/(lc_G).norm())
```

error lc_G/lc_F: 4.461651547604153e-16

Here again, the `FaustPoly` operation is optimized compared to the `Faust` one. Speaking of which, there are ways to do even better, because the `poly` function is somewhat agnostic to the type of its basis argument: it accepts either a `FaustPoly` or a `numpy.ndarray`. With the latter, an optimized implementation is used whose memory footprint is smaller than the one consumed with a `FaustPoly`. This can be particularly efficient when the use case (as we'll see in part 3) consists of applying a linear combination of F to a vector x, as shown below.

In [7]:

```
x = rand(F.shape[1])
way1 = lambda: poly(coeffs, F)@x    # first way, as done above
way2 = lambda: poly(coeffs, F@x)    # second way (quicker)
way3 = lambda: poly(coeffs, F, X=x) # third way (quicker still)
```

In [8]:

```
%timeit way1()
```

124 µs ± 4.63 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [9]:

```
%timeit way2()
```

96.3 µs ± 1.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [10]:

```
%timeit way3()
```

78.8 µs ± 467 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Depending on the situation, `way2` or `way3` might be the quickest, but both are always quicker than `way1`.

Just in case, let's verify that all the ways give the same results.

In [11]:

```
print("err way2 =", norm(poly(coeffs, F)@x-poly(coeffs, F@x))/norm(poly(coeffs, F)@x))
```

err way2 = 1.2851224259408018e-16

In [12]:

```
print("err way3 =", norm(poly(coeffs, F)@x-poly(coeffs, F, X=x))/norm(poly(coeffs, F)@x))
```

err way3 = 1.2851224259408018e-16

All looks good! We shall now introduce one use case of Chebyshev polynomial series in FAµST that yields interesting results compared to what we can do in the numpy/scipy ecosystem. One last thing to note before moving on to part 3: the `poly` function is a little more complicated than it looks; for more details, consult the API documentation.

Next, we'll see how to compute the action of the matrix exponential on a vector x. This time we'll compare with the scipy function `scipy.sparse.linalg.expm_multiply`. Both functions are intended to compute the action of the matrix exponential on a vector or matrix. Recalling that this consists of computing $exp(t A)x$ without computing the exponential directly, let's compare the usage, performance and accuracy of these functions.

The main difference between the two is that in pyfaust the time points are passed directly as a list to the function, while the scipy version accepts only `np.linspace`-style arguments (defining the points as a regular grid).
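As a quick reminder of what "action of the exponential" means, here is a small scipy-only sanity check (independent of pyfaust): for each time point t, `expm_multiply` returns $exp(t A)x$ without ever forming the dense exponential.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Toy problem: a small symmetric sparse matrix and a vector
A = csr_matrix(np.array([[1., .5], [.5, 1.]]))
x = np.array([1., 2.])

# scipy's linspace-style interface: 4 points between -0.5 and -0.2
pts = np.linspace(-.5, -.2, 4)
Y = expm_multiply(A, x, start=-.5, stop=-.2, num=4, endpoint=True)

# Each row Y[i] equals expm(pts[i] * A) @ x, but expm_multiply
# never builds the dense matrix exponential
for t, y in zip(pts, Y):
    assert np.allclose(y, expm(t * A.toarray()) @ x)
```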

In [13]:

```
from scipy.sparse.linalg import expm_multiply as scipy_expm_multiply
from pyfaust.poly import expm_multiply
from numpy import exp, linspace
from numpy.linalg import norm
from numpy.random import rand
from scipy.sparse import random
S = random(1024, 1024, .002, format='csr')
A = S@S.T
X = rand(A.shape[1], 1)
pts_args = {'start':-.5, 'stop':-.2, 'num':1000}
pts = linspace(**pts_args)
y1 = expm_multiply(A, X, pts)
y2 = scipy_expm_multiply(A, X, **pts_args)
print("pyfaust error:", norm(y1-y2)/norm(y2))
%timeit expm_multiply(A, X, pts)
%timeit scipy_expm_multiply(A, X, **pts_args)
```

The accuracy is rather good for `pyfaust`, but note that its `expm_multiply` implementation has some drawbacks. You'll find them, among other details, in the API documentation.

Thanks for reading this notebook! Many others are available at faust.inria.fr.

**Note**: this notebook was executed using the following pyfaust version:

In [14]:

```
import pyfaust
pyfaust.version()
```

Out[14]:

'3.1.0'