This notebook is intended to gently introduce the operations available to manipulate a Faust object. It comes after the first notebook (available here for download or here as a web page), so it's assumed you already know how to create a Faust object one way or another.

Keep in mind that a full API doc is available here every time you need details. In particular the Faust class is documented here.

**NOTE**: the notebook is made to be executed sequentially; otherwise, if you skip some cells, you may end up with an import error.

**1. Getting Basic Information about a Faust Object**

1.1 Obtaining Dimension and Scalar Type Information

1.2 Obtaining Other Faust Specific Information

1.3 Plotting a Faust

1.4 About Sparsity!

**2. Faust Algebra and other Operations**

2.1 Transpose, conjugate, transconjugate

2.2 Add, Subtract and Multiply

2.3 Faust Multiplication by a Vector or a Matrix

2.4 Faust Norms

2.5 Faust Normalizations

2.6 Faust Concatenation

2.7 Faust Indexing and Slicing

First of all, given any object, you might wonder whether it's a Faust or not (typically when a function receives an object as an argument; Python being dynamically typed, you can't say for sure it's a Faust). Faust.isFaust() is the function to verify that an object is a Faust. Its use is straightforward, as you can see in the documentation. Note, by the way, that a more accessible alias is available at the package root (pyfaust.isFaust).

Now, let's list the basic informative methods/attributes of a Faust that you're probably used to with numpy arrays:

To keep it really clear, let's show some examples performed on a random Faust.

In [1]:

```
from pyfaust import rand
F = rand(5,10)
print(F)
```

In [2]:

```
F.shape
```

Out[2]:

(5, 10)

In [3]:

```
F.size
```

Out[3]:

50

In [4]:

```
F.dtype
```

Out[4]:

dtype('float64')

If the attributes printed above are not clear to you, you're probably not a numpy user. In any case, you'll find all the descriptive information in the FAµST API documentation (see the links above).

As a complement, you can also refer to the numpy API documentation:

About shape, it's noteworthy that contrary to what numpy is capable of, you *cannot* reshape a Faust.

As you've seen in this notebook and the first one, when you print a Faust object, several pieces of information appear. Most of them are also available independently with specific functions.

For instance, if you want information about factors, nothing is more simple than calling directly the next functions:

- numfactors(), which gives you the number of factors (aka layers) a Faust object is composed of.
- factors(), which allows you to copy any of the Faust's factors given its index.

Going back to our F object, let's call these functions:

In [5]:

```
F.numfactors()
```

Out[5]:

5

For example, try to copy the third factor:

In [6]:

```
f3 = F.factors(2)
f3
```

Out[6]:

<8x7 sparse matrix of type '<class 'numpy.float64'>' with 40 stored elements in Compressed Sparse Row format>

Note that, since FAµST 2.3, the function doesn't alter the factor format. If the Faust object contains a sparse factor then you'll receive a sparse (CSR) matrix.

Since FAµST 2.3 again, it's possible to retrieve a sub-sequence of Faust factors.

Go straight to the example, extracting factors from F:

In [7]:

```
F.factors(range(2,4))
```

Out[7]:

Faust size 8x5, density 1.875, nnz_sum 75, 2 factor(s):
- FACTOR 0 (double) SPARSE, size 8x7, density 0.714286, nnz 40
- FACTOR 1 (double) SPARSE, size 7x5, density 1, nnz 35

Hmm... something is different from the previous example. We indeed received a Faust as the return type, great! You've just learned another way to create a Faust from an existing one, in addition to what you've seen in the first notebook.

Without this function, you'd surely have written something similar to:

In [8]:

```
from pyfaust import Faust
F2 = Faust([F.factors(2), F.factors(3)])
F2
```

Out[8]:

OK, that's not awful, but imagine how much more complicated it gets with more factors (even with a list comprehension, it's longer to write).

It's quite useful to print a Faust as we've seen before, calling `print(F)`, `F.display()`, or just `F` in an interactive terminal, but this is wordy. How about plotting a Faust in a more graphical fashion?

In [9]:

```
import matplotlib.pyplot as plt
%matplotlib inline
F.imshow()
plt.show()
```

What do we see above? On the bottom right is the dense matrix associated with F, obtained with `F.toarray()`. On the top are the indexed factors of F. Note that you can change the default colormap in matplotlib's parameters.

Let's look at a last example:

In [10]:

```
from pyfaust import Faust
from numpy import eye
Faust([eye(5,4),eye(4,10)]).imshow()
```

The dimension ratio of the factors is respected in the figure.

Three functions of the Faust class are here to evaluate the sparsity of a Faust object.

Let's call the first one:

In [11]:

```
F.nnz_sum()
```

Out[11]:

170

I'm sure you guessed exactly what the function returns; if in doubt, here is the doc: Faust.nnz_sum(). The smaller `nnz_sum`, the sparser the Faust.

Next comes the function: Faust.density().

This function along with its reciprocal Faust.rcg() can give you a big hint on how much your Faust is potentially optimized both for storage and calculation. The sparser the Faust, the larger the Relative Complexity Gain (RCG)!
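
To make these definitions concrete, here is a small pyfaust-free sketch computing the density and RCG by hand for a hypothetical factorization into three sparse factors (the sizes and densities are illustrative, not pyfaust's defaults):

```python
import numpy as np
from scipy.sparse import random as sprand

# Hypothetical example: a 100x100 matrix factorized into 3 sparse factors
# of density 0.05 each.
factors = [sprand(100, 100, density=0.05, format='csr', random_state=i)
           for i in range(3)]
nnz_sum = sum(f.nnz for f in factors)   # what Faust.nnz_sum() would count
density = nnz_sum / (100 * 100)         # what Faust.density() would return
rcg = 1.0 / density                     # what Faust.rcg() would return
print(nnz_sum, density, rcg)
```

A density below 1 (RCG above 1) means the factorized form stores fewer entries than the dense matrix itself.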

In [12]:

```
F.density()
```

Out[12]:

3.4

In [13]:

```
F.rcg()
```

Out[13]:

0.29411764705882354

According to its RCG, this Faust doesn't seem to be of any help for optimization but look at the graphic the next script generates:

In [14]:

```
from pyfaust import Faust, rand
import numpy as np
import os
import matplotlib.pyplot as plt
nfactors = 3
startd = 0.01
endd = 1
dim_sz = 1000
sizes = []
rcs = []
ntests = 10
for i, d in enumerate(np.linspace(startd, endd, ntests)):
    F = rand(dim_sz, dim_sz, nfactors, density=d, fac_type='sparse')
    filepath = 'test_faust_size' + str(i) + '.mat'
    F.save(filepath)
    stat = os.stat(filepath)
    sizes.append(stat.st_size)
    rcs.append(F.density())
    os.remove(filepath)
plt.title('File Size vs Density for Pure-Sparse Fausts \n('+str(nfactors)+' '
'random factors '+str(dim_sz)+'x'+str(dim_sz)+')')
plt.xlabel('Density')
plt.ylabel('File Size (bytes)')
plt.rcParams['figure.figsize'] = [8.0, 6]
plt.tight_layout()
plt.plot(rcs, sizes)
plt.show()
```

Isn't it obvious now that the smaller the density the better? Indeed, for two Faust objects of the same shape and the same number of factors, a smaller density (linearly) implies a smaller file size for storage. The same goes for the memory (RAM) needed to work with a Faust.

We'll see later how computation can benefit from a larger RCG (or smaller density). But let's point out right now that, when it comes to matrix factorizations, sparsity is often a tradeoff with accuracy, as the following plot shows for the truncated SVD of a matrix $M \in {\mathbb R}^{512 \times 512}$. Note beforehand that the SVD of M (truncated or not) can be seen as a Faust `Faust([U, S, Vt])` which approximates M.

Here is the script to reproduce the last figure: test_svd_rc_vs_err.py

In order to write some nice algorithms using Faust objects, you'll have to use the basic "stable" operations a Faust is capable of. Let's make a tour of them.

You are probably familiar with T and H attributes/properties from numpy/scipy. Well, they are also used in the Faust class.

In [15]:

```
G = rand(10, 15, dim_sizes=[10,15],field='complex')
```

In [16]:

```
G.T
```

Out[16]:

Faust size 15x10, density 1.88, nnz_sum 282, 5 factor(s):
- FACTOR 0 (complex) SPARSE, size 15x10, density 0.333333, nnz 50
- FACTOR 1 (complex) SPARSE, size 10x12, density 0.5, nnz 60
- FACTOR 2 (complex) SPARSE, size 12x13, density 0.333333, nnz 52
- FACTOR 3 (complex) SPARSE, size 13x14, density 0.384615, nnz 70
- FACTOR 4 (complex) SPARSE, size 14x10, density 0.357143, nnz 50

In [17]:

```
G.conj()
```

Out[17]:

Faust size 10x15, density 1.88, nnz_sum 282, 5 factor(s):
- FACTOR 0 (complex) SPARSE, size 10x14, density 0.357143, nnz 50
- FACTOR 1 (complex) SPARSE, size 14x13, density 0.384615, nnz 70
- FACTOR 2 (complex) SPARSE, size 13x12, density 0.333333, nnz 52
- FACTOR 3 (complex) SPARSE, size 12x10, density 0.5, nnz 60
- FACTOR 4 (complex) SPARSE, size 10x15, density 0.333333, nnz 50

In [18]:

```
G.H
```

Out[18]:

What really matters here is that the results of `G.T`, `G.conj()` and `G.H` are all Faust objects. Behind the scenes, there is just one memory zone allocated for the factors. Strictly speaking, they are memory views shared between `G`, `G.T` and `G.H`. So don't hesitate to use them!

In [19]:

```
F = rand(G.shape[0], G.shape[0])
G = rand(G.shape[0], G.shape[0], field='complex')
F+G
```

Out[19]:

Faust size 10x10, density 5.4, nnz_sum 540, 7 factor(s):
- FACTOR 0 (complex) SPARSE, size 10x20, density 0.1, nnz 20
- FACTOR 1 (complex) SPARSE, size 20x20, density 0.25, nnz 100
- FACTOR 2 (complex) SPARSE, size 20x20, density 0.25, nnz 100
- FACTOR 3 (complex) SPARSE, size 20x20, density 0.25, nnz 100
- FACTOR 4 (complex) SPARSE, size 20x20, density 0.25, nnz 100
- FACTOR 5 (complex) SPARSE, size 20x20, density 0.25, nnz 100
- FACTOR 6 (complex) DENSE, size 20x10, density 0.1, nnz 20

Go ahead and verify it's accurate.

In [20]:

```
from numpy.linalg import norm
norm((F+G).toarray()-(F.toarray()+G.toarray()))
```

Out[20]:

0.0

Some points are noticeable here:

- F is real (dtype float), but G is complex. The Faust API is able to return the proper type for the resulting Faust, that is a complex Faust.
- F+G is composed of 7 factors, whereas F and G are both 5 factors long. This is due to the way the addition is implemented (a Faust concatenation is hiding behind it).
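
To see why concatenation comes into play, here is a pyfaust-free numpy sketch of the underlying trick: the sum of two matrices can be written as their horizontal concatenation multiplied by vertically stacked identity matrices, hence the extra factors (the sketch shows the idea, not pyfaust's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 6))
B = rng.random((4, 6))

# A + B rewritten as a product: [A, B] concatenated horizontally,
# then multiplied by two identities stacked vertically.
I = np.eye(6)
S = np.hstack([A, B]) @ np.vstack([I, I])
assert np.allclose(S, A + B)
```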

Subtracting is not different:

In [21]:

```
F-G
```

Out[21]:

You can also add/subtract scalars to Faust objects.

In [22]:

```
F+2
```

Out[22]:

Faust size 10x10, density 3.4, nnz_sum 340, 7 factor(s):
- FACTOR 0 (double) SPARSE, size 10x20, density 0.1, nnz 20
- FACTOR 1 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 2 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 3 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 4 (double) SPARSE, size 20x11, density 0.272727, nnz 60
- FACTOR 5 (double) SPARSE, size 11x20, density 0.272727, nnz 60
- FACTOR 6 (double) DENSE, size 20x10, density 0.1, nnz 20

Note that here again `(F+2).numfactors() != F.numfactors()`.

Conversely, you can also add/subtract a Faust to/from a scalar (pay attention to the order of operands):

In [23]:

```
2+F
```

Out[23]:

In [24]:

```
2-F
```

Out[24]:

The FAµST API equally supports Faust-array addition and subtraction.

In [25]:

```
F.toarray()+F # or F+F.toarray()
```

Out[25]:

Faust size 10x10, density 4.3, nnz_sum 430, 7 factor(s):
- FACTOR 0 (double) SPARSE, size 10x20, density 0.1, nnz 20
- FACTOR 1 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 2 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 3 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 4 (double) SPARSE, size 20x20, density 0.15, nnz 60
- FACTOR 5 (double) SPARSE, size 20x20, density 0.375, nnz 150
- FACTOR 6 (double) DENSE, size 20x10, density 0.1, nnz 20

In [26]:

```
F-F.toarray() # F.toarray()-F is supported too
```

Out[26]:

In [27]:

```
from numpy import zeros, allclose
allclose((F.toarray()-F).toarray(), zeros(F.shape))
```

Out[27]:

True

Now let's multiply these Fausts!

In [28]:

```
FG = F@G
```

In [29]:

```
norm(FG.toarray()-F.toarray()@G.toarray())/norm(F.toarray()@G.toarray())
```

Out[29]:

1.357283865229532e-16

The relative error proves it's working. FG is a Faust too!

In [30]:

```
Faust.isFaust(FG)
```

Out[30]:

True

Faust scalar multiplication is also available and here again the result is a Faust object!

In [31]:

```
F*2 # or 2*F
```

Out[31]:

Faust size 10x10, density 2.5, nnz_sum 250, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 10x10, density 0.5, nnz 50
- FACTOR 1 (double) SPARSE, size 10x10, density 0.5, nnz 50
- FACTOR 2 (double) SPARSE, size 10x10, density 0.5, nnz 50
- FACTOR 3 (double) SPARSE, size 10x10, density 0.5, nnz 50
- FACTOR 4 (double) SPARSE, size 10x10, density 0.5, nnz 50

When you multiply a Faust by a vector or a matrix (whose number of rows must match the Faust's number of columns), you'll get respectively a vector or a matrix as the result.

In [32]:

```
from numpy.random import rand as nrand
vec = nrand(F.shape[1],1)
F@vec # or F.dot(vec)
```

Out[32]:

array([[39.78053759],
       [47.00405106],
       [36.81261006],
       [30.23773499],
       [49.56504215],
       [49.29512613],
       [57.56344501],
       [58.59859226],
       [37.88441204],
       [26.8221804 ]])

Let's launch a timer to compare the execution times of the Faust-vector multiplication and the corresponding dense matrix-vector multiplication.

Note that applying a Faust to a `numpy.ndarray` can also be done with the function dot().

In [33]:

```
timeit F@vec
```

12.2 µs ± 181 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

In [34]:

```
FD = F.toarray()
```

In [35]:

```
timeit FD@vec
```

1.72 µs ± 40.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

In [36]:

```
from numpy import allclose
allclose(F@vec,FD@vec)
```

Out[36]:

True

In [37]:

```
F.rcg()
```

Out[37]:

0.4

When the RCG is lower than 1 the Faust-vector multiplication is slower. Making a random Faust with a large RCG (small density) shows better results.

In [38]:

```
G = rand(1024, 1024, num_factors=3, density=.001, fac_type='sparse')
GD = G.toarray()
vec2 = nrand(1024, 1)
```

In [39]:

```
timeit G@vec2
```

20.8 µs ± 992 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [40]:

```
timeit GD.dot(vec2)
```

513 µs ± 118 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [41]:

```
G.rcg()
```

Out[41]:

341.3333333333333

It goes without saying that a big RCG gives a big speedup to the Faust-vector multiplication relative to the corresponding (dense) matrix-vector multiplication. Hopefully the example above has convinced you.

Just to convince you as well of the Faust-vector multiplication's accuracy:

In [42]:

```
from numpy.linalg import norm
norm(G@vec2 - GD@vec2)
```

Out[42]:

3.245120319717064e-16

What applies to Faust-vector multiplication remains valid about Faust-matrix multiplication. Take a look:

In [43]:

```
M = np.asfortranarray(nrand(1024,32))
```

**NOTE**: numpy.asfortranarray is used to convert the array from row-major to column-major order. Indeed, pyfaust works on matrices in column-major storage, so it's faster to multiply a matrix that is directly set up in this format. Otherwise the conversion is made on the fly in the multiplication (which is slower than doing it once and for all if you repeat the computation).
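
As a quick pyfaust-free check, `asfortranarray` changes only the memory layout, not the values (the flag names are numpy's; `Fa` is just a local name for the converted copy):

```python
import numpy as np

A = np.random.rand(4, 3)     # C-contiguous (row-major) by default
Fa = np.asfortranarray(A)    # column-major copy of the same values
assert A.flags['C_CONTIGUOUS'] and not A.flags['F_CONTIGUOUS']
assert Fa.flags['F_CONTIGUOUS']
assert np.array_equal(A, Fa)  # only the storage order differs
```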

In [44]:

```
timeit G@M
```

304 µs ± 17.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [45]:

```
timeit GD@M
```

2.47 ms ± 169 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [46]:

```
norm(GD@M-G@M)/norm(GD@M)
```

Out[46]:

1.1192152490222897e-16

Well, what do we see? A quicker Faust-matrix multiplication than the corresponding matrix-matrix multiplication, while the good accuracy of the Faust-matrix multiplication is also clearly confirmed.

These examples are somewhat theoretical because we cherry-picked the Faust to ensure the RCG is good enough to accelerate the multiplication, but at least they show the potential speedup of using Faust objects.

The Faust class provides a norm function which handles different types of norms. This function is really close to the `numpy.linalg.norm` function.

In the following example, three of the four available norms are computed.

In [47]:

```
F.norm(1)
```

Out[47]:

162.5500816011077

In [48]:

```
F.norm(np.inf)
```

Out[48]:

110.03788525553506

In [49]:

```
F.norm('fro')
```

Out[49]:

92.07431619126986

Now, check that the values are not far from those of the Faust's dense matrix.

In [50]:

```
from numpy.linalg import norm
norm(F.toarray(),1)
```

Out[50]:

162.55008160110765

In [51]:

```
norm(F.toarray(), np.inf)
```

Out[51]:

110.03788525553506

In [52]:

```
norm(F.toarray(), 'fro')
```

Out[52]:

92.07431619126986

Perfect! But one last norm is available: the Faust's 2-norm. The next small benchmark shows how the Faust 2-norm is computed faster than the dense matrix's 2-norm.

In [53]:

```
timeit -n 10 G.norm(2)
```

808 µs ± 124 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [54]:

```
timeit -n 10 norm(GD,2) # sorry, it could be slow
```

445 ms ± 74.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

The power-iteration algorithm implemented in the FAµST C++ core is faster on G, and the relative error is not bad either. The norm computation is faster as it benefits from the faster Faust-vector multiplication.

In [55]:

```
err = abs((G.norm(2)-norm(GD,2))/norm(GD,2))
err
```

Out[55]:

0.0012471190552867473
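
For the curious, here is a minimal power-iteration sketch in plain numpy (a simplified illustration of the idea, not FAµST's actual C++ implementation). Applied to a Faust, each iteration would only cost cheap Faust-vector products, which is where the speedup comes from:

```python
import numpy as np

def norm2_power_iteration(A, n_iter=500, seed=0):
    """Estimate ||A||_2 (largest singular value) by power iteration on A^T A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        x = A.T @ (A @ x)          # on a Faust, two fast Faust-vector products
        x /= np.linalg.norm(x)     # renormalize to avoid overflow
    return np.linalg.norm(A @ x)   # ~ largest singular value

A = np.random.default_rng(1).standard_normal((50, 40))
est = norm2_power_iteration(A)
ref = np.linalg.norm(A, 2)
print(abs(est - ref) / ref)  # small relative error
```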

The FAµST API provides a group of normalizations, corresponding to the norms discussed above.

It's possible to normalize along columns or rows with any of these norms.

In [56]:

```
F = rand(5,10)
NF = F.normalize()
```

The API doc is here.

What's interesting here is that `Faust.normalize` returns a Faust object. Combined with slicing (which we will see soon), `normalize` is useful to write algorithms such as Orthogonal Matching Pursuit (OMP), which require matrices with L2-normalized columns, in a way that lets them leverage the acceleration offered by the FAµST API.

The normalization coded in C++ is memory optimized: it never builds the dense matrix `F.toarray()` to compute the norms of the columns/rows. In the same spirit, the factors composing the Faust object NF are not duplicated in memory from the factors of F; they are used as is, with an additional factor giving the appropriate scaling.
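
The "one additional scaling factor" idea can be sketched in plain numpy: column 2-norm normalization is the original matrix times a diagonal factor of inverse column norms (an illustration of the principle, not pyfaust's exact internals):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 8))

# Diagonal factor holding the inverse column 2-norms.
D = np.diag(1.0 / np.linalg.norm(A, axis=0))
NA = A @ D  # normalized matrix = original factors + one scaling factor
assert np.allclose(np.linalg.norm(NA, axis=0), 1.0)
```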

In [57]:

```
F
```

Out[57]:

Faust size 5x10, density 3.2, nnz_sum 160, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 5x9, density 0.555556, nnz 25
- FACTOR 1 (double) SPARSE, size 9x8, density 0.625, nnz 45
- FACTOR 2 (double) SPARSE, size 8x5, density 1, nnz 40
- FACTOR 3 (double) SPARSE, size 5x6, density 0.666667, nnz 20
- FACTOR 4 (double) SPARSE, size 6x10, density 0.5, nnz 30

In [58]:

```
NF
```

Out[58]:

In [59]:

```
NF.factors(4)
```

Out[59]:

<6x10 sparse matrix of type '<class 'numpy.float64'>' with 30 stored elements in Compressed Sparse Row format>

In [60]:

```
cumerr = 0
for i in range(F.shape[1]):
    cumerr += norm(NF.toarray()[:,i] - F.toarray()[:,i]/norm(F.toarray()[:,i]))
cumerr
```

Out[60]:

1.394878514319214e-15

And as you see it works!

You're probably aware of numpy array concatenation; otherwise, look at this example.

In [61]:

```
from numpy.random import rand as nrand
from numpy import eye, concatenate
M = nrand(5,5)
I = eye(5,5)
concatenate((M,I))
```

Out[61]:

array([[0.87862715, 0.85313881, 0.10985621, 0.58527366, 0.49914055],
       [0.93390403, 0.73894959, 0.27517946, 0.76649485, 0.92851079],
       [0.61050343, 0.78967785, 0.56667302, 0.60019295, 0.79245053],
       [0.53023317, 0.67287818, 0.63191559, 0.32523208, 0.17308587],
       [0.08121924, 0.23552087, 0.74639429, 0.85229953, 0.14574267],
       [1.        , 0.        , 0.        , 0.        , 0.        ],
       [0.        , 1.        , 0.        , 0.        , 0.        ],
       [0.        , 0.        , 1.        , 0.        , 0.        ],
       [0.        , 0.        , 0.        , 1.        , 0.        ],
       [0.        , 0.        , 0.        , 0.        , 1.        ]])

In [62]:

```
# it was vertical concatenation, now let's concatenate horizontally
concatenate((M,I), axis=1)
```

Out[62]:

array([[0.87862715, 0.85313881, 0.10985621, 0.58527366, 0.49914055, 1., 0., 0., 0., 0.],
       [0.93390403, 0.73894959, 0.27517946, 0.76649485, 0.92851079, 0., 1., 0., 0., 0.],
       [0.61050343, 0.78967785, 0.56667302, 0.60019295, 0.79245053, 0., 0., 1., 0., 0.],
       [0.53023317, 0.67287818, 0.63191559, 0.32523208, 0.17308587, 0., 0., 0., 1., 0.],
       [0.08121924, 0.23552087, 0.74639429, 0.85229953, 0.14574267, 0., 0., 0., 0., 1.]])

I'm sure you guessed that likewise you can concatenate Faust objects. That's right!

In [63]:

```
F.concatenate(F)
```

Out[63]:

Faust size 10x10, density 3.4, nnz_sum 340, 6 factor(s):
- FACTOR 0 (double) SPARSE, size 10x18, density 0.277778, nnz 50
- FACTOR 1 (double) SPARSE, size 18x16, density 0.3125, nnz 90
- FACTOR 2 (double) SPARSE, size 16x10, density 0.5, nnz 80
- FACTOR 3 (double) SPARSE, size 10x12, density 0.333333, nnz 40
- FACTOR 4 (double) SPARSE, size 12x20, density 0.25, nnz 60
- FACTOR 5 (double) SPARSE, size 20x10, density 0.1, nnz 20

In [64]:

```
C = F.concatenate(F, axis=1)
```

In [65]:

```
C.toarray()-concatenate((F.toarray(), F.toarray()),axis=1)
```

Out[65]:

array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

The difference between the two concatenations is full of zeros, so of course it works!

As you noticed, the Faust concatenation is stable: you give two Fausts and you get a Faust again. Besides, it's possible to concatenate an arbitrary number of Faust objects.

In [66]:

```
F.concatenate(F, C, C, F, axis=1)
```

Out[66]:

Faust size 5x70, density 3.37143, nnz_sum 1180, 7 factor(s):
- FACTOR 0 (double) SPARSE, size 5x25, density 0.2, nnz 25
- FACTOR 1 (double) SPARSE, size 25x35, density 0.04, nnz 35
- FACTOR 2 (double) SPARSE, size 35x63, density 0.0793651, nnz 175
- FACTOR 3 (double) SPARSE, size 63x56, density 0.0892857, nnz 315
- FACTOR 4 (double) SPARSE, size 56x35, density 0.142857, nnz 280
- FACTOR 5 (double) SPARSE, size 35x42, density 0.0952381, nnz 140
- FACTOR 6 (double) SPARSE, size 42x70, density 0.0714286, nnz 210

As an exercise, you can write down the factors of the Faust `F.concatenate(F)`, F being any Faust.

**Hint**: block-diagonal matrices are involved.

Sometimes you need to access the dense matrix corresponding to a Faust or an element of it (by the way, note that it's costly).

Let's access a Faust item:

In [67]:

```
F[2,3]
```

Out[67]:

2.614726285842496

Why is it costly? Because it essentially converts the Faust to its dense form (modulo some optimizations) to just access one item.

In [68]:

```
timeit F[2,3]
```

6.68 µs ± 111 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

In [69]:

```
timeit FD[2,3]
```

110 ns ± 2.54 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

It's exactly the same syntax as numpy, but much slower, so use it with care.

The more advanced slicing operation also uses the same syntax as numpy:

In [70]:

```
F[2:5, 3:10]
```

Out[70]:

Faust size 3x7, density 6.7619, nnz_sum 142, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 3x9, density 0.555556, nnz 15
- FACTOR 1 (double) SPARSE, size 9x8, density 0.625, nnz 45
- FACTOR 2 (double) SPARSE, size 8x5, density 1, nnz 40
- FACTOR 3 (double) SPARSE, size 5x6, density 0.666667, nnz 20
- FACTOR 4 (double) SPARSE, size 6x7, density 0.52381, nnz 22

Here again, the result is another Faust. But this is not a full copy: it takes advantage of memory views implemented behind the scenes in C++. Only the first and last factors of the sliced Faust are new in memory; the others are just referenced from the initial Faust F. So use it without worry, even for a Faust with a lot of factors!

Numpy's fancy indexing has also been implemented in the FAµST C++ core; let's try it:

In [71]:

```
FI = F[[1,3,2],:]
Faust.isFaust(FI)
```

Out[71]:

True

Again, it's a Faust but is it really working? Verify!

In [72]:

```
(FI.toarray()[0] == F.toarray()[1]).all() and \
(FI.toarray()[1] == F.toarray()[3]).all() and \
(FI.toarray()[2] == F.toarray()[2]).all()
```

Out[72]:

True

Yes it is!

As a complement about Faust indexing, you can take a look at this FAQ entry. It clarifies the use of the indexing form F[I, J] (I and J being lists of indices). Actually, this indexing method is not implemented in pyfaust; the FAQ entry explains why and suggests another form of indexing that it is easy to confuse with.
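
To illustrate the confusion the FAQ warns about, compare numpy's two behaviors in a pure-numpy example (the index lists are illustrative):

```python
import numpy as np

A = np.arange(16).reshape(4, 4)
I, J = [1, 3], [0, 2]

pairs = A[I, J]          # picks the element *pairs* (1,0) and (3,2)
sub = A[np.ix_(I, J)]    # picks the *submatrix* at rows I, columns J
print(pairs)             # [ 4 14]
print(sub)               # [[ 4  6]
                         #  [12 14]]
```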

This is the notebook's end; you now have a global view of what the Faust class is able to do and what kind of high-level algorithms it is ready for. You might be interested in reading the other notebooks; just go back to the page where you got this one.

**Note**: this notebook was executed using the following pyfaust version:

In [73]:

```
import pyfaust
pyfaust.version()
```

Out[73]:

'3.27.0'