How to Manipulate a Faust

This notebook is intended to gently introduce the operations available to manipulate a Faust object. It comes after the first notebook (available here for download or here as a web page), so it's assumed you already know how to create a Faust object in one way or another.

Keep in mind that the full API documentation is available here whenever you need details. In particular, the Faust class is documented here.

NOTE: this notebook is meant to be executed sequentially; if you skip some cells, you may end up with an import error.

Table of Contents:

1. Getting Basic Information about a Faust Object
1.1 Obtaining Dimension and Scalar Type Information
1.2 Obtaining Other Faust Specific Information
1.3 Plotting a Faust
1.4 About Sparsity!

2. Faust Algebra and other Operations
2.1 Transpose, conjugate, transconjugate
2.2 Add, Subtract and Multiply
2.3 Faust Multiplication by a Vector or a Matrix
2.4 Faust Norms
2.5 Faust Normalizations
2.6 Faust Concatenation
2.7 Faust Indexing and Slicing

1. Getting Basic Information about a Faust Object

First of all, given any object, you might ask yourself whether it's a Faust or not (typically when you receive an object in a function; Python being dynamically typed, you can't say for sure it's a Faust). Faust.isFaust() is the function to verify that an object is a Faust. Its use is straightforward, as you can see in the documentation. Note, by the way, that a more accessible alias is available at the package root (pyfaust.isFaust).
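For instance, here is a minimal sketch (assuming pyfaust is installed; pyfaust.rand is only used to build a random Faust to test):

```python
import numpy as np
from pyfaust import rand as frand, isFaust

F = frand(50, 100)          # a random Faust (assumed signature: num_rows, num_cols)
print(isFaust(F))           # True
print(isFaust(np.eye(5)))   # False: a numpy array is not a Faust
```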

1.1 Obtaining Dimension and Scalar Type Information

Firstly, let's list the basic informative methods/attributes of a Faust that you're probably used to with numpy arrays: shape, size, ndim and dtype.

To keep it really clear, let's show some examples performed on a random Faust.
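Here is a sketch of the kind of cell you could run (the printed values will differ since the Faust is random; the pyfaust.rand keyword arguments are assumed from the current API):

```python
from pyfaust import rand as frand

F = frand(50, 100, num_factors=5, density=.1)
print(F.shape)   # (50, 100), as for a numpy array
print(F.size)    # number of entries of the equivalent dense matrix
print(F.ndim)    # always 2: a Faust is a 2D object
print(F.dtype)   # scalar type of the entries (e.g. float64)
```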

If the attributes printed above are not clear to you, you're probably not a numpy user. In any case, you'll find all the descriptive information in the FAµST API documentation (see the links above).

As a complement, you can also refer to the numpy API documentation:

Regarding shape, it's noteworthy that, contrary to what numpy offers, you cannot reshape a Faust.

1.2 Obtaining Other Faust Specific Information

As you've seen in this notebook and the first one, when you print a Faust object, several pieces of information appear. Most of them are also available independently with specific functions.

For instance, if you want information about the factors, nothing is simpler than calling the following functions directly:

Going back to our F object, let's call these functions:
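For example, a quick sketch reusing the random Faust F built above:

```python
print(F.numfactors())   # the number of factors of F
print(F.factors(0))     # the first factor (indices start at 0)
```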

For example, try to copy the third factor:
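A sketch of what such a cell could look like:

```python
fac3 = F.factors(2)     # the third factor is at index 2
print(type(fac3))       # a scipy.sparse CSR matrix if the factor is stored as sparse
print(fac3)
```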

Note that, since FAµST 2.3, the function doesn't alter the factor format. If the Faust object stores a factor as sparse, then you'll receive a sparse (CSR) matrix.

Since FAµST 2.3 again, it's possible to retrieve a sub-sequence of Faust factors.

Let's go straight to the example, extracting a few factors from F:
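A sketch, assuming your pyfaust version lets Faust.factors take a sequence of indices (as in recent releases):

```python
subF = F.factors(range(2, 5))   # factors 2 to 4 returned together
print(subF)
```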

Hmm... something is different from the previous example. We indeed received a Faust as the return type, great! You've just learned another way to create a Faust from an existing one, in addition to what you've seen in the first notebook.

Without this function, you'd surely have written something similar to:
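For example, something along these lines, rebuilding a Faust by hand from individually extracted factors:

```python
from pyfaust import Faust

# the "manual" way: extract each factor, then rebuild a Faust from them
subF2 = Faust([F.factors(2), F.factors(3), F.factors(4)])
print(subF2)
```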

OK, that's not awful, but I'll let you imagine how much more complicated it gets with more factors (even with a list comprehension, it's longer to write).

1.3 Plotting a Faust

It's quite useful to print a Faust as we've seen before, by calling print(F), F.display() or just F in an interactive terminal, but this output is wordy. How about plotting a Faust in a more graphical fashion?
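A sketch using Faust.imshow (which relies on matplotlib):

```python
import matplotlib.pyplot as plt

F.imshow()    # one subplot per factor of F, plus the dense matrix F.toarray()
plt.show()
```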

What do we see above? On the bottom right is the dense matrix associated with F, obtained with F.toarray(). On the top are the indexed factors of F. Note that you can change the default colormap in the matplotlib parameters.

Let's look at one last example:
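For instance, plotting a random Faust with rectangular factors (pyfaust.rand arguments assumed as before):

```python
from pyfaust import rand as frand
import matplotlib.pyplot as plt

G = frand(10, 200, num_factors=4)   # a Faust whose factors are far from square
G.imshow()
plt.show()
```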

The dimension ratio of the factors is respected in the figure.

1.4 About Sparsity!

Three functions of the Faust class are here to evaluate the sparsity of a Faust object.

Let's call the first one:
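A one-liner sketch, reusing F from above:

```python
print(F.nnz_sum())   # total number of nonzeros across all factors of F
```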

I'm sure you guessed exactly what the function returns; if you have any doubt, here is the doc: Faust.nnz_sum(). The smaller nnz_sum is, the sparser the Faust.

Next comes the function: Faust.density().
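Another quick sketch, which also shows Faust.rcg() discussed just below:

```python
print(F.density())   # nnz_sum divided by the size of the equivalent dense matrix
print(F.rcg())       # Relative Complexity Gain: the inverse of the density
```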

This function, along with its reciprocal Faust.rcg(), gives you a big hint about how well your Faust is potentially optimized, both for storage and computation. The sparser the Faust, the larger the Relative Complexity Gain (RCG)!

According to its RCG, this Faust doesn't seem to be of any help for optimization, but look at the graphic that the next script generates:
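The original script isn't reproduced here, but here is a rough sketch of the idea (assuming Faust.save writes a .mat file whose size can be measured on disk):

```python
import os
import matplotlib.pyplot as plt
from pyfaust import rand as frand

densities = [.1, .2, .3, .4, .5]
sizes = []
for d in densities:
    # random Fausts of identical shape and number of factors, varying density
    Fd = frand(1024, 1024, num_factors=5, density=d, fac_type='sparse')
    Fd.save('F_tmp.mat')
    sizes.append(os.path.getsize('F_tmp.mat'))

plt.plot(densities, sizes, 'o-')
plt.xlabel('density')
plt.ylabel('file size (bytes)')
plt.show()
```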

Isn't it obvious now that the smaller the density, the better?! Indeed, for two Faust objects of the same shape and the same number of factors, a smaller density implies a (linearly) smaller file size for storage. The same point applies to the memory (RAM) footprint of a Faust you're working with.

We'll see later how computation can benefit from a larger RCG (or smaller density). But let's point out right now that when it comes to matrix factorizations, sparsity often comes as a tradeoff with accuracy, as the following plot shows for the truncated SVD of a matrix $M \in {\mathbb R}^{512 \times 512}$. Note beforehand that the SVD of M (truncated or not) can be seen as a Faust which approximates M:

Faust([U,S,Vt])

Here is the script to reproduce the last figure: test_svd_rc_vs_err.py

2. Faust Algebra and other Operations

In order to write some nice algorithms using Faust objects, you'll have to use the basic "stable" operations a Faust is capable of. Let's take a tour of them.

2.1 Transpose, conjugate, transconjugate

You are probably familiar with T and H attributes/properties from numpy/scipy. Well, they are also used in the Faust class.
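A sketch, assuming pyfaust.rand accepts a field argument to produce a complex-valued random Faust:

```python
from pyfaust import rand as frand

G = frand(32, 32, field='complex')
print(G.T)        # transpose: still a Faust
print(G.conj())   # entrywise complex conjugate
print(G.H)        # transconjugate (conjugate transpose)
```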

What really matters here is that the results of G.T, G.conj() and G.H are all Faust objects. Behind the scenes, there is just one memory zone allocated for the factors; strictly speaking, they are memory views shared between G, G.T and G.H. So don't hesitate to use them!

2.2 Add, Subtract and Multiply

Go ahead and verify it's accurate.
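For instance, a minimal sketch that adds two random Fausts and checks the result against the dense computation:

```python
import numpy as np
from pyfaust import rand as frand

F = frand(50, 50)
G = frand(50, 50)
H = F + G                                   # the sum of two Fausts is a Faust
ref = F.toarray() + G.toarray()
print(np.linalg.norm(H.toarray() - ref) / np.linalg.norm(ref))   # near machine precision
```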

Some points are noticeable here: in particular, the result is itself a Faust (not a dense array), and its factors are not simply the factors of F and G put side by side.

Subtracting is not different:
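A quick sketch, reusing F and G from the previous cell:

```python
D = F - G                                   # subtraction also returns a Faust
ref = F.toarray() - G.toarray()
print(np.linalg.norm(D.toarray() - ref))    # should be (close to) zero
```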

You can also add/subtract scalars to Faust objects.
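For example:

```python
print(F + 2)                                  # adding a scalar also gives a Faust
print((F + 2).numfactors(), F.numfactors())   # the factor counts differ
```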

Note that here again (F+2).numfactors() != F.numfactors().

But conversely, you can also add/subtract a Faust to/from a scalar (pay attention to the order of the operands):
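For example:

```python
print(2 + F)    # scalar + Faust: still a Faust
print(2 - F)    # scalar - Faust: mind the operand order
```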

The FAµST API likewise supports addition and subtraction between a Faust and a numpy array.
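A quick sketch (the array A below is just a random matrix introduced for the illustration):

```python
A = np.random.rand(*F.shape)
print(F + A)    # Faust + numpy array (and F - A) are supported as well
```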

Now let's multiply these Fausts!
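A sketch of the multiplication together with its relative error against the dense computation:

```python
FG = F @ G                                  # Faust-Faust product: a Faust again
ref = F.toarray() @ G.toarray()
print(np.linalg.norm(FG.toarray() - ref) / np.linalg.norm(ref))
```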

The relative error proves it's working. FG is a Faust too!

Faust scalar multiplication is also available and here again the result is a Faust object!
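For example:

```python
import numpy as np

sF = 2 * F                                  # scalar multiplication gives a Faust
print(np.allclose(sF.toarray(), 2 * F.toarray()))
```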

2.3 Faust Multiplication by a Vector or a Matrix

When you multiply a Faust by a vector or a matrix (whose number of rows must match the number of columns of the Faust), you'll get respectively a vector or a matrix as a result.

Let's launch a timer to compare the execution times of the Faust-vector multiplication and the corresponding dense matrix-vector multiplication.
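A sketch using time.process_time; the random Faust below is deliberately rather dense, so its RCG is below 1 (pyfaust.rand keyword arguments assumed as before):

```python
from time import process_time
import numpy as np
from pyfaust import rand as frand

F = frand(1024, 1024, num_factors=5, density=.5, fac_type='sparse')
print("RCG:", F.rcg())               # below 1 here
A = F.toarray()                      # the equivalent dense matrix
v = np.random.rand(F.shape[1])

t0 = process_time()
for _ in range(100): F @ v
t_faust = process_time() - t0

t0 = process_time()
for _ in range(100): A @ v
t_dense = process_time() - t0

print("Faust-vector:", t_faust, "s; dense matrix-vector:", t_dense, "s")
```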

Note that applying a Faust to a numpy.ndarray can also be done with the function dot().
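For example, reusing F and v from the previous cell:

```python
w = F.dot(v)                 # equivalent to F @ v
print(np.allclose(w, F @ v))
```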

When the RCG is lower than 1, the Faust-vector multiplication is slower. Building a random Faust with a large RCG (small density) shows better results.
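A sketch along the same lines, reusing v and the timing setup from the cell above:

```python
# a much sparser Faust: small density, hence a large RCG
F2 = frand(1024, 1024, num_factors=5, density=.001, fac_type='sparse')
print("RCG:", F2.rcg())
A2 = F2.toarray()

t0 = process_time()
for _ in range(100): F2 @ v
t_faust = process_time() - t0

t0 = process_time()
for _ in range(100): A2 @ v
t_dense = process_time() - t0

print("speedup:", t_dense / t_faust)
```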

It goes without saying that a big RCG gives a big speedup to the Faust-vector multiplication relative to the corresponding (dense) matrix-vector multiplication. I hope the example above has finished convincing you.

Just to convince you as well of the Faust-vector multiplication accuracy:
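For example, the relative error of the Faust-vector product against the dense product (F, A and v come from the cells above):

```python
print(np.linalg.norm(F @ v - A @ v) / np.linalg.norm(A @ v))
```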

What applies to Faust-vector multiplication remains valid for Faust-matrix multiplication. Take a look:
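A sketch reusing the sparse Faust F2 and its dense version A2 from above:

```python
import numpy as np
from time import process_time

# M must have as many rows as F2 has columns; asfortranarray gives column-major storage
M = np.asfortranarray(np.random.rand(F2.shape[1], 512))

t0 = process_time(); R_faust = F2 @ M; t_faust = process_time() - t0
t0 = process_time(); R_dense = A2 @ M; t_dense = process_time() - t0

print("Faust-matrix:", t_faust, "s; matrix-matrix:", t_dense, "s")
print("relative error:", np.linalg.norm(R_faust - R_dense) / np.linalg.norm(R_dense))
```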

NOTE: numpy.asfortranarray is used to convert the array from row-major to column-major order. Indeed, pyfaust works on matrices stored in the latter format, so it's faster to multiply a matrix that is directly set up that way. Otherwise the conversion is made on the fly during the multiplication (which is slower than doing it once and for all if you repeat the computation).

Well, what do we see? A quicker Faust-matrix multiplication than the corresponding matrix-matrix multiplication, while a good accuracy of the Faust-matrix multiplication is also clearly confirmed.

These examples are somewhat theoretical because we cherry-picked the Faust to ensure that the RCG is good enough to accelerate the multiplication, but at least they show the potential speedup of using Faust objects.

2.4 Faust Norms

The Faust class provides a norm function which handles different types of norms. This function is really close to the numpy.linalg.norm function.

In the following example, three of the four norms available are computed.
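A sketch (the ord values 1, numpy.inf and 'fro' are the ones documented for Faust.norm):

```python
from pyfaust import rand as frand
import numpy as np

F = frand(50, 100)
print(F.norm(1))       # 1-norm (max absolute column sum)
print(F.norm(np.inf))  # inf-norm (max absolute row sum)
print(F.norm('fro'))   # Frobenius norm
```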

Now, check that the values are not far from the corresponding norms of the Faust's dense matrix.
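For example, with numpy.linalg.norm on the dense array:

```python
A = F.toarray()
print(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf), np.linalg.norm(A, 'fro'))
```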

Perfect! But one last norm is available: the Faust's 2-norm (the spectral norm). Let's see in the next small benchmark how the Faust 2-norm is computed faster than the dense matrix 2-norm.
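A small benchmark sketch (np.linalg.norm(A, 2) computes the spectral norm of the dense array through an SVD, which is slow for a matrix of this size):

```python
from time import process_time
from pyfaust import rand as frand
import numpy as np

G = frand(2048, 2048, num_factors=5, density=.001, fac_type='sparse')

t0 = process_time(); n_faust = G.norm(2); t_faust = process_time() - t0

A = G.toarray()
t0 = process_time(); n_dense = np.linalg.norm(A, 2); t_dense = process_time() - t0

print("Faust 2-norm time:", t_faust, "s; dense 2-norm time:", t_dense, "s")
print("relative error:", abs(n_faust - n_dense) / n_dense)
```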

The power-iteration algorithm implemented in the FAµST C++ core is faster on G, and the relative error is not bad either. The norm computation is faster because it benefits from the faster Faust-vector multiplication.

2.5 Faust Normalizations

The FAµST API provides a group of normalizations. They correspond to the norms available and discussed above.

It's possible to normalize along columns or rows with any of these norms.

The API doc is here.
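A sketch, assuming (as described in the API doc) that the default arguments of Faust.normalize normalize the columns:

```python
from pyfaust import rand as frand

F = frand(50, 100, num_factors=3, density=.5)
NF = F.normalize()    # see the API doc for the ord and axis arguments
print(NF)
```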

What's interesting here is that Faust.normalize returns a Faust object. Combined with slicing (which we will see soon), normalize is useful for writing algorithms such as Orthogonal Matching Pursuit (OMP), which require matrices with L2-normalized columns, in a way that lets them leverage the acceleration offered by the FAµST API.

The normalization coded in C++ is memory-optimized (it never builds the dense matrix F.toarray() to compute the norms of the columns/rows). With the same goal, the factors composing the Faust object NF are not duplicated in memory from the factors of F; they're used as is, with an additional factor providing the appropriate scaling.
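For instance, checking that the columns of NF have unit norm (still assuming the default normalizes columns):

```python
import numpy as np

col_norms = np.linalg.norm(NF.toarray(), axis=0)
print(np.allclose(col_norms, 1))
```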

And as you see it works!

2.6 Faust Concatenation

You're probably aware of numpy array concatenation; if not, take a look at this example.

I'm sure you guessed that you can likewise concatenate Faust objects. That's right!
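A sketch comparing the Faust concatenation with the corresponding numpy concatenation (Faust.concatenate is assumed to default to vertical concatenation, axis=0):

```python
import numpy as np
from pyfaust import rand as frand

F = frand(50, 60)
G = frand(50, 60)
C = F.concatenate(G)                        # a Faust of shape (100, 60)
Cd = np.concatenate((F.toarray(), G.toarray()), axis=0)
print(np.linalg.norm(C.toarray() - Cd))     # the difference is full of zeros
```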

The difference between the two concatenations is full of zeros, so of course it works!

As you noticed, Faust concatenation is stable: you give it two Fausts and you get a Faust back. Besides, it's possible to concatenate an arbitrary number of Faust objects.

As an exercise, you can write the factors of the Faust F.concatenate(F), F being any Faust.

Hint: the block-diagonal matrices are around here.

2.7 Faust Indexing and Slicing

Sometimes you need to access the dense matrix corresponding to a Faust or an element of it (by the way, note that it's costly).

Let's access a Faust item:
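For example, reusing the Faust F from the concatenation cell:

```python
print(F[2, 3])              # one entry of the Faust: costly!
print(F.toarray()[2, 3])    # the same value read from the dense array
```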

Why is it costly? Because it essentially converts the Faust to its dense form (modulo some optimizations) to just access one item.

It's totally the same syntax as numpy but much slower, so use it with care.

The more advanced slicing operation also uses the same syntax as numpy:
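For example:

```python
import numpy as np

FS = F[10:30, 5:40]         # slicing returns a new Faust
print(FS)
print(np.linalg.norm(FS.toarray() - F.toarray()[10:30, 5:40]))
```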

Here again, the result is another Faust. But this is not a full copy; it takes advantage of memory views implemented behind the scenes in C++. Only the first and last factors of the sliced Faust are new in memory; the others are just referenced from the initial Faust F. So use it without worry, even on a Faust with a lot of factors!

Numpy's fancy indexing has also been implemented in the FAµST C++ core. Let's try it:
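A sketch:

```python
FI = F[:, [1, 3, 5]]        # keep only columns 1, 3 and 5
print(FI)
```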

Again, it's a Faust but is it really working? Verify!
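For example:

```python
import numpy as np

print(np.allclose(FI.toarray(), F.toarray()[:, [1, 3, 5]]))
```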

Yes it is!

As a complement about Faust indexing, you can take a look at this FAQ entry. It clarifies the use of the indexing form F[I, J] (I and J being lists of indices). Actually, this indexing method is not implemented in pyfaust; the FAQ entry explains why and suggests another form of indexing that is easy to confuse it with.


This is the notebook's end. You now have a global view of what the Faust class is capable of and what kind of high-level algorithms it is ready for. You might be interested in reading the other notebooks; just go back to the page where you got this one.

Note: this notebook was executed using the following pyfaust version:
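For instance (assuming pyfaust.version() is available in your installation):

```python
import pyfaust
print(pyfaust.version())
```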