
3.41.0

1. About matfaust:
1.1. Why did I get a file-not-found error when running demos or examples?
1.2. How to fix the mex not found error: "Undefined function or variable 'mexFaustReal'"?
1.3. How to launch the demos with matfaust?
1.4. How can I launch the integrated unit tests of matfaust?
1.5. How to run the PALM4MSA algorithm in a step-by-step fashion?
1.6. Why this no_normalization parameter for PALM4MSA and hierarchical factorization?
1.7. How to deal with single precision sparse matrices in Matlab?
2. About pyfaust:
2.1. How can I launch the integrated unit tests of pyfaust?
2.2. How to launch the demos with pyfaust?
2.3. How to run the PALM4MSA algorithm in a step-by-step fashion?
2.4. Why do I get the error 'Library not loaded: @rpath/libomp.dylib' when I use pyfaust on Mac OS X and how to fix it?
2.5. Why this no_normalization parameter for PALM4MSA and hierarchical factorization?
2.6. How to fix the Segmentation Fault issue when using Torch with pyfaust on Mac OS X?
2.7. Why is the Faust F[I, J] indexing operation not implemented in pyfaust?
2.8. How to fix the conda pyfaust install error about glibc?
2.9. Installing pyfaust with conda, why did I obtain a SafetyError on libomp?
2.10. Why do I get the error 'OMP: Error #15: Initializing libomp.dylib, but found libomp.dylib already initialized.' when I use pyfaust on Mac OS X and how to fix it?
3. About CUDA (for GPU FAµST API support)
3.1 Where can I find the CUDA 12 / 11 installer for my system?
3.2 How do I need to configure the CUDA 12 / 11 installer on Windows 10?
For example, if the quickstart.mat file is not found, running this Matlab command will produce the following error:
$ matlab -nojvm -nodisplay -r "import matfaust.demo.quickstart; quickstart.quick_start()"

< M A T L A B (R) >
Copyright 1984-2017 The MathWorks, Inc.
R2017a (9.2.0.556344) 64-bit (glnxa64)
March 27, 2017

For online documentation, see http://www.mathworks.com/support
For product information, visit www.mathworks.com.

Error using load
Unable to read file 'faust_quick_start.mat'. No such file or directory.

Error in matfaust.Faust (line 226)
load(filename);

Error in matfaust.demo.quickstart.quick_start (line 19)
A=Faust('faust_quick_start.mat')
The same kind of error might also happen with pyfaust, the Python wrapper, which depends on the same data.
Normally, at installation time the FAµST externalized data (basically a bunch of Matlab .mat files) is downloaded from a remote web server and unarchived in the FAµST installation path. Nevertheless, this might fail for various reasons (e.g. a network issue happening during installation), so here are two ways to download the data manually.
Just reinstall FAµST! It will relaunch the data download, provided your network connection works properly. If it doesn't work, repeat the operation after deleting the data folder located in the FAµST installation path or in your user home directory (e.g. $HOME/pyfaust_data on Linux and Mac OS X).
This assumes that you installed the pyfaust wrapper from a pip package or through one of the installers/packages.
Here is an example of the commands you can type to download the data (shown for Linux bash, but similar if not identical commands apply to other systems):
First, find where pyfaust is installed (we use Python 3):
$ python -c "import pyfaust; print(pyfaust.__file__)"
/home/faq/test_pyfaust_venv/lib64/python3.11/site-packages/pyfaust/__init__.py
Second, run these commands to download and uncompress the data:
$ rm -Rf ~/pyfaust_data && python test_pyfaust_venv/lib/python3.11/site-packages/pyfaust/datadl.py ~/pyfaust_data
Downloading FAuST data: 100% ======
data downloaded: /tmp/faust_data.zip
Uncompressing zip archive to /home/faq/pyfaust_data
# get the matlab wrapper path:
$ matlab -nojvm -r "import matfaust.Faust; which Faust"
/opt/local/faust/matlab/+matfaust/@Faust/Faust.m  % matfaust.Faust constructor
# the result indicates we have to download the data into the matfaust wrapper path, here: /opt/local/faust/matlab/data
$ DATA_DEST=/opt/local/faust/matlab/data
$ sudo rm -Rf $DATA_DEST; mkdir $DATA_DEST; sudo python /opt/local/faust/python/pyfaust/datadl.py $DATA_DEST
Downloading FAµST data: 100% ======
data downloaded: /tmp/faust_data.zip
Uncompressing zip archive to /opt/local/faust/matlab/data
If you still don't manage to retrieve the data, please write an email with all the details (at least the FAµST version, the installer used and, of course, your system).
If something went wrong, for example at install time, it is possible that Matlab doesn't find FAµST (in particular the mex files, or even the Matlab .m files).
In this case, an error similar to the two examples below might be raised:
>> import matfaust.rand
Error using import
Import argument 'matfaust.rand' cannot be found or cannot be imported.

>> import matfaust.rand
>> rand(5,4)
Undefined function or variable 'mexFaustReal'.

Error in matfaust.rand (line 320)
core_obj = mexFaustReal('rand', num_rows, num_cols, fac_type, min_num_factors, max_num_factors, min_dim_size, max_dim_size, density, per_row);
To fix this issue, you have to update the Matlab path manually; however, a script (named setup_FAUST.m) is here to help.
% go to the matlab installation path: /opt/local/faust/matlab on Linux and Mac OS X,
% C:\Program Files\Faust\matlab on Windows
>> cd /path/to/faust/matlab
% then run the setup_FAUST.m script
>> setup_FAUST
Welcome to the Matlab wrapper of the FAuST C++ toolbox.
FAuST root directory is /path/to/faust/matlab/
Adding path /path/to/faust/matlab/ and subdirectories to matlab path
To get started with the FAuST Toolbox: launch quick_start or run_all_demo.m
For further details about how to save this path once and for all, look at the install guide.
You might run all the demos at once or one by one. In the former case, run this Matlab code:
>> import matfaust.demo.runall
>> runall()
Note: the raw results are then stored in output files located in the directory ./output.
To launch the demo one by one, you can pick the command of interest below:
For the BSL demo:
>> import matfaust.demo.bsl.*
>> BSL()
>> Fig_BSL()
For the DFT demo:
>> import matfaust.demo.fft.speed_up_fourier
>> speed_up_fourier
For the Hadamard demo:
>> import matfaust.demo.hadamard
>> hadamard.speed_up_hadamard()
For the runtime comparison demo:
>> import matfaust.demo.runtimecmp
>> runtimecmp.runtime_comparison
>> runtimecmp.Fig_runtime_comparison
And for the last and simplest demo, the quickstart script:
>> import matfaust.demo.quickstart
>> quickstart.quick_start()
>> quickstart.factorize_matrix()
>> quickstart.construct_Faust_from_factors()
TODO (in the meantime you can read the pyfaust entry)
TODO (in the meantime you can read the pyfaust entry)
TODO (in the meantime you can read the pyfaust entry)
As you may know, Matlab doesn't support single-precision sparse matrices; it only supports double sparse matrices. If you create a double sparse matrix and try to convert it to the single class, here is the result:
>> M = sprand(10, 10, .2);
>> sM = single(M)
Error using single
Attempt to convert to unimplemented sparse type
However, since matfaust supports single/float sparse matrices, you might wonder how to use such a class of matrices in Matlab. The solution is straightforward: you encapsulate the matrix in a Faust as follows:
>> F = matfaust.Faust(M)
F =
Faust size 10x10, density 0.19, nnz_sum 19, 1 factor(s):
- FACTOR 0 (double) SPARSE, size 10x10, density 0.19, nnz 19
>> class(F)
ans =
    'double'
>> sF = single(F)
sF =
Faust size 10x10, density 0.19, nnz_sum 19, 1 factor(s):
- FACTOR 0 (float) SPARSE, size 10x10, density 0.19, nnz 19
>> class(sF)
ans =
    'single'
sF is now your single sparse matrix encapsulated in a Faust. You can apply any operation to it, as with any Faust. Everything will be computed in single/float precision (which is of course less expensive than double precision).
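For comparison, on the Python side SciPy supports single-precision sparse matrices natively, so no such encapsulation trick is needed there (a minimal sketch, assuming SciPy is installed; the sizes and density are arbitrary):

```python
import numpy as np
from scipy import sparse

# unlike Matlab, SciPy supports single-precision sparse matrices directly
M = sparse.random(10, 10, density=0.2, dtype=np.float64, random_state=0)
sM = M.astype(np.float32)  # same sparsity pattern, half the storage per value

print(sM.dtype)         # float32
print(sM.nnz == M.nnz)  # True: the conversion keeps the sparsity pattern
```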
That's actually quite simple: if pyfaust is properly installed, you just have to type this command in a terminal:
python -c "import pyfaust.tests; pyfaust.tests.run_tests('cpu', 'real')"
# in some cases, it could be python3 or python3.* instead of python
Note: in the above test, the Faust class is tested for the CPU C++ backend and real Faust objects (i.e. dtype == np.float). If you want to test GPU complex Fausts, just replace the arguments like this: run_tests('gpu', 'complex'). Of course, you need a properly installed NVIDIA GPU.
You might run all the demos at once or one by one. In the former case, run this Python code:
>>> from pyfaust.demo import runall
>>> runall()
Note: the raw results are then stored in output files located in the directory pyfaust.demo.DEFT_RESULTS_DIR.
To generate the figures, which go by default into pyfaust.demo.DEFT_FIG_DIR, type the following instructions in a Python terminal or insert them in a Python script:
>>> from pyfaust.demo import allfigs
>>> allfigs()
To launch the demo one by one, you can pick the command of interest below:
For the BSL demo:
>>> from pyfaust.demo import bsl
>>> bsl.run()  # runs the experiment
>>> bsl.fig()  # generates the figures
For the DFT demo:
>>> from pyfaust.demo import fft
>>> fft.speed_up_fourier()
>>> fft.fig_speedup_fourier()
For the Hadamard demo:
>>> from pyfaust.demo import hadamard
>>> hadamard.run_fact()
>>> hadamard.run_speedup_hadamard()
>>> hadamard.run_norm_hadamard()
>>> hadamard.figs()
For the runtime comparison demo:
>>> from pyfaust.demo import runtimecmp
>>> runtimecmp.run()
>>> runtimecmp.fig()
And for the last and simplest demo, the quickstart script:
>>> from pyfaust.demo import quickstart
>>> quickstart.run()
Although the verbose mode of the PALM4MSA implementation allows displaying some information, when analyzing the algorithm (e.g. to build the loss function or to check the evolution of the matrix supports) it might be useful to run just one iteration at a time, get all the Faust layers and the scale factor (lambda), do whatever one needs with them, and then continue to the next iteration.
This implies reinitializing the next iteration in the same state it was at the end of the previous iteration. The script stepbystep_palm4msa.py shows how to do that for a matrix factorization in two factors, but it is not much different with a greater number of factors. At the end of the script, PALM4MSA is performed all iterations at once in order to verify that the step-by-step execution was consistent.
Below is an example of the output you should obtain running the script (in which you can see that the at-once and iteration-by-iteration executions match perfectly):
$ python3 stepbystep_palm4msa.py | tail -3
Relative error when running all PALM4MSA iterations at once: 0.2978799226115671
Last relative error obtained when running PALM4MSA iteration-by-iteration: 0.2978799226115671
Relative error comparing the final Fausts obtained by step-by-step PALM4MSA versus all-iterations-at-once PALM4MSA: 2.1117031467008879e-16
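The principle can be sketched with a toy proximal-gradient factorization in plain NumPy (this is not the real PALM4MSA implementation; the two-factor setup, the hard-threshold prox and the fixed step size are arbitrary choices for illustration). The point is that carrying the state (the factors) over between iterations makes a one-iteration-at-a-time run end up exactly where an all-at-once run does:

```python
import numpy as np

def hard_threshold(X, k):
    """Toy prox: keep only the k largest-magnitude entries of X."""
    Y = np.zeros_like(X)
    flat = np.argsort(np.abs(X), axis=None)[-k:]
    idx = np.unravel_index(flat, X.shape)
    Y[idx] = X[idx]
    return Y

def iteration(A, B, M, k=16, step=1e-2):
    """One gradient + prox update of each factor of the product A @ B."""
    R = A @ B - M
    A = hard_threshold(A - step * R @ B.T, k)
    R = A @ B - M
    B = hard_threshold(B - step * A.T @ R, k)
    return A, B

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))

# step-by-step run: the factors are carried over between iterations,
# so anything can be inspected (loss, supports, ...) in between
A, B = np.eye(8), M.copy()
for i in range(10):
    A, B = iteration(A, B, M)
    # ... inspect A, B, the loss np.linalg.norm(A @ B - M), etc. here ...

# all-at-once run from the same initial state
A2, B2 = np.eye(8), M.copy()
for i in range(10):
    A2, B2 = iteration(A2, B2, M)

# both runs end in exactly the same state
assert np.allclose(A, A2) and np.allclose(B, B2)
```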
Well, pyfaust has many dependencies; some of them are built into the pyfaust shared library, others are linked dynamically, as is OpenMP. So to use pyfaust you need to install OpenMP on Mac OS X. We advise installing it through MacPorts because pyfaust is compiled and linked against the MacPorts-provided version of OpenMP.
To install MacPorts, go to their website and download the pkg; that's pretty straightforward (take care to pick the version that matches your Mac OS X version).
Once MacPorts is installed, launch a terminal and type these commands:
sudo port install libomp
sudo port -f activate libomp
Note that starting from pyfaust 3.11.1 the libomp library is embedded in the pyfaust package, so you shouldn't meet this issue again from that version on.
Well, you must know that in PALM4MSA updating a factor consists in first applying the gradient to it and then passing the resulting matrix through a proximity operator to enforce the structure/sparsity. After these two stages, the prox output matrix is often normalized.
Experiments have shown that the normalization stage can fail when the norm of the matrix is too large to be encoded in a floating-point data type; it is in fact infinite. So you might end up with a zero matrix after normalization. In other close cases it can give NaN as matrix elements or 2-norms.
Hence disabling the normalization can help to avoid those overflows. That's why this option has been added to the parameters of both pyfaust and matfaust.
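The kind of overflow at stake can be reproduced with plain NumPy, independently of FAµST (a minimal sketch; the matrix size and values are arbitrary): in float32, squaring large entries to compute the norm yields infinity, and dividing by it zeroes the matrix out.

```python
import numpy as np

# float32 cannot represent 1e30**2 = 1e60 (its maximum is about 3.4e38)
A = np.full((2, 2), 1e30, dtype=np.float32)
n = np.linalg.norm(A)  # the squared entries overflow to inf
Z = A / n              # "normalizing" then yields an all-zero matrix

print(n)  # inf
print(Z)
```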
For example, running the hierarchical factorization algorithm on a Hadamard matrix of numpy dtype float32 and size 512x512 is a case where this kind of error occurs.
Below I reproduce the code, first with normalization enabled and the error it produces, then without normalization, to show that it fixes the issue.
Note that this new parameter is limited to the 2020 implementations of PALM4MSA and the hierarchical algorithm.
Error case:
from pyfaust import wht
from pyfaust.fact import hierarchical
from time import time
import numpy as np

dim = 512
H = wht(dim, dtype='float32')
M = H.toarray()
F = hierarchical(M, 'hadamard', on_gpu=False, backend=2020)
print("error:", (F-H).norm()/H.norm())

Output:

Faust::hierarchical: 1/8
Faust::hierarchical: 2/8
Faust::hierarchical: 3/8
Faust::hierarchical: 4/8
Faust::hierarchical: 5/8
Faust::hierarchical: 6/8
Faust::hierarchical: 7/8
Faust::hierarchical: 8/8
terminate called after throwing an instance of 'std::runtime_error'
  what():  Error in update_lambda: S (the Faust) contains nan elements in at least one of its matrices, can't compute lambda.
Aborted
Fixed case:
from pyfaust import wht
from pyfaust.fact import hierarchical
from pyfaust.factparams import ParamsHierarchicalSquareMat
from time import time
import numpy as np

dim = 512
H = wht(dim, dtype='float32')
M = H.toarray()
p = ParamsHierarchicalSquareMat.createParams(M, 'hadamard')
p.no_normalization = True
F = hierarchical(M, p, on_gpu=False, backend=2020)
print("error:", (F-H).norm()/H.norm())

Output:

Faust::hierarchical: 1/8
Faust::hierarchical: 2/8
Faust::hierarchical: 3/8
Faust::hierarchical: 4/8
Faust::hierarchical: 5/8
Faust::hierarchical: 6/8
Faust::hierarchical: 7/8
Faust::hierarchical: 8/8
error: 3.3585222552295126e-05
A conflict issue has been identified between pyfaust and pytorch on Mac OS X. It is most likely due to different versions of OpenMP loaded on the fly after package imports. The reason has not been properly investigated yet, but a workaround is easy to put in place for any user. The first extract of code below shows how to reproduce the error, which is in fact a Segmentation Fault; a second block of code then shows how to work around it. In brief, importing pyfaust first will do the fix!
Reproducing the error:
(py_venv) ciosx:~ ci$ ipython
Python 3.9.12 (main, Mar 25 2022, 00:46:17)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch
dyld: Registered code signature for /Users/ci/py_venv/lib/python3.9/site-packages/torch/lib/libtorch_cpu.dylib

In [2]: import pyfaust

In [3]: from pyfaust.fact import butterfly

In [4]: import torch

In [5]: import numpy as np

In [6]: F = butterfly(np.identity(1024), type='bbtree')

In [7]: Segmentation fault: 11
Fixing the error by importing pyfaust first:
(py_venv) ciosx:~ ci$ ipython
Python 3.9.12 (main, Mar 25 2022, 00:46:17)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import pyfaust

In [2]: from pyfaust.fact import butterfly

In [3]: import torch
dyld: Registered code signature for /Users/ci/py_venv/lib/python3.9/site-packages/torch/lib/libtorch_cpu.dylib

In [4]: import numpy as np

In [5]: F = butterfly(np.identity(1024), type='bbtree')

In [6]: F
Out[6]:
Faust size 1024x1024, density 0.0195312, nnz_sum 20480, 10 factor(s):
- FACTOR 0 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 1 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 2 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 3 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 4 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 5 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 6 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 7 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 8 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
- FACTOR 9 (double) SPARSE, size 1024x1024, density 0.00195312, nnz 2048
You might have noticed that the F[I, J] indexing operation raises an error when F is a Faust and I, J are two lists of integers.
In [1]: import pyfaust as pf
In [2]: F = pf.rand(10, 10)
In [3]: I = [2, 3]
In [4]: J = [5, 2]
In [5]: F[I, J]
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-5-8df82554232b> in <module>
----> 1 F[I, J]

~/faust/wrapper/python/pyfaust/__init__.py in __getitem__(F, indices)
   1675                     out_indices[1] = indices[1]
   1676                 elif(isinstance(indices[1], list)):
-> 1677                     if(isinstance(indices[0],list)): raise \
   1678                     Exception("F[list1,list2] error: fancy indexing "
   1679                     "on both dimensions is not implemented "

Exception: F[list1,list2] error: fancy indexing on both dimensions is not implemented rather use F[list1][:,list2].
To understand why this error happens, you have to reconsider the semantics of this operation in numpy.
In [6]: M = F.toarray()
In [7]: M[I, J]
Out[7]: array([20.67240127,  2.94551195])
In [8]: M
Out[8]:
array([[17.90487833, 27.12379778,  5.39551904, 15.89955699, 11.00241609, 27.59432236, 19.78570417, 19.13069802, 27.37147328,  7.72290147],
       [14.80316587, 22.86821899,  4.84070096, 12.94334063,  9.4843501 , 22.66925664, 16.54982608, 16.66592289, 22.38216523,  5.96808832],
       [13.24941355, 20.39376798,  3.95375811, 11.56781968,  8.08354195, 20.67240127, 13.889102  , 14.135856  , 20.8617173 ,  5.68710658],
       [10.09700856, 15.55466917,  2.94551195,  9.14462562,  6.17430654, 15.76354282, 10.86911933, 10.8994256 , 16.05619013,  4.38006515],
       [15.59341136, 23.99711993,  4.89333899, 13.72176671,  9.80085375, 24.1952296 , 17.11168856, 16.70887072, 23.81469128,  6.59423473],
       [ 8.84679453, 13.28811366,  2.3878287 ,  7.82124871,  4.96229759, 13.57289693,  9.00585864,  9.42189303, 14.1050622 ,  3.91864366],
       [ 9.21812565, 13.75898807,  2.9382162 ,  7.71299471,  5.79531908, 13.97765518, 10.37499817, 10.17103417, 13.6208411 ,  3.81052436],
       [11.7407251 , 17.63200825,  3.56013283, 10.10756297,  7.0794955 , 17.79069073, 12.68163621, 12.1540547 , 17.47823264,  5.15775148],
       [13.65486363, 20.6882994 ,  4.4918682 , 11.22321171,  8.64807158, 20.51835829, 14.92849385, 15.34545168, 20.193901  ,  5.47177391],
       [12.20464853, 18.61673079,  3.56754312, 10.82609474,  7.10124917, 18.83137495, 12.5277266 , 12.34828219, 18.76995746,  5.38920997]])
As you can see, in numpy indexing the array M with the expression M[I, J] implies first broadcasting the two lists I and J together and second returning the array [M[I[0], J[0]], M[I[1], J[1]], ..., M[I[-1], J[-1]]]. Obviously, doing the same operation with a Faust would require computing the full array (Faust.toarray()), which is an operation to avoid in order to spare computation time. That's why this operation is not implemented in pyfaust, but you can write it very quickly if needed (it is as simple as F.toarray()[I, J]).
So now, let's explain why the error suggests to rather use F[I][:, J] instead of F[I, J], whereas they are not the same operation at all. The reason is Matlab! In Matlab, M(I, J), with M a matrix, doesn't mean the same thing as in Python. It means returning the submatrix of M composed of the rows of M indexed in I (in the same order), keeping in those rows only the entries whose columns are indexed in J (in the same order again). More formally, if subM = M(I, J), then subM is a matrix of size (N = numel(I)) x (P = numel(J)) such that for every pair (i, j) in {1, ..., N} x {1, ..., P}, subM(i, j) == M(I(i), J(j)). Back to numpy, you can write this Matlab way of indexing with the simple expression F[I][:, J], which is totally feasible on a Faust without having to compute the full array. Hence the error suggests doing that, in case the user confuses the semantics of Matlab (Faust-compatible) and Python (not Faust-compatible). In short, it's just a hint toward a supported operation which is close to the unsupported one.
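The two semantics can be compared directly on a plain NumPy array (which is what F.toarray() returns); the small matrix and index lists below are just an illustration:

```python
import numpy as np

M = np.arange(16).reshape(4, 4)
I = [2, 3]
J = [1, 0]

# Python/numpy semantics: element-wise pairs (I[0], J[0]), (I[1], J[1]), ...
print(M[I, J])  # [ 9 12], i.e. [M[2, 1], M[3, 0]]

# Matlab-like semantics: the submatrix of rows I restricted to columns J,
# which pyfaust supports on a Faust through F[I][:, J]
print(M[I][:, J])  # [[ 9  8]
                   #  [13 12]]
```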
When trying to install pyfaust in a conda environment, an error about glibc might happen. A message similar to the following one might pop up after a conda install -c pyfaust pyfaust.
Take care to add the conda-forge channel to your environment before installing pyfaust.
To that end, please follow the install guide here.
To verify that you don't already have conda-forge set in your channels, use the following command:
conda config --show channels
If the returned list contains 'conda-forge', then you're all set.
The scenario is as follows: after a conda install -c pyfaust pyfaust, you might end up with the error copied below, which mentions a SafetyError on libomp.so.
First, there is no need to pay much attention to this message, because it is in fact only a warning; it doesn't prevent the installation of pyfaust. Secondly, the reason for this error is a conda-build bug that led us to a workaround requiring a modification of libomp after the conda package build. The modification implies a difference between the size recorded in the package manifest and the real size of libomp, hence the error.
For further details about installing pyfaust with conda, please refer to this page.
An example of the error message:
Installing pyfaust with conda, you might end up with the error reproduced below. It concerns the OpenMP library that comes with pyfaust but is also installed through numpy, which in this case uses MKL (the Intel library). The error complains about the two versions being in conflict at load time.
To work around this issue, we propose to install the nomkl package (typically with the command conda install nomkl), which removes the MKL backend.
The FAµST wrappers' GPU API needs CUDA 12 (or 11) to work. To install this toolkit, download the appropriate archive here for CUDA 12 (or here for CUDA 11). Select your system and architecture, then download the package/installer.
Note: it's recommended to install the most recent version of CUDA 12 (instead of CUDA 11 or older CUDA 12).
After downloading the installer through one of the links given in 3.1, launch it to start the install. You will see several panels during the process; they are listed below with the options you should select.
Of course, CUDA 12 / 11 will work only if you have an NVIDIA CUDA-compatible card fully installed with an appropriate driver compatible with CUDA 12 / 11 (note that the CUDA installer offers to install the driver).
Panel 1: System Check (nothing special to do).
Panel 2: License Agreement (you must accept to continue).
Panel 3: Installation Options: choose "Express (Recommended)".
Of course, not all components provided by the CUDA installer are really necessary to run FAµST on GPU, but following the express install is the simplest way to make FAµST work properly with CUDA.